
Feed aggregator

Announcing the Special Guest Speakers for Brighton & Atlanta BI Forum 2015

Rittman Mead Consulting - Mon, 2015-03-09 08:13

As well as a great line-up of speakers and sessions at each of the Brighton & Atlanta Rittman Mead BI Forum 2015 events in May, I’m very pleased to announce our two guest speakers who’ll give the second keynotes, on the Thursday evening of the two events just before we leave for the restaurant and the appreciation events. This year our special guest speaker in Atlanta is John Foreman, Chief Data Scientist at MailChimp and author of the book “Data Smart: Using Data Science to Transform Information into Insight”; and in Brighton we’re delighted to have Reiner Zimmerman, Senior Director of Product Management at Oracle US and the person behind the Oracle DW & Big Data Global Leaders program.


I first came across John Foreman when somebody recommended his book to me, “Data Smart”, a year or so ago. At that time Rittman Mead were getting more and more requests from our customers asking us to help with their advanced analytics and predictive modelling needs, and I was looking around for resources to help myself and the team get to grips with some of the more advanced modelling and statistical techniques Oracle’s tools now support – techniques such as clustering and pattern matching, linear regression and genetic algorithms.

One of the challenges when learning these sorts of techniques is not getting too caught up in the tools and technology – R was our favoured technology at the time, and there’s lots to it – so John’s book was particularly well-timed as it goes through these types of “data science” techniques but focuses on Microsoft Excel as the analysis tool, with simple examples and a very readable style.

Back in his day job, John is Chief Data Scientist at MailChimp and has become a particularly in-demand speaker following the success of his book, and I was very excited to hear from Charles Elliott, our Practice Manager for Rittman Mead America, that he lived near John in Atlanta and had arranged for him to keynote at our Atlanta BI Forum event. His Keynote will be entitled “How Mailchimp used qualitative and quantitative analysis to build their next product” and we’re very much looking forward to meeting him at our event in Atlanta on May 13th-15th 2015.


Our second keynote speaker at the Brighton Rittman Mead BI Forum 2015 event is none other than Reiner Zimmerman, best known in EMEA for organising the Oracle DW Global Leaders Program. We’ve known Reiner for several years now as Rittman Mead are one of the associate sponsors for the program, which aims to bring together the leading organizations building data warehouse and big data systems on the Oracle Engineered Systems platform.

A bit like the BI Forum (but even more exclusive), the DW Global Leaders program holds meetings in the US, EMEA and AsiaPac over the year and is a fantastic networking and knowledge-sharing group for an exclusive set of customers putting together the most cutting-edge DW and big data systems on the latest Oracle technology. Reiner’s also an excellent speaker and a past visitor to the BI Forum, and his session entitled “Hadoop and Oracle BDA customer cases from around the world” will be a look at what customers are really doing, and the value they’re getting, from building big data systems on the Oracle platform.

Registration is now open for both the Brighton and Atlanta BI Forum 2015 events, with full details including the speaker line-up and how to register on the event website. Keep an eye on the blog for more details of both events later this week including more on the masterclass by myself and Jordan Meyer, and a data visualisation “bake-off” we’re going to run on the second day of each event. Watch this space…!

Categories: BI & Warehousing

Version Control for PL/SQL Webinar

Gerger Consulting - Mon, 2015-03-09 05:43

Thanks to everyone who attended the "Introduction to Gitora, the free version control tool for PL/SQL" webinar. You can watch a recording of the webinar below.



Introducing Gitora, the free version control tool for PL/SQL from Yalim K. Gerger on Vimeo.

You can also view the slides of the webinar below:


Introducing Gitora, free version control tool for PL/SQL from Yalım K. Gerger

Categories: Development

Blueprint for a Post-LMS, Part 5

Michael Feldstein - Sun, 2015-03-08 17:38

By Michael Feldstein

In parts 1, 2, 3, and 4 of this series, I laid out a model for a learning platform that is designed to support discussion-centric courses. I emphasized how learning design and platform design have to co-evolve, which means, in part, that a new platform isn’t going to change much if it is not accompanied by pedagogy that fits well with the strengths and limitations of the platform. I also argued that we won’t see widespread changes in pedagogy until we can change faculty relationships with pedagogy (and course ownership), and I proposed a combination of platform, course design, and professional development that might begin to chip away at that problem. All of these ideas are based heavily on lessons learned from social software  and from cMOOCs.

In this final post in the series, I’m going to give a few examples of how this model could be extended to other assessment types and related pedagogical approaches, and then I’ll finish up by talking about what it would take to make the peer grading system described in part 2 be (potentially) accepted by students as at least a component of a grading system in a for-credit class.

Competency-Based Education

I started out the series talking about Habitable Worlds, a course out of ASU that I’ve written about before and that we feature in the forthcoming e-Literate TV series on personalized learning. It’s an interesting hybrid design. It has strong elements of competency-based education (CBE) and mastery learning, but the core of it is problem-based learning (PBL). The competency elements are really just building blocks that students need in the service of solving the big problem of the course. Here’s course co-designer and teacher Ariel Anbar talking about the motivation behind the course:

Click here to view the embedded video.

It’s clear that the students are focused on the overarching problem rather than the competencies:

Click here to view the embedded video.

And, as I pointed out in the first post in the series, they end up using the discussion board for the course very much like professionals might use a work-related online community of practice to help them work through their problems when they get stuck:

Click here to view the embedded video.

This is exactly the kind of behavior that we want to see and that the analytics I designed in part 3 are designed to measure. You could attach a grade to the students’ online discussion behaviors. But it’s really superfluous. Students get their grade from solving the problem of the course. That said, it would be helpful to the students if productive behaviors were highlighted by the system in order to make them easier to learn. And by “learn,” I don’t mean “here are the 11 discussion competencies that you need to display.” I mean, rather, that there are different patterns of productive behavior in a high-functioning group. It would be good for students to see not only the atomic behaviors but different patterns and even how different patterns complement each other within a group. Furthermore, I could imagine that some employers might be interested in knowing the collaboration style that a potential employee would bring to the mix. This would be a good fit for badges. Notice that, in this model, badges, competencies, and course grades serve distinct purposes. They are not interchangeable. Competencies and badges are closer to each other than either is to a grade. They both indicate that the student has mastered some skill or knowledge that is necessary to the central problem. But they are different from each other in ways that I haven’t entirely teased out in my own head yet. And they are not sufficient for a good course grade. To get that, the student must integrate and apply them toward generating a novel solution to a complex problem.

The one aspect of Habitable Worlds that might not fit with the model I’ve outlined in this series is the degree to which it has a mandatory sequence. I don’t know the course well enough to have a clear sense, but I suspect that the lessons are pretty tightly scripted, due in part to the fact that the overarching structure of the course is based on an equation. You can’t really drop out one of the variables or change the order willy-nilly in an equation. There’s nothing wrong with that in and of itself, but in order to take full advantage of the system I’ve proposed here, the course design must have a certain amount of play in it for faculty teaching their individual classes to contribute additions and modifications back. It’s possible to use the discussion analytics elements without the social learning design elements, but then you don’t get the potential the system offers for faculty buy-in “lift.”

Adding Assignment Types

I’ve written this entire series talking about “discussion-based courses” as if that were a thing, but it’s vastly more common to have discussion and writing courses. One interesting consequence of the work that we did abstracting out the Discourse trust levels is that we created a basic (and somewhat unconventional) generalized peer review system in the process. As long as conversation is the metric, we can measure the conversational aspects generated by any student-created artifact. For example, we could create a facility in OAE for students to claim the RSS feeds from their blogs. Remember, any integration represents a potential opportunity to make additional inferences. Once a post is syndicated into the system and associated with the student, it can generate a Discourse thread just like any other document. That discussion can be included in the trust analytics just like any other. With a little more work, you could have students apply direct ratings such as “likes” to the documents themselves. Making the assessment work for these different types isn’t quite as straightforward as I’m making it sound, either from a user experience design perspective or from a technology perspective. But the foundation is there to build on.

One of the commenters on part 1 of the series provided another interesting use case:

I’m the product manager for Wiki Education Foundation, a nonprofit that helps professors run Wikipedia assignments, in which the students write Wikipedia articles in place of traditional term papers. We’re building a system for managing these assignments, from building a week-by-week assignment plan that follows best practices, to keeping track of student activity on Wikipedia, to pulling in view data for the articles students work on, to finding automated ways of helping students work through or avoid the typical stumbling blocks for new Wikipedia editors.

Wikipedia is its own rich medium for conversation and interaction. I could imagine taking that abstracted peer review system and just hooking it up directly to student activity within Wikipedia itself. Once we start down this path, we really need to start talking about IMS Caliper and federated analytics. This has been a real bottom-up analysis, but we quickly reach the point where we want to start abstracting out the particular systems or even system types, and start looking at a general architecture for sharing learning data (safely). I’m not going to elaborate on it here—even I have to stop at some point—but again, if you made it this far, you might find it useful to go back and reread my original post on the IMS Caliper draft standard and the comments I made on its federated nature in my most recent walled garden post. Much of what I have proposed here from an architectural perspective is designed specifically with a Caliper implementation in mind.

Formal Grading

I suppose my favorite model so far for incorporating the discussion trust system into a graded, for-credit class is the model I described above, where the analytics act as more of a coach to help students learn productive discussion behavior, while the class grade actually comes from their solution to the central problem, project, or riddle of the course. But if we wanted to integrate the trust analytics as part of the formal grading system, we’d have to get over the “Wikipedia objection,” meaning the belief that vetting by a single expert more reliably generates accurate results than crowdsourcing. Some students will want grades from their teachers and will tend to think that the trust levels are bogus as a grade. (Some teachers will agree.) To address their concerns, we need three things. First, we need objectivity, by which I mean that the scoring criteria themselves are applied the same to everyone. “Objectivity” is often about as real in student evaluation as it is in journalism (which is to say, it isn’t), but people do want some sense of fairness, which is probably a better goal. Clear ratings criteria applied to everyone equally give some sense of fairness. Second, the trust scores themselves must be transparent, by which I mean that students should be able to see how they earned their trust scores. They should also be able to see various paths to improving their scores. And finally, there should be auditability, by which I mean that, in the event that a student is given a score by her peers that her teacher genuinely disagrees with (e.g., a group ganging up to give one student thumbs-downs, or a lot of conversation being generated around something that is essentially not helpful to the problem-solving effort), there is an ability for the faculty member to override that score. This last piece can be a rabbit hole, both in terms of user interface design and in terms of eroding the very sense of a trust network you’re trying to build, but it is probably necessary to get buy-in. The best thing to do is to pilot the trust system (and the course design that is supposed to inspire ranking-worthy conversation) and refine it to the point where it inspires a high degree of confidence before you start using it for formal grading.

That’s All

No, really. Even I run out of gas. Eventually.

For a while.

The post Blueprint for a Post-LMS, Part 5 appeared first on e-Literate.

Partner Webcast – Oracle Private Cloud: Database as a Service (DBaaS) using Oracle Enterprise Manager 12c

Large enterprises today have hundreds and thousands of databases of various versions, configurations and patch levels. Another challenge is around time to provision new databases. When an end...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Analyzing easily the blocked process report

Yann Neuhaus - Sun, 2015-03-08 13:06

Which DBA has not yet faced a performance problem caused by several blocked processes? In reality, I’m sure very few of them. Troubleshooting a blocking scenario is not always easy and may require some useful tools to simplify this hard task. A couple of months ago, I had to deal with this scenario at one of my customers. During some specific periods of the business day, he noticed that his application slowed down, and he asked me how to solve this issue.

Fortunately, SQL Server provides a useful feature to catch blocked processes: we just have to configure the “blocked process threshold (s)” server option. There are plenty of blogs that explain how to play with this parameter, so I will let you perform your own investigation using your favourite search engine.
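For reference, here is a minimal configuration sketch (the 20-second threshold is just an example value; note that the blocked process report event itself still has to be captured, for example with a server-side trace or an Extended Events session):

-- Raise a blocked process report whenever a process has been blocked
-- for more than 20 seconds (0 disables the report).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 20;
RECONFIGURE;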

Having a blocked process report is useful, but often in such situations there are a lot of processes that block each other, and we have to find the real culprit among this can of worms. So my main concern was the following: how to extract information from the blocked process report and how to correlate all the blocked processes together. After some investigation I found a useful script written by Michael J S Swart here. Usually I prefer to write my own script, but I didn’t have the time and I have to admit this script met my need perfectly. The original version provides the blocking hierarchy and the XML view of the issue. That is not bad at all, because we have all the information needed to troubleshoot the issue. However, my modification consists of replacing this XML view with useful information in tabular format to make the final result easier to read. Here is the modified version of the script:

 

CREATE PROCEDURE [dbo].[sp_blocked_process_report_viewer_dbi] (        @Trace nvarchar(max),        @Type varchar(10) = 'FILE' )   AS   SET NOCOUNT ON   -- Validate @Type IF (@Type NOT IN('FILE', 'TABLE', 'XMLFILE'))        RAISERROR ('The @Type parameter must be ''FILE'', ''TABLE'' or ''XMLFILE''', 11, 1)   IF (@Trace LIKE '%.trc' AND @Type <> 'FILE')        RAISERROR ('Warning: You specified a .trc trace. You should also specify @Type = ''FILE''', 10, 1)   IF (@Trace LIKE '%.xml' AND @Type <> 'XMLFILE')        RAISERROR ('Warning: You specified a .xml trace. You should also specify @Type = ''XMLFILE''', 10, 1)          CREATE TABLE #ReportsXML (        monitorloop nvarchar(100) NOT NULL,        endTime datetime NULL,        blocking_spid INT NOT NULL,        blocking_ecid INT NOT NULL,        blocking_bfinput NVARCHAR(MAX),        blocked_spid INT NOT NULL,        blocked_ecid INT NOT NULL,        blocked_bfinput NVARCHAR(MAX),        blocked_waitime BIGINT,        blocked_hierarchy_string as CAST(blocked_spid as varchar(20)) + '.' + CAST(blocked_ecid as varchar(20)) + '/',        blocking_hierarchy_string as CAST(blocking_spid as varchar(20)) + '.' + CAST(blocking_ecid as varchar(20)) + '/',        bpReportXml xml not null,        primary key clustered (monitorloop, blocked_spid, blocked_ecid),        unique nonclustered (monitorloop, blocking_spid, blocking_ecid, blocked_spid, blocked_ecid) )   DECLARE @SQL NVARCHAR(max); DECLARE @TableSource nvarchar(max);   -- define source for table IF (@Type = 'TABLE') BEGIN        -- everything input by users get quoted        SET @TableSource = ISNULL(QUOTENAME(PARSENAME(@Trace,4)) + N'.', '')              + ISNULL(QUOTENAME(PARSENAME(@Trace,3)) + N'.', '')              + ISNULL(QUOTENAME(PARSENAME(@Trace,2)) + N'.', '')              + QUOTENAME(PARSENAME(@Trace,1)); END   -- define source for trc file IF (@Type = 'FILE') BEGIN         SET @TableSource = N'sys.fn_trace_gettable(N' + QUOTENAME(@Trace, '''') + ', -1)'; END   -- load table or file IF (@Type IN('TABLE', 'FILE')) BEGIN        SET @SQL = N'                    INSERT #ReportsXML(blocked_ecid, blocked_spid, blocked_bfinput , blocking_ecid, blocking_spid,                                 blocking_bfinput, blocked_waitime, monitorloop, bpReportXml,endTime)              SELECT                     blocked_ecid,                     blocked_spid,                     blocked_inputbuffer,                     blocking_ecid,                     blocking_spid,                     blocking_inputbuffer,                 blocked_waitime,                     COALESCE(monitorloop, CONVERT(nvarchar(100), endTime, 120), ''unknown''),                     bpReportXml,                     EndTime              FROM ' + @TableSource + N'              CROSS APPLY (                     SELECT CAST(TextData as xml)                     ) AS bpReports(bpReportXml)              CROSS APPLY (                     SELECT monitorloop = bpReportXml.value(''(//@monitorLoop)[1]'', ''nvarchar(100)''), blocked_spid = bpReportXml.value(''(/blocked-process-report/blocked-process/process/@spid)[1]'', ''int''), blocked_ecid = bpReportXml.value(''(/blocked-process-report/blocked-process/process/@ecid)[1]'', ''int''),                            blocked_inputbuffer = bpReportXml.value(''(/blocked-process-report/blocked-process/process/inputbuf/text())[1]'', ''nvarchar(max)''), blocking_spid = bpReportXml.value(''(/blocked-process-report/blocking-process/process/@spid)[1]'', ''int''), blocking_ecid = 
bpReportXml.value(''(/blocked-process-report/blocking-process/process/@ecid)[1]'', ''int''),                            blocking_inputbuffer = bpReportXml.value(''(/blocked-process-report/blocking-process/process/inputbuf/text())[1]'', ''nvarchar(max)''), blocked_waitime = bpReportXml.value(''(/blocked-process-report/blocked-process/process/@waittime)[1]'', ''bigint'')                     ) AS bpShredded              WHERE EventClass = 137';                     EXEC (@SQL); END   IF (@Type = 'XMLFILE') BEGIN        CREATE TABLE #TraceXML(              id int identity primary key,              ReportXML xml NOT NULL            )               SET @SQL = N'              INSERT #TraceXML(ReportXML)              SELECT col FROM OPENROWSET (                            BULK ' + QUOTENAME(@Trace, '''') + N', SINGLE_BLOB                     ) as xmldata(col)';          EXEC (@SQL);               CREATE PRIMARY XML INDEX PXML_TraceXML ON #TraceXML(ReportXML);          WITH XMLNAMESPACES        (              'http://tempuri.org/TracePersistence.xsd' AS MY        ),        ShreddedWheat AS        (              SELECT                     bpShredded.blocked_ecid,                     bpShredded.blocked_spid,                     bpShredded.blocked_inputbuffer,                     bpShredded.blocked_waitime,                     bpShredded.blocking_ecid,                     bpShredded.blocking_spid,                     bpShredded.blocking_inputbuffer,                     bpShredded.monitorloop,                     bpReports.bpReportXml,                     bpReports.bpReportEndTime              FROM #TraceXML              CROSS APPLY                     ReportXML.nodes('/MY:TraceData/MY:Events/MY:Event[@name="Blocked process report"]')                     AS eventNodes(eventNode)              CROSS APPLY                     eventNode.nodes('./MY:Column[@name="EndTime"]')                     AS endTimeNodes(endTimeNode)              CROSS APPLY                     eventNode.nodes('./MY:Column[@name="TextData"]')                     AS bpNodes(bpNode)              CROSS APPLY(                     SELECT CAST(bpNode.value('(./text())[1]', 'nvarchar(max)') as xml),                            CAST(LEFT(endTimeNode.value('(./text())[1]', 'varchar(max)'), 19) as datetime)              ) AS bpReports(bpReportXml, bpReportEndTime)              CROSS APPLY(                     SELECT                            monitorloop = bpReportXml.value('(//@monitorLoop)[1]', 'nvarchar(100)'),                            blocked_spid = bpReportXml.value('(/blocked-process-report/blocked-process/process/@spid)[1]', 'int'),                            blocked_ecid = bpReportXml.value('(/blocked-process-report/blocked-process/process/@ecid)[1]', 'int'),                            blocked_inputbuffer = bpReportXml.value('(/blocked-process-report/blocked-process/process/inputbuf/text())[1]', 'nvarchar(max)'),                            blocking_spid = bpReportXml.value('(/blocked-process-report/blocking-process/process/@spid)[1]', 'int'),                            blocking_ecid = bpReportXml.value('(/blocked-process-report/blocking-process/process/@ecid)[1]', 'int'),                            blocking_inputbuffer = bpReportXml.value('(/blocked-process-report/blocking-process/process/inputbuf/text())[1]', 'nvarchar(max)'),                            blocked_waitime = bpReportXml.value('(/blocked-process-report/blocked-process/process/@waittime)[1]', 'bigint')              ) AS bpShredded        )        INSERT 
#ReportsXML(blocked_ecid,blocked_spid,blocking_ecid,blocking_spid,              monitorloop,bpReportXml,endTime)        SELECT blocked_ecid,blocked_spid,blocking_ecid,blocking_spid,              COALESCE(monitorloop, CONVERT(nvarchar(100), bpReportEndTime, 120), 'unknown'),              bpReportXml,bpReportEndTime        FROM ShreddedWheat;               DROP TABLE #TraceXML   END   -- Organize and select blocked process reports ;WITH Blockheads AS (        SELECT blocking_spid, blocking_ecid, monitorloop, blocking_hierarchy_string        FROM #ReportsXML        EXCEPT        SELECT blocked_spid, blocked_ecid, monitorloop, blocked_hierarchy_string        FROM #ReportsXML ), Hierarchy AS (        SELECT monitorloop, blocking_spid as spid, blocking_ecid as ecid,              cast('/' + blocking_hierarchy_string as varchar(max)) as chain,              0 as level        FROM Blockheads               UNION ALL               SELECT irx.monitorloop, irx.blocked_spid, irx.blocked_ecid,              cast(h.chain + irx.blocked_hierarchy_string as varchar(max)),              h.level+1        FROM #ReportsXML irx        JOIN Hierarchy h              ON irx.monitorloop = h.monitorloop              AND irx.blocking_spid = h.spid              AND irx.blocking_ecid = h.ecid ) SELECT        ISNULL(CONVERT(nvarchar(30), irx.endTime, 120),              'Lead') as traceTime,        SPACE(4 * h.level)              + CAST(h.spid as varchar(20))              + CASE h.ecid                     WHEN 0 THEN ''                     ELSE '(' + CAST(h.ecid as varchar(20)) + ')'              END AS blockingTree,        irx.blocked_waitime,        bdp.last_trans_started as blocked_last_trans_started,        bdp.wait_resource AS blocked_wait_resource,        bgp.wait_resource AS blocking_wait_resource,        bgp.[status] AS blocked_status,        bdp.[status] AS blocking_status,        bdp.lock_mode AS blocked_lock_mode,        bdp.isolation_level as blocked_isolation_level,        bgp.isolation_level as blocking_isolation_level,        bdp.app AS blocked_app,        DB_NAME(bdp.current_db) AS blocked_db,        '-----> blocked statement' AS blocked_section,        CAST('' + irx.blocked_bfinput + '' AS XML) AS blocked_input_buffer,        CASE              WHEN bdp.frame_blocked_process_xml IS NULL THEN CAST('' + irx.blocked_bfinput + '' AS XML)              ELSE bdp.frame_blocked_process_xml        END AS frame_blocked_process_xml,        DB_NAME(bgp.current_db) AS blocking_db,        bgp.app AS blocking_app,        'blocking statement ----->' AS blocking_section,        CAST('' + irx.blocking_bfinput + '' AS XML) AS blocking_input_buffer,        CASE              WHEN bgp.frame_blocking_process_xml IS NULL THEN CAST('' + irx.blocking_bfinput + '' AS XML)              ELSE bgp.frame_blocking_process_xml        END AS frame_blocking_process_xml,        irx.bpReportXml from Hierarchy h left join #ReportsXML irx        on irx.monitorloop = h.monitorloop        and irx.blocked_spid = h.spid        and irx.blocked_ecid = h.ecid outer apply (        select              T.x.value('(./process/@waitresource)[1]', 'nvarchar(256)') AS wait_resource,              T.x.value('(./process/@lasttranstarted)[1]', 'datetime') as last_trans_started,              T.x.value('(./process/@lockMode)[1]', 'nvarchar(60)') as lock_mode,              T.x.value('(./process/@status)[1]', 'nvarchar(60)') as [status],              T.x.value('(./process/@isolationlevel)[1]', 'nvarchar(60)') as isolation_level,              
T.x.value('(./process/@currentdb)[1]', 'int') as current_db,              T.x.value('(./process/@clientapp)[1]', 'nvarchar(200)') as app,              cast(              (select SUBSTRING(txt.text,(ISNULL(T.x.value('./@stmtstart', 'int'), 0) / 2) + 1,                            ((CASE ISNULL(T.x.value('./@stmtend', 'int'), -1)                                   WHEN -1 THEN DATALENGTH(txt.text)                                   ELSE T.x.value('./@stmtend', 'int')                               END - ISNULL(T.x.value('./@stmtstart', 'int'), 0)) / 2) + 1) + CHAR(13) AS statement_txt                        from bpReportXml.nodes('//blocked-process/process/executionStack/frame') AS T(x)                        cross apply sys.dm_exec_sql_text(T.x.value('xs:hexBinary(substring((./@sqlhandle), 3))', 'varbinary(max)')) AS txt                        for XML path('')) as xml) AS frame_blocked_process_xml          from bpReportXml.nodes('//blocked-process') AS T(x) ) AS bdp outer apply (        select              T.x.value('(./process/@waitresource)[1]', 'nvarchar(256)') AS wait_resource,              T.x.value('(./process/@status)[1]', 'nvarchar(60)') as [status],              T.x.value('(./process/@isolationlevel)[1]', 'nvarchar(60)') as isolation_level,              T.x.value('(./process/@currentdb)[1]', 'int') as current_db,              T.x.value('(./process/@clientapp)[1]', 'nvarchar(200)') as app,              cast(              (select SUBSTRING(txt.text,(ISNULL(T.x.value('./@stmtstart', 'int'), 0) / 2) + 1,                            ((CASE ISNULL(T.x.value('./@stmtend', 'int'), -1)                                   WHEN -1 THEN DATALENGTH(txt.text)                                   ELSE T.x.value('./@stmtend', 'int')                               END - ISNULL(T.x.value('./@stmtstart', 'int'), 0)) / 2) + 1) + CHAR(13) AS statement_txt                        from bpReportXml.nodes('//blocking-process/process/executionStack/frame') AS T(x)                        cross apply sys.dm_exec_sql_text(T.x.value('xs:hexBinary(substring((./@sqlhandle), 3))', 'varbinary(max)')) AS txt                        for XML path('')) as xml) AS frame_blocking_process_xml               from bpReportXml.nodes('//blocking-process') AS T(x) ) AS bgp order by h.monitorloop, h.chain   DROP TABLE #ReportsXML
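As a minimal usage sketch (the trace file path below is hypothetical), the procedure is simply called with the location of the captured trace and its type:

-- Read blocked process reports from a server-side trace file
-- and display the blocking hierarchy in tabular format.
EXEC dbo.sp_blocked_process_report_viewer_dbi
     @Trace = N'E:\traces\blocked_process.trc',
     @Type  = 'FILE';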

 

Unfortunately I can’t show my customer’s context, so I will show only a sample from my own tests to explain how we can use this script. The generated result set is split into three main sections.

First section: Hierarchy blocked tree, lock resources and transaction isolation level

 

blog_33_-_1_-_result_lock_section

 

Let’s begin with the first section. You can see here the hierarchy tree and the blocking interactions that exist between the different processes. The above picture shows the process id = 72 blocking the process id = 73. In turn, the process id = 73 is blocking other sessions (ids = 75 and 77). Furthermore, the process id = 74 is at the same level as the process id = 73 and is blocked by the process id = 72. Finally, the process id = 76 is blocked by the process id = 74. A real can of worms, isn’t it?

Displaying the blocking hierarchy tree is very useful in this case. In addition, I added the transaction isolation level used by all processes, the status of the processes, and the locks and resources related to the issue. As a reminder, this information is already in the blocked process report; my task consisted of extracting it in tabular format. We will use all of it later in this blog post. For the moment, let’s focus on the first hierarchy branch: 72 -> 73 -> 75 -> 77 and the resource that all the concerned processes are hitting:

KEY: 6:72057594045595648 (089241b7b846), which we can split into three main parts:

6 : Database id = 6 => AdventureWorks2012

72057594045595648 : The container hobt id of the partition, which gives us the schema, table and index as follows:

select
       s.name as [schema_name],
       o.name as table_name,
       i.name as index_name
from sys.partitions as p
join sys.objects as o
       on p.object_id = o.object_id
join sys.indexes as i
       on i.object_id = p.object_id
       and i.index_id = p.index_id
join sys.schemas as s
       on s.schema_id = o.schema_id
where p.hobt_id = 72057594045595648

 

blog_33_-_2_-_partition

 

Person.Person.PK_Person_BusinessEntityID is a clustered index that includes the BusinessEntityID column.

 

(089241b7b846) :

The lock resource value that identifies the index key in the table Person.Person locked by the process id = 72. We may use the undocumented function %%lockres%% to locate the correct row in the table as follows:

 

select
       BusinessEntityID
from Person.Person
where %%lockres%% = '(089241b7b846)'

 

blog_33_-_3_-_lockres

 

At this point we know that the blocking process has started a transaction in repeatable read transaction isolation level and has not yet released the lock on the index key with value 14. This is why the session id = 73 is still pending: it attempts to access the same resource by requesting an S lock.

Let’s continue with the next sections of the result set:

 

Second section: blocking and blocked input buffers and their related frames

This second part provides detailed information about the blocked statement, including the concerned application and database as well.

 

blog_33_-_4_-_blocked_session_section

 

Likewise, the last part provides the same kind of information but for the blocking statement(s):

 

blog_33_-_5_-_blocking_session_section

 

We will correlate the information from the above sections. For example, if we take a look directly at the blocking input buffer of the process id = 72, we will discover the culprit, which is the following stored procedure:

 

<blockingInputBuffer> EXEC TestUpdatePersonNameStyle @NameStyle </blockingInputBuffer>

 

Next, the blocking frame identifies exactly the portion of code inside the stored procedure where the blocking issue has occurred:


WAITFOR DELAY '00:02:00';

 

OK, it seems that the stored procedure has started an explicit transaction with the repeatable read transaction isolation level and includes a WAITFOR DELAY command with a duration of 2 minutes. During this time, the different resources are still held by the transaction because there is no transaction commit or rollback and we are in repeatable read transaction isolation level. Let’s take a look at the stored procedure code:

 

ALTER PROCEDURE [dbo].[TestUpdatePersonNameStyle]
(
       @NameStyle BIT,
       @BusinessEntityID INT
)
AS

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRAN

SELECT PhoneNumber
FROM Person.PersonPhone
WHERE BusinessEntityID = @BusinessEntityID;

UPDATE Person.Person
SET NameStyle = @NameStyle
WHERE BusinessEntityID = @BusinessEntityID + 100;

WAITFOR DELAY '00:02:00';

ROLLBACK TRAN;

 

This confirms what we found in the first section: the repeatable read transaction isolation level used by the blocking session. In fact, two different resources are held by the above transaction: the first (index key = 14) and the second (index key = 14 + 100).
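If the blocking transaction is still open, a quick way to confirm which key locks it holds is to query sys.dm_tran_locks directly. This is only a sketch: the session id = 72 applies solely to this demo while the transaction is running.

-- List the KEY locks currently held or requested by session 72.
-- resource_description contains the lock resource hash (e.g. 089241b7b846)
-- that also appears in the blocked process report.
SELECT request_session_id,
       resource_type,
       resource_database_id,
       resource_description,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 72
      AND resource_type = 'KEY';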

Now let’s switch to the blocked statement part. A quick look at the input buffer tells us that the session id = 73 is trying to access the same resource as the UPDATE part of the blocking process. It confirms what we saw in the first section: the process id = 73 is in suspended state because it is trying to place an S lock on the concerned resource, which is not compatible with the X lock from the UPDATE statement of the process id = 72.


SELECT * FROM Person.Person WHERE BusinessEntityID = 114;  

 

I will not repeat the same demonstration for all the lines in the result set, but let’s finish with the process id = 74. Let’s go back to the first section. We can see that session id = 74 is trying to put an X lock on the following resource:

KEY: 6:72057594045726720 (58e9f9de4ab6)

Let’s apply the same rule as earlier and we can easily find the corresponding index key, this time on the table Person.PersonPhone.
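The following sketch simply mirrors the earlier queries, applied to the second resource KEY: 6:72057594045726720 (58e9f9de4ab6): resolve the hobt id to an object, then locate the locked row with the undocumented %%lockres%% function.

-- Resolve the hobt id of the second lock resource to its schema, table and index.
select
       s.name as [schema_name],
       o.name as table_name,
       i.name as index_name
from sys.partitions as p
join sys.objects as o
       on p.object_id = o.object_id
join sys.indexes as i
       on i.object_id = p.object_id
       and i.index_id = p.index_id
join sys.schemas as s
       on s.schema_id = o.schema_id
where p.hobt_id = 72057594045726720

-- Locate the locked row in Person.PersonPhone from the lock resource hash.
select
       BusinessEntityID
from Person.PersonPhone
where %%lockres%% = '(58e9f9de4ab6)'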

 

blog_33_-_6_-_lockres

 

Now let’s continue to the next sections and let’s take a look at the blocking frame:

 


WAITFOR DELAY '00:02:00';

 

The same thing as in the first case. Finally, let’s take a look at the blocking input buffer:

 

BEGIN TRAN;

IF EXISTS(SELECT 1 FROM Person.Person WHERE BusinessEntityID = 14)
       DELETE FROM Person.PersonPhone WHERE BusinessEntityID = 14;

ROLLBACK TRAN;

 

This time, it concerns an explicit transaction but with a different transaction isolation level: read committed mode. You can correlate this with the first section by yourself. The blocking point concerns only the second part of the above query, as indicated by the blocked_lock_mode column in the first section: the process id = 74 is trying to put an X lock on a resource that is still held by the process id = 72 (SELECT statement in repeatable read transaction isolation level).

The issue that I faced with my customer was pretty similar. In fact, you just have to replace the WAITFOR DELAY command with a series of other pieces of code which drastically deferred the transaction commit time. In this case, having a precise idea of the blocking tree and the other information readable directly in tabular format helped us save a lot of time in resolving this issue.

Happy troubleshooting!

UKOUG Systems Event and Exadata Content

Jason Arneil - Sun, 2015-03-08 09:55

I’ve been involved in organising a couple of upcoming UKOUG events.

I will be involved with the engineered systems stream for the annual UKOUG conference, which returns, after an absence of a couple of years, to being held once again in Birmingham.

While the planning for this is at a very early stage, Martin Widlake will be giving you the inside scoop on this.

The event I really want to talk about though is an event that is much more immediate:

The UKOUG Systems Event, a one-day, multi-stream event which is being held in London on May 20th.

This event will feature at least 1 and possibly 2 Exadata streams. I am sure we will have a really good range of speakers with a wealth of Exadata experience.

In addition to Exadata there will be a focus on other engineered systems platforms as well as Linux/Solaris and virtualisation. So a wide range of topics being covered in a number of different streams. If you feel you have a presentation that might be of interest, either submit a paper, or feel free to get in touch with me to discuss further.

Note the submission deadline is 18th March.

But the really big news is that the event is likely to feature some serious deep dive material from Roger Macnicol. Roger is one of the people within Oracle actually responsible for writing the smart scan code.

If you want to understand Exadata smart scans you will not be able to get this information anywhere else in the whole of Europe.

I had the privilege of seeing Roger present at E4 last year, and the information he can provide is so good that even super smart people like Tanel Poder were scribbling down a lot of what Roger was saying.

So to repeat, if you are interested in knowing about how smart scan works we are hoping to be able to provide a talk with the level of detail that is only possible from having one of the people responsible for smart scan from inside Oracle come to give it. In addition to this he will be presenting on BDA.

If all that was not enough, there should be a nice relaxed social event at the end of the conference where you will be able to chat over any questions you may still have!


Blueprint for a post-LMS, Part 4

Michael Feldstein - Sat, 2015-03-07 18:17

By Michael Feldstein

In part 1 of this series, I talked about some design goals for a conversation-based learning platform, including lowering the barriers and raising the incentives for faculty to share course designs and experiment with pedagogies that are well suited for conversation-based courses. Part 2 described a use case of a multi-school faculty professional development course which would give faculty an opportunity to try out these affordances in a low-stakes environment. In part 3, I discussed some analytics capabilities that could be added to a discussion forum—I used the open source Discourse as the example—which would lead to richer and more organic assessments in conversation-based courses. But we haven’t really gotten to the hard part yet. The hard part is encouraging experimentation and cross-fertilization among faculty. The problem is that faculty are mostly not trained, not compensated, and otherwise not rewarded for their teaching excellence. Becoming a better teacher requires time, effort, and thought, just as becoming a better scholar does. But even faculty at many so-called “teaching schools” are given precious little in the way of time or resources to practice their craft properly, never mind improving it.

The main solution to this problem that the market has offered so far is “courseware,” which you can think of as a kind of course-in-a-box. In other words, it’s an attempt to move as much of the “course” as possible into the “ware”, or the product. The learning design, the readings, the slides, and the assessments are all created by the product maker. Increasingly, the students are even graded by the product.


This approach as popularly implemented in the market has a number of significant and fairly obvious shortcomings, but the one I want to focus on for this post is that these packages are still going to be used by faculty whose main experience is the lecture/test paradigm.[1] Which means that, whatever the courseware learning design originally was, it will tend to be crammed into a lecture/test paradigm. In the worst case, the result is that we have neither the benefit of engaged, experienced faculty who feel ownership of the course nor the benefit of an advanced learning design, since the faculty member has not learned how to implement it.

One of the reasons that this works from a commercial perspective is that it relies on the secret shame that many faculty members feel. Professors were never taught to teach, nor are they generally given the time, money, and opportunities necessary to learn and improve, but somehow they have been made to feel that they should already know how. To admit otherwise is to admit one’s incompetence. Courseware enables faculty to keep their “shame” secret by letting the publishers do the driving. What happens in the classroom stays in the classroom. In a weird way, the other side of the shame coin is “ownership.” Most faculty are certainly smart enough to know that neither they nor anybody else is going to get rich off their lecture notes. Rather, the driver of “ownership” is fear of having the thing I know how to do in my classroom taken away from me as “mine” (and maybe exposing the fact that I’m not very good at this teaching thing in the process). So many instructors hold onto the privacy of their classrooms and the “ownership” of their course materials for dear life.

Obviously, if we really want to solve this problem at its root, we have to change faculty compensation and training. Failing that, the next best thing is to try to lower the barriers and increase the rewards for sharing. This is hard to do, but there are lessons we can learn from social media. In this post, I’m going to try to show how learning design and platform design in a faculty professional development course might come together toward this end.

You may recall from part 2 of this series that the use case I have chosen is a faculty professional development “course,” using our forthcoming e-Literate TV series about personalized learning as a concrete example. The specific content isn’t that important except to make the thought experiment a little more concrete. The salient details are as follows:

  1. The course is low-stakes; nobody is going to get mad if our grading scheme is a little off. To the contrary, because it’s a group of faculty engaged in professional development about working with technology-enabled pedagogy, the participants will hopefully bring a sense of curiosity to the endeavor.
  2. The course has one central, course-long problem or question: What, if anything, do we (as individual faculty, as a campus, and as a broader community of teachers) want to do with so-called “personalized learning” tools and approaches? Again, the specific question doesn’t matter so much as the fact that there is an overarching question where the answer is going to be specific to the people involved rather than objective and canned. That said, the fact that the course is generally about technology-enabled pedagogy does some work for us.
  3. Multiple schools or campuses will participate in the course simultaneously (though not in lock-step, as I will discuss in more detail later in this post). Each campus cohort will have a local facilitator who will lead some local discussions and customize the course design for local needs. That said, participants will also be able (and encouraged) to have discussions across campus cohorts.
  4. The overarching question naturally lends itself to discussion among different subgroups of the larger inter-campus group, e.g., teachers of the same discipline, people on the same campus, among peer schools, etc.

That last one is critical. There are natural reasons for participants to want to discuss different aspects of the overarching question of the course with different peer groups within the course. Our goal in both course and platform design is to make those discussions as easy and immediately rewarding as possible. We are also going to take advantage of the electronic medium to blur the distinction between contributing a comment, or discussion “post,” and longer contributions such as documents or even course designs.

We’ll need a component for sharing and customizing the course materials, or “design” and “curriculum,” for the local cohorts. Again, I will choose a specific piece of software in order to make the thought experiment more concrete, but as with Discourse in part 3 of this series, my choice of example is in no way intended to suggest that it is the only or best implementation. In this case, I’m going to use the open source Apereo OAE for this component in the thought experiment.

When multiple people teach their own courses using the same existing curricular materials (like a textbook, for example), there is almost always a lot of customization that goes on at the local level. Professor A skips chapters 2 and 3. Professor B uses her own homework assignments instead of the end-of-chapter problems. Professor C adds in special readings for chapter 7. And so on. With paper-based books, we really have no way of knowing what gets used and reused, what gets customized (and how it gets customized), and what gets thrown out. Recent digital platforms, particularly from the textbook publishers, are moving in the direction of being able to track those things. But academia hasn’t really internalized this notion that courses are more often customized than built from scratch, never mind the idea that their customizations could (and should) be shared for the sake of collective improvement. What we want is a platform that makes the potential for this virtuous cycle visible and easy to take advantage of without forcing participants to sacrifice any local control (including the control to take part or all of their local course private if that’s what they want to do).

OAE allows a user to create content that can be published into groups. But published doesn’t mean copied. It means linked. We could have the canonical copy of the ETV personalized learning MOOC (for example), which includes all the episodes from all the case studies plus any supplemental materials we think are useful. The educational technology director at Some State University (SSU) could create a group space for faculty and other stakeholders from her campus. She could choose to pull some, but not all, of the materials from the canonical course into her space. She could rearrange the order. You may recall from part 3 that Discourse can integrate with WordPress, spawning a discussion for every new blog post. We could easily imagine the same kind of integration with OAE. Since anything the campus facilitator pulls from the canonical course copy will be surfaced in her course space rather than copied into it, we would still have analytics on use of the curricular materials across the cohorts, and any discussions in Discourse that are related to the original content items would maintain their linkage (including the ability to automatically publish the “best” comments from the thread back into SSU’s course space). The facilitator could also add her own content, make her space private (from the default of public), and spawn private cohort-specific conversations. In other words, she could make it her own course.

I slipped the first bit of magic into that last sentence. Did you catch it? When the campus facilitator creates a new document, the system can automatically spawn a new discussion thread in Discourse. By default, new documents from the local cohort become available for discussion to all cohorts. And with any luck, some of that discussion will be interesting and rewarding to the person creating the document. The cheap thrill of any successful social media platform is having the (ideally instant) gratification of seeing somebody respond positively to something you say or do. That’s the feeling we’re trying to create. Furthermore, because of the way OAE shares documents across groups, if the facilitator in another cohort were to pull your document into her course design, it wouldn’t have to be invisible to you the way creating a copy is. We could create instant and continuously updated feedback on the impact of your sharing. Some documents (and discussions) in some cohorts might need to be private, and OAE supports that, but the goal is to make private, cohort- (or class-)internal sharing feel something like direct messaging feels on Twitter. There is a place for it, but it’s not what makes the experience rewarding.

To that end, we could even feed sharing behavior from OAE into the trust analytics I described in part 3 of this post series. One of the benefits of abstracting the trust levels from Discourse into an external system that has open APIs is that it can take inputs from different systems. It would be possible, for example, to make having your document shared into another cohort on OAE or having a lot of conversation generated from your document count toward your trust level. I don’t love the term “gamification,” but I do love the underlying idea that a well-designed system should make desired behaviors feel good. That’s also a good principle for course design.

I’m going to take a little detour into some learning design elements here, because they are critical success factors for the platform experience. First, the Problem-based Learning (PBL)-like design of the course is what makes it possible for individual cohorts to proceed at their own pace, in their own order, and with their own shortcuts or added excursions and still enable rich and productive discussions across cohorts. A course design that requires that units be released to the participants one week at a time will not work, because discussions will get out of sync as different cohorts proceed differently, and synchronization matters to the course design. If, on the other hand, synchronization across cohorts doesn’t matter because participants are going to the discussion authentically as needed to work out problems (the way they do all the time in online communities but much less often in typical online courses), then discussions will naturally wax and wane with participant needs and there will be no need to orchestrate them. Second, the design is friendly to participation through local cohorts but doesn’t require it. If you want to participate in the course as a “free agent” and have a more traditional MOOC-like experience, you could simply work off the canonical copy of the course materials and follow the links to the discussions.

End of detour. There’s one more technology piece I’d like to add to finish off the platform design for our use case. Suppose that all the participants could log into the system with their university credentials through an identity management scheme like InCommon. This may seem like a trivial implementation detail that’s important mainly for participant convenience, but it actually adds the next little bit of magic to the design. In part 3, I commented that integrating the discussion forum with a content source enables us to make new inferences because we now know that a discussion is “about” the linked content in some sense, and because content creators often have stronger motivations than discussion participants to add metadata like tags or learning objectives that tell us more about the semantics. One general principle that is always worth keeping in mind when designing learning technologies these days is that any integration presents a potential opportunity for new inferences. In the case of single sign-on, we can go to a data source like IPEDS to learn a lot about the participants’ home institutions and therefore about their potential affinities. Affinities are the fuel that provides any social platform with its power. In our use case, participants might be particularly interested in seeing comments from their peer institutions. If we know where they are coming from, then we can do that automatically rather than forcing them to enter information or find each other manually. In a course environment, faculty might want to prioritize the trust signals from students at similar institutions over those from very different institutions. We could even generate separate conversation threads based on these cohorts. Alternatively, people might want to find people with high trust levels who are geographically near them in order to form meetups or study groups.

And that’s it, really. The platform consists of a discussion board, a content system, and a federated identity management system that have been integrated in particular ways and used in concert with particular course design elements. There is nothing especially new about either the technology or the pedagogy. The main innovation here, to the degree that there is one, is combining them in a way that creates the right incentives for the participants. When I take a step back and really look at it, it seems too simple and too much like other things I’ve seen and too little like other things I’ve seen and too demanding of participants to possibly work. Then again, I said the same thing about blogs, Facebook, Twitter, and Instagram. They all seemed stupid to me before I tried them. Facebook still seems stupid to me, and I haven’t tried Instagram, but the point remains that these platforms succeeded not because of any obvious feat of technical originality but because they got the incentive structures right in lots of little ways that added up to something big. What I’m trying to do here with this design proposal is essentially to turn the concept of courseware inside out, changing the incentive structures in lots of little ways that hopefully add up to something bigger. Rather than cramming as much of the “course” as possible into the “ware,” reinforcing the isolation of the classroom in the process, I’m trying to make the “ware” generated by the living, human-animated course, making learning and curriculum design inherently social processes and hopefully thereby circumventing the shame reflex. And I’m trying to do that in the context of a platform and learning design that attempt to both reward and quantify social problem solving competencies in the class itself.

I don’t know if it will fly, but it might. Stranger things have happened.[2]

In the last post in this series, I will discuss some extensions that would probably have to be made in order to use this approach in a for-credit class as well as various miscellaneous considerations. Hey, if you’ve made it this far, you might as well read the last one and find out who dunnit.

 

  1. Of course, I recognize that some disciplines don’t do a lot of lecture/test (although they may do lecture/essay). These are precisely the disciplines in which courseware has been the least commercially successful.
  2. My wife agreeing to marry me, for instance.

The post Blueprint for a post-LMS, Part 4 appeared first on e-Literate.

restore validate archivelog

Michael Dinh - Sat, 2015-03-07 13:06

A common mistake I see in backup validation is not validating archivelogs or Level 1 backups.

Here I will demonstrate various methods to validate archivelogs.

Validate archivelog without listing details for the archivelog backup sets. Too little information?

RMAN> restore validate archivelog from time "TO_DATE('2015-MAR-04 22:03:32','YYYY-MON-DD HH24:MI:SS')";

Starting restore at 2015-MAR-07 10:34:02
using channel ORA_DISK_1

channel ORA_DISK_1: scanning archived log /oradata/archivelog/hawklas/hawk_ba986d3b_1_871886678_245.arc
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1 tag=TAG20150307T095717
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3nq17j0b_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/3nq17j0b_1_1 tag=TAG20150307T095717
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 2015-MAR-07 10:34:05

Validate archivelogs and list details for the backup sets. Too much information?

RMAN> restore validate preview archivelog from time "TO_DATE('2015-MAR-04 22:03:32','YYYY-MON-DD HH24:MI:SS')";

Starting restore at 2015-MAR-07 10:34:55
using channel ORA_DISK_1


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
117     45.54M     DISK        00:00:04     2015-MAR-07 09:57:21
        BP Key: 117   Status: AVAILABLE  Compressed: NO  Tag: TAG20150307T095717
        Piece Name: /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1

  List of Archived Logs in backup set 117
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    149     351055     2015-MAR-04 22:03:32 371415     2015-MAR-05 14:54:39
  1    150     371415     2015-MAR-05 14:54:39 372594     2015-MAR-05 15:14:04
  1    151     372594     2015-MAR-05 15:14:04 373366     2015-MAR-05 15:34:04
  1    152     373366     2015-MAR-05 15:34:04 374012     2015-MAR-05 15:54:04
  1    153     374012     2015-MAR-05 15:54:04 374560     2015-MAR-05 16:14:05
  1    154     374560     2015-MAR-05 16:14:05 375084     2015-MAR-05 16:34:04
  1    155     375084     2015-MAR-05 16:34:04 375500     2015-MAR-05 16:54:05
  1    156     375500     2015-MAR-05 16:54:05 376122     2015-MAR-05 17:14:03
  1    157     376122     2015-MAR-05 17:14:03 376539     2015-MAR-05 17:34:05
  1    158     376539     2015-MAR-05 17:34:05 376952     2015-MAR-05 17:54:05
  1    159     376952     2015-MAR-05 17:54:05 377664     2015-MAR-05 18:14:04
  1    160     377664     2015-MAR-05 18:14:04 378236     2015-MAR-05 18:34:05
  1    161     378236     2015-MAR-05 18:34:05 378694     2015-MAR-05 18:54:05
  1    162     378694     2015-MAR-05 18:54:05 379347     2015-MAR-05 19:14:04
  1    163     379347     2015-MAR-05 19:14:04 379628     2015-MAR-06 07:16:37
  1    164     379628     2015-MAR-06 07:16:37 379909     2015-MAR-06 07:26:16
  1    165     379909     2015-MAR-06 07:26:16 380968     2015-MAR-06 07:46:16
  1    166     380968     2015-MAR-06 07:46:16 381650     2015-MAR-06 08:06:16
  1    167     381650     2015-MAR-06 08:06:16 382073     2015-MAR-06 08:26:16
  1    168     382073     2015-MAR-06 08:26:16 382491     2015-MAR-06 08:46:20
  1    169     382491     2015-MAR-06 08:46:20 382987     2015-MAR-06 09:06:22
  1    170     382987     2015-MAR-06 09:06:22 383474     2015-MAR-06 09:26:22
  1    171     383474     2015-MAR-06 09:26:22 383894     2015-MAR-06 09:46:21
  1    172     383894     2015-MAR-06 09:46:21 384457     2015-MAR-06 10:06:22
  1    173     384457     2015-MAR-06 10:06:22 384876     2015-MAR-06 10:26:22
  1    174     384876     2015-MAR-06 10:26:22 385294     2015-MAR-06 10:46:28
  1    175     385294     2015-MAR-06 10:46:28 385792     2015-MAR-06 11:06:28
  1    176     385792     2015-MAR-06 11:06:28 386280     2015-MAR-06 11:26:26
  1    177     386280     2015-MAR-06 11:26:26 386698     2015-MAR-06 11:46:27
  1    178     386698     2015-MAR-06 11:46:27 387196     2015-MAR-06 12:06:28
  1    179     387196     2015-MAR-06 12:06:28 387681     2015-MAR-06 12:26:28
  1    180     387681     2015-MAR-06 12:26:28 388098     2015-MAR-06 12:46:32
  1    181     388098     2015-MAR-06 12:46:32 388673     2015-MAR-06 13:06:33
  1    182     388673     2015-MAR-06 13:06:33 389092     2015-MAR-06 13:26:34
  1    183     389092     2015-MAR-06 13:26:34 389508     2015-MAR-06 13:46:33
  1    184     389508     2015-MAR-06 13:46:33 389985     2015-MAR-06 14:06:32
  1    185     389985     2015-MAR-06 14:06:32 390472     2015-MAR-06 14:26:33
  1    186     390472     2015-MAR-06 14:26:33 390888     2015-MAR-06 14:46:32
  1    187     390888     2015-MAR-06 14:46:32 391453     2015-MAR-06 15:06:32
  1    188     391453     2015-MAR-06 15:06:32 391878     2015-MAR-06 15:26:34
  1    189     391878     2015-MAR-06 15:26:34 392298     2015-MAR-06 15:46:40
  1    190     392298     2015-MAR-06 15:46:40 392809     2015-MAR-06 16:06:40
  1    191     392809     2015-MAR-06 16:06:40 393282     2015-MAR-06 16:26:40
  1    192     393282     2015-MAR-06 16:26:40 393699     2015-MAR-06 16:46:40
  1    193     393699     2015-MAR-06 16:46:40 394186     2015-MAR-06 17:06:41
  1    194     394186     2015-MAR-06 17:06:41 394671     2015-MAR-06 17:26:46
  1    195     394671     2015-MAR-06 17:26:46 395087     2015-MAR-06 17:46:45
  1    196     395087     2015-MAR-06 17:46:45 395649     2015-MAR-06 18:06:44
  1    197     395649     2015-MAR-06 18:06:44 396072     2015-MAR-06 18:26:44
  1    198     396072     2015-MAR-06 18:26:44 396489     2015-MAR-06 18:46:46
  1    199     396489     2015-MAR-06 18:46:46 396984     2015-MAR-06 19:06:46
  1    200     396984     2015-MAR-06 19:06:46 397481     2015-MAR-06 19:26:46
  1    201     397481     2015-MAR-06 19:26:46 397897     2015-MAR-06 19:46:45
  1    202     397897     2015-MAR-06 19:46:45 398392     2015-MAR-06 20:06:45
  1    203     398392     2015-MAR-06 20:06:45 398880     2015-MAR-06 20:26:44
  1    204     398880     2015-MAR-06 20:26:44 399299     2015-MAR-06 20:46:46
  1    205     399299     2015-MAR-06 20:46:46 399775     2015-MAR-06 21:06:46
  1    206     399775     2015-MAR-06 21:06:46 400258     2015-MAR-06 21:26:45
  1    207     400258     2015-MAR-06 21:26:45 400680     2015-MAR-06 21:46:44
  1    208     400680     2015-MAR-06 21:46:44 403781     2015-MAR-06 22:06:34

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
118     30.93M     DISK        00:00:02     2015-MAR-07 09:57:33
        BP Key: 118   Status: AVAILABLE  Compressed: NO  Tag: TAG20150307T095717
        Piece Name: /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3nq17j0b_1_1

  List of Archived Logs in backup set 118
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    209     403781     2015-MAR-06 22:06:34 404216     2015-MAR-06 22:26:33
  1    210     404216     2015-MAR-06 22:26:33 404648     2015-MAR-06 22:46:32
  1    211     404648     2015-MAR-06 22:46:32 405189     2015-MAR-06 23:06:32
  1    212     405189     2015-MAR-06 23:06:32 405678     2015-MAR-06 23:26:35
  1    213     405678     2015-MAR-06 23:26:35 406110     2015-MAR-06 23:46:39
  1    214     406110     2015-MAR-06 23:46:39 406628     2015-MAR-07 00:06:38
  1    215     406628     2015-MAR-07 00:06:38 407191     2015-MAR-07 00:26:40
  1    216     407191     2015-MAR-07 00:26:40 407622     2015-MAR-07 00:46:39
  1    217     407622     2015-MAR-07 00:46:39 408298     2015-MAR-07 01:06:39
  1    218     408298     2015-MAR-07 01:06:39 408734     2015-MAR-07 01:26:38
  1    219     408734     2015-MAR-07 01:26:38 409167     2015-MAR-07 01:46:40
  1    220     409167     2015-MAR-07 01:46:40 409684     2015-MAR-07 02:06:39
  1    221     409684     2015-MAR-07 02:06:39 410318     2015-MAR-07 02:26:39
  1    222     410318     2015-MAR-07 02:26:39 410780     2015-MAR-07 02:46:38
  1    223     410780     2015-MAR-07 02:46:38 411462     2015-MAR-07 03:06:38
  1    224     411462     2015-MAR-07 03:06:38 411884     2015-MAR-07 03:26:40
  1    225     411884     2015-MAR-07 03:26:40 412300     2015-MAR-07 03:46:40
  1    226     412300     2015-MAR-07 03:46:40 412794     2015-MAR-07 04:06:39
  1    227     412794     2015-MAR-07 04:06:39 413315     2015-MAR-07 04:26:38
  1    228     413315     2015-MAR-07 04:26:38 413736     2015-MAR-07 04:46:38
  1    229     413736     2015-MAR-07 04:46:38 414223     2015-MAR-07 05:06:40
  1    230     414223     2015-MAR-07 05:06:40 414710     2015-MAR-07 05:26:38
  1    231     414710     2015-MAR-07 05:26:38 415134     2015-MAR-07 05:46:38
  1    232     415134     2015-MAR-07 05:46:38 417948     2015-MAR-07 06:06:26
  1    233     417948     2015-MAR-07 06:06:26 418380     2015-MAR-07 06:26:26
  1    234     418380     2015-MAR-07 06:26:26 418813     2015-MAR-07 06:46:27
  1    235     418813     2015-MAR-07 06:46:27 419405     2015-MAR-07 07:06:26
  1    236     419405     2015-MAR-07 07:06:26 419841     2015-MAR-07 07:26:28
  1    237     419841     2015-MAR-07 07:26:28 420275     2015-MAR-07 07:46:34
  1    238     420275     2015-MAR-07 07:46:34 420777     2015-MAR-07 08:06:33
  1    239     420777     2015-MAR-07 08:06:33 421312     2015-MAR-07 08:26:32
  1    240     421312     2015-MAR-07 08:26:32 421745     2015-MAR-07 08:46:32
  1    241     421745     2015-MAR-07 08:46:32 422279     2015-MAR-07 09:06:32
  1    242     422279     2015-MAR-07 09:06:32 422793     2015-MAR-07 09:26:34
  1    243     422793     2015-MAR-07 09:26:34 423233     2015-MAR-07 09:46:40
  1    244     423233     2015-MAR-07 09:46:40 423510     2015-MAR-07 09:57:16
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
245     1    245     A 2015-MAR-07 09:57:16
        Name: /oradata/archivelog/hawklas/hawk_ba986d3b_1_871886678_245.arc


channel ORA_DISK_1: scanning archived log /oradata/archivelog/hawklas/hawk_ba986d3b_1_871886678_245.arc
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1 tag=TAG20150307T095717
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3nq17j0b_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/3nq17j0b_1_1 tag=TAG20150307T095717
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 2015-MAR-07 10:34:59

RMAN>

Validate archivelogs and list a summary of the backup sets. Just enough information?

RMAN> restore validate preview summary archivelog from time "TO_DATE('2015-MAR-04 22:03:32','YYYY-MON-DD HH24:MI:SS')";

Starting restore at 2015-MAR-07 10:36:58
using channel ORA_DISK_1


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
117     B  A  A DISK        2015-MAR-07 09:57:21 1       1       NO         TAG20150307T095717
118     B  A  A DISK        2015-MAR-07 09:57:33 1       1       NO         TAG20150307T095717
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
245     1    245     A 2015-MAR-07 09:57:16
        Name: /oradata/archivelog/hawklas/hawk_ba986d3b_1_871886678_245.arc


channel ORA_DISK_1: scanning archived log /oradata/archivelog/hawklas/hawk_ba986d3b_1_871886678_245.arc
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1 tag=TAG20150307T095717
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3nq17j0b_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/3nq17j0b_1_1 tag=TAG20150307T095717
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 2015-MAR-07 10:37:01

RMAN> list backup of archivelog from time "TO_DATE('2015-MAR-04 22:03:32','YYYY-MON-DD HH24:MI:SS')" summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
115     B  A  A DISK        2015-MAR-06 07:16:40 1       1       YES        TAG20150306T071638
117     B  A  A DISK        2015-MAR-07 09:57:21 1       1       NO         TAG20150307T095717
118     B  A  A DISK        2015-MAR-07 09:57:33 1       1       NO         TAG20150307T095717

Why is backupset 115 not used in restore validate?
RMAN> list backupset 115; – contains backup for seq 149-163

List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
115     3.42M      DISK        00:00:01     2015-MAR-06 07:16:40
        BP Key: 115   Status: AVAILABLE  Compressed: YES  Tag: TAG20150306T071638
        Piece Name: /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3kq14l6n_1_1

  List of Archived Logs in backup set 115
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    149     351055     2015-MAR-04 22:03:32 371415     2015-MAR-05 14:54:39
  1    150     371415     2015-MAR-05 14:54:39 372594     2015-MAR-05 15:14:04
  1    151     372594     2015-MAR-05 15:14:04 373366     2015-MAR-05 15:34:04
  1    152     373366     2015-MAR-05 15:34:04 374012     2015-MAR-05 15:54:04
  1    153     374012     2015-MAR-05 15:54:04 374560     2015-MAR-05 16:14:05
  1    154     374560     2015-MAR-05 16:14:05 375084     2015-MAR-05 16:34:04
  1    155     375084     2015-MAR-05 16:34:04 375500     2015-MAR-05 16:54:05
  1    156     375500     2015-MAR-05 16:54:05 376122     2015-MAR-05 17:14:03
  1    157     376122     2015-MAR-05 17:14:03 376539     2015-MAR-05 17:34:05
  1    158     376539     2015-MAR-05 17:34:05 376952     2015-MAR-05 17:54:05
  1    159     376952     2015-MAR-05 17:54:05 377664     2015-MAR-05 18:14:04
  1    160     377664     2015-MAR-05 18:14:04 378236     2015-MAR-05 18:34:05
  1    161     378236     2015-MAR-05 18:34:05 378694     2015-MAR-05 18:54:05
  1    162     378694     2015-MAR-05 18:54:05 379347     2015-MAR-05 19:14:04
  1    163     379347     2015-MAR-05 19:14:04 379628     2015-MAR-06 07:16:37

RMAN>

Backups for seq 149-163 exist in both backup sets 115 and 117. RMAN used backup set 117 for the validation because, when several candidates contain the same archivelogs, it picks the most recently completed backup by default.
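
If you specifically want to exercise the older compressed backup set instead, you can point RMAN at it by tag. The command below is only a syntax sketch using the tag from this environment; it was not run here.

RMAN> restore validate archivelog from sequence 149 until sequence 163 from tag 'TAG20150306T071638';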

RMAN> list backup of archivelog from sequence 149 until sequence 163 summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
115     B  A  A DISK        2015-MAR-06 07:16:40 1       1       YES        TAG20150306T071638
117     B  A  A DISK        2015-MAR-07 09:57:21 1       1       NO         TAG20150307T095717

RMAN> list backupset 117; – verified

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
117     45.54M     DISK        00:00:04     2015-MAR-07 09:57:21
        BP Key: 117   Status: AVAILABLE  Compressed: NO  Tag: TAG20150307T095717
        Piece Name: /u01/app/oracle/product/11.2.0/dbhome_1/dbs/3mq17ivt_1_1

  List of Archived Logs in backup set 117
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    149     351055     2015-MAR-04 22:03:32 371415     2015-MAR-05 14:54:39
  1    150     371415     2015-MAR-05 14:54:39 372594     2015-MAR-05 15:14:04
  1    151     372594     2015-MAR-05 15:14:04 373366     2015-MAR-05 15:34:04
  1    152     373366     2015-MAR-05 15:34:04 374012     2015-MAR-05 15:54:04
  1    153     374012     2015-MAR-05 15:54:04 374560     2015-MAR-05 16:14:05
  1    154     374560     2015-MAR-05 16:14:05 375084     2015-MAR-05 16:34:04
  1    155     375084     2015-MAR-05 16:34:04 375500     2015-MAR-05 16:54:05
  1    156     375500     2015-MAR-05 16:54:05 376122     2015-MAR-05 17:14:03
  1    157     376122     2015-MAR-05 17:14:03 376539     2015-MAR-05 17:34:05
  1    158     376539     2015-MAR-05 17:34:05 376952     2015-MAR-05 17:54:05
  1    159     376952     2015-MAR-05 17:54:05 377664     2015-MAR-05 18:14:04
  1    160     377664     2015-MAR-05 18:14:04 378236     2015-MAR-05 18:34:05
  1    161     378236     2015-MAR-05 18:34:05 378694     2015-MAR-05 18:54:05
  1    162     378694     2015-MAR-05 18:54:05 379347     2015-MAR-05 19:14:04
  1    163     379347     2015-MAR-05 19:14:04 379628     2015-MAR-06 07:16:37
  1    164     379628     2015-MAR-06 07:16:37 379909     2015-MAR-06 07:26:16
  1    165     379909     2015-MAR-06 07:26:16 380968     2015-MAR-06 07:46:16
  1    166     380968     2015-MAR-06 07:46:16 381650     2015-MAR-06 08:06:16
  1    167     381650     2015-MAR-06 08:06:16 382073     2015-MAR-06 08:26:16
  1    168     382073     2015-MAR-06 08:26:16 382491     2015-MAR-06 08:46:20
  1    169     382491     2015-MAR-06 08:46:20 382987     2015-MAR-06 09:06:22
  1    170     382987     2015-MAR-06 09:06:22 383474     2015-MAR-06 09:26:22
  1    171     383474     2015-MAR-06 09:26:22 383894     2015-MAR-06 09:46:21
  1    172     383894     2015-MAR-06 09:46:21 384457     2015-MAR-06 10:06:22
  1    173     384457     2015-MAR-06 10:06:22 384876     2015-MAR-06 10:26:22
  1    174     384876     2015-MAR-06 10:26:22 385294     2015-MAR-06 10:46:28
  1    175     385294     2015-MAR-06 10:46:28 385792     2015-MAR-06 11:06:28
  1    176     385792     2015-MAR-06 11:06:28 386280     2015-MAR-06 11:26:26
  1    177     386280     2015-MAR-06 11:26:26 386698     2015-MAR-06 11:46:27
  1    178     386698     2015-MAR-06 11:46:27 387196     2015-MAR-06 12:06:28
  1    179     387196     2015-MAR-06 12:06:28 387681     2015-MAR-06 12:26:28
  1    180     387681     2015-MAR-06 12:26:28 388098     2015-MAR-06 12:46:32
  1    181     388098     2015-MAR-06 12:46:32 388673     2015-MAR-06 13:06:33
  1    182     388673     2015-MAR-06 13:06:33 389092     2015-MAR-06 13:26:34
  1    183     389092     2015-MAR-06 13:26:34 389508     2015-MAR-06 13:46:33
  1    184     389508     2015-MAR-06 13:46:33 389985     2015-MAR-06 14:06:32
  1    185     389985     2015-MAR-06 14:06:32 390472     2015-MAR-06 14:26:33
  1    186     390472     2015-MAR-06 14:26:33 390888     2015-MAR-06 14:46:32
  1    187     390888     2015-MAR-06 14:46:32 391453     2015-MAR-06 15:06:32
  1    188     391453     2015-MAR-06 15:06:32 391878     2015-MAR-06 15:26:34
  1    189     391878     2015-MAR-06 15:26:34 392298     2015-MAR-06 15:46:40
  1    190     392298     2015-MAR-06 15:46:40 392809     2015-MAR-06 16:06:40
  1    191     392809     2015-MAR-06 16:06:40 393282     2015-MAR-06 16:26:40
  1    192     393282     2015-MAR-06 16:26:40 393699     2015-MAR-06 16:46:40
  1    193     393699     2015-MAR-06 16:46:40 394186     2015-MAR-06 17:06:41
  1    194     394186     2015-MAR-06 17:06:41 394671     2015-MAR-06 17:26:46
  1    195     394671     2015-MAR-06 17:26:46 395087     2015-MAR-06 17:46:45
  1    196     395087     2015-MAR-06 17:46:45 395649     2015-MAR-06 18:06:44
  1    197     395649     2015-MAR-06 18:06:44 396072     2015-MAR-06 18:26:44
  1    198     396072     2015-MAR-06 18:26:44 396489     2015-MAR-06 18:46:46
  1    199     396489     2015-MAR-06 18:46:46 396984     2015-MAR-06 19:06:46
  1    200     396984     2015-MAR-06 19:06:46 397481     2015-MAR-06 19:26:46
  1    201     397481     2015-MAR-06 19:26:46 397897     2015-MAR-06 19:46:45
  1    202     397897     2015-MAR-06 19:46:45 398392     2015-MAR-06 20:06:45
  1    203     398392     2015-MAR-06 20:06:45 398880     2015-MAR-06 20:26:44
  1    204     398880     2015-MAR-06 20:26:44 399299     2015-MAR-06 20:46:46
  1    205     399299     2015-MAR-06 20:46:46 399775     2015-MAR-06 21:06:46
  1    206     399775     2015-MAR-06 21:06:46 400258     2015-MAR-06 21:26:45
  1    207     400258     2015-MAR-06 21:26:45 400680     2015-MAR-06 21:46:44
  1    208     400680     2015-MAR-06 21:46:44 403781     2015-MAR-06 22:06:34

RMAN>

2015 OPN FMW Partner Forum: the coolest thing

Darwin IT - Sat, 2015-03-07 06:06
This week I attended the Twentieth Oracle Partner Network Fusion Middleware Forum, this year in the Boscolo Hotel in Budapest, Hungary.

It was a great event, where a lot of subjects were covered, a lot of great people met. According to one of the product managers we were the smartest Oracle Region in the world (let's not uncover who said that...)

We've seen a lot of cool stuf. Let me try to put up a list. But it can't be anywhere near to complete.

On the first day, Tuesday the 3rd, there were some nice keynotes with pretty interesting stuff, although it was a little disappointing to hear that Oracle's focus for BPM 12c in the upcoming year or so is on quality. Of course that's a good thing, but I concluded that it means that on the functionality side it's going to be quite silent. And that is a pity, since they pushed very hard on ACM (Adaptive Case Management) the last 2 years. It means as well, I think, that ACM is not going to get into the Process Cloud for quite some time. And that also is 'not so cool', since I think ACM could be an important driver for the Process Cloud Services. Quite uncool, thus.

What was very cool was the demo on the Internet of Things and the Stream Explorer. Also nice was the presentation on API Catalog. Very cool, as always, was the presentation/demo on Mobile Application Framework and Mobile Cloud Services, by Grant Roberts.

About sub-zero-cool was the duo-hack&tation on the REST/JSON support of 12c together with the Mobile Application Framework, by Lucas Jellema and Luc Bors. Great job guys.

But the coolest things weren't amongst these. Not even the presentations we aren't allowed to blog and tweet about. Not even the workshops we did that were so secret, that we were driven to the Oracle office and I can't remember how many times we were pressed not to tweet and blog about it and how many times we were told to delete the VM's afterwards.

Not even the great venue of the Boscolo Luxury Residence:
 
No, to me, amongst the two coolest things was the run I did on Wednesday:



I started at the hotel, then ran right to the Donau, where there is this island that has an athletics area. I did a little round there and went back. It's about 7.5 to 8 km. Unfortunately I did not time it; I think I did it in about 40 to 45 minutes.
But really the coolest thing was the final run I did today:

The Pest-Buda-Pest run, where I crossed the Donau over the same bridge that I ran to on Wednesday, then ran along the Donau, passing the castle and other very beautiful buildings, and then crossed the Donau again two bridges further to the south. I did these 9 km in about 51 minutes. My average heart rate:

And the calories I compensated for at the dinners:


And here is my euphoric proof that I really did the run:
This was really the greatest moment of the week. In one and a half hours I'm off to the airport; nothing I see or do in the next hours can compete with this!

Thanks Jürgen for the great week; next year I will definitely bring my running shoes again.









Collaborate 2015

Jim Marion - Fri, 2015-03-06 18:16

Collaborate 2015 is just a few weeks away. I will be presenting PeopleSoft PeopleTools Developer: Tips and Techniques on Tuesday, 4/14/2015 at 3:15 PM in Surf B. If you are presenting a PeopleTools technical session, then please post a comment with your session title, date, time, and venue. I look forward to seeing you next month!

Blueprint for a post-LMS, Part 3

Michael Feldstein - Fri, 2015-03-06 16:03

By Michael Feldstein

In the first part of this series, I identified four design goals for a learning platform that supports conversation-based courses. In the second part, I brought up a use case of a kind of faculty professional development course that works as a distributed flip, based on our forthcoming e-Literate TV series on personalized learning. In the next two posts, I’m going to go into some aspects of the system design. But before I do that, I want to address a concern that some readers have raised. Pointing to my apparently infamous “Dammit, the LMS” post, they raise the question of whether I am guilty of a certain amount of techno-utopianism. Whether I’m assuming just building a new widget will solve a difficult social problem. And whether any system, even if it starts out relatively pure, will inevitably become just another LMS as the same social forces come into play.


I hope not. The core lesson of “Dammit, the LMS” is that platform innovations will not propagate unless the pedagogical changes that take advantage of those changes also propagate, and pedagogical changes will not propagate without changes in the institutional culture in which they are embedded. Given that context, the use case I proposed in part 2 of this series is every bit as important as the design goals in part 1 because it provides a mechanism by which we may influence the culture. This actually aligns well with the “use scale appropriately” design goal from part 1, which included this bit:

Right now, there is a lot of value to the individual teacher of being able to close the classroom door and work unobserved by others. I would like to both lower barriers to sharing and increase the incentives to do so. The right platform can help with that, although it’s very tricky. Learning Object Repositories, for example, have largely failed to be game changers in this regard, except within a handful of programs or schools that have made major efforts to drive adoption. One problem with repositories is that they demand work on the part of the faculty while providing little in the way of rewards for sharing. If we are going to overcome the cultural inhibitions around sharing, then we have to make the barrier as low as possible and the reward as high as possible.

When we get to part 4 of the series, I hope to show how the platform, pedagogy, and culture might co-evolve through a combination of curriculum design, learning design, and platform design, prepared for faculty as participants in a low-stakes environment. But before we get there, I have to first put some building blocks in place related to fostering and assessing educational conversation. That’s what I’m going to try to do in this post.

You may recall from part 1 of this series that trust, or reputation, has been the main proxy for expertise throughout most of human history. Credentials are a relatively new invention designed to solve the problem that person-to-person trust networks start to break down when population sizes get beyond a certain point. The question I raised was whether modern social networking platforms, combined with analytics, can revive something like the original trust network. LinkedIn is one example of such an effort. We want an approach that will enable us to identify expertise through trust networks based on expertise-relevant conversations of the type that might come up in a well facilitated discussion-based class.

It turns out that there is quite a bit of prior art in this area. Discussion board developers have been interested in ways to identify experts in the conversation for as long as internet-based discussions have been large enough that people need help figuring out who to pay attention to and who to ignore (and who to actively filter out). Keeping the signal-to-noise ratio high was a design goal, for example, in the early versions of the software developed to manage the Slashdot community in the late 1990s. (I suspect some of you have even earlier examples.) Since that design goal amounts to identifying community-recognized expertise and value in large-scale but authentic conversations (authentic in the sense that people are not participating because they were told to participate), it makes sense to draw on that accumulated experience in thinking through our design challenges. For our purposes, I’m going to look at Discourse, an open source discussion forum that was designed by some of the people who worked on the online community Stack Overflow.

Discourse has a number of features for scaling conversations that I won’t get into here, but their participant trust model is directly relevant. They base their model on one described by Amy Jo Kim in her book Community Building on the Web:

The progression, visitor > novice > regular > leader > elder, provides a good first approximation of levels for an expertise model. (The developers of Discourse change the names of the levels for their own purposes, but I’ll stick with the original labels here.) Achieving a higher level in Discourse unlocks certain privileges. For example, only leaders or elders can recategorize or rename discussion threads. This is mostly utilitarian, but it has an element of gamification to it. Your trust level is a badge certifying your achievements in the discussion community.

The model that Discourse currently uses for determining participant trust levels is pretty simple. For example, in order to get to the middle trust level, a participant must do the following:

  • visiting at least 15 days, not sequentially
  • casting at least 1 like
  • receiving at least 1 like
  • replying to at least 3 different topics
  • entering at least 20 topics
  • reading at least 100 posts
  • spending a total of 60 minutes reading posts

This is not terribly far from a very basic class participation grade. It is grade-like in the sense that it is a five-point evaluative scale, but it is simple like the most basic of participation grades in the sense that it mostly looks at quantity rather than quality of participation. The first hint of a difference is “receiving at least 1 like.” A “like” is essentially a micro-scale peer grade.

We could also imagine other, more sophisticated metrics that directly assess the degree to which a participant is considered to be a trusted community member. Here are a few examples:

  • The number of replies or quotes that a participant’s comments generate
  • The number of mentions the participant generates (in the @twitterhandle sense)
  • The number of either of the above from participants who have earned high trust levels
  • The number of “likes” a participant gets for posts in which they mention or quote another post
  • The breadth of the network of people with whom the participant converses
  • Discourse analysis of the language used in the participant’s post to see if they are being helpful or if they are asking clear questions (for example)

Some of these metrics use the trust network to evaluate expertise, e.g., “many participants think you said something smart here” or “trusted participants think you said something smart here.” But some directly measure actual competencies, e.g., the ability to find pre-existing information and quote it. You can combine these into a metric of the ability to find pre-existing relevant information and quote it appropriately by looking at posts that contain quotes and were liked by a number of participants or by trusted participants.

Think about these metrics as the basis for a grading system. Does the teacher want to reward students who show good teamwork and mentoring skills? Then she might increase the value of metrics like “post rated helpful by a participant with less trust” or “posts rated helpful by many participants.” If she wants to prioritize information finding skills, then she might increase the weight of appropriate quoting of relevant information. Note that, given a sufficiently rich conversation with a sufficiently rich set of metrics, there will be more than one way to climb the five-point scale. We are not measuring fine-grained knowledge competencies. Rather, we are holistically assessing the student’s capacity to be a valuable and contributing member of a knowledge-building community. There should be more than one way to get high marks at that. And again, these are high-order competencies that most employers value highly. They are just not broken down into itsy bitsy pass-or-fail knowledge chunks.
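
To make the weighting idea concrete, here is a minimal sketch of how a configurable, weighted trust score might be computed from the kinds of counts listed above. This is my own illustration, not anything Discourse actually ships; the metric names, weights, and level thresholds are all hypothetical.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a weighted participant trust score.
// Metric names and weights are hypothetical; a teacher could tune the weights
// to reward, say, mentoring over raw posting volume.
public class TrustScore {

    private final Map<String, Double> weights = new HashMap<>();

    public TrustScore() {
        weights.put("likesReceived", 1.0);
        weights.put("likesFromTrusted", 3.0);  // likes from high-trust participants count more
        weights.put("repliesGenerated", 2.0);
        weights.put("helpfulToNovices", 4.0);  // mentoring signal
        weights.put("quotesWithLikes", 2.5);   // finding and quoting relevant information
    }

    // Raise or lower the value of a metric, as a teacher might when
    // prioritizing particular competencies.
    public void setWeight(String metric, double weight) {
        weights.put(metric, weight);
    }

    // Combine raw counts into a single score; metrics without a weight are ignored.
    public double score(Map<String, Integer> counts) {
        double total = 0.0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            total += weights.getOrDefault(e.getKey(), 0.0) * e.getValue();
        }
        return total;
    }

    // Map the continuous score onto the five-point visitor..elder scale
    // using hypothetical thresholds.
    public int trustLevel(double score) {
        double[] thresholds = {5, 20, 60, 150};
        int level = 0;
        for (double t : thresholds) {
            if (score >= t) level++;
        }
        return level; // 0 = visitor ... 4 = elder
    }
}

Putting the score behind a small, configurable abstraction like this, rather than hard-coding it, is exactly what would later let new input metrics and different analytics back ends be plugged in.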

Unfortunately, Discourse doesn’t have this rich array of metrics or options for combining them. So one of the first things we would want to do in order to adapt it for our use case is abstract Discourse’s trust model, as well as all the possible inputs, using IMS Caliper (or something based on the current draft of it, anyway). There are a few reasons for this. First, we’d want to be able to add inputs as we think of them. For example, we might want to include how many people start using a tag that a participant has introduced. You don’t want to have to hard code every new parameter and every new way of weighing the parameters against them. Second, we’re eventually going to want to add other forms of input from other platforms (e.g., blog posts) that contribute to a participant’s expertise rating. So we need the ratings code in a form that is designed for extension. We need APIs. And finally, we’d want to design the system so that any vendor, open source, or home-grown analytics system could be plugged in to develop the expertise ratings based on the inputs.

Discourse also has integration with WordPress, which is interesting not so much because of WordPress itself but because the nature of the integration points toward more functionality that we can use, particularly for analytics purposes. The Discourse WordPress plugin can automatically spawn a discussion thread in Discourse for every new post in WordPress. This is interesting because it gives us a semantic connection between a discussion and a piece of (curricular) material. We automatically know what the discussion is “about.” It’s hard to get participants in a discussion to do a lot of tagging of their posts. But it’s a lot easier to get curricular materials tagged. If we know that a discussion is about a particular piece of content and we know details about the subjects or competencies that the content is about (and whether that content contains an explanation to be understood, a problem to be solved, or something else), then we can make some relatively good inferences about what it says about a person’s expertise when she makes several highly rated comments in discussions about content items that share the same competency or topic tag. Second, Discourse has the ability to publish the comments on the content back to the post. This is a capability that we’re going to file away for use in the next part of this series.

If we were to abstract the ratings system from Discourse, add an API that lets it take different variables (starting with various metadata about users and posts within Discourse), and add a pluggable analytics dashboard that let teachers and other participants experiment with different types of filters, we would have a reasonably rich environment for a cMOOC. It would support large-scale conversations that could be linked to specific pieces of curricular content (or not). It would help people find more helpful comments and more helpful commenters. It could begin to provide some fairly rich community-powered but analytics-enriched evaluations of both of these. And, in our particular use case, since we would be talking about analytics-enriched personalized learning products and strategies, having some sort of pluggable analytics that are not hidden by a black box could give participants more hands-on experience with how analytics can work in a class situation, what they do well, what they don’t do well, and how you should manage them as a teacher. There are some additional changes we’d need to make in order to bring the system up to snuff for traditional certification courses, but I’ll save those details for part 5.

The post Blueprint for a post-LMS, Part 3 appeared first on e-Literate.

Oracle compression, availability and licensing

Yann Neuhaus - Fri, 2015-03-06 15:03

Various methods of table compression have been introduced with each release. Some require a specific storage system. Some require specific options. Some are only for static data. And it's not always very clear, for the simple reason that their names have changed.

Names change for technical reasons (the ROW/COLUMN STORE precision introduced when columnar compression arrived) or for marketing reasons (COMPRESS FOR OLTP gave the idea that other - Exadata - compression levels may not be suited for OLTP).

Of course that brings a lot of ambiguity such as:

  • HCC is called 'COLUMN STORE' even if it has nothing to do with the In-Memory columns store
  • COMPRESS ADVANCED is only one part of Advanced Compression Option
  • EHCC (Exadata Hybrid Columnar Compression) is not only for Exadata
  • COMPRESS FOR OLTP is not called like that anymore, but is still the only compression suitable for OLTP
  • HCC Row-Level Locking is not for ROW STORE but for COLUMN STORE. It's suited for DML operations but is different from FOR OLTP. Anyway, COLUMN STORE compression can be transformed to ROW STORE compression during updates. And that locking feature is licensed with the Advanced Compression Option, and available in Exadata only...
  • When do you need ACO (Advanced Compression Option) or not?

Let's make it clear here.

A good Rule of Thumb

Denes Kubicek - Fri, 2015-03-06 09:03
I like the newest blog post from Joel Kallman and especially his rule of thumb:

"My rule of thumb - when you're editing code in a text area/code editor in the Application Builder of APEX and you see the scroll bar, it's time to consider putting it into a PL/SQL package."

I would go further and say that even the smallest application you create should have at least one package for all the PL/SQL code you write.
Categories: Development

Invoking REST Service from Oracle ACM Java Activity

Andrejus Baranovski - Fri, 2015-03-06 05:56
In this post I will show you how to call a REST service from an ACM Java activity class method. This could be useful in situations where you would like to have a programmatic ACM activity integrated with REST service data. We can access the ACM payload data from within the method overridden in the class implementing the Case Activity Callback.

An ACM activity implemented on top of a Java class contains the same properties and configuration as a regular one. You can define input/output data, execution properties, etc.:


Here is the Java code to invoke the REST service from the Java class implementing the ACM activity. I'm giving an example that parses the ACM payload and accesses the Last Name attribute. The REST service is invoked through a library packaged with FMW 12c:
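
(The original post showed this code as screenshots, which are not reproduced here. Below is a minimal sketch of what such a method could look like, assuming the Jersey 1.x client bundled with FMW 12c and an XML payload passed in as a DOM element; the class name, the LastName element, the service URL, and the method signature are all hypothetical.)

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.WebResource;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class CancelHotelBookingActivity {

    // Hypothetical helper: read the Last Name attribute from the ACM payload
    // and pass it to a REST service.
    public String cancelBooking(Element payload) {
        // Parse the ACM payload - the element name is illustrative
        String lastName = null;
        NodeList nodes = payload.getElementsByTagName("LastName");
        if (nodes.getLength() > 0) {
            lastName = nodes.item(0).getTextContent();
        }
        System.out.println("Last Name from payload: " + lastName);

        // Invoke the REST service using the Jersey client bundled with FMW 12c
        Client client = Client.create();
        WebResource resource =
            client.resource("http://localhost:7101/HotelBookings/resources/bookings");
        String result = resource.queryParam("lastName", lastName)
                                .accept("application/json")
                                .get(String.class);
        System.out.println("REST service call result: " + result);
        return result;
    }
}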


One important hint - you must place the ACM activity class under the SOA/src path (by default it goes into SOA/SCA-INF/src). This will ensure the class is compiled and executable at runtime:


The REST client library is referenced from the FMW 12c installation; there is no need to add any external libraries:


The sample application implements the Cancel Hotel Booking activity, the one based on the Java class. It executes and calls the method:


Here is the output - Last Name printed out from the payload and REST service call result:


Here you can download the sample application, which contains both the REST service and ACM examples - HotelBookingProcessing_v2.zip. REST service application implementation - ADF BC Range Paging and REST Pagination. ACM application implementation - Adaptive Case Management 12c and ADF Human Tasks.

API Catalog 12c is here

Darwin IT - Fri, 2015-03-06 02:27
You do create services, right? And your service portfolio grows and grows, doesn't it? Are they used by others? And do you know what services are already there in your organization? Oh, right, you built up a list using Excel, didn't you? Of course, that's what I would do.

Unless I had a license for Oracle Enterprise Repository; in that case I would already be allowed to use API Catalog.
 
Actually, I taught the AIA 11g training several times, and one of the parts is harvesting your services into Oracle Enterprise Repository, where you can do impact analysis on your services. It relates XSDs, WSDLs, and composites (both EBS and ABCS) and interrelates them with each other based on the AIA taxonomy. OER supports governance from the conception of services to their use in production. API Catalog does not support this whole lifecycle governance, but often that is too much for a customer anyway. I would say that to me, as a developer/architect, it would be. But API Catalog would help me build up a lightweight portfolio of my services at a customer.

I recognized OER bits and pieces, and API Catalog indeed uses them. It allows you to harvest your services from OSB and SOA Suite using (roughly) the same harvesting scripts known from AIA.

This week at the OPN FMW Partner Forum 2015, I learned that many things we know from AIA 11g are (going to be) built into the different products like SOA Suite, the API/OER suite, etc. You could compare that to how things went with Oracle Designer and Headstart for Designer.

Other products in the API suite are:
  • Oracle API Manager is an add-on to Service Bus: a developer portal for Oracle Service Bus.
  • Oracle API Gateway is an OEM-ed product targeted at the DMZ to manage access to your services/APIs; it also has some API management functionality in itself.
For now, please install API Catalog and harvest your services. Then you can copy and paste the descriptions from your Excel sheet or wiki into API Catalog and, at a later stage, reuse them in API Manager. Then throw away your Excel sheet...


The Ideal APEX Application (When & Where You Write Code)

Joel Kallman - Fri, 2015-03-06 01:23
The real title of this post should be "What I Really Meant to Say Was....".

Bob Rhubart of the Oracle Technology Network OTNArchBeat fame was kind enough to give me an opportunity to shoot a 2-minute Tech Tip.  I love Bob's goals for a 2-minute Tech Tip - has to be technical, can't be marketing fluff, and you have to deliver it in 120 seconds - no more, no less.  So I took some notes, practiced it out loud a couple times, and then I was ready.  But because I didn't want to sound like I was merely reading my notes, I ad-libbed a little and...crumbled under the clock.  I don't think I could have been more confusing and off the mark.  Oh...did I forget to mention that Bob doesn't like to do more than one take?



So if I could distill what I wished to convey into a few easily consumable points:
  1. Use the declarative features of APEX as much as possible; don't write code. If you have to choose between writing something in a report region with a new template, or hammering out the same result with a lovingly hand-crafted PL/SQL region, opt for the former. If you have a choice between a declarative condition (e.g., Item Not Null) or the equivalent PL/SQL expression, choose the declarative condition. It will be faster at execution time, it will be easier to manage and report upon, it will be easier to maintain, and it will put less pressure on your database with less parsing of PL/SQL.
  2. When you need to venture outside the declarative features of APEX and you need to write code in PL/SQL, be smart about it.  Define as much PL/SQL in statically compiled units (procedures, functions, packages) in the database and simply invoke them from your APEX application.  It will be easier to maintain (because it will simply be files that correspond to your PL/SQL procedures/functions/packages), it will be easier to version control, it will be easier to diff and promote, you can choose which PL/SQL optimization level you wish, you can natively compile, and it will be much more efficient on your database.
  3. Avoid huge sections of JavaScript and use Dynamic Actions wherever possible.  If you have the need for a lot of custom JavaScript, put it into a library and into a file, served by your Web Server (or, at a minimum, as a shared static file of your application).
  4. APEX is just a thin veneer over your database - architect your APEX applications as such. Let the Oracle Database do the heavy lifting. Your APEX application definition should have very little code. It should consist primarily of SQL queries and simple invocations of your underlying PL/SQL programs.

My rule of thumb - when you're editing code in a text area/code editor in the Application Builder of APEX and you see the scroll bar, it's time to consider putting it into a PL/SQL package.  And of course, if you catch yourself writing the same PL/SQL logic a second time, you should also consider putting it into a PL/SQL package.

There's more to come from the Oracle APEX team on @OTNArchBeat.

RMAN-06023: no backup or copy of datafile # found to restore

Michael Dinh - Thu, 2015-03-05 20:55

There's a great note from MOS, Checklist for an RMAN Restore (Doc ID 1554636.1), but how many of you review it before performing a restore?

If you don’t then you are as guilty as I am.

RMAN> restore database until time "TO_DATE('2015-MAR-04 19:53:54','YYYY-MON-DD HH24:MI:SS')" preview summary;

Starting restore at 2015-MAR-05 18:03:28
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=9 device type=DISK

datafile 5 will be created automatically during restore operation
datafile 6 will be created automatically during restore operation
datafile 7 will be created automatically during restore operation
datafile 8 will be created automatically during restore operation
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/05/2015 18:03:28
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 4 found to restore
RMAN-06023: no backup or copy of datafile 3 found to restore
RMAN-06023: no backup or copy of datafile 2 found to restore
RMAN-06023: no backup or copy of datafile 1 found to restore

Let's check the backups for datafiles 5, 6, 7, and 8 – they look good

RMAN> list backup of datafile 5,6,7,8 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
98      B  0  A DISK        2015-MAR-04 19:54:24 1       1       YES        LEVEL0
99      B  0  A DISK        2015-MAR-04 19:54:26 1       1       YES        LEVEL0
100     B  0  A DISK        2015-MAR-04 19:54:27 1       1       YES        LEVEL0
101     B  0  A DISK        2015-MAR-04 19:54:29 1       1       YES        LEVEL0
109     B  1  A DISK        2015-MAR-04 22:03:26 1       1       YES        LEVEL1
110     B  1  A DISK        2015-MAR-04 22:03:28 1       1       YES        LEVEL1
111     B  1  A DISK        2015-MAR-04 22:03:29 1       1       YES        LEVEL1
112     B  1  A DISK        2015-MAR-04 22:03:30 1       1       YES        LEVEL1

RMAN>
RMAN> list backup of datafile 1 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
94      B  0  A DISK        2015-MAR-04 19:54:13 1       1       YES        LEVEL0
105     B  1  A DISK        2015-MAR-04 22:02:57 1       1       YES        LEVEL1

RMAN>

The until time did not extend far enough to include the backup of datafile 1.

RMAN> restore database until time "TO_DATE('2015-MAR-04 19:54:30','YYYY-MON-DD HH24:MI:SS')" preview summary;

Starting restore at 2015-MAR-05 18:35:56
using channel ORA_DISK_1


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
94      B  0  A DISK        2015-MAR-04 19:54:12 1       1       YES        LEVEL0
96      B  0  A DISK        2015-MAR-04 19:54:19 1       1       YES        LEVEL0
95      B  0  A DISK        2015-MAR-04 19:54:16 1       1       YES        LEVEL0
97      B  0  A DISK        2015-MAR-04 19:54:22 1       1       YES        LEVEL0
98      B  0  A DISK        2015-MAR-04 19:54:23 1       1       YES        LEVEL0
99      B  0  A DISK        2015-MAR-04 19:54:26 1       1       YES        LEVEL0
100     B  0  A DISK        2015-MAR-04 19:54:27 1       1       YES        LEVEL0
101     B  0  A DISK        2015-MAR-04 19:54:29 1       1       YES        LEVEL0


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
102     B  A  A DISK        2015-MAR-04 19:54:30 1       1       YES        ARCHIVELOG
Media recovery start SCN is 342495
Recovery must be done beyond SCN 342508 to clear datafile fuzziness
Finished restore at 2015-MAR-05 18:35:56

Where on earth did I get the time "2015-MAR-04 19:53:54" to begin with?

I used the Low Time from the ARCHIVELOG backup, which was not sufficient for demonstration purposes.

RMAN> list backupset 102;


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
102     3.50K      DISK        00:00:00     2015-MAR-04 19:54:30
        BP Key: 102   Status: AVAILABLE  Compressed: YES  Tag: ARCHIVELOG
        Piece Name: /oradata/backup/arc_HAWK_3130551611_20150304_37q10orm_1_1

  List of Archived Logs in backup set 102
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    140     342477     2015-MAR-04 19:53:54 342514     2015-MAR-04 19:54:30

RMAN>

Instead, use the "Next Time", which is also the same as the "Completion Time" from list backup summary.

RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
90      B  A  A DISK        2015-MAR-04 19:53:58 1       1       YES        ARCHIVELOG
91      B  A  A DISK        2015-MAR-04 19:54:00 1       1       YES        ARCHIVELOG
92      B  A  A DISK        2015-MAR-04 19:54:02 1       1       YES        ARCHIVELOG
93      B  A  A DISK        2015-MAR-04 19:54:05 1       1       YES        ARCHIVELOG
94      B  0  A DISK        2015-MAR-04 19:54:13 1       1       YES        LEVEL0
95      B  0  A DISK        2015-MAR-04 19:54:16 1       1       YES        LEVEL0
96      B  0  A DISK        2015-MAR-04 19:54:20 1       1       YES        LEVEL0
97      B  0  A DISK        2015-MAR-04 19:54:22 1       1       YES        LEVEL0
98      B  0  A DISK        2015-MAR-04 19:54:24 1       1       YES        LEVEL0
99      B  0  A DISK        2015-MAR-04 19:54:26 1       1       YES        LEVEL0
100     B  0  A DISK        2015-MAR-04 19:54:27 1       1       YES        LEVEL0
101     B  0  A DISK        2015-MAR-04 19:54:29 1       1       YES        LEVEL0
102     B  A  A DISK        2015-MAR-04 19:54:30 1       1       YES        ARCHIVELOG
103     B  F  A DISK        2015-MAR-04 19:54:34 1       1       NO         TAG20150304T195432
104     B  A  A DISK        2015-MAR-04 22:02:49 1       1       YES        ARCHIVELOG
105     B  1  A DISK        2015-MAR-04 22:02:57 1       1       YES        LEVEL1
106     B  1  A DISK        2015-MAR-04 22:03:04 1       1       YES        LEVEL1
107     B  1  A DISK        2015-MAR-04 22:03:18 1       1       YES        LEVEL1
108     B  1  A DISK        2015-MAR-04 22:03:24 1       1       YES        LEVEL1
109     B  1  A DISK        2015-MAR-04 22:03:26 1       1       YES        LEVEL1
110     B  1  A DISK        2015-MAR-04 22:03:28 1       1       YES        LEVEL1
111     B  1  A DISK        2015-MAR-04 22:03:29 1       1       YES        LEVEL1
112     B  1  A DISK        2015-MAR-04 22:03:30 1       1       YES        LEVEL1
113     B  A  A DISK        2015-MAR-04 22:03:32 1       1       YES        ARCHIVELOG
114     B  F  A DISK        2015-MAR-04 22:03:35 1       1       NO         TAG20150304T220333

RMAN>

Social Insights from the #LeadOnCA Watermark Conference

Linda Fishman Hoyle - Thu, 2015-03-05 18:30

A Guest Post by Meg Bear (pictured left), Group Vice President, Oracle Social Cloud Platform

Tuesday, February 24 was an inspiring day of thoughtful discussion at the Lead On Silicon Valley Watermark Conference for Women. Over 5,000 people gathered to discuss the issues that matter the most to women in the workforce. I am proud that Oracle sponsored this fantastic event to support the development of women leaders.

These discussions didn't just happen in person – they carried over to the digital realm as well. Using the Oracle Social Cloud Social Relationship Manager (SRM) platform, we learned that over 6.6 million people were reached via #LeadOnCA. Hillary Clinton was the most talked-about speaker (1,922 mentions), and the main theme of the conference was "Women and Men," which encompassed messages about gender equality and the glass ceiling.

Oracle Social Cloud SRM also provided real time social media visualization of #LeadOnCA commentary across social networks.


Oracle Social Cloud’s data visualization of social media posts about #LeadOnCA

As people posted about #LeadOnCA on social networks, our advanced listening technology filtered these into beautiful visual displays throughout the conference. As they say, a picture is worth a thousand words, and our expertise allows participants to see what people are talking about in real time.

I’d like to thank Watermark for putting on this event and for their mission to increase representation of women in leadership roles. It is exciting to think of what the future holds for empowered women.

Reading VCAP_SERVICES Postgresql service credentials within Bluemix

Pas Apicella - Thu, 2015-03-05 17:03
The following shows how you can easily read the VCAP_SERVICES PostgreSQL credentials within your Java code using the Maven repo. This assumes you're using the ElephantSQL PostgreSQL service. A single connection won't be ideal, but for demo purposes it might just be all you need.

1. First add the Maven dependency as follows. This will add the WebSphere Application Server Liberty Profile to your project:
  
<dependency>
    <groupId>com.ibm.tools.target</groupId>
    <artifactId>was-liberty</artifactId>
    <version>LATEST</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>

2. In your code, something like the following gets you the connection details to make a JDBC connection within your Java code.
  
// Imports needed (add at the top of your class):
import java.net.URI;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;
// JSON4J classes shipped with WebSphere Liberty:
import com.ibm.json.java.JSON;
import com.ibm.json.java.JSONArray;
import com.ibm.json.java.JSONObject;

private static Connection getConnection() throws Exception
{
    Map<String, String> env = System.getenv();

    if (env.containsKey("VCAP_SERVICES")) {

        // VCAP_SERVICES is a JSON document describing all bound services
        JSONObject vcap = (JSONObject) JSON.parse(env.get("VCAP_SERVICES"));
        JSONObject service = null;

        // We don't know exactly what the service is called,
        // but it will contain "elephantsql"
        for (Object key : vcap.keySet()) {
            String keyStr = (String) key;
            if (keyStr.toLowerCase().contains("elephantsql")) {
                service = (JSONObject) ((JSONArray) vcap.get(keyStr)).get(0);
                break;
            }
        }

        if (service != null) {
            // Build a JDBC URL from the postgres:// URI in the credentials
            JSONObject creds = (JSONObject) service.get("credentials");
            URI uri = URI.create((String) creds.get("uri"));
            String url = "jdbc:postgresql://" + uri.getHost() + ":" +
                         uri.getPort() +
                         uri.getPath();
            String username = uri.getUserInfo().split(":")[0];
            String password = uri.getUserInfo().split(":")[1];
            return DriverManager.getConnection(url, username, password);
        }
    }

    throw new Exception("No ElephantSQL service URL found. Make sure you " +
                        "have bound the correct services to your app.");
}
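
For completeness, a hypothetical caller (a servlet or a test method, for example) could use getConnection() along these lines; the query is just a sanity check against the ElephantSQL instance, and Statement/ResultSet are standard java.sql classes:

// Hypothetical usage of the helper above
try (Connection conn = getConnection();
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT version()")) {
    while (rs.next()) {
        System.out.println("Connected to: " + rs.getString(1));
    }
}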

Categories: Fusion Middleware

Extracting Data from Cloud Apps

Dylan's BI Notes - Thu, 2015-03-05 15:20
I think it would be easier if the cloud application could be aware of the data integration needs and publish its interfaces proactively. Here are some basic requirements for applications that can be considered data-integration friendly: 1. Publish the object data model. This is required for source analysis. For example, here is […]
Categories: BI & Warehousing