Feed aggregator

Monitoring RMAN Operations

Michael Dinh - Sat, 2015-08-22 23:41

Just a reference to the source and my version of the script.

This example is from a restore, since OUTPUT rows are present.

Script to monitor RMAN Backup and Restore Operations (Doc ID 1487262.1)

$ sqlplus / as sysdba @mon_rman_restore.sql

SQL*Plus: Release 10.2.0.4.0 - Production on Sun Aug 23 01:14:31 2015

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


Session altered.


  SID SERIAL# USERNAME	 LOGON_TIME	 OSUSER     PROCESS	   SPID 	MACHINE        ST PROGRAM
----- ------- ---------- --------------- ---------- -------------- ------------ -------------- -- --------------------------------
 3290	   12 SYS	 22-08-15 20:36  oracle     31267	   31298	prod2      I  rman@prod2 (TNS V1-V3)
 3292	    9 SYS	 22-08-15 20:36  oracle     31267	   31297	prod2      I  rman@prod2 (TNS V1-V3)
 3289	   11 SYS	 22-08-15 20:36  oracle     31267	   31299	prod2      A  rman@prod2 (TNS V1-V3)
 3279	    1 SYS	 22-08-15 20:36  oracle     31267	   31301	prod2      A  rman@prod2 (TNS V1-V3)
 3285	   14 SYS	 22-08-15 20:36  oracle     31267	   31300	prod2      A  rman@prod2 (TNS V1-V3)
 3278	    1 SYS	 22-08-15 20:36  oracle     31267	   31302	prod2      A  rman@prod2 (TNS V1-V3)
 3277	    1 SYS	 22-08-15 20:36  oracle     31267	   31303	prod2      A  rman@prod2 (TNS V1-V3)
 3275	    1 SYS	 22-08-15 20:36  oracle     31267	   31305	prod2      A  rman@prod2 (TNS V1-V3)
 3276	    1 SYS	 22-08-15 20:36  oracle     31267	   31304	prod2      A  rman@prod2 (TNS V1-V3)
 3274	    1 SYS	 22-08-15 20:36  oracle     31267	   31306	prod2      A  rman@prod2 (TNS V1-V3)
 3273	    1 SYS	 22-08-15 20:36  oracle     31267	   31307	prod2      A  rman@prod2 (TNS V1-V3)
 3272	    1 SYS	 22-08-15 20:37  oracle     31267	   31308	prod2      A  rman@prod2 (TNS V1-V3)
 3270	    1 SYS	 22-08-15 20:37  oracle     31267	   31310	prod2      A  rman@prod2 (TNS V1-V3)
 3271	    1 SYS	 22-08-15 20:37  oracle     31267	   31309	prod2      A  rman@prod2 (TNS V1-V3)

14 rows selected.


  SID SERIAL# CHANNEL			 SEQ# EVENT			     STATE		SECS	  SOFAR  TOTALWORK % COMPLETE
----- ------- -------------------- ---------- ------------------------------ ------------ ---------- ---------- ---------- ----------
 3274	    1 rman channel=d08		54992 RMAN backup & recovery I/O     WAITING		   0	 342523    6815742	 5.03
 3275	    1 rman channel=d07		18384 RMAN backup & recovery I/O     WAITING		   0	 501503    7340030	 6.83
 3278	    1 rman channel=d04		48839 RMAN backup & recovery I/O     WAITING		   3	 502704    7340030	 6.85
 3272	    1 rman channel=d10		13502 RMAN backup & recovery I/O     WAITING		   3	 495473    6815742	 7.27
 3270	    1 rman channel=d12		39023 RMAN backup & recovery I/O     WAITING		   0	 535039    7340030	 7.29
 3271	    1 rman channel=d11		51018 RMAN backup & recovery I/O     WAITING		   0	 536703    7340030	 7.31
 3276	    1 rman channel=d06		  121 RMAN backup & recovery I/O     WAITING		   0	 503423    6815742	 7.39
 3277	    1 rman channel=d05		  276 RMAN backup & recovery I/O     WAITING		   3	 553855    7389182	  7.5
 3285	   14 rman channel=d02		56444 RMAN backup & recovery I/O     WAITING		   3	 611128    7340030	 8.33
 3289	   11 rman channel=d01		 2482 RMAN backup & recovery I/O     WAITING		   3	 846732    7340030	11.54
 3279	    1 rman channel=d03		 5065 RMAN backup & recovery I/O     WAITING		   3	 882685    7340030	12.03
 3273	    1 rman channel=d09		49115 RMAN backup & recovery I/O     WAITING		   3	1004287    7340030	13.68

12 rows selected.


  SID CHANNEL		   STATUS		OPEN_TIME	       SOFAR_MB   TOTAL_MB % COMPLETE TYPE
----- -------------------- -------------------- -------------------- ---------- ---------- ---------- ---------
FILENAME
----------------------------------------------------------------------------------------------------
 3270 rman channel=d12	   IN PROGRESS		23-AUG-2015 01:06:36	4180.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH1_9qqf6d01_49466_1.bus

 3275 rman channel=d07	   IN PROGRESS		23-AUG-2015 01:06:59	3918.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH3_a0qf6hcg_49472_1.bus

 3289 rman channel=d01	   IN PROGRESS		23-AUG-2015 01:02:00	6615.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH8_9pqf6crq_49465_1.bus

 3285 rman channel=d02	   IN PROGRESS		23-AUG-2015 01:05:46	4647.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH7_9rqf6d1e_49467_1.bus

 3279 rman channel=d03	   IN PROGRESS		23-AUG-2015 01:01:26	6895.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH6_9uqf6d3c_49470_1.bus

 3278 rman channel=d04	   IN PROGRESS		23-AUG-2015 01:07:02	3922.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH2_9sqf6d1t_49468_1.bus

 3277 rman channel=d05	   IN PROGRESS		23-AUG-2015 01:06:20	4327.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH5_9oqf6coh_49464_1.bus

 3276 rman channel=d06	   IN PROGRESS		23-AUG-2015 01:07:00	3933.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH8_a2qf6i9i_49474_1.bus

 3274 rman channel=d08	   IN PROGRESS		23-AUG-2015 01:09:24	2674.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH7_a3qf6ic7_49475_1.bus

 3273 rman channel=d09	   IN PROGRESS		23-AUG-2015 00:59:40	7846.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH3_9vqf6d3d_49471_1.bus

 3272 rman channel=d10	   IN PROGRESS		23-AUG-2015 01:07:07	3869.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH5_a4qf6idl_49476_1.bus

 3271 rman channel=d11	   IN PROGRESS		23-AUG-2015 01:06:35	4193.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH4_9tqf6d1v_49469_1.bus

 3273 rman channel=d09	   IN PROGRESS		23-AUG-2015 00:59:42	   3923      24576	15.96 OUTPUT
+DATA01/prod2/datafile/xxxdata01.305.888454781

 3279 rman channel=d03	   IN PROGRESS		23-AUG-2015 01:01:28	3447.88      24576	14.03 OUTPUT
+DATA01/prod2/datafile/xxxdata01.307.888454887

 3289 rman channel=d01	   IN PROGRESS		23-AUG-2015 01:02:02	   3308      24576	13.46 OUTPUT
+DATA01/prod2/datafile/xxxdata01.309.888454921

 3273 rman channel=d09	   IN PROGRESS		23-AUG-2015 00:59:41	3923.88   32767.98	11.97 OUTPUT
+DATA01/prod2/datafile/xxxidx01.304.888454781

 3279 rman channel=d03	   IN PROGRESS		23-AUG-2015 01:01:27	3448.88   32767.98	10.53 OUTPUT
+DATA01/prod2/datafile/xxxidx01.306.888454887

 3289 rman channel=d01	   IN PROGRESS		23-AUG-2015 01:02:01	   3308   32767.98	 10.1 OUTPUT
+DATA01/prod2/datafile/xxxidx01.308.888454921

 3285 rman channel=d02	   IN PROGRESS		23-AUG-2015 01:05:47	2387.38      24576	 9.71 OUTPUT
+DATA01/prod2/datafile/xxxdata01.311.888455147

 3276 rman channel=d06	   IN PROGRESS		23-AUG-2015 01:07:03	1966.88      20480	  9.6 OUTPUT
+DATA01/prod2/datafile/xxxdata01.449.867145931.tts

 3272 rman channel=d10	   IN PROGRESS		23-AUG-2015 01:07:08	1935.88      20480	 9.45 OUTPUT
+DATA01/prod2/datafile/xxxidx01.325.888455227

 3277 rman channel=d05	   IN PROGRESS		23-AUG-2015 01:06:22	2163.88      24960	 8.67 OUTPUT
+DATA01/prod2/datafile/xxxdata01.313.888455181

 3271 rman channel=d11	   IN PROGRESS		23-AUG-2015 01:06:36	2096.88      24576	 8.53 OUTPUT
+DATA01/prod2/datafile/xxxdata01.315.888455195

 3270 rman channel=d12	   IN PROGRESS		23-AUG-2015 01:06:38	   2090      24576	  8.5 OUTPUT
+DATA01/prod2/datafile/xxxidx01.317.888455197

 3278 rman channel=d04	   IN PROGRESS		23-AUG-2015 01:07:03	   1964      24576	 7.99 OUTPUT
+DATA01/prod2/datafile/xxxdata01.323.888455223

 3275 rman channel=d07	   IN PROGRESS		23-AUG-2015 01:07:01	1958.88      24576	 7.97 OUTPUT
+DATA01/prod2/datafile/xxxidx01.319.888455221

 3285 rman channel=d02	   IN PROGRESS		23-AUG-2015 01:05:47	   2388   32767.98	 7.29 OUTPUT
+DATA01/prod2/datafile/xxxdata01.310.888455147

 3277 rman channel=d05	   IN PROGRESS		23-AUG-2015 01:06:21	   2164   32767.98	  6.6 OUTPUT
+DATA01/prod2/datafile/xxxidx01.312.888455181

 3274 rman channel=d08	   IN PROGRESS		23-AUG-2015 01:09:25	1337.88      20480	 6.53 OUTPUT
+DATA01/prod2/datafile/xxxidx01.327.888455365

 3271 rman channel=d11	   IN PROGRESS		23-AUG-2015 01:06:35	   2097   32767.98	  6.4 OUTPUT
+DATA01/prod2/datafile/xxxdata01.314.888455195

 3270 rman channel=d12	   IN PROGRESS		23-AUG-2015 01:06:37	2090.88   32767.98	 6.38 OUTPUT
+DATA01/prod2/datafile/xxxidx01.316.888455197

 3276 rman channel=d06	   IN PROGRESS		23-AUG-2015 01:07:02	   1967   32767.98	    6 OUTPUT
+DATA01/prod2/datafile/xxxdata01.320.888455221

 3278 rman channel=d04	   IN PROGRESS		23-AUG-2015 01:07:03	1964.38   32767.98	 5.99 OUTPUT
+DATA01/prod2/datafile/xxxidx01.321.888455223

 3275 rman channel=d07	   IN PROGRESS		23-AUG-2015 01:07:00	   1960   32767.98	 5.98 OUTPUT
+DATA01/prod2/datafile/xxxidx01.318.888455219

 3272 rman channel=d10	   IN PROGRESS		23-AUG-2015 01:07:07	   1936   32767.98	 5.91 OUTPUT
+DATA01/prod2/datafile/xxxidx01.324.888455227

 3274 rman channel=d08	   IN PROGRESS		23-AUG-2015 01:09:25	1338.88   32767.98	 4.09 OUTPUT
+DATA01/prod2/datafile/xxxdata01.326.888455365


36 rows selected.

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$ 
SET linesize 160 trimspool ON pages 1000 
ALTER session SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
col sid FOR 9999 
col serial# FOR 99999 
col spid FOR 9999 
col username FOR a10 
col osuser FOR a10 
col status FOR a2 
col program FOR a32 
col logon_time FOR a15 
col module FOR a30 
col action FOR a35 
col process FOR a14 
col machine FOR a14
SELECT s.sid,
  s.serial#,
  s.username,
  TO_CHAR(s.logon_time,'DD-MM-RR hh24:mi') logon_time,
  s.osuser,
  s.process,
  p.spid,
  s.machine,
  SUBSTR(s.status,1,1) status,
  s.program
FROM v$session s, v$process p
WHERE s.program LIKE '%rman%'
AND s.paddr = p.addr (+)
ORDER BY s.logon_time, s.sid
;
col event FOR a30 
col channel FOR a20 
col state FOR a12
SELECT o.sid,
  o.serial#,
  client_info channel,
  seq#,
  event,
  state,
  seconds_in_wait secs,
  sofar,
  totalwork,
  ROUND(sofar/totalwork*100,2) "%COMPLETE"
FROM v$session_longops o, v$session s
WHERE program LIKE '%rman%'
AND opname NOT LIKE '%aggregate%'
AND o.sid       =s.sid
AND totalwork  != 0
AND sofar      != totalwork
AND wait_time   = 0
AND NOT action IS NULL
ORDER BY 10
;
col filename FOR a110 
col status FOR a20
SELECT a.sid,
  client_info channel,
  a.status,
  open_time,
  ROUND(BYTES      /1024/1024,2) SOFAR_MB,
  ROUND(total_bytes/1024/1024,2) TOTAL_MB,
  ROUND(BYTES      /TOTAL_BYTES*100,2) "%COMPLETE",
  a.type,
  filename
FROM v$backup_async_io a, v$session s
WHERE NOT a.STATUS IN ('UNKNOWN')
AND a.sid           =s.sid
AND a.status    != 'FINISHED'
ORDER BY 8, 7 DESC
;
EXIT
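
To watch a long restore over time, one option is to re-run the monitoring script from a shell loop and keep timestamped logs (a sketch; the 60-second interval and the log naming are arbitrary choices):

$ while true
> do
>   sqlplus -S / as sysdba @mon_rman_restore.sql > mon_rman_$(date +%Y%m%d_%H%M%S).log
>   sleep 60
> done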

Windows 10 Again

Tim Hall - Sat, 2015-08-22 11:55

I wrote a few months ago about having a play with Windows 10 (here).

I’m visiting family today, catching up on all the Windows desktop (and mobile phone) support that I missed while I was away.

I purposely postponed the Windows 10 update on the desktops before I went away, but now I’m back I did the first of them.

The update itself was fine, but it did take a long time. Nothing really to write home about.

I’ve installed the latest version of Classic Shell on the machine, so the experience is similar to what they had before, Windows 8.1 and Classic Shell, which felt like Windows 7. :)

I’ve also switched out their shortcuts from Edge (Spartan) to Internet Explorer 11. They already use a combination of IE, Firefox and Chrome, so I didn’t want to add another thing into the mix. Also, the nephews use the Java plugin for some web-based games, so it is easier to leave them with IE for the time being. Maybe I will introduce Edge later…

So all in all, the user experience is pretty much unchanged compared to what they had before. I guess I will see how many calls Captain Support gets over the coming weeks! :)

Cheers

Tim…

Bucharest's Oracle EPC Ambassadors Show 'n' Wow with Oracle Applications Cloud UX

Usable Apps - Sat, 2015-08-22 09:14

The Oracle EMEA Presales Center (EPC) team (@OracleEPC), based in Bucharest, Romania has delivered an awesome Oracle Applications Cloud User Experience (UX) day. 

UX Team in Readers' Cafe Bucharest

The team carries the message: Passion and enthusiasm for UX. In style.

The event was for local customers and partners to find out more about the Oracle Applications Cloud UX strategy, to see and hear how we innovate with UX, and to explore the Oracle Applications Cloud in a personal, hands-on way. I was honored to kick off the proceedings, being keen to gauge the local market reaction to the cloud and innovation, and to answer any questions.

 Still part of UX

Look mum, no UI! But there's still a UX! IoT and web services are part of our Cloud UX story.

An eager and curious audience in Bucharest's Metropolis Centre was treated to an immersive UX show about strategy, science, and storytelling: What's UX? What does UX mean for users and the business? Simplicity, Mobility, Extensibility, Glance, Scan, Commit, the Oracle Cloud as platform, wearables, IoT and web services, and PaaS4SaaS, it was all covered.

The Oracle EPC team members were the real enablers. Upstairs in the very funky Readers Café, these UX ambassadors brought the Oracle Applications Cloud UX message to life for customers in style, demoing "by walking around", and staffing stations for deeper discussions about the Oracle HCM Cloud, Oracle Sales Cloud, Oracle ERP Cloud, and PaaS4SaaS.

Oracle EPC team styling the Simplicity, Mobility, Extensibility UX message

The new wearables: Simplicity, Mobility, Extensibility.  

The Oracle EPC team let the UX do the talking by putting the Oracle Applications Cloud into the hands of customers, answering any questions as users enthusiastically swiped and tapped on Apple iPads to explore for themselves.

Oracle ERP Cloud demo in Readers Cafe Bucharest

Oracle Applications Cloud UX orchestration: Music to customer and partner ears.

Later, I was given a walking and video tour of the Oracle EPC operation in the fab Oracle building in Bucharest, co-ordinated by Oracle HCM Cloud and UX champ Vlad Babu (@vladbabu). I learned about the central work that EPC do so passionately across EMEA and APAC in providing content, context, and services to enable the Oracle sales effort: bid management, cloud and technology learning, making web solutions, demos and POC creation, video storytelling, rainmaking with insight, building mobile and PaaS4SaaS integration demos, and more.

I was blown away. To echo Oracle CEO Mark Hurd's (@markvhurd) words, "I didn’t know you did that. I didn’t know you had that."

I do now. And so do our customers.

Our Commitment to UX 

Be clear about what this event meant: It's a practical demonstration of Oracle's tremendous investment in user experience with great design, people, and technology, and a testament to global success through bringing it all together. It's a clear message about the UX team's commitment to putting boots on the ground in EMEA and other regions to listen, watch, and enable. That's why I'm here in EMEA.

Listening to the people who matter. And responding. That's UX.

UX is about listening to customers, partners, and users. It's about empathy. It's about being there.

The Bucharest event is just the beginning of great things to come and even greater things to happen for Oracle Applications Cloud customers and partners in EMEA and APAC. I'll be back. See you soon!

Be Prepared 

If you missed the event, check out our free Oracle Applications Cloud UX eBook, and find out how you can participate in the Oracle Cloud UX and future events in your area from the Usable Apps website. Keep up to date by following along on Twitter (@usableapps). 

Shout-outs

Thanks to Vlad Babu and Monica Costea for making it all happen, to the Oracle Applications UX team in the U.S. for their co-ordination, to Oracle EPC management for their support, and to Marcel Comendant for the images used on this page and on Twitter.

Presenting in Perth on 9 September and Adelaide on 11 September (Stage)

Richard Foote - Sat, 2015-08-22 05:54
For those of you lucky enough to live on the western half of Australia, I’ll be presenting at a couple of events in both Perth and Adelaide in the coming weeks. On Wednesday, 9th September 2015, I’ll be presenting on Oracle Database 12c New Features For DBAs (and Developers) at a “Let’s Talk Oracle” event […]
Categories: DBA Blogs

US Department of Education: Almost a good idea on ed tech evaluation

Michael Feldstein - Fri, 2015-08-21 16:53

By Phil Hill

Richard Culatta from the US Department of Education (DOE, ED, never sure of proper acronym) wrote a Medium post today describing a new ED initiative to evaluate ed tech app effectiveness.

As increasingly more apps and digital tools for education become available, families and teachers are rightly asking how they can know if an app actually lives up to the claims made by its creators. The field of educational technology changes rapidly with apps launched daily; app creators often claim that their technologies are effective when there is no high-quality evidence to support these claims. Every app sounds world-changing in its app store description, but how do we know if an app really makes a difference for teaching and learning?

He then describes the traditional one-shot studies of the past (control group, control variables, year or so of studies, get results) and notes:

This traditional approach is appropriate in many circumstances, but just does not work well in the rapidly changing world of educational technology for a variety of reasons.

The reasons?

  • Takes too long
  • Costs too much and can’t keep up
  • Not iterative
  • Different purpose

This last one is worth calling out in detail, as it underlies the assumptions behind this initiative.

Traditional research approaches are useful in demonstrating causal connections. Rapid cycle tech evaluations have a different purpose. Most school leaders, for example, don’t require absolute certainty that an app is the key factor for improving student achievement. Instead, they want to know if an app is likely to work with their students and teachers. If a tool’s use is limited to an after-school program, for example, the evaluation could be adjusted to meet this more targeted need in these cases. The collection of some evidence is better than no evidence and definitely better than an over-reliance on the opinions of a small group of peers or well-designed marketing materials.

The ED plans are good in terms of improving the ability to evaluate effectiveness in a manner that accounts for rapid technology evolution. The general idea of ED investing in the ability to provide better decision-making information is a good one. It's also very useful to see ED recognize the context of effectiveness claims.

The problem I see, and it could be a fatal one, is that ED is asking the wrong question for any technology or apps related to teaching and learning. [emphasis added]

The important questions to be asked of an app or tool are: does it work? with whom? and in what circumstances? Some tools work better with different populations; educators want to know if a study included students and schools similar to their own to know if the tool will likely work in their situations.

Ed tech apps by themselves do not "work" in terms of improving academic performance[1]. What "works" are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking if an app works is looking at the question inside out. The real question should be "Do pedagogical innovations or student support structures work, under which conditions, and which technology or apps support these innovations?".

Consider our e-Literate TV coverage of Middlebury College and one professor's independent discovery of flipped classroom methods.

How do you get valuable information if you ask the question "Does YouTube work to increase academic performance?" You can't. YouTube is a tool that the professor used. You could, however, get valuable information if you ask the question "Does the flipped classroom work for science courses, and which tools work in this context?" You could even ask "For the tools that support this flipped classroom usage, does the choice of tool (YouTube, Vimeo, etc.) correlate with changes in student success in the course?"

I could see that for certain studies, you could use the ED template and accomplish the same goal inside out (define the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup, asking over and over “does this app work” will prove fatal. You cannot put technology as the center of understanding academic performance.

I’ll post this as a comment to Richard’s Medium post as well. With a small change in the framing of the problem, this could be a valuable initiative from DOE.

Update: Changed DOE to ED for accuracy.

Update: This is not a full response, but Rolin Moe got Richard Culatta to respond to his tweet about this article.

Rolin Moe (@RMoeJo): Most important thing I have read all year – @philonedtech points out technocentric assumptions of US ED initiative

Richard Culatta (@rec54), August 25, 2015: @RMoeJo it's true. I believe research has to adapt to pace of tech or we will continue to make decisions about edu apps with no evidence

  1. And yes, they throw in a line that it is not just about academic performance but also administrative claims. But the whole setup is on teaching and learning usage, which is the primary focus of my comments.

The post US Department of Education: Almost a good idea on ed tech evaluation appeared first on e-Literate.

Oracle Applications Customer Connect Has a New Look

Linda Fishman Hoyle - Fri, 2015-08-21 15:23

A Guest Post by Katrine Haugerud (pictured left), Senior Director, Oracle Product Management

We are pleased to announce the new, more modern look for our Customer Connect Community. This is based on the Oracle User Interface design paradigm.

Here are some of the enhancements you may have already noticed.

Landing page (pre-login)

On the landing page you can access information that does not require you to log in. This includes Release Readiness resources, Help content, and more.

If you are an existing member, you can use the Sign In link at the top of the page or the Sign In button on the Welcome banner to log in. If you are not yet a community member, use the Register button to find out how you can request an account.


Homepage (post-login)

After logging in and getting to your homepage, you will notice that the overall navigation and structure of our Community have not changed much, but we have revitalized it with the new Oracle UI look and feel.

The banners are bigger and better to help you stay on top of important conferences, events, announcements, and other resources. We have also improved the Events Calendar so you can see at a glance what events are coming up and when, without having to navigate to the Events page. The Tab navigation is also streamlined to make it easier to retrieve the forums or content areas you are looking for!

We hope you’ll find this new look refreshing―and don’t forget to give us feedback by posting on the Site Feedback and Questions forum.

Remember this is Your Community!

X-Window Fun

Michael Dinh - Fri, 2015-08-21 14:43
When I ssh -X to another host, I am able to use X-Windows.
[dinh@ca01ts~]$ ssh -X dinh@192.168.1.137
dinh@192.168.1.137's password:
Last login: Fri Aug 21 11:55:47 2015 from 10.237.102.38
/usr/bin/xauth: creating new authority file /home/dinh/.Xauthority
[dinh@arrow ~]$ xclock
Warning: Missing charsets in String to FontSet conversion
^C
However, sudo to another user and X-Windows breaks.
[dinh@arrow ~]$ sudo su - oracle
[sudo] password for dinh:
[oracle@arrow ~]$ xclock
X11 connection rejected because of wrong authentication.
X connection to localhost:10.0 broken (explicit kill or server shutdown).
[oracle@arrow ~]$
Workaround: just list the xauth entry.
[dinh@arrow ~]$ xauth list
arrow/unix:10 MIT-MAGIC-COOKIE-1 8dfb6c468329ff0d5f5d962b094a82d3
Magic is here.
[dinh@arrow ~]$ xauth list | grep unix`echo $DISPLAY | cut -c10-12` > /tmp/xauth
[dinh@arrow ~]$ sudo su - oracle
[sudo] password for dinh:
[oracle@arrow ~]$ xauth add `cat /tmp/xauth`
xauth: creating new authority file /home/oracle/.Xauthority
[oracle@arrow ~]$ xclock
Warning: Missing charsets in String to FontSet conversion
^C
BINGO!
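
For repeated use, the display number can be derived from $DISPLAY instead of relying on the hard-coded cut -c10-12 (a sketch, assuming $DISPLAY looks like localhost:10.0):
[dinh@arrow ~]$ disp=${DISPLAY#*:}; disp=${disp%%.*}
[dinh@arrow ~]$ xauth list | grep "unix:${disp} " > /tmp/xauth
[dinh@arrow ~]$ sudo su - oracle
[oracle@arrow ~]$ xauth add `cat /tmp/xauth`
[oracle@arrow ~]$ xclock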

Formatting Rich Text Comments in BI Publisher

Javier Delgado - Fri, 2015-08-21 13:27
In recent years, BI Publisher has become the go-to tool for most reporting needs in PeopleSoft, replacing other technologies such as Crystal Reports and SQR in many scenarios.



The basic concept behind many reporting tools is separating data and presentation logic, so report designers can work in parallel with developers who know the data model in detail. BI Publisher is the PeopleSoft reporting tool that achieves this separation in the most thorough way. It does so by using XML as the information exchange format between the data generation and the report generator. Practically all systems have a way to export data in XML nowadays, and PeopleSoft is no exception, with options ranging from Connected Queries and File Layouts to PeopleCode-managed XMLDocs. From my point of view, this is a major advantage over other technologies like Crystal Reports, which in its PeopleSoft version could only extract data from PeopleSoft queries (if you needed to extract somewhat complex information, you would need to create an extraction program).

Other advantages of BI Publisher are the bursting capabilities (separating report output based on certain data fields) and the possibility to generate online reports without using Process Scheduler.
Formatting Rich Text Fields

I have to admit that I'm far from being a reporting expert, but in one of my latest projects I came across the need to develop several reports. One of these reports needed to display comments previously entered by users in rich text format. BI Publisher provides a function to do so:
<?html2fo:elementname?>
However, this function has a problem I was not able to solve (I admit there could be other solutions but I could not find anything as part of my research in a few forums): if you are building a report with certain style guidelines, the rich text would always be rendered using Arial 12pt as the base font. This resulted in a very funny looking report, with large fonts coexisting with smaller ones. Of course, there was the option to also use Arial 12pt as the report base font, but users are not always ready to change their aesthetic requirements.

In the end, we found out that the html2fo function would render the rich text using the inline style of the HTML elements. PeopleSoft normally does not set a font-family or font-weight (please check the note at the end of the document), so BI Publisher automatically applies the default style, which is Arial 12pt. However, if you set the style yourself, BI Publisher will honor it.

The following code shows an extract of how we set this style:

Function FormatRichTextForBIP(&text As string, &fontSize As string) Returns string;
   Local string &result;
   
   If All(&fontSize) Then
      &result = "<div style='font-family: verdana;font-size: " | &fontSize | ";'>" | &text | "</div>";
   Else
      &result = "<div style='font-family: verdana;'>" | &text | "</div>";
   End-If;
   
   Return &result;
End-Function;


(...)
&reportingRec.COMMENTS.Value = FormatRichTextForBIP(&inputRec.COMMENTS.Value, "12pt");
(...)

Note: This approach would not work if within the rich text the user has included different font sizes. This basic approach works when no font-family or font-weights are applied within the stored rich text HTML. In any case, this is a solvable issue, although it may require some more work. What you need to do is parse the rich text and replace the desired style clauses.

Virtual Technology Summit - Spotlight on Middleware

OTN TechBlog - Fri, 2015-08-21 13:00

Register now for OTN's new Virtual Technology Summit - September 16, 2015. Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

Middleware Spotlight: Middleware in the Cloud: PaaS Gets Real - The middleware track in the Fall 2015 edition of the OTN Virtual Technology Summit puts the spotlight on Oracle's Mobile Cloud Service (MCS), Process Cloud Service (PCS), and Java Cloud Service (JCS), three of the more than two dozen new services available on the Oracle Cloud Platform. In each of the three deep-dive sessions a recognized expert from the OTN community walks you through a technical how-to to demonstrate how you can use these PaaS services, and compares each to its on-premise counterparts. PaaS services loom large in the future for developers and architects, so if you're developing enterprise mobile applications, or working with Oracle BPM or WebLogic, you'll want to make sure these #OTNVTS sessions are on your calendar. There are three sessions in the Middleware track:

  • Mobile by Design: Developing with Mobile Cloud Service - In this session you will learn how to use Oracle Mobile Cloud Service to enable your enterprise applications and existing services for simpler, faster, more transparent and more secure mobile access. Learn how to build a robust Mobile Backend as a Service (MBaaS) using Oracle Mobile Suite, and learn how to customize that MBaaS using one of many thousands of available Node.js modules.

  • Getting Started with Oracle Process Cloud Service (PCS): Oracle's BPM on the Cloud - One of the great frustrations businesses face today is the time, money and effort it takes to solve their own business problems. Oracle Process Cloud Service (PCS) addresses this recurring issue by providing a Business Process Management (BPM) solution in the cloud. This session will show how PCS offers a more business-oriented approach to deliver automated process solutions. The demonstration will include tips on getting started, how processes are modeled, the use of business rule decisions, how user interfaces are designed and tested, the integration of services into the process, and how end users interact with the completed application.

  • Getting Started with Java Cloud Service: A WebLogic Administrator's View - Oracle's Java Cloud Service (JCS) is here -- not just the SaaS Extension flavor, but full-blown JCS: clustered WebLogic, Coherence (optionally) and Traffic Director, in the cloud yet under your full control. This session looks at full JCS from the WebLogic administrator's perspective: what it offers, how it's managed, and what sort of applications can be deployed. The session will include some serious digging around - logged into the virtual servers exploring how WebLogic is installed, how the domains are configured, and how to make the customizations needed for real-world applications.
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

68 Percent of Statistics Are Meaningless, Purdue University Edition

Michael Feldstein - Fri, 2015-08-21 10:13

By Michael Feldstein

I don’t know of any other way to put this. Purdue University is harming higher education by knowingly peddling questionable research for the purpose of institutional self-aggrandizement. Purdue leadership should issue a retraction and an apology.

We have covered Purdue’s Course Signals extensively here at e-Literate. It is a pioneering program, and evidence does suggest that it helps at-risk students pass courses. That said, Purdue came out with a later study that is suspect. The study in question claimed that students who used Course Signals in consecutive classes were more likely to see improved performance over time, even in courses that did not use the tool. Mike Caulfield looked at the results and had an intuition that the result of the study was actually caused by selection bias. Students who stuck around to take courses in consecutive semesters were more likely to…stick around and take more courses in consecutive semesters. So students who stuck around to take more Course Signals courses in consecutive semesters would, like their peers, be more likely to stick around and take more courses. Al Essa did a mathematical simulation and proved Mike’s intuition that Purdue’s results could be the result of selection bias. Mike wrote up a great explainer here on e-Literate that goes into all the details. If there was indeed a mistake in the research, it was almost certainly an honest one. Nevertheless, there was an obligation on Purdue’s part to re-examine the research in light of the new critique. After all, the school was getting positive press from the research and had licensed the platform to SunGard (now Ellucian). Furthermore, as a pioneering and high-profile foray into learning analytics, Course Signals was getting a lot of attention and influencing future research and product development in the field. We needed a clearer answer regarding the validity of the findings.

Despite our calls here on the blog, our efforts to contact Purdue directly, and the attention the issue got in the academic press, Purdue chose to remain silent on the issue. Our sources informed us at the time that Purdue leadership was aware of the controversy surrounding the study and made a decision not to respond. Keep in mind that the research was conducted by Purdue staff rather than faculty. As a result, those researchers did not have the cover of academic freedom and were not free to address the study on their own without first getting a green light from their employer. To make matters more complicated, none of the researchers on that project still work at Purdue. So the onus was on the institution to respond. They chose not to do so.

That was bad enough. Today it became clear that Purdue is actively promoting that questionable research. In a piece published today in Education Dive, Purdue’s “senior communications and marketing specialist” Steve Tally said

the initial five- and six-year raw data about the impact of Signals showed students who took at least two Signals-enabled courses had graduation rates that were 20% higher. Tally said the program is most effective in freshman and sophomore year classes.

“We’re changing students’ academic behaviors,” Tally said, “which is why the effect is so much stronger after two courses with Signals rather than one.” A second semester with Signals early on in students’ degree programs could set behaviors for the rest of their academic careers.

It's hard to read this as anything other than a reference to the study that Mike and Al challenged. Furthermore, the comment about "raw data" suggests that Purdue has made no effort to control for the selection bias in question. Two years after the study was challenged, they have not responded, have not looked into it, and continue to use it to promote the image of the university.

This is unconscionable. If an academic scholar behaved that way, she would be ostracized in her field. And if a big vendor like Pearson or Blackboard behaved that way, it would be broadly vilified in the academic press and academic community. Purdue needs to come clean. They need to defend the basis on which they continue to make claims about their program the same way a scholar applying for tenure at their institution would be expected to be responsible for her claims. Purdue’s peer institutions likewise need to hold the school accountable and let them know that their reputation for integrity and credibility is at stake.

The post 68 Percent of Statistics Are Meaningless, Purdue University Edition appeared first on e-Literate.

Are you ready to be a private cloud service provider?

Pythian Group - Thu, 2015-08-20 20:35

When defining what a cloud service is, we need to know that it is not a technology per se, but an architectural and operational paradigm. It is a self-service computing environment offering the ability to create, consume, and pay for services. In this architecture, computing resources are elastically supplied from a shared pool and charged based on metered use, and service catalogs provide a menu of options and service levels.

According to IDC, "total cloud IT infrastructure spending (server, disk storage, and ethernet switch) will grow by 21% year over year to $32 billion in 2015, accounting for approximately 33% of all IT infrastructure spending, which will be up from about 28% in 2014. Private cloud IT infrastructure spending will grow by 16% year over year to $12 billion, while public cloud IT infrastructure spending will grow by 25% in 2015 to $21 billion."

This means that the growth of this architecture (private, public, or hybrid) will not stop for the foreseeable future, so we first need to understand what drives it and how to translate your current architecture into a 3rd platform architecture.

Source: IDC 3rd Platform Study

The principles of a cloud architecture support the following necessary capabilities:

  • Resource pooling – The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
  • Rapid elasticity – Capacity can be provisioned and released quickly to match demand, so services adjust to each client's needs without any changes being apparent to the client or end user.
  • On-demand self-service – Provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Measured service – Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer.
  • Broad network access – Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms.
Business Drivers

Cloud will not be a true fit for everybody or for every case. We need to understand and determine the business drivers before we implement a cloud architecture.

  1. Increase our agility within our enterprise by providing:
    1. The ability to remove certain human procedures and have the end user be a self-service consumer
    2. A well-defined service catalog
    3. The capability to adapt to workload changes by provisioning or deprovisioning system resources
  2. Reduce enterprise costs by:
    1. Using shared system resources for our different applications and internal business divisions
    2. Determining the actual usage of system resources to show the benefit of our architecture
    3. Automating mundane and routine tasks
  3. Reduce enterprise risks by:
    1. Having greater control of the resources we have and how they are being used
    2. Having more unified security across our business
    3. Providing different levels of high availability to our enterprise
Service Catalog

The most critical part of defining any type of service is defining what it is that we are going to provide. Take McDonald's, for example. When we get to the counter, there is a well-defined catalog of the products we can order in that establishment: certain types of hamburgers and fast food. To put it more clearly, we can't go into McDonald's and order a pizza or Italian food, as that is not in their business or service catalog.

When defining our business enterprise service catalog, we need to define the What: what type of service we want to provide, what service levels we want to offer, what policies we are going to apply to the service, and what our capabilities are to provide it.

The business service catalog will translate into a technical enterprise catalog, defining every detail of how we will provide our business services. Here we need to define the How. How are we going to deploy the service? How are we going to provide the service levels? How are we going to apply the business policies and how are we going to manage our services?

As mentioned, this is not a technology but an architecture, and like any architecture, we must first understand where we are to know where we are going. So we first need to capture our organization's existing assets, skills, and processes so that we can then validate the future state of our architecture.

Meter, Charge, and Optimize

Business consumers want to know what they are consuming and what it costs, even if they don't actually want to pay for the service. Additionally, from an operational perspective, as different tenants start sharing the same piece of platform or infrastructure, there needs to be accountability for the usage, or else resources may be over-allocated. To mitigate this, we often meter the usage and optionally charge back [or show back] the tenants. Though an IT organization may not actually charge back its LOBs, this provides a transparent mechanism to budget resources and optimize the cloud platform on an ongoing basis.

Conclusion

These are just a few points to be aware of if you want to become a private cloud provider, but this is also helpful for any cloud architecture, as we need to understand what drives the change, what it is we are going to provide, and how we are going to deliver and measure the services that we are providing.

Note– This was originally published on rene-ace.com

The post Are you ready to be a private cloud service provider? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Git for Beginners

Pythian Group - Thu, 2015-08-20 20:04
git, simplified

Perhaps you’ve come across a great cache of publicly available SQL scripts that would be very useful in monitoring your databases, and these scripts are hosted on github.  Getting those scripts is as simple as clicking the Download button.

What if, however, you wish to contribute to the script library?

Or perhaps you would like to collaborate with coworkers on a project and want to host the files on github.

How do you get the files to your local server so that changes can be saved and pushed to the master repo?

Github is often the answer for that.

Some time ago github was probably considered by most IT folks as a tool for developers.  That has changed, as now git and github are popularly used to manage changes and allow collaboration on many kinds of projects that require file management.

If you are reading this blog, you are probably a DBA.  What better way to manage SQL scripts and allow others to contribute than with github?

Let's simplify the use of git and make it usable for casual users: DBAs who want to access a SQL repo without having to relearn git every time they need it.

The methods shown here are not the same ones that would be used by a team of developers. Typically developers would create a fork of a project, clone that fork, modify files, and then issue pull requests to the main repo owner. There would also be branches to the development tree, merging, etc.

For this demo, there will still be a need to fork your own copy of the repo, but that is as far as it will go at this time.

Read more about creating a fork: https://help.github.com/articles/fork-a-repo/

In the spirit of keeping this simple, there will be no branching in this demo; I’ll only show the basics required to contribute to a project.

With simplicity as a goal, the following steps are to be performed in this demo:

  • Create a copy (fork) of the main repo in github
  • Clone the repo to a work environment (my linux workstation)
  • Add a file to the local repo
  • Commit the changes and push to my forked repo on github
  • Issue a ‘pull request’ asking the main repo admin to include my changes

So while it will be necessary to create a fork of the project, we won’t be dealing with branches off the mainline.

 Assumptions:

– you already have a github account

– git is installed on your laptop, server, whatever.

Git Repos

Two users will be used for this demo: jkstill and pytest.

The following repos will be used.

Main Repo: https://github.com/jkstill/git-demo

Developer’s (you) repo: https://github.com/pytest/git-demo

The Main Repo is public, so you can run this demo using your own account if you like.

Fork the Repo

The following steps were performed by the pytest user on github.

Login to https://github.com/ using a browser.

Navigate to https://github.com/jkstill/git-demo

Click on the ‘Fork’ icon and follow any instructions; this should only take a few seconds.

After forking this repo as pytest, my browser was now directed to https://github.com/pytest/git-demo

ssh key setup

This only needs to be done once.

The following examples are for github user pytest.

The pytest account will be used to demonstrate the concepts. Later I will explain more about ssh usage as it pertains to github, but for now this is probably sufficient.

create a new ssh key for use with github
   ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa_pytest_github -C 'github'
add key to github account

While logged in to your github account in a browser, find the account settings icon.

The icon for account settings is in upper right corner of browser window.

Navigate to the Add SSH Key section.

account settings -> SSH Keys -> Add SSH Key

The key added will be the public key. So in this case, the contents of ~/.ssh/id_rsa_pytest_github.pub would be pasted into the text box that appears when the Add SSH Key button is pushed.

authenticate to github – the 'git@github.com' user is required

Make sure to authenticate the key with github.

   ssh -i ~/.ssh/id_rsa_pytest_github -t git@github.com

Here is a successful example:

> ssh -i ~/.ssh/id_rsa_github -t git@github.com

Host key fingerprint is DE:AD:BE:EF:2b:00:2b:36:63:1b:56:4d:eb:df:a6:42

+--[ RSA 2048]----+
|        .        |
|       + .       |
|      . B .      |
|     o * +       |
|    Y * S        |
|   + O o . .     |
|    .   Z . o    |
|       . . t     |
|        . .      |
+-----------------+
PTY allocation request failed on channel 0
Hi pytest! You've successfully authenticated, but GitHub does not provide shell access.
Clone the REPO

Now you are ready to clone the newly forked repo to your workstation. At this point, it is assumed that git is already installed in your development environment. If git is not installed, then you will need to install it. There are many resources available for whichever platform you are working on; installation will not be covered here.

The following command will clone your forked copy of the repo in the current directory:

> git clone https://github.com/pytest/git-demo
Cloning into 'git-demo'...
remote: Counting objects: 7, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 7 (delta 0), reused 7 (delta 0), pack-reused 0
Unpacking objects: 100% (7/7), done.
Checking connectivity... done

> cd git-demo
/home/jkstill/github/pytest/git-demo

> ls -la
total 20
drwxr-xr-x 3 jkstill dba 4096 Aug 18 15:45 .
drwxr-xr-x 4 jkstill dba 4096 Aug 18 15:45 ..
drwxr-xr-x 8 jkstill dba 4096 Aug 18 15:45 .git
-rw-r--r-- 1 jkstill dba  113 Aug 18 15:45 .gitignore
-rw-r--r-- 1 jkstill dba   47 Aug 18 15:45 README.md

Note: it is possible to use the ~/.ssh/config file to specify multiple ssh keys for use with git. This is useful when you may be using multiple accounts.

The command I used to do this operation is below as I do have multiple accounts:

  git clone git-as-pytest:pytest/git-demo

You can read more about this in a later section of this article.

Now cd to the new repo:  cd git-demo

There should be two files and a directory as seen in the previous example.

Modify or add a script

Now you can modify a script or add a new script and then commit to your local repo.

In this case, we will add a script fra_config.sql to the local repo.

-- fra_config.sql
-- show location and size of FRA

col fra_location format a30
col fra_size format a16

select fra_location, fra_size from (
   select name, value
   from v$parameter2
   where name like 'db_recovery_file_dest%'
)d
pivot ( max(value) for name in (
      'db_recovery_file_dest' as FRA_LOCATION,
      'db_recovery_file_dest_size' as FRA_SIZE
   )
)
/

Modified files can be seen with git status:

> git status
# On branch master
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#       fra_config.sql
nothing added to commit but untracked files present (use "git add" to track)

Now add the file to the list of those that should be tracked and check the status again:

> git add fra_config.sql


> git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#       new file:   fra_config.sql
#

As we are happy with the results, it is time to commit to the local repo:

> git commit -m 'Added the new file fra_config.sql'
[master 86eaf7c] Added the new file fra_config.sql
1 file changed, 18 insertions(+)
create mode 100644 fra_config.sql

> git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
#   (use "git push" to publish your local commits)
#
nothing to commit, working directory clean

Shouldn't we have put a date in that file? OK, a date and time were added, the changes to the file were displayed, the file was added to the list of those to commit, and the commit was made:

> git diff fra_config.sql | cat
diff --git a/fra_config.sql b/fra_config.sql
index 03b98fd..37c58ac 100644
--- a/fra_config.sql
+++ b/fra_config.sql
@@ -1,6 +1,7 @@

-- fra_config.sql
-- show location and size of FRA
+-- jkstill 2015-08-18 16:03:00 PDT

col fra_location format a30
col fra_size format a16

> git add fra_config.sql

> git commit -m 'added timestamp'
[master 83afd35] added timestamp
1 file changed, 1 insertion(+)

> git status
# On branch master
# Your branch is ahead of 'origin/master' by 2 commits.
#   (use "git push" to publish your local commits)
#
nothing to commit, working directory clean

Committing can and should be done frequently, as the commit affects only the local repository.

This makes it possible to see (and retrieve) incremental changes to a file as you work on it.

Once you are satisfied with all changes, push the changes to the repo. Notice that git status knows that 2 commits have been performed locally that are not seen in the master repository.

Configure the Remote

Before pushing to the main repo, there is a little more configuration work to do. While this method is not strictly necessary, it does simplify the use of git.

You will need to edit the file ~/.ssh/config; create it if it does not already exist.

Here’s my example file where a host git-as-pytest has been created. This host will be used to connect to github.

GSSAPIAuthentication no
VisualHostKey=yes

Host git-as-pytest
  HostName github.com
  User git
  IdentityFile /home/jkstill/.ssh/id_rsa_pytest_github
  IdentitiesOnly yes
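
You can verify the alias before going further (a quick check; -T disables pty allocation, and the greeting should match the earlier authentication test):

> ssh -T git-as-pytest
Hi pytest! You've successfully authenticated, but GitHub does not provide shell access.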

Now edit the file ./.git/config. Find the [remote "origin"] section and change the url as seen in this example.

[core]
  repositoryformatversion = 0
  filemode = true
  bare = false
  logallrefupdates = true
[remote "origin"]
  #url = https://github.com/pytest/git-demo
  url = git-as-pytest:pytest/git-demo.git
  fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
  remote = origin
  merge = refs/heads/master
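
If you prefer not to edit .git/config by hand, git remote set-url makes the same change (an equivalent alternative, using the ssh host alias defined above):

> git remote set-url origin git-as-pytest:pytest/git-demo.git
> git remote -v
origin  git-as-pytest:pytest/git-demo.git (fetch)
origin  git-as-pytest:pytest/git-demo.git (push)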

Now you should be able to push the changes to the master repo:

> git push origin master
Counting objects: 7, done.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 787 bytes | 0 bytes/s, done.
Total 6 (delta 2), reused 0 (delta 0)
To git-as-pytest:pytest/git-demo.git
788e5b1..83afd35  master -> master

The changes to your files can be seen in your repo on github.com

Issue a PULL request

Once you think the file or files are ready to be included in the master repository, you will issue a pull request to the admin of the master repo.

The repo admin can then pull the changes and examine them. Once it has been determined that the changes can be made to the master repo, the admin will merge them in.

Issuing the pull request

View the repo in your browser, press the 'pull request' icon and follow the instructions. This action will cause an email to be sent to the repo admin with a URL to view the pull request. The admin can then examine and test the changes, and merge the pull request (if appropriate) into the mainline.

If the pull request results in your changes being merged, github will send you an email.

After the Pull request has been merged

Now other users can get the updates with the following commands:

  git pull
  git status

git pull will fetch the new commits from github and merge them into the local repo; git status then confirms the working tree is up to date.

As there is the possibility of overwriting files you are working on, be sure this is the right thing to do.
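
If you do have local changes in progress, one way to protect them before pulling is git stash (a sketch; git stash pop can still surface merge conflicts that need resolving):

  git stash      # shelve uncommitted local changes
  git pull       # merge updates from github
  git stash pop  # re-apply the shelved changes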

Now that you have the basics, you can get started.

Please feel free to use the https://github.com/jkstill/git-demo repo to follow along with the steps shown here.

The post Git for Beginners appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Messed-Up App of the Day: Crux CCH-01W

Cary Millsap - Thu, 2015-08-20 16:26
Today’s Messed-Up App of the Day is the “Crux CCH-01W rear-view camera for select 2007-up Jeep Wrangler models.”

A rear-view camera is an especially good idea in the Jeep Wrangler, because it is very difficult to see behind the vehicle. The rear seat headrests, the wiper motor housing, the spare tire, and the center brake light all conspire to obstruct much of what little view the window had given you to begin with.
The view is so bad that it’s easy to, for example, accidentally demolish a mailbox.
I chose the Crux CCH-01W because it is purpose-built for our 2012 Jeep Wrangler. It snaps right into the license plate frame. I liked that. It had 4.5 out of 5.0 stars in four reviews at crutchfield.com, my favorite place to buy stuff like this. I liked that, too.
But I do not like the Crux CCH-01W. I returned it because our Jeep will be safer without this camera than with it. Here’s the story.
My installation process was probably pretty normal. I had never done a project like this before, so it took me longer than it should have. Crux doesn’t include any installation instructions with the camera, which is a little frustrating, but I knew that from the reviews. There is a lot of help online, and Crutchfield helped as much as I needed. After all the work of installing it, it was a huge thrill when I first shifted into Reverse and—voilà!—a picture appeared in my dashboard.
However, that was where the happiness would end. When I tried to use the camera, I noticed right away that the red, yellow, and green grid lines that the camera superimposes upon its picture didn't make any sense. The grid lines showed that I was going to collide with the vehicle on my left that clearly wasn't in jeopardy (an inconvenient false positive), and they showed that I was all-clear on the right when in fact I was about to ram into my garage door facing (a dangerous false negative).

The problem is that the grid lines are offset about two feet to the left. Of course, this is because the camera is about two feet to the left of the vehicle’s centerline. It’s above the license plate, below the left-hand tail light.

So then, to use these grid lines, you have to shift them in your mind about two feet to the right. In your mind. There’s no way to adjust them on the screen. Since this camera is designed exclusively for the left-hand corner of a 2007-up Jeep Wrangler, shouldn’t the designers have adjusted the location of the grid lines to compensate?
So, let’s recap. The safety device I bought to relieve driver workload and improve safety will, unfortunately, increase driver workload and degrade safety.
That’s bad enough, but it doesn’t end there. There is a far worse problem than just the misalignment of the grid lines.
Here is a photo of my little girl standing a few feet behind the Jeep, directly behind the right rear wheel:

And here is what the camera shows the driver while she is standing there:
No way am I keeping that camera on the vehicle.
It’s easy to understand why it happens. The camera, which has a 120° viewing angle, is located so far off the vehicle centerline that it creates a blind spot behind the right-hand corner of the vehicle and grid lines that don’t make sense.
The Crux CCH-01W is one of those products that seems like nobody who designed it ever actually had to use it. I think it should never have been released.
As I was shopping for this project, my son and a local professional installer advised me to buy a camera that mounted on the vehicle centerline instead of this one. I didn’t take their advice because the reviews for the CCH-01W were good, and the price was $170 less. Fortunately, Crutchfield has a generous return policy, and the center-mounting 170°-view replacement camera that I’ll install this weekend has arrived today.
I’ve learned a lot. The second installation will go much more quickly than the first.

Deploying OHS on NFS? Think again: Locking/Performance Issue

Online Apps DBA - Thu, 2015-08-20 16:03


This post is related to an Oracle HTTP Server (OHS) performance issue from our Oracle Fusion Middleware Training (next batch starts on 30th Aug, 2015), where we cover OHS on Day 2 (installation, configuration, high availability, troubleshooting, integrating OHS with WebLogic as a proxy, etc.).

One of the trainees from our previous batch asked about the common issues that an Oracle Fusion Middleware Administrator encounters. We recently implemented Oracle Fusion Middleware, including OHS, for one of our customers, where the file system on the OHS server was presented from SAN as an NFS mount.

Error message while accessing the OHS instance in Enterprise Manager: "Failed to invoke operation Load on MBean"


After checking the OHS log file ($ORACLE_INSTANCE/diagnostics/logs/OHS/ohs1), I found the exact error behind the issue: "apr_proc_mutex_lock failed. Attempting to shutdown process gracefully"

[2015-05-06T08:18:23.8610+00:00] [OHS] [INCIDENT_ERROR:10] [OHS-9999] [core.c] [host_id: example.company.com] [host_addr: 192.168.1.141] [pid: 22680] [tid: 140414451361536] [user: oracle] [VirtualHost: main] (5)Input/output error:  apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.

[2015-05-06T08:18:23.8628+00:00] [OHS] [INCIDENT_ERROR:10] [OHS-9999] [core.c] [host_id: example.company.com] [host_addr: 192.168.1.141] [pid: 22679] [tid: 140414451361536] [user: oracle] [VirtualHost: main] (5)Input/output error:  apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.

[2015-05-06T08:18:23.8863+00:00] [OHS] [INCIDENT_ERROR:10] [OHS-9999] [core.c] [host_id: example.company.com] [host_addr: 192.168.1.141] [pid: 22681] [tid: 140414451361536] [user: oracle] [VirtualHost: main] (5)Input/output error:  apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.

[2015-05-06T08:18:23.8894+00:00] [OHS] [INCIDENT_ERROR:10] [OHS-9999] [core.c] [host_id: example.company.com] [host_addr: 192.168.1.141] [pid: 22678] [tid: 140414451361536] [user: oracle] [VirtualHost: main] (5)Input/output error:  apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.

[2015-05-06T08:18:24.8024+00:00] [OHS] [INCIDENT_ERROR:10] [OHS-9999] [core.c] [host_id: example.company.com] [host_addr: 192.168.1.141] [pid: 22867] [tid: 140414451361536] [user: oracle] [VirtualHost: main] (5)Input/output error:  apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.

[2015-05-06T08:18:24.8506+00:00] [OHS] [INCIDENT_ERROR:10] [OHS-9999] [core.c] [host_id: example.company.com] [host_addr: 192.168.1.141] [pid: 22872] [tid: 140414451361536] [user: oracle] [VirtualHost: main] (5)Input/output error:  apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.


Root Cause: The http.lock file (located by default at $ORACLE_INSTANCE/diagnostics/logs/OHS/ohs1) was on the NFS-mounted file system, causing a locking/performance issue which in turn caused OHS to fail. It is recommended to keep the OHS lock file on local disk, not on shared storage. If $ORACLE_INSTANCE is on an NFS mount, the above error will occur, as any delay in response from OHS will force the OPMN process to restart OHS.


Solution: Change the http.lock file location to local disk (like /tmp or any other place on local disk that is not NFS-mounted) and update that location under mpm_prefork_module and mpm_worker_module in the httpd.conf file (located in $ORACLE_INSTANCE/config/OHS/ohs1) as below:

<IfModule mpm_prefork_module>
StartServers         5
MinSpareServers      5
MaxSpareServers     10
MaxClients         150
MaxRequestsPerChild  0
AcceptMutex fcntl
LockFile "<LOCAL_DISK_PATH>"
</IfModule>

<IfModule mpm_worker_module>
StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
AcceptMutex fcntl
LockFile "<LOCAL_DISK_PATH>"
</IfModule>
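
After updating httpd.conf, restart OHS through OPMN so the new LockFile location takes effect, and confirm the lock files are now created on local disk. A minimal sketch, assuming the component is named ohs1 and /tmp/ohs_locks was chosen as the local path:

  $ORACLE_INSTANCE/bin/opmnctl restartproc ias-component=ohs1   # restart just the OHS component

  ls -l /tmp/ohs_locks*   # lock files should now appear here
  df -h /tmp              # should report a local filesystem, not an NFS export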


Reference:

  • My Oracle Support Note 1460851.1 (NFS Locking Issues)


Oracle Fusion Middleware Interview Questions

Test your knowledge of Oracle Fusion Middleware, or use these questions to find your dream job as an Oracle Administrator.

Click here to download Oracle FMW Interview Questions


If you want to learn more about Oracle Fusion Middleware products, register for our Oracle Fusion Middleware Training (register before 24th August and get a discount of 100 USD; apply code F1OFF at time of checkout).

Note: We provide a 100% money-back guarantee, so in the unlikely case that you are not happy after 2 sessions, just drop us a mail before the third session and we’ll refund your money in full.

The post Deploying OHS on NFS? Think again: Locking/Performance Issue appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Virtual Technology Summit - Spotlight on Operating Systems, Virtualization Technologies and Hardware

OTN TechBlog - Thu, 2015-08-20 13:00

Register now for OTN's new Virtual Technology Summit - September 16, 2015. Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

Operating Systems, Virtualization Technologies and Hardware Spotlight: Implementing Your Cloud - Most IT organizations have roadmaps for cloud infrastructure. Most vendors have some sort of story as to how they can get you to the cloud. Oracle specifically has committed itself to the idea that you can run your applications identically in our public cloud and your private cloud. The question is: How? In this track we'll roll up our sleeves and show you how to implement your clouds using Oracle hardware, software, and best practices. There are four sessions in the Systems track:

  • Best Practices Building Efficient and Secure Cloud Infrastructure - Learn how to create virtual machines (VMs), deploy VMs using templates, rapidly migrate those VMs to the cloud, and deploy Oracle Applications & Databases in minutes on a flexible, secure, private cloud infrastructure. Additionally, experience Oracle's Enterprise Cloud Infrastructure with Oracle Enterprise Manager Cloud Control to automatically provision both operating systems and Oracle Databases in a DBaaS model.

  • What's New in Solaris 11.3 - Oracle Solaris 11 is a complete and secure cloud platform. With best-of-breed technologies for compute, networking and storage, learn how Oracle Solaris can help transform your IT operations to move to the cloud and make it simple to do. In this session we will cover some of the latest innovations engineered in Oracle Solaris 11.3 to manage a secure and integrated, large-scale cloud environment.

  • Optimizing NAS Storage for Secure Cloud Infrastructures - The rapid expansion of secure and reliable cloud capabilities is fundamentally changing IT operations. Over time, an increasing percentage of your data will reside off premise in a public or hybrid cloud. You won't just need fast and efficient storage to accommodate ever increasing information growth. You'll need highly secure storage to assure your critical data is well protected, independent of where it resides. This presentation covers the unique characteristics of Oracle's ZFS Storage Appliance and its cache-centric hybrid architecture, ideally suited for cloud applications, providing fast, efficient and secure data storage for public, private and hybrid cloud infrastructure so you can migrate toward the cloud with confidence.

  • Automate your Oracle Solaris 11 and Linux Deployments with Puppet - Puppet is a popular open source configuration management tool that is used by many organizations to automate the setup and configuration of servers and virtual machines. Solaris 11.2 includes native support for Puppet and extends the resources that can be managed to Solaris-specific things like zones and ZFS. This presentation will give system administrators who are new to Puppet an introduction and a way to get started with automating the configuration of Oracle Linux and Oracle Solaris systems, talk about how Puppet integrates with version control and other projects, and look at the Solaris-specific resource types.
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

Difference Between Oracle’s Table and Mongo’s Collection

Pythian Group - Thu, 2015-08-20 11:44

Roughly speaking, the notion of ‘Tables’ in Oracle is similar to MongoDB’s ‘Collections’. They are NOT identical though. Before we examine the differences between Oracle’s Table and MongoDB’s Collection, let’s see what Table in Oracle and Collection in MongoDB are.

Table in Oracle:

A table in Oracle is made up of a fixed number of columns for any number of rows. Every row in a table has the same columns.

Collection in MongoDB:

A collection in MongoDB is made up of documents. The concept of Documents is similar to rows in a table, but it’s not identical. A document can have its own unique set of columns. In MongoDB, columns are called fields.

So in MongoDB, fields are defined at the document level (or we can say in Oracle lingo that columns are defined at the row level), whereas in Oracle the columns are defined at the table level.

That is the main difference between Oracle’s table and Mongo’s collection. There are other subtle differences as well: for example, collections are schema-less, whereas a table in Oracle must belong to a schema.

Example of an Oracle table:

EMP

EMPID   NAME    CITY
1       Smith   Karachi
2       Adam    Lahore
3       Jim     Wah Cantt
4       Ken     Quetta

CREATE TABLE EMP (EMPID NUMBER(5), NAME VARCHAR2(20), CITY VARCHAR2(25));

INSERT INTO EMP VALUES (1,'SMITH','KARACHI');
INSERT INTO EMP VALUES (2,'ADAM','LAHORE');
INSERT INTO EMP VALUES (3,'JIM','WAH CANTT');
INSERT INTO EMP VALUES (4,'KEN','QUETTA');

SELECT * FROM EMP;

In the above example, the table is ‘EMP’, with 4 rows. All 4 rows have the same fixed set of columns: EMPID, NAME, and CITY.

Example of a MongoDB Collection:

db.EMP.insert({EMPID: '1', NAME: 'Smith', CITY: 'Karachi'})
db.EMP.insert({EMPID: '2', NAME: 'Adam', CITY: 'Wah Cantt', Designation: 'CTO'})
db.EMP.insert({EMPID: '3', NAME: 'Jim', Designation: 'Technician'})
db.EMP.insert({EMPID: '4', NAME: 'Ken'})

> db.EMP.find()

{ "_id" : ObjectId("55d44757283d7d463aec4cc1"), "EMPID" : "1", "NAME" : "Smith", "CITY" : "Karachi" }
{ "_id" : ObjectId("55d44757283d7d463aec4cc2"), "EMPID" : "2", "NAME" : "Adam", "CITY" : "Wah Cantt", "Designation" : "CTO" }
{ "_id" : ObjectId("55d44757283d7d463aec4cc3"), "EMPID" : "3", "NAME" : "Jim", "Designation" : "Technician" }
{ "_id" : ObjectId("55d44757283d7d463aec4cc4"), "EMPID" : "4", "NAME" : "Ken" }

In the above example, we first inserted 4 documents into the collection ‘EMP’. Notice that the 4 documents have different numbers of fields. The db.EMP.find() command displays these documents.
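
Because fields belong to individual documents, you can also query on a field that only some documents carry, something with no direct equivalent for a fixed-column Oracle table. As a small illustration on the same EMP collection, MongoDB's $exists operator returns only the documents that have a Designation field:

> db.EMP.find({Designation: {$exists: true}})

This would match only the 'Adam' and 'Jim' documents above.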

Hope that helps.

The post Difference Between Oracle’s Table and Mongo’s Collection appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Challenge Of Student Transition Between Active And Passive Learning Models

Michael Feldstein - Thu, 2015-08-20 08:59

By Phil Hill

Last week the Hechinger Report profiled an innovative charter school in San Diego called High Tech High (insert surfer jokes here) that follows an active, project-based learning (PBL) model. The school doesn’t use textbooks, and it doesn’t base the curriculum on testing. The question the article asks is whether this approach prepares students for college.

As a result, for [former HTH student Grace] Shefcik, college – with its large classes and lecture-based materials – came as a bit of a shock at first. At the University of California, Santa Cruz, she is one of more than 15,000 undergraduates, her assignments now usually consist of essays and exams. At High Tech High, Shefcik had just 127 students in her graduating class, allowing her to form close relationships with peers and teachers.

The premise of the article is that PBL prepares students for life but maybe not for college. Grace described the big difference between high school, with constant feedback and encouragement, and college, where you rarely get feedback. Other students describe their frustration in not knowing how to study for tests once they get to college.

After a recent screening of “Most Likely to Succeed” at the New Schools Summit in Burlingame, California, High Tech High CEO Larry Rosenstock told an audience, “We actually find that many of our students find themselves bored when they get to college.”

Teachers and administrators at High Tech High don’t tell many stories about their students reporting boredom, but they do hear about experiences like Shefcik’s. They say students find themselves overwhelmed by the different environment at college and have a difficult time making the transition to lecture-hall learning.

Students do tend to adjust, but this process can take longer than it does for traditionally-taught students.

But sometimes it takes High Tech High graduates a semester or a year at college or university before they feel like they’ve cracked the code.

“I had a harder time transitioning than other students,” said Mara Jacobs, a High Tech High graduate who just finished her second year at Cornell University in Ithaca, New York, and is the daughter of major donors Gary and Jerri-Ann Jacobs. “I couldn’t just do the work if I wasn’t bought into how I was being taught.”

My problem with the article is that it makes the assumption that all colleges outside of small private institutions base their entire curriculum on passive lectures and testing, not acknowledging many of the innovations and changes coming from these same colleges. We have profiled personalized learning approaches in our e-Literate TV series, including a PBL approach at Arizona State University for the Habitable Worlds course (see this episode for in-depth coverage).

Nevertheless, the general point remains that it is difficult for students to transition between active learning models and passive lecture and test models. The Hechinger Report calls out the example of K-12 students moving into college, but we talked to faculty and staff at UC Davis who saw the flip side of that coin – students used to passive learning at high school trying to adapt to an active learning science course in college.

Phil Hill: While the team at UC Davis is seeing some encouraging initial results from their course redesign, these changes are not easy. In our discussions, the faculty and staff provided insight into the primary barriers that they face when looking to build on their success and get other faculty members to redesign their courses.

Catherine Uvarov: Well, I have had some very interesting experiences with students. Last quarter, my class was mostly incoming freshman, and it’s like their very first quarter at UC Davis, so they have never taken a UC Davis class before. My class is pretty different from either classes they’ve taken in high school or other classes that they were still taking in their first quarter at Davis because these changes are not as widespread as they could be.

Some students push back at first, and they’re like, “Oh, my, gosh, I have to read the book. Oh, my, gosh, I have to open the textbook. Oh, my, gosh, I have to do homework every week. I have to do homework every day.” They kind of freaked out a little bit in the beginning, but as the quarter progressed, they realized that they are capable of doing this type of learning style.

There’s more info in both the Hechinger Report article and the ASU and UC Davis case studies, but taken together they point out the challenges students face when transitioning between pedagogical models. These transitions can occur between high school and college, but more often they occur from course to course. Active learning and PBL are not just minor changes away from lecture-and-test models: they require a new mindset and set of habits from students.

The post Challenge Of Student Transition Between Active And Passive Learning Models appeared first on e-Literate.

Authorized REST request to MCS with SoapUI

Darwin IT - Thu, 2015-08-20 07:38
In my previous post I explained how to do a REST request to a Mobile Cloud Service API using unauthorized access. To do so you need to add an HTTP header property using a Base64-encoded key. But how do you do that for authorized access? Using Postman you should be able to add HTTP Basic authentication, provide the access details and update the request. In SoapUI, it's more or less the same trick: just provide the HTTP Basic Authentication details, and SoapUI does the encoding for you:
Now if you run this and open up the SoapUI log, you'll see log entries with the message that is sent over the line to MCS.
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "GET /mobile/custom/incidentreport_M10/incidents/?contact=Lynn HTTP/1.1[\r][\n]"
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "Accept-Encoding: gzip,deflate[\r][\n]"
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "oracle-mobile-backend-id: 01d3b3a2-7a6b-42c8-b314-d6e8c8f3e898[\r][\n]"
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "Host: unit23585.oracleads.com:7201[\r][\n]"
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "Connection: Keep-Alive[\r][\n]"
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "User-Agent: Apache-HttpClient/4.1.1 (java 1.5)[\r][\n]"
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "Authorization: Basic am9lX20xMDpuNlApIXBOdTQkMA==[\r][\n]"
Thu Aug 20 15:19:59 CEST 2015:DEBUG:>> "[\r][\n]"

Here you can see in the last line that SoapUI encoded the username/password details into the Authorization header property. Below you'll see the response:

Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "HTTP/1.1 200 OK[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "Connection: keep-alive[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "Date: Thu, 20 Aug 2015 13:20:00 GMT[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "Content-Length: 486[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "Content-Type: text/html; charset=utf-8[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "Set-Cookie: JSESSIONID=3W1cVVJQTgGmGZMXQy2G3pVG0QvWByQWtmJr212Mh5nQ9hB0yy4b!-920535662; path=/; HttpOnly[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "oracle-mobile-runtime-version: 15.3.3-201507070814[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "X-ORACLE-DMS-ECID: 5a67a51e479fa73b:43dd1c99:14f4a5d80d0:-8000-000000000000729a[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "X-Powered-By: Servlet/2.5 JSP/2.1[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "X-Powered-By: Express[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "[\r][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "{[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "Body" : {[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "GetIncidentsByCustomerResponse" : {[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "Incident" : [ {[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "Date" : "2015-07-22 17:02:14 GMT",[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "Id" : 10,[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "ImageLink" : "storage/collections/2e029813-d1a9-4957-a69a-fbd0d7431d77/objects/6cdaa3a8-097e-49f7-9bd2-88966c45668f?user=lynn1014",[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "Priority" : "Medium",[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "Status" : "InProgress",[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "TechnicianAssigned" : "joe@fixit.com",[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "Title" : "Leaking Water Heater",[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " "UserName" : "Lynn"[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " } ][\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " }[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< " }[\n]"
Thu Aug 20 15:20:00 CEST 2015:DEBUG:<< "}"
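
If you want to reproduce the same authorized call outside SoapUI, here is a quick sketch with curl (the URL and oracle-mobile-backend-id below are the ones from the log above; username and password are placeholders for your own MCS credentials):

  curl -u <username>:<password> \
    -H "oracle-mobile-backend-id: 01d3b3a2-7a6b-42c8-b314-d6e8c8f3e898" \
    "http://unit23585.oracleads.com:7201/mobile/custom/incidentreport_M10/incidents/?contact=Lynn"

  # curl's -u option builds the same Authorization header SoapUI does:
  # Base64 of "username:password", which you can verify with:
  echo -n '<username>:<password>' | base64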

Currently MCS supports basic authentication, but I've learned that OAuth will be supported in the next release.

The Future of IT Staffing- People vs. Robots

Chris Foot - Thu, 2015-08-20 07:21


The Increasing Complexity of the IT Tech Stack Will Drive the Need for Robotic Automation

I’ve been involved in database technologies for 20 years now. During that time, I have read numerous prognostications from various industry pundits proclaiming that the next release of so-and-so database would be so simple to administer that the product would no longer require DBAs for support.  Replace “database” with any technology, and you’ll find that the same industry mantra occurs.

I’m still waiting for the administrator-less database.

During my tenure in the IT field, I’ve found the following equation to be true:

  New Features
+ New Functionality
+ New Products
+ New Technologies
+ New Architectures
+ New Business Challenges
= Increased IT Support Complexity

Although databases have become easier to administer, each new release of the database product RDX supports contains a host of new features and functionality.  Database vendors know that they must add new features to be competitive.  A competitive marketplace forces all software vendors to maximize their product’s feature set.

New technology architectures and products designed to solve a business or technical problem or improve operations are unveiled on a seemingly weekly basis.  I intended to rattle off a dozen or so disruptive, industry-changing technologies that have originated over the last few years, but any list I generate would not include all technologies that will have a significant impact on IT operations.  Plus, we all have our own opinion on what the most important, disruptive technologies will be.

What all IT professionals would agree upon is the statement below:

We understand that the only constant in the IT profession is change itself.

We know that this continuous explosion of new technologies will never stop, and that the increasingly restrictive time constraints faced by many IT support personnel prevent them from analyzing, selecting, implementing and administering those new technologies. IT professionals need solutions that reduce the amount of time they spend maintaining current systems to allow them to focus on improving future service.

Process automation, although it has a wide range of applications, has the common goal of replacing human activities with technology to reduce cost and improve the quality of repetitive processes. For information technology departments, the goal will be to deploy robots to reduce the amount of time humans spend on repetitive, mundane, low-ROI activities.

Process automation will allow IT personnel to use that extra time to improve business operations, think strategically, plan, innovate and deal with the ever-increasing rise in information technology complexity.

Additionally, process automation will allow IT departments to use the capabilities and strengths that robotic processes provide to fully leverage the benefits of their human counterparts: benefits that cannot be provided by robotic processes.

Unlike manufacturing’s deployment of automation to totally replace humans:

The relationship between humans and robots in the IT space will be harmonious, interdependent and collaborative – not competitive.

IT professionals will interact with increasingly intelligent robotic processes as they would with any technology designed to support their needs.  Here’s a quick example from RDX’s own automation project.  We support hundreds of mission-critical database environments.  When a problem occurs, it is our responsibility to resolve it as quickly as possible. Every minute counts.

RDX’s robotic processes automate the collection of diagnostic information required to perform problem analysis. The robots either solve the problem on their own or interact with RDX human support personnel. The key to faster problem resolution is to reduce the amount of time collecting diagnostic data and spend that time analyzing it.  Robotic processes can collect the diagnostic data far more quickly than their human counterparts.  RDX’s support personnel use their historical and collective knowledge, ability to analyze, creativity (thinking outside of the box) and innovation to solve the problem.  This cooperative, interdependent human/robotic interaction allows RDX to reduce our Mean Time to Resolution (MTTR).
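
RDX's tooling itself is proprietary, but the collection side of the pattern is easy to sketch. As a toy illustration only (not RDX's actual robots, and with a hypothetical alert-log path), a "robot" can be as simple as a script that notices a new ORA- error and immediately stages the basic diagnostics a human would otherwise gather by hand:

  #!/bin/sh
  # Toy diagnostic-collection "robot": watch an alert log, pre-collect data on error.
  ALERT_LOG=/u01/app/oracle/diag/rdbms/prod/PROD/trace/alert_PROD.log   # hypothetical path
  OUT=/tmp/diag_$(date +%Y%m%d%H%M%S)

  if tail -n 100 "$ALERT_LOG" | grep -q 'ORA-'; then
      mkdir -p "$OUT"
      tail -n 500 "$ALERT_LOG" > "$OUT/alert_tail.log"    # recent alert-log history
      df -h                    > "$OUT/space.txt"         # obvious space problems
      ps -ef | grep '[p]mon'   > "$OUT/instances.txt"     # which instances are running
      echo "Diagnostics staged in $OUT, ready for human analysis"
  fi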

Benefits of Humans

  • Creativity, thinking outside of the box
  • INNOVATION
  • Planning
  • Easily adapt to changing inputs and external influences
  • Ability to quickly analyze conditions with complex intersecting rules
  • Natural curiosity
  • Collective knowledge, group problem solving
  • Ability to identify key facets of information from large, varied input sets
  • Social and cultural understanding

Benefits of Robotic Processes

  • Consistency – Repetitive tasks are performed with no deviation. Less deviation = higher quality
  • Speed of execution
  • Scalability – Build the robotic process once and deploy as needed
  • Provide the ability to leverage pockets of tribal, operational knowledge by capturing, standardizing and embedding that expertise in robotic automations

Future IT Departments will Consist of Both Humans and Robots

As I stated in a previous article, a competitive market arena will continue to accelerate the features and functionality provided by automation products. As the offerings mature, they will become more robust, more intelligent and more cost effective.

As a result, the set of activities assigned to humans and robots will be fluid in nature. As more activities are assigned to robots, their human counterparts’ roles will continue to evolve. Robots will free IT professionals to focus on strategic activities that only humans can perform. Robots will not replace us; they will allow us to analyze, implement and administer increasingly complex technology architectures: architectures that solve business problems, increase competitive advantage, improve decision making and reduce the cost of doing business.

That is a good thing for technology professionals and the business operations we support.


The post The Future of IT Staffing- People vs. Robots appeared first on Remote DBA Experts.