12c Temporal Validity Support in SQL Developer Data Modeler, from that JEFF SMITH.
Sneak Peek at What’s Next for Oracle Exalogic Elastic Cloud, from WebLogic Partner Community EMEA.
System Statistics About DOP Downgrades, from The Data Warehouse Insider.
Solaris Swift using ZFS Storage Appliance, from Jim Kremer’s Blog.
Stateful Packet Inspection, from the Solaris Firewall blog.
Weblogic Console and BPM Worklist. Authentication using OpenLDAP, from the AMIS Technology Blog.
Tips on SQL Plan Management and Oracle Database In-Memory Part 1, from the Oracle Optimizer blog.
Clash Of Slashes ( / versus \ ), from Brewing tests with CAFE BABE.
Two good postings from The Java Source:
Solving Problems Using the Stream API
Several patch set announcements from Business Analytics - Proactive Support:
Patch Set Update: Hyperion Essbase 184.108.40.206.508
Patch Set Update: Hyperion Tax Provision 220.127.116.11.100
Patch Set Update: Essbase Analytics Link for HFM 18.104.22.168.500
EPM Patch Set Updates - July 2015
From the Oracle E-Business Suite Support blog:
Have you used R12: Master Data Fix Diagnostic to Validate Data Related to Purchase Orders and Requisitions?
Identifying Missing Application and Database Tier Patches for EBS 12.2
Receivables: Important system options to review for Autoinvoice
Webcast: Oracle Product Hub Web Services - Setup, Use, and Troubleshooting
Overview of Physical Inventory in Oracle Assets
What's The High Cost Of Not Patching?
From the Oracle E-Business Suite Technology blog:
Identifying Missing App Tier and Database Tier Patches for EBS 12.2
Windows 10 Certified with Oracle E-Business Suite
ED and CBE: Example of higher ed “structural barrier to change” that is out of institutions’ control
By Phil Hill
There has been a great conversation going on in the comments to my recent post “Universities As Innovators That Have Difficulty Adopting Their Own Changes” on too many relevant issues to summarize (really, go read the ongoing comment thread). They mostly center on the institution and faculty reward system, yet those are not the only sources of structural barriers to change that lead institutions to this “difficulty adopting their own changes”. Increasingly there are outside forces that both encourage change and resist change, and it is important to recognize the impact of the entire higher education ecosystem.
Yesterday Amy Laitinen from New America wrote an excellent article titled “Whatever Happened to the Department’s Competency-Based Education Experiments?” highlighting just such an example.
About this time two years ago, President Obama went on his college affordability bus tour and unveiled his plan to take on the rising costs of higher education in front of thousands of students at SUNY Buffalo. Promoting innovation and competition was a key part of his plan and President Obama held up competency-based education (CBE) up as one of the “innovative new ways to prepare our students for a 21st century economy and maintain a high level of quality without breaking the bank.” The President touted Southern New Hampshire University’s College for America CBE approach. The university “gives course credit based on how well students master the material, not just on how many hours they spend in the classroom,” he explained. “So the idea would be if you’re learning the material faster, you can finish faster, which means you pay less and you save money.” This earned applause from students in the audience as well as from CBE practitioners around the country. [snip]
The problem is that day was nearly two years ago and the CBE experimental sites are not yet off the ground. It’s not because institutions aren’t ready and willing. They are. But the Department of Education has been dragging its feet. It took the Department nearly a year after the President’s announcement to issue a notice describing what the experiments would look like. Perhaps this could have been done more quickly, but CBE is complicated and it’s understandable that the Department wanted to be thorough in its review of the relevant laws and regulations (they turned out much more forward-thinking than I would have imagined). But the notice did go out, schools did apply, and schools were accepted to participate. But the experiment hasn’t started, because schools haven’t received guidance on how to do their experiments.
Amy goes on to describe how schools are repeatedly asking for guidance and how foundations like Lumina and Gates are doing the same, yet the Education Department (ED) has not or will not provide such guidance.
Matt Reed, writing at Inside Higher Ed this morning, asks why ED has not stepped up to move the program along, offers some possible answers, and solicits input:
- They’re overwhelmed. They approved the concept of CBE without first thinking through all of the implications for other policies, and now they’re playing catchup. This strikes me as highly likely.
- They’re focused more on “gainful employment,” for-profit providers, student loan issues, and, until recently, the effort to produce college ratings. With other things on fire, something like CBE could easily get overshadowed. I consider this possibility entirely compatible with the previous one.
- They’re stuck in a contradiction. At the very same time that they’re trying to encourage experimentation with moving away from the credit hour in the context of CBE, they’re also clamping down on the credit hour in the context of online teaching. It’s possible to do either, but doing both at the same time requires a level of theoretical hair-splitting far beyond what they’re usually called upon to do. My guess is that an initial rush of enthusiasm quickly gave way to dispirited foot-dragging as they realized that the two emphases can’t coexist.
- Their alien overlords in Area 51, in conjunction with the Illuminati and the Trilateral Commission… (You can fill in the rest. I’m not a fan of this one, but any explanation of federal government behavior on the Internet has to include at least one reference to it. Let’s check that box and move on.)
Rather than add my own commentary or conjecture on the subject, I would prefer to just highlight this situation and note how we need to look beyond just colleges and universities, and even faculty reward systems, to understand the structural barriers to change for higher education.
The post ED and CBE: Example of higher ed “structural barrier to change” that is out of institutions’ control appeared first on e-Literate.
During this live webcast, Redstone will provide an overview of Distributed Index, and we'll show how a satisfied customer is using this game changing technology. Distributed Index is an Oracle Validated Integration that complements and integrates with WebCenter Content 10g and 11g.
Mark Heindselman, Emerson Process Management's Director, Knowledge Network and Information Services will discuss and demonstrate Emerson's use case.
At the conclusion of the live demonstration, we'll field questions from the audience.
All registered attendees will receive a no-cost WebCenter Content system evaluation. This evaluation will predict the potential time savings that can be realized post Distributed Index implementation.
Mark Heindselman, Director, Knowledge Network, Emerson Process Management
- Distributed Index Overview
- Customer Case Study
- Live In-Production Solution Demo
- Q & A
The purpose of the APEX Gaming Competition is simply to show off what you can do with APEX, and instead of crafting a business solution or transactional application, the goal here is a bit more whimsical and fun. The solution can be desktop or mobile or both. Personally, if I had the time, I'd like to write a blackjack simulator and try and improve upon the basic strategy. I'm not sure that could be classified as a "game", but it would enable me to go to Las Vegas and clean house!
If you're looking to make a name for yourself in the Oracle community, one way to do it is through ODTUG. And if you're looking to make a name for yourself in the APEX community, one way to stand out is through the APEX Gaming Competition. Just ask Robert Schaefer from Köln, Germany. Robert won the APEX Theming Competition in 2014, and now everyone in the APEX Community knows who Robert is! I've actually had the good fortune of meeting Robert in person - twice!
Yesterday I listened to the APEX Talkshow podcast with Jürgen Schuster and Shakeeb Rahman (Jürgen is a luminary in the APEX community, and Shakeeb is on the Oracle APEX development team and the creator of the Universal Theme). And in this podcast, I was reminded how Shakeeb's first introduction to Oracle was...by winning a competition, when he was a student! You simply never know what the future holds. So - whether you're a student or a professional, whether you're in Ireland or the Ivory Coast, this is an opportunity for you to shine in front of this wonderful global APEX Community. Submissions close in 2 months, so hurry! Go to http://competition.odtug.com
A quick taxi ride got us to the conference hotel really quickly, so we were nice and early for the PEOUG event.
After the introductions by Miguel Palacios, it was time for the first sessions of the event. Of the English speakers, first up were Debra Lilley and Dana Singleterry. Debra had some problems with her laptop, so she did her presentation using mine and all went well. Dana did his session over the net, so I sent a few Tweets to let him know how things looked and sounded from our end. I figured a bit of feedback would help reassure him there weren’t any technical issues.
My first session of the day came next. I had a good sized audience and some of the people were brave enough to ask questions at the end. I had some in English and some in Spanish using the translation service to help me.
Debra fixed her laptop by the time her next session started, but her clicker died, so she borrowed mine. Dana’s second session was at the same time as Debra’s, so I flitted between the two, sending a few feedback Tweets to Dana about his session again.
After lunch, Ronald and I each had back-to-back sessions. I did my Cloud Database and Analytic Functions talks. I feel like they went well. I hope the crowd felt the same.
There was one more set of talks, all from Spanish speakers, including a very full web session by Edelweiss from Uruguay. After that we got together for the closing session and some prize draws. I didn’t understand what was being said, but everyone seemed really happy and in good spirits, so I think the whole day was well received. Certainly all the feedback we got was very positive!
Big thanks to Miguel, Enrique and everyone at PEOUG for inviting us and making us feel welcome. Thanks to the attendees for coming to the sessions and making us feel special by asking for photos. Also, big thanks to the ACE Program for making this possible for us!
So that marks the end of this year’s OTN Tour of Latin America for me. Sorry to the countries in the northern leg. I hope I will be able to visit you folks soon!
Debra and I are going to visit Pikachu Machu Picchu over the next couple of days, then it’s back home to normal life for a while.
I’ll write a summary post to close off this little adventure when I get home. Once again, thank you all!
Tim…OTN Tour of Latin America 2015 : PEOUG, Peru – Day 1 was first posted on August 13, 2015 at 2:28 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
Rudrasoft, the software company that specializes in data analytics dashboard solutions, announced today that it has released an updated version of its popular InfoCaptor software, which includes integration with Cloudera Enterprise. The integration takes advantage of Impala and Apache Hive for analytics.
“Our clients are increasingly looking to adopt Hadoop for their data storage and analytics requirements and their common concern is the lack of an economical web-based platform that works with their traditional data warehouses, RDBMS and with Cloudera Enterprise,” said Nilesh Jethwa, Rudrasoft’s founder.
The Cloudera-certified InfoCaptor adds native Impala functionality within its Visualizer, so users can leverage date/time functions for date-hierarchy visualizations and time-series plots, and use all of the advanced hierarchical visualizations natively on Cloudera Enterprise.
“Impala is the fastest SQL engine on Hadoop and InfoCaptor can render millions of data points into beautiful visualizations in just a blink of an eye,” said Nilesh Jethwa [founder]. “This is a great promise for the big data world and affordable analytics with sub-second response time, finally CEOs and CIOs across industries can truly dream of cultivating a data driven culture and make it a reality.”
“Cloudera welcomes InfoCaptor as a certified partner for data analytics and visualization. InfoCaptor delivers self-service BI and analytics to data analysts and business users in enterprise organizations, enabling more users to mine and search for data that uncovers valuable business insights and maximizes value from an enterprise data hub,” said Tim Stevens, vice president of Business and Corporate Development at Cloudera.
InfoCaptor is an Enterprise Business Analytics and Dashboard software meant for:
- Data Discovery
- Ad-hoc Reports
InfoCaptor brings the power of d3js.org visualizations and the simplicity of Microsoft Excel and puts it in the hands of a non-technical user. This same user can build Circle Pack, Chord, Cluster and Treemap/Sunburst visualizations on top of Cloudera Enterprise using simple drag and drop operations.
InfoCaptor can connect with data from virtually any source, including SQL databases such as Microsoft Access, Oracle, SQL Server, MySQL, SQLite, PostgreSQL and IBM DB2, as well as Microsoft Excel, and now Impala and Hive. It supports both JDBC and ODBC protocols.
InfoCaptor also serves as a powerful visualization software and it includes over 30 vector-based map Visualizations, close to 40 types of chart visualizations, over 100 flowchart icons and other HTML widgets. InfoCaptor also provides a free style dashboard editor that allows quick dashboard mockups and prototyping. With this ability users can place widgets directly anywhere on the page and use flowchart style icons and connectors for annotation and storytelling.
Users can download the application and install it within their firewall.
Alternatively, a cloud offering is also available at https://my.infocaptor.com.
InfoCaptor is a very modestly priced analytics and visualization software:
- Personal Dashboard License: $149/year
- Server license: starts at $599/year
- Cloud-based subscription: starts at $29/user/month
Visit http://www.infocaptor.com or email bigdata(at)infocaptor(dot)com for a demo and price list.
Since 12c the notion of a composite instance is superseded by that of a flow instance, which refers to the complete chain of calls starting from one main instance to any other composite, and further. Every flow has a unique flowId which is automatically propagated from one instance to the next.
Propagation of Flow Instance Title
This propagation does not only apply to the flowId, but also to the flowInstanceTitle: if you set the flowInstanceTitle for the main instance, all called composites automatically get the same title.
So if the flowInstanceTitle is set on the main instance:
Then you will automatically see it for every child instance as well:
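As a minimal sketch (assuming a 12c BPEL component in the main composite; the variable names and title value are hypothetical), the flowInstanceTitle can be set from an assign activity using the ora:setFlowInstanceTitle() XPath extension function, copying its result into a throwaway string variable:

```xml
<!-- Hypothetical assign in the main BPEL process; 'inputVariable' and
     'titleVar' are example names. The function sets the flow instance
     title as a side effect; the copy target is just a dummy. -->
<assign name="SetFlowTitle">
  <copy>
    <from>ora:setFlowInstanceTitle(concat('Order-', $inputVariable.payload/ns1:orderId))</from>
    <to>$titleVar</to>
  </copy>
</assign>
```

With a construct like this at the start of the main process, every child instance in the flow should show the same title, as described above.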
Trust but verify is my motto, so I tried it for a couple of combinations of composite types calling each other, including:
- BPM calling BPEL calling another BPEL
- BPM initiating another composite (with a Mediator and BPEL) via an event
- Mediator calling BPEL
Flow Instance Abortion
When you abort the instance of the parent, all child instances are aborted as well.
In the following flow trace you see a main BPM process that:
- Kicks off a (fire & forget) BPEL process
- Throws an event that is picked up by a Mediator
- Calls another BPM process
- Schedules a human task
In turn, the BPEL process in step 1 kicks off another BPEL process (request/response). Finally, the BPM process in step 3 also has a human task:
Once the instance of the main process is aborted, all child instances are automatically aborted as well, including all Human Tasks and composites that are started indirectly.
The flip side of the coin is that you will not be able to abort any individual child instance. When you go to a child composite, select a particular child instance and abort it, the whole flow is aborted. That is different from how it worked in 11g, and I can imagine this will not always meet your requirements.
Another thing I find strange is that the Mediator instance started by means of an event is aborted even when the consistency level is set to 'guaranteed' (which means that event delivery happens in a local instead of a global transaction). Even though the instance is aborted, you may still have a requirement to process that event.
But all in all, it is a lot easier to get rid of a chain of process instances than it was with 11g!
I Did. You Should. Here's the Nomination Form. Here's Why:
The Oracle Database Developer Choice Awards celebrate and recognize technical expertise and contributions in the Oracle Database community. We all have an expert in our lives. Here's your chance to nominate them for an award. Who is that expert? Sometimes we're inspired by a technical presentation at an Oracle User Group event or SIG; but more often it's that person who is on the discussion space with an answer or suggestion just when you need it. Either way, it's easier to develop great solutions when we "know" someone in our community...and they are rooting for us, and even helping us along.
Here's my nomination: John Stegeman. And he will be embarrassed and might send me a howler for nominating him.
John Stegeman is a regular in the Oracle Database Community. When he's not answering questions in the Oracle Database General Questions space, he's commenting on social discussions around the Watercooler. He's a Grand Titan with 243,225 points on the Oracle Community platform and an Oracle ACE. He might not be digging in and coding an app today, but then again he just might be, because he's clearly got SQL chops. And, while he may not realize it, he's been a tremendous help to me as a community manager because his activities help me keep a pulse on what's going on in the community :)
Here's what you need to know:
- Nominations are open through August 15.
- A panel of judges, composed of Oracle ACEs and Oracle employees, will then choose a set of finalists.
- The worldwide Oracle technologist community votes on the finalists from September 15 through October 15.
- The winners of the Oracle Database Developer Choice Awards will be announced at the YesSQL! Celebration during Oracle OpenWorld 2015.
Get your nominations in ASAP!
Ciao for now,
After the CLOUG event, Francisco drove us to the airport, where Kerry, Ronald, Debra and I parked ourselves in the lounge for a while. Lots of eating then ensued! Kerry was flying back home, but the rest of us were on our way to Lima, Peru, for the PEOUG event.
The flight across to Lima was pretty straightforward, taking about 4 hours if you include the time sitting and waiting to take off. I think the flight time was about 3 hours and 30 mins. We arrived at the airport at about 02:00 and we were all pretty beat up. It was an effort to even speak, which if you know me is a rather extreme state.
I had a complete brain fade and forgot we were being picked up by Enrique Orbegozo, but fortunately he caught us before we disappeared onto the shuttle, so it ended OK. I’m so sorry Enrique!
We arrived at the hotel at about 03:00. I can’t speak for the others, but I was feeling like the living dead. I got to my room and I don’t remember anything else until the morning!
Debra has Hilton Honors status, so I got signed into the lounge for the day, which meant free food. We had a lazy day. Apart from a 10 minute walk down to the coast and back, it was a hotel day, trying to recharge the batteries. Some food, sitting in the pool and sitting in the lounge with our laptops, trying to catch up with the world.
This morning we are off to the PEOUG event. The last event of the southern leg of the OTN Tour of Latin America 2015. I’ve got three presentations to do, plus some backups in case speakers don’t show.
Tim…OTN Tour of Latin America 2015 : PEOUG, Peru – Day -1 was first posted on August 12, 2015 at 12:04 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
I applied the current July patch sets to an 11.2 and a 12.1 test database. Now I have an 11.2.0.4.7 and a 12.1.0.2.4 test database. It is helpful to have test databases that are on the most current patch sets and releases. If I see unexpected behavior on some other database I can try the same thing on the patched test databases to see if some patch changed the behavior to what I expect. Also, our production databases are all on 11.2.0.4 or earlier releases, so I can check whether the new fully patched 12.1 release behaves differently from our older systems.
Here are the patch numbers:
6880880 – current version of opatch
20760982 – 11.2.0.4.7
20831110 – 12.1.0.2.4
My test environments are on x86-64 Linux.
For me, 2015 has been the year of the quantified self.
I’ve been tracking my activity using various wearables: Nike+ Fuelband, Basis Peak, Jawbone UP24, Fitbit Surge, and currently, Garmin Vivosmart. I just set up Automatic to track my driving; check out Ben’s review for details. I couldn’t attend QS15, but luckily, Thao (@thaobnguyen) and Ben went and provided a complete download.
And, naturally, I’m fascinated by biohacking because, at its core, it’s the same idea, i.e. how to improve/modify the body to do more, better, faster.
Ever since I read about RFID chip implanting in the early 00s, I’ve been curiously observing from the fringe. This post on the Verge today included a short video about biohacking that was well worth the 13 and a half minutes.
If you like that, check out the long-form piece, Cyborg America: inside the strange new world of basement body hackers.
This stuff is fascinating to me. People like Kevin Warwick and Steve Mann have modified themselves for the better, but I’m guessing the future of biohacking lies in healthcare and military applications, places where there’s big money to be made.
My job is to look ahead, and I love doing that. At some point during this year, Tony asked me what the future held; what were my thoughts on the next big things in technology.
I think the human body is the next frontier for technology. It’s an electrical source that could solve the modern battery woes we all have; it’s an enormous source for data collection, and you can’t forget it in a cab or on a plane. At some point, because we’ll be so dependent on it, technology will become parasitic.
And I for one, welcome the cyborg overlords.
Find the comments.
The morning was a little confusing. I got up and went to breakfast, but there was no Debra. Once I had finished I got the front desk to call her and found out her clock was an hour behind. Chile has changed its timezone to match Brazil, but some Apple devices don’t seem to realise this, even if they are set to auto-update. One of those devices being Debra’s phone. When we asked at the hotel, they said it’s been a problem for a number of people.
Francisco drove us to the venue and we moved straight into the auditorium. After an introduction by Francisco, it was time for the first session. It was a three track event, so I’m just going to talk about the sessions I was in.
- Kerry had a different version of the agenda, which had him on at a later time, so he hadn’t arrived by the time his session was due to start. I was originally down as the second session, so we switched and I went first with my “Pluggable Databases : What they will break and why you should use them anyway!”. Being in an auditorium is always hard unless it is full, as people spread out and you feel like you are presenting to empty chairs.
- Next up was Kerry Osborne, with his “In-Memory In Action” session. I had to duck out of this early to get across to my next session, which was on the other side of the building.
- My next session was “It’s raining data! Oracle Databases in the Cloud”. This was in a much smaller room, so it felt really full and much more personal. As a result, the audience interaction felt a lot better. I spent quite a bit of time talking to people after the session, which is my favourite bit of this conference thing.
- I got over to see the tail end of Ronald Bradford‘s session on “Testing and verifying your MySQL backup strategy”. I’ve got a couple of things I need to check in my own MySQL backups now.
- Next up was Kyle Hailey speaking about “SQL Performance Tuning”. Kyle has a very visual approach, which works for me!
- After lunch it was back to me for “Oracle Database Consolidation : It’s not all about Oracle database 12c!”
- Next up was Kyle Hailey with “Database performance tuning”, which focussed on using ASH to identify problems and was once again, very visual.
- The final person up was Debra, with “Do Oracle Cloud Applications Add Up?”. The answer is yes, they do add up, to 42!
After the final session, we hung around for a prize giving and a quick photo opportunity, then had to say our goodbyes and go straight off to the airport to get our flight to Lima.
Thanks to Francisco and the folks at CLOUG for inviting me, as well as all the attendees that came to my sessions and spoke to me during the day. I love speaking directly to people about the technology, so when people come to ask questions I’m in my element. Big thanks to OTN and the ACE Program for helping to make this happen for me.
Tim…OTN Tour of Latin America 2015 : CLOUG, Chile – Day 1 was first posted on August 11, 2015 at 4:27 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
TekTalk Webinar: Breakthrough in Enterprise-Wide Contract Lifecycle Management
Thursday, August 20, 2015 at 1 PM EST / 10 AM PST
Contracts rule B2B relationships. Whether you’re a growing mid-market company or a large-scale global organization, you need an effective system to manage surges in contract volumes and ensure accuracy in reporting.
TekStream and Oracle would like to invite you to a webinar on an exciting new solution for Contract Lifecycle Management (CLM).
This solution provides organizations with a consolidated and secure platform to logically ingest and organize contracts and supporting documents. It offers total contract lifecycle management with intuitive workflow processing as well as native integration to many existing ERP systems. With this new solution, contracts and other critical documents will no longer be locked in enterprise systems; the entire enterprise can gain seamless access from one centralized repository.
The webinar is scheduled for Thursday, August 20th at 1 PM EST/10 AM PST.
TekStream’s Contract Lifecycle Management (CLM) software is built on Oracle’s industry-leading document management system, WebCenter Content, and is designed to seamlessly integrate with enterprise applications like JD Edwards, PeopleSoft and Oracle’s E-Business Suite (EBS). Combining Oracle’s enterprise-level applications with TekStream’s deep understanding of managing essential business information delivers a contract management tool powerful enough to facilitate even the most complex processes. TekStream’s solution tracks and manages all aspects of your contract work streams from creation and approval to completion and expiration. Companies can rely on TekStream’s CLM to ensure compliance and close deals faster.
Join us to understand how our innovative new solution can address the cost and complexity of Contract Lifecycle Management and provide the following benefits:
- A centralized repository for all in-process and executed contracts.
- Increased efficiency through better control of the contract process.
- Support for “Evergreen” contracts to help improve contract renewal rates.
- Improved compliance with regulations and standards through clear and concise reporting of procedures and controls.
For more information about Oracle Documents Cloud Service, please contact firstname.lastname@example.org or call 844-TEK-STRM.
My thoughts on SQL plan management decision points: SQL patches are also available (primarily to avoid a specific problem, not to enforce a particular plan) and are not covered in the above flowchart.
In my last post, I had presumed there was a bug, since I discovered an empty clusterware alert log in its conventional location, i.e. $ORACLE_HOME/log/<hostname>, in a 12.1.0.2 standard cluster.
[grid@host01 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@host01 host01]# ls -l /u01/app/12.1.0/grid/log/host01/alerthost01.log
-rw-rw-r-- 1 grid oinstall 0 Jun 15 14:10 /u01/app/12.1.0/grid/log/host01/alerthost01.log
But as commented by Ricardo Portillo Proni, in 12c, the location of alert log has been changed to $ORACLE_BASE/diag/crs/<hostname>/crs/trace/
Hence, I could successfully locate the alert log on node host01 in directory $ORACLE_BASE/diag/crs/host01/crs/trace/:
[grid@host01 trace]$ ls -l $ORACLE_BASE/diag/crs/host01/crs/trace/alert*
-rw-rw---- 1 root oinstall 812316 Aug 11 10:22 /u01/app/grid/diag/crs/host01/crs/trace/alert.log
Another noticeable change is that the name of the clusterware alert log is now alert.log, as compared to alert<hostname>.log in 11g.
I would like to mention that I have verified the above only in a 12.1.0.2 standard cluster.
In a 12.1.0.1 flex cluster, though, the location and name of the alert log are the same as in 11g, i.e. $ORACLE_HOME/log/host01:
[root@host01 host01]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[root@host01 host01]# ls -l $ORACLE_HOME/log/host01/alert*
-rw-rw-r-- 1 grid oinstall 497364 Aug 11 11:00 /u01/app/12.1.0/grid/log/host01/alerthost01.log
12.1.0.2 standard cluster
- Name of alert log : alert.log
- location of alert log: $ORACLE_BASE/diag/crs/host01/crs/trace
12.1.0.1 flex cluster
- Name of alert log : alert<hostname>.log
- location of alert log: $ORACLE_HOME/log/host01
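The summary above can be sketched as a small shell helper (a hypothetical function; the paths and hostname are the examples from this post, not a general-purpose tool):

```shell
# Hypothetical helper summarizing the findings above: given the Grid
# Infrastructure version, print where the clusterware alert log lives.
alert_log_path() {
  version="$1"; host="$2"; orabase="$3"; orahome="$4"
  case "$version" in
    12.1.0.2*) echo "$orabase/diag/crs/$host/crs/trace/alert.log" ;;   # 12.1.0.2 standard cluster
    *)         echo "$orahome/log/$host/alert$host.log" ;;             # 11g and 12.1.0.1 flex cluster
  esac
}

alert_log_path 12.1.0.2.0 host01 /u01/app/grid /u01/app/12.1.0/grid
alert_log_path 12.1.0.1.0 host01 /u01/app/grid /u01/app/12.1.0/grid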
Hope it helps!
Please refer to the comments for further information.
The post Oracle 12.1.0.2 Standard Cluster: New Location / Name For Alert Log appeared first on ORACLE IN ACTION.
By Phil Hill
Back in June I had the pleasure of giving the keynote at the Online Teaching Conference (#CCCOTC15) in San Diego, put on by the California Community College system. There was quite a bit of valuable backchannel discussions as well as sharing of the slides. The theme of the talk was:
Emerging Trends in Online / Hybrid Education and Implications for Faculty
As online and hybrid education enter their third decade, there are significant efforts to move beyond virtualizing the traditional face-to-face classroom and toward more learner-centric approaches. This shift has the potential to change the discussion from whether online and hybrid approaches “can be as good as” traditional approaches to how they “can provide better learning opportunities”.
For those who would like to see the keynote, I am including the video and slides below. Pat James’ introduction starts at 05:50, and my keynote starts at 09:15.
And my apologies for fumbling on the slide / website / video switches. I was not prepared for the mandatory usage of a PC instead of Mac.
Some key sections that seemed to resonate during the talk:
- Historical Context and Unbundling (~13:15)
I registered myself for Oracle OpenWorld and I have my hotel reserved and my flights ticketed.
I think it has been over 12 years – probably more like 15 years – since I went to OpenWorld. I went at least once between December 1994 and November 2003 when I still lived in Florida and was working on Oracle databases. But since I moved from Florida I do not believe that I have been to the conference. I have presented at Collaborate and ECOUG conferences since then. I’m thinking that maybe next year I will try to present at the RMOUG conference. I live in Arizona so RMOUG is close. ECOUG was a nice distance when I still lived near the East Coast. I like the smaller conferences and I have a better shot at getting a presentation accepted there.
But, this year it is OpenWorld and I am looking forward to it. I may get a chance to interact with some Delphix employees and customers. Also, I’m hoping to check out some technical presentations by the Oak Table members. And it does not hurt to hear from Oracle itself on its technology. No doubt there will be many of Oracle’s top technical leaders presenting. And, any interaction I get with fellow DBAs will be great. It is always good to hear from people about their own experiences, which may differ from mine.
Anyway, I’m all booked for OpenWorld. Hope to see you there.
I have been doing a lot of writing recently. Some of my writing has been with my sister, with whom I write murder mysteries using the nom-de-plume Maddi Davidson. Recently, we’ve been working on short stories, developing a lot of fun new ideas for dispatching people (literarily speaking, though I think about practical applications occasionally when someone tailgates me).
Writing mysteries is a lot more fun than the other type of writing I’ve been doing. Recently, I have seen a large-ish uptick in customers reverse engineering our code to attempt to find security vulnerabilities in it. <Insert big sigh here.> This is why I’ve been writing a lot of letters to customers that start with “hi, howzit, aloha” but end with “please comply with your license agreement and stop reverse engineering our code, already.”
I can understand that in a world where it seems almost every day someone else had a data breach and lost umpteen gazillion records to unnamed intruders who may have been working at the behest of a hostile nation-state, people want to go the extra mile to secure their systems. That said, you would think that before gearing up to run that extra mile, customers would already have ensured they’ve identified their critical systems, encrypted sensitive data, applied all relevant patches, be on a supported product release, use tools to ensure configurations are locked down – in short, the usual security hygiene – before they attempt to find zero day vulnerabilities in the products they are using. And in fact, there are a lot of data breaches that would be prevented by doing all that stuff, as unsexy as it is, instead of hyperventilating that the Big Bad Advanced Persistent Threat using a zero-day is out to get me! Whether you are running your own IT show or a cloud provider is running it for you, there are a host of good security practices that are well worth doing.
Even if you want to have reasonable certainty that suppliers take reasonable care in how they build their products – and there is so much more to assurance than running a scanning tool - there are a lot of things a customer can do like, gosh, actually talking to suppliers about their assurance programs or checking certifications for products for which there are Good Housekeeping seals (or “good code” seals) like Common Criteria certifications or FIPS-140 certifications. Most vendors – at least, most of the large-ish ones I know – have fairly robust assurance programs now (we know this because we all compare notes at conferences). That’s all well and good, is appropriate customer due diligence and stops well short of “hey, I think I will do the vendor’s job for him/her/it and look for problems in source code myself,” even though:
- A customer can’t analyze the code to see whether there is a control that prevents the attack the scanning tool is screaming about (which is most likely a false positive)
- A customer can’t produce a patch for the problem – only the vendor can do that
- A customer is almost certainly violating the license agreement by using a tool that does static analysis (which operates against source code)
I should state at the outset that in some cases I think the customers doing reverse engineering are not always aware of what is happening because the actual work is being done by a consultant, who runs a tool that reverse engineers the code, gets a big fat printout, drops it on the customer, who then sends it to us. Now, I should note that we don’t just accept scan reports as “proof that there is a there, there,” in part because whether you are talking static or dynamic analysis, a scan report is not proof of an actual vulnerability. Often, they are not much more than a pile of steaming … FUD. (That is what I planned on saying all along: FUD.) This is why we require customers to log a service request for each alleged issue (not just hand us a report) and provide a proof of concept (which some tools can generate).
If we determine as part of our analysis that scan results could only have come from reverse engineering (in at least one case, because the report said, cleverly enough, “static analysis of Oracle XXXXXX”), we send a letter to the sinning customer, and a different letter to the sinning consultant-acting-on-customer’s behalf – reminding them of the terms of the Oracle license agreement that preclude reverse engineering, So Please Stop It Already. (In legalese, of course. The Oracle license agreement has a provision such as: "Customer may not reverse engineer, disassemble, decompile, or otherwise attempt to derive the source code of the Programs..." which we quote in our missive to the customer.) Oh, and we require customers/consultants to destroy the results of such reverse engineering and confirm they have done so.
Why am I bringing this up? The main reason is that, when I see a spike in X, I try to get ahead of it. I don’t want more rounds of “you broke the license agreement,” “no, we didn’t,” yes, you did,” “no, we didn’t.” I’d rather spend my time, and my team’s time, working on helping development improve our code than argue with people about where the license agreement lines are.
Now is a good time to reiterate that I’m not beating people up over this merely because of the license agreement. More like, “I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it, we can – unlike a third party or a tool – actually analyze the code to determine what’s happening and at any rate most of these tools have a close to 100% false positive rate so please do not waste our time on reporting little green men in our code.” I am not running away from our responsibilities to customers, merely trying to avoid a painful, annoying, and mutually time-wasting exercise.
For this reason, I want to explain what Oracle’s purpose is in enforcing our license agreement (as it pertains to reverse engineering) and, in a reasonably precise yet hand-wavy way, explain “where the line is you can’t cross or you will get a strongly-worded letter from us.” Caveat: I am not a lawyer, even if I can use words like stare decisis in random conversations. (Except with my dog, because he only understands Hawaiian, not Latin.) Ergo, when in doubt, refer to your Oracle license agreement, which trumps anything I say herein!
With that in mind, a few FAQ-ish explanations:
Q. What is reverse engineering?
A. Generally, our code is shipped in compiled (executable) form (yes, I know that some code is interpreted). Customers get code that runs, not the code “as written.” That is for multiple reasons such as users generally only need to run code, not understand how it all gets put together, and the fact that our source code is highly valuable intellectual property (which is why we have a lot of restrictions on who accesses it and protections around it). The Oracle license agreement limits what you can do with the as-shipped code and that limitation includes the fact that you aren’t allowed to de-compile, dis-assemble, de-obfuscate or otherwise try to get source code back from executable code. There are a few caveats around that prohibition but there isn’t an “out” for “unless you are looking for security vulnerabilities in which case, no problem-o, mon!”
If you are trying to get the code in a different form from the way we shipped it to you – as in, the way we wrote it before we did something to it to get it in the form you are executing, you are probably reverse engineering. Don’t. Just – don’t.
Q. What is Oracle’s policy regarding the submission of security vulnerabilities (found by tools or not)?
A. We require customers to open a service request (one per vulnerability) and provide a test case to verify that the alleged vulnerability is exploitable. The purpose of this policy is to try to weed out the very large number of inaccurate findings by security tools (false positives).
Q. Why are you going after consultants the customer hired? The consultant didn’t sign the license agreement!
A. The customer signed the Oracle license agreement, and the consultant hired by the customer is thus bound by the customer’s signed license agreement. Otherwise everyone would hire a consultant to say (legal terms follow) “Nanny, nanny boo boo, big bad consultant can do X even if the customer can’t!”
Q. What does Oracle do if there is an actual security vulnerability?
A. I almost hate to answer this question because I want to reiterate that customers Should Not and Must Not reverse engineer our code. However, if there is an actual security vulnerability, we will fix it. We may not like how it was found but we aren’t going to ignore a real problem – that would be a disservice to our customers. We will, however, fix it to protect all our customers, meaning everybody will get the fix at the same time. However, we will not give a customer reporting such an issue (that they found through reverse engineering) a special (one-off) patch for the problem. We will also not provide credit in any advisories we might issue. You can’t really expect us to say “thank you for breaking the license agreement.”
Q. But the tools that decompile products are getting better and easier to use, so reverse engineering will be OK in the future, right?
A. Ah, no. The point of our prohibition against reverse engineering is intellectual property protection, not “how can we cleverly prevent customers from finding security vulnerabilities – bwahahahaha – so we never have to fix them – bwahahahaha.” Customers are welcome to use tools that operate on executable code but that do not reverse engineer code. To that point, customers using a third party tool or service offering would be well-served by asking questions of the tool (or tool service) provider as to a) how their tool works and b) whether they perform reverse engineering to “do what they do.” An ounce of discussion is worth a pound of “no we didn’t,” “yes you did,” “didn’t,” “did” arguments. *
Q. “But I hired a really cool code consultant/third party code scanner/whatever. Why won’t mean old Oracle accept my scan results and analyze all 400 pages of the scan report?”
A. Hoo-boy. I think I have repeated this so much it should be a song chorus in a really annoying hip hop piece but here goes: Oracle runs static analysis tools ourselves (heck, we make them), many of these goldurn tools are ridiculously inaccurate (sometimes the false positive rate is 100% or close to it), running a tool is nothing, the ability to analyze results is everything, and so on and so forth. We put the burden on customers or their consultants to prove there is a There, There because otherwise, we waste a boatload of time analyzing – nothing** – when we could be spending those resources, say, fixing actual security vulnerabilities.
Q. But one of the issues I found was an actual security vulnerability so that justifies reverse engineering, right?
A. Sigh. At the risk of being repetitive, no, it doesn’t, just like you can’t break into a house because someone left a window or door unlocked. I’d like to tell you that we run every tool ever developed against every line of code we ever wrote, but that’s not true. We do require development teams (on premises, cloud and internal development organizations) to use security vulnerability-finding tools, we’ve had a significant uptick in tools usage over the last few years (our metrics show this) and we do track tools usage as part of Oracle Software Security Assurance program. We beat up – I mean, “require” – development teams to use tools because it is very much in our interests (and customers’ interests) to find and fix problems earlier rather than later.
That said, no tool finds everything. No two tools find everything. We don’t claim to find everything. That fact still doesn’t justify a customer reverse engineering our code to attempt to find vulnerabilities, especially when the key to whether a suspected vulnerability is an actual vulnerability is the capability to analyze the actual source code, which – frankly – hardly any third party will be able to do, another reason not to accept random scan reports that resulted from reverse engineering at face value, as if we needed one.
Q. Hey, I’ve got an idea, why not do a bug bounty? Pay third parties to find this stuff!
A. <Bigger sigh.> Bug bounties are the new boy band (nicely alliterative, no?) Many companies are screaming, fainting, and throwing underwear at security researchers**** to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn’t secure. Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3% and the rest are found by customers. (Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us except – we had already found all of them and we were already working on or had fixes. Woo hoo!)
I am not dissing bug bounties, just noting that on a strictly economic basis, why would I throw a lot of money at 3% of the problem (and without learning lessons from what you find, it really is “whack a code mole”) when I could spend that money on better prevention like, oh, hiring another employee to do ethical hacking, who could develop a really good tool we use to automate finding certain types of issues, and so on. This is one of those “full immersion baptism” or “sprinkle water over the forehead” issues – we will allow for different religious traditions and do it OUR way – and others can do it THEIR way. Pax vobiscum.
Q. If you don’t let customers reverse engineer code, they won’t buy anything else from you.
A. I actually heard this from a customer. It was ironic because in order for them to buy more products from us (or use a cloud service offering), they’d have to sign – a license agreement! With the same terms that the customer had already admitted violating. “Honey, if you won’t let me cheat on you again, our marriage is through.” “Ah, er, you already violated the ‘forsaking all others’ part of the marriage vow so I think the marriage is already over.”
The better discussion to have with a customer —and I always offer this — is for us to explain what we do to build assurance into our products, including how we use vulnerability finding tools. I want customers to have confidence in our products and services, not just drop a letter on them.
Q. Surely the bad guys and some nations do reverse engineer Oracle’s code and don’t care about your licensing agreement, so why would you try to restrict the behavior of customers with good motives?
A. Oracle’s license agreement exists to protect our intellectual property. “Good motives” – and given the errata of third party attempts to scan code the quotation marks are quite apropos – are not an acceptable excuse for violating an agreement willingly entered into. Any more than “but everybody else is cheating on his or her spouse” is an acceptable excuse for violating “forsaking all others” if you said it in front of witnesses.
At this point, I think I am beating a dead – or should I say, decompiled – horse. We ask that customers not reverse engineer our code to find suspected security issues: we have source code, we run tools against the source code (as well as against executable code), it’s actually our job to do that, we don’t need or want a customer or random third party to reverse engineer our code to find security vulnerabilities. And last, but really first, the Oracle license agreement prohibits it. Please don’t go there.
* I suspect at least part of the anger of customers in these back-and-forth discussions is because the customer had already paid a security consultant to do the work. They are angry with us for having been sold a bill of goods by their consultant (where the consultant broke the license agreement).
** The only analogy I can come up with is – my bookshelf. Someone convinced that I had a prurient interest in pornography could look at the titles on my bookshelf, conclude they are salacious, and demand an explanation from me as to why I have a collection of steamy books. For example (these are all real titles on my shelf):
- Thunder Below! (“whoo boy, must be hot stuff!”)
- Naked Economics (“nude Keynesians!”)***
- Inferno (“even hotter stuff!”)
- At Dawn We Slept (“you must be exhausted from your, ah, nighttime activities…”)
My response is that I don’t have to explain my book tastes or respond to baseless FUD. (If anybody is interested, the actual book subjects are, in order, 1) the exploits of WWII submarine skipper and Congressional Medal of Honor recipient CAPT Eugene Fluckey, USN 2) a book on economics 3) a book about the European theater in WWII and 4) the definitive work concerning the attack on Pearl Harbor.)
*** Absolutely not, I loathe Keynes. There are more extant dodos than actual Keynesian multipliers. Although “dodos” and “true believers in Keynesian multipliers” are interchangeable terms as far as I am concerned.
**** I might be exaggerating here. But maybe not.
Noel (@noelportugal) talked about the technical bits in a post last week, and today, ODTUG posted an interview featuring our fearless leader, Jeremy Ashley (@jrwashley), and Noel from the conference wherein they talk about Internet of Things (IoT) and the IoT bits included in the Hunt.
If you read here, you’ll know that IoT has been a long-time passion of Noel’s, dating back to well before Internet-connected devices were commonplace and way before they had an acronym.
Thanks to ODTUG for giving us the opportunity to do something cool and fun using our nerdy passion, IoT.