
Feed aggregator

Adding Watermarks to PeopleSoft fields

Duncan Davies - Wed, 2015-02-11 10:00

The Cedar tech team has recently discovered a great tweak to improve the end-user experience in PeopleSoft.

Many well-designed websites use watermark text to give the user a visual hint about what to enter in a field. We felt that PeopleSoft Self Service users would appreciate the same enhancement.

screenshot
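For reference, plain HTML5 achieves the same effect via the placeholder attribute; a one-line, hypothetical illustration (the field id and hint text are made up):

<input type="text" id="BIRTHDATE" placeholder="e.g. 01/02/2015" />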

Head over to the Cedar Blog to find out more:

http://www.cedarconsulting.co.uk/news-details/February-2015-Adding-Watermarks-to-PeopleSoft-Fields/


Exadata & In-Memory Real World Performance Article (German)

Randolf Geist - Wed, 2015-02-11 08:49
Today an article of mine was published on "informatik-aktuell.de". It analyzes a case from one of my clients who did not achieve the expected performance on Exadata.

The article analyzes different query profiles and explains how these different profiles affect the special features of Exadata and In-Memory.

Part 1 of the article
Part 2 of the article

SCOM: don't forget to set the license key!

Yann Neuhaus - Wed, 2015-02-11 07:28

A few months ago, during some client testing, I installed a version of System Center Operations Manager 2012 on a virtual machine.
Today, I tried to open this version of SCOM to perform some new tests and was surprised to receive a beautiful error message:

b2ap3_thumbnail_Scom_SetLicense1.jpg

In fact, I had never thought to verify whether my version was licensed, and as SCOM grants a 180-day evaluation period by default, I had not faced the problem before...
I decided to download the new version of SCOM 2012 R2, as I prefer to have the latest version, and install it.
By default, the installation center told me that it would upgrade my old version, which is a good point.
After the upgrade, SCOM 2012 R2 raised a warning saying not to forget to use the PowerShell cmdlet Set-SCOMLicense to license the current version... That is exactly what I forgot last time!!

b2ap3_thumbnail_Scom_SetLicense2.jpg

To check the version of my fresh installation, I opened the Operations Manager Shell and then I ran the following command:

b2ap3_thumbnail_Scom_SetLicense3.jpg

Get-SCOMManagementGroup | select SkuForProduct, skuforlicense, version, timeofexpiration

b2ap3_thumbnail_Scom_SetLicense4.jpg

At this point, I can see that I have an Evaluation version of SCOM which will expire on the third of August, in six months.
To avoid the same disillusion as today, I decided to set my license key immediately with the following command:

Set-SCOMLicense -ProductId '?????-?????-?????-?????-?????'

b2ap3_thumbnail_Scom_SetLicense5.jpg

A confirmation is requested before the license key is validated.
I can now run my first query a second time to check whether the license key is registered:

b2ap3_thumbnail_Scom_SetLicense6.jpg

Now, the license type is Retail and the expiration date is further away than I will ever live ;-)
I can also check the Help/About menu of SCOM directly to see the same information.

b2ap3_thumbnail_Scom_SetLicense7.jpg
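If you want to script that check, a one-liner sketch (assuming the shell is still connected to the same management group) would be:

(Get-SCOMManagementGroup).SkuForLicense    # should now return 'Retail'

Note that Set-SCOMLicense needs an elevated Operations Manager Shell (Run as Administrator); if the license type does not refresh immediately, restarting the System Center Data Access service is the commonly cited remedy.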

See you!

Webcast Tomorrow: Delivering Next-Gen Digital Experiences

WebCenter Team - Wed, 2015-02-11 07:27

Becoming a digital business is imperative for all organizations that wish to deliver the next wave of revenue growth, service excellence, and business efficiency. Today’s enterprises need to connect “experiences” to outcomes, encompassing the entire customer engagement lifecycle. Line-of-business (LOB) and IT leaders have come to agree on the key business priorities: to grow revenue, acquire and retain customers, and improve customer satisfaction—all while reducing costs and minimizing risk. It’s widely understood that the digital experience (DX) has become the cornerstone for all brand experiences. It’s crucial that organizations get this right since the stakes for getting it wrong are extremely high. 

Think about all the twists and turns that the average customer takes on his or her journey with your brand—all the touchpoints including traditional advertising, social marketing, mobile apps, and website interactions. Collectively, the experiences on this digital journey should connect customers with your brand, products, and services based on your knowledge and insight into their personas and online behaviors. 

In short, the engagement vision of the organization has to match the reality of what customers experience. That’s why innovative marketers rely on digital experience and engagement technologies to engage their audiences, launch new products, establish comprehensive services, and develop new and compelling business models. Marketers have learned how to empower their audiences—including customers, citizens, employees, suppliers, and partners—as well as how to optimize their business operations based on industry-leading digital experiences.

How do you deliver an experience that is frictionless and reflects your understanding of who your customers and audiences are so you can meet their needs, secure their loyalty, and effectively sell your products and services to them? You do it by eliminating information silos and fragmented customer engagement tools by creating a single digital experience platform that can be leveraged across all marketing functions and used to connect marketing to the rest of the enterprise. Digital experiences that are informed by data unify the customer experience and help you build a base of engaged, loyal advocates. 

We invite you to join us tomorrow for a webcast "Delivering Next-Gen Digital Experiences", where Chris Preston will discuss and provide an in-depth look at technologies that enable IT leaders to connect digital customer experiences to business outcomes. Register today!

Oracle Corporation Digital Strategies For Customer Engagement Growth

Free In-Memory Column Store Workshop with Maria Colgan

Marco Gralike - Wed, 2015-02-11 03:50
I am very proud to be able to announce a free workshop on Friday the…

What Is That Light-Green Oracle Database CPU Wait Time?

This page has been permanently moved. Please CLICK HERE to be redirected.

Thanks, Craig.
What Really Is That Light-Green Oracle Database CPU Wait Time?

Have you ever wondered what that light-green "cpu wait time" really means in Oracle Enterprise Manager? It's what I call the "gap" time. The "gap" time is the "missing" or "leftover" time when DB Time does not equal DB CPU (foreground process CPU consumption) plus the non-idle wait time. And it happens more often than you might think.
If you have ever noticed that the database time seems too large, then you need to read this article. And, if you really want to know what the light-green "cpu wait time" in your OEM charts is, then you need to read this article. It's that good.

If you're serious about Oracle performance tuning and analysis, you'll want to know I just posted my complete 2015 public training schedule. It's on the main OraPub.com page HERE. Remember, alumni receive a 50% discount...crazy I know.
My Experiment Shows...
My experiment shows a strong relationship between the "gap" time and operating system CPU utilization. This means that a significant portion of the "gap" time is Oracle foreground processes sitting in the CPU run queue, ready to consume CPU. This CPU run queue time is not part of DB CPU, but it is part of DB Time. So, when the CPU run queue time increases, so does DB Time, and so does the "gap" time. And I have the data to show it! And you can run the same experiment yourself.
Let me put this another way. Most of the DB Time "gap" is Oracle foreground processes waiting in the operating system CPU run queue so they can eventually and truly consume CPU.

This is really important: when an Oracle foreground process is not consuming CPU but is sitting in the CPU run queue, the Oracle Active Session History (ASH) facility records the session sample state as "CPU", and, if the Oracle process is a foreground process (not a background process), Oracle's time model records this time as DB Time but not DB CPU. So in both the ASH and time model cases, someone must do some math to calculate this "cpu wait time".

But that name... "cpu wait"!
CPU Wait Time Is A Lousy Name
"CPU wait time" is a lousy name. Why? Mainly because it has caused lots of confusion and speculation. It would more appropriately be called something like "cpu queue time." Three reasons come to mind.
First, wait time means something special to Oracle DBAs. To an Oracle DBA, anything associated with a "wait" should have a wait event name and a wait occurrence, and the time should be instrumented (i.e., measured) and recorded in the many wait interface related views, such as v$system_event or v$session.
Second, from an Oracle perspective the process is truly "on cpu" because the process is not "waiting." Remember, an Oracle session is always in one of two states: CPU or WAIT. There is no third choice. So the words "CPU Wait" are really confusing.
Third, from an OS perspective, or simply a non-Oracle perspective, the Oracle process is sitting in the CPU run queue.
I'm sure in some Oracle Corporation meeting the words "cpu wait" seemed like a great idea, but they have caused lots of confusion. And I'm sure they're here to stay.
What Does This "CPU WAIT" Look Like In OEM?
In OEM, the "cpu wait" is a light green color. I grabbed a publicly available screenshot off the internet and posted it below. Look familiar?

OK, so it's really easy to spot in OEM. And if you've seen it before you know EXACTLY what I'm referring to.
What Is CPU Wait Time?
First, let's review what we do know.
1. DB CPU is true Oracle foreground process CPU consumption, as reported by the OS through a system call such as getrusage.
2. CPU Wait time is derived; that is, somebody at Oracle wrote code to calculate the "cpu wait" time.
3. CPU Wait time is a lousy name because it causes lots of confusion.
4. CPU Wait time is shown in OEM as a light green color. DB CPU is shown as a dark/normal green color.
Second, I need to define what I'll call the DB Time "gap." This is not an error, and I am not implying something is wrong with database time, that it's not useful, or anything like that. All I am saying is that sometimes DB Time does not equal DB CPU plus the non-idle wait time. Let's put that in a formula:
DB Time = DB CPU + non Idle Wait Time + gap
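Rearranged, the gap is simply DB Time minus the two instrumented pieces. A minimal sketch of that arithmetic against the standard v$ views (an illustration, not the experiment's actual collection script; v$sys_time_model values and time_waited_micro are both in microseconds):

select (select value from v$sys_time_model where stat_name = 'DB time')
     - (select value from v$sys_time_model where stat_name = 'DB CPU')
     - (select sum(time_waited_micro) from v$system_event
         where wait_class != 'Idle') as gap_time_micro
from dual;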
Really, What Is CPU Wait Time?
Now I'm ready to answer the question, "What is CPU WAIT time?" Here is the answer stated multiple ways.
"CPU Wait" time is Oracle foreground process OS CPU run queue time.
I ran an experiment (detailed below) and as the OS CPU utilization increased, so did the DB Time "gap" implying that the gap is CPU run queue time or at least a significant part of it.
I ran an experiment and there was a strong correlation between OS CPU utilization and the DB Time "gap" implying that the gap is CPU run queue time.
I ran an experiment and using queuing theory I was able to predict the "gap" time implying that the gap is CPU run queue time. (Whoops... sorry. That's what I'll present in my next post!)
So I'm very comfortable stating that when DB Time is greater than Oracle process CPU consumption plus the non-idle wait time, it's probably the result of Oracle foreground process CPU run queue time.

Yes, there could be some math problems on Oracle's side; there could be uninstrumented time (for sure, it's happened before); the operating system could be reporting bogus values; or there could be a host of other potential issues. But unless there is an obviously wrong value, I'm sticking with the experimental evidence.
Now I'm going to show the experimental evidence, that is, that the DB Time "gap" correlates with the OS CPU utilization.
Let The Data Drive Our Understanding
You can download all the data collection scripts, raw experimental data, Mathematica notepad files, graphic files, etc HERE in a single zip file.

You should be able to run the experiment on any Linux Oracle test system. All you need is a logical IO load, and for that I used my free opload tool, which you can download HERE.
The experiment placed an increasing logical IO load on a Linux Oracle 12c system until the operating system CPU utilization exceeded 90%. The load was increased 18 times. During each of the 18 loads, I gathered 31 three-minute samples. Each sample contains busy time (v$osstat), idle time (v$osstat), logical IO (v$sysstat "session logical reads"), non-idle wait time (v$system_event where wait_class != 'Idle'), DB CPU (v$sys_time_model), background cpu time (v$sys_time_model), database time (v$sys_time_model DB time) and the sample time (dual table current_timestamp).
The CPU utilization was calculated using the "busy idle" method that I blog about HERE. This method is detailed in my Utilization On Steroids online video seminar.
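The arithmetic behind that method is simply busy/(busy+idle). A minimal illustrative query against v$osstat (BUSY_TIME and IDLE_TIME are cumulative centisecond counters, so a real collection differences two snapshots rather than using the raw totals):

select b.value / (b.value + i.value) as cpu_utilization
  from (select value from v$osstat where stat_name = 'BUSY_TIME') b,
       (select value from v$osstat where stat_name = 'IDLE_TIME') i;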
The workload is defined as the logical IOs per second, lio/s.
Below is a table summarizing the experimental data. The times shown are averages. If you look at the actual raw experimental data contained in the analysis pack, you'll notice the data is very consistent. This is not surprising, since the load I placed should produce a very consistent workload.
Do you see the gaps? Look closely at load 18. The DB Time is 8891.3 seconds. But the sum of DB CPU (996.8 seconds) and the non-idle wait time (2719.2 seconds) is only 3716.0. Yet DB Time is 8891.3. So the "gap" is 5175.3, which is DB Time (8891.3) minus DB CPU (996.8) minus the non-idle wait time (2719.2).

Note: Loads 11 and 12 were excluded because of a problem with my data collection. Sorry.

While we can numerically see the DB Time "gap" increase as the CPU utilization increases, check out the graphic in the next section!

The Correlation Between CPU Utilization And DB Time Gap
We can numerically and visually see that as the CPU utilization increases, so does the DB Time "gap." But is there a strong mathematical correlation? To determine this, I used all the experimental samples (except loads 11 and 12). Because there were 17 different workloads and I gathered 31 samples for each, the correlation comprises something like 527 samples. A pretty good sample set, I'd say.

The correlation coefficient is a strong 0.891. The strongest is 1.0 and the weakest is 0.

Graphically, here is the scatterplot showing the relationship between the CPU utilization and the DB Time "gap."

Don't expect the DB Time "gap" and OS CPU utilization correlation to be perfect. Remember that DB Time does not include Oracle background process CPU consumption, yet that CPU time is obviously part of the OS CPU utilization.

Summary
My experiment indicated the light-green "CPU wait time" is primarily Oracle foreground process operating system CPU run queue time. This is the DB Time "gap" time.

My experiment also showed the "gap" time is highly correlated with CPU utilization; that is, as the CPU utilization increases, so does the "gap" time.

Oracle Database instrumentation bugs, or a host of other potential problems, can also affect the "gap" time.

If you want a more complete and detailed DB Time formula, it would be this:

DB Time = DB CPU + Non Idle Wait Time + gap time

In my next post, I'll show you how to calculate the gap time based on queuing theory!

Thanks for reading!

Craig.


Categories: DBA Blogs

Public Appearances 2015

Tanel Poder - Tue, 2015-02-10 16:14

Here’s where I’ll hang out in the following months:

11-12 Feb 2015: IOUG Exadata SIG Virtual Conference (free online event)

  • Presentation: Exadata Performance: Latest Improvements and Less Known Features
  • It’s a free online event, so sign up here

18-19 Feb 2015: RMOUG Training Days (in Denver)

  • I won’t speak there this year, but plan to hang out on Wednesday evening and drink beer
  • More info here

1-5 March 2015: Hotsos Symposium 2015

31 May – 2 June 2015: Enkitec E4

  • Even more awesome Exadata (and now also Hadoop) content there!
  • I plan to speak there again, about Exadata performance and/or integrating Oracle databases with Hadoop
  • More info here

Advanced Oracle Troubleshooting v3.0 training

  • One of the reasons why I’ve been so quiet in recent months is that I’ve been rebuilding my entire Advanced Oracle Troubleshooting training material from ground up.
  • This new seminar focuses on systematic Oracle troubleshooting and internals of database versions all the way to Oracle 12c.
  • I will launch the AOT seminar v3.0 in early March – you can already register your interest here!

 


Security via policies

Yann Neuhaus - Tue, 2015-02-10 10:15

 

A few weeks ago, I presented a session on security via policies at "Les journées SQL Server 2014", organized by the French SQL Server User Group (GUSS) in Paris.

b2ap3_thumbnail_presentation.JPG

Data Integration Tips: ODI – Use Scenarios in packages

Rittman Mead Consulting - Tue, 2015-02-10 07:16

 

Here is another question I often hear during ODI bootcamps or read in the ODI space on OTN:
In a package, should I directly use mappings (or interfaces prior to 12c) or is it better to use the scenarios generated from these mappings?

An ODI package is a workflow used to sequence the different steps of execution of a data integration process. Some of these steps might change the value of a variable, evaluate a variable, or perform an administrative task like sending an email or retrieving a file from an FTP server. But the core step is the execution of a mapping, and this can be done in two different ways.

 

Direct Use

It is possible to directly drag and drop a mapping in the package.

Mapping in a Package

In this example, the mapping M_FACT_SALES_10 is executed directly within the package, in the same session. This means that the values of the variables will be the same as the ones in the package.

Execution of a package holding a mapping

 

If there are several mappings, they will be executed sequentially; there is no parallelism here. (Damn, I will never get the double L in the right place in that word… Thanks, autocorrect!)

The code for the mapping is generated using the current state of the mapping in the work repository, including any changes made since the package was created. But if a scenario is created for the package PCK_FACT_LOAD_MAP, it will need to be regenerated – or a new version will have to be generated – to take any change to M_FACT_SALES_10 into account.

One good point for this approach is that we can build a data integration process that will be entirely rolled back in case of failure. If one of the steps fails, the data will be exactly as it was before the execution started. This can be done by using the same transaction for all the mappings and disabling the commit option for all of them but the last one. So it's only when everything succeeds that the changes will be committed and visible to everyone. Actually, it's not even required to commit on the last step, as ODI will issue a commit at the end of a successful session anyway. This technique works, of course, only if you don't have any DDL statements on the target tables. You will find more information about it in this nice blog post from Bhabani Rajan: Transaction Control in ODI. Thanks to David Allan for reminding me of this!

 

Using Scenarios

But you can also generate a scenario and then drag and drop it in a package.

Scenario in a package

Sessions and variables

In this case, the execution of the mapping will not be done within the package session. Instead, one step of this session will use the OdiStartScen command to trigger a new session to execute the scenario.

Execution of a package holding a scenario

We can see here that the second step of the package session (401) has only one task, which runs the command to start a new session. The only step in the new session (402) is the mapping, and it has the same three tasks as in the previous example. Thanks to the fact that this is a different session, you could choose to execute it using another context or agent. My colleague Michael brought an interesting use case to me: when one step of your package must extract data from a file on a file server that has its own standalone agent installed to avoid permission issues, you can specify that agent in the scenario step of your package. So the session for that scenario will be started on that standalone agent, while all the other sessions will use the execution agent of the parent session.

 

But what about variables configured to keep no history? As this is a different session, their values are lost outside the scope of one session. We therefore need to pass them as startup parameters to the scenario of the mapping.

Passing variables as startup parameters

To do this, I go to the Command tab on the scenario step and add my variables there with the syntax

"-<PROJECT_CODE>.<VARIABLE_NAME>=<VALUE_TO_ASSIGN>"

where <VALUE_TO_ASSIGN> can be the value of the variable in the current session itself (401 in our case); an illustrative parameter follows.
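For instance, with a hypothetical project code and variable name, passing the parent session's current value would look like this (the # prefix substitutes the variable's value in the current session):

"-MY_PROJECT.V_LOAD_DATE=#MY_PROJECT.V_LOAD_DATE"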

 Code executed

The scenario step name was originally "Execution of the Scenario M_FACT_SALES_10 version 001" and I slightly modified it to remove the mention of a version number. As I always want to execute the last version of the scenario, I also changed the version in the parameters and set it to -1 instead of 001. There is a small bug in the user interface: if you click somewhere else, it will change back to 001. The workaround is to press the Return key (or Enter) after typing your value.

So by using a scenario, you can choose which version of the scenario you want to execute: either a specific version (e.g. 003) or the last one, using the value -1. It also means that you won't break anything if the package is executed while you are working on the mapping. The package will still use the frozen code of the scenario even if you have changed the mapping.

If you create a scenario for the package PCK_FACT_LOAD_SCEN, there is no need to regenerate it when a new scenario is (re-)generated for the mapping. By using -1, it will reference the newly generated scenario.

Asynchronous Execution

Another advantage of using scenarios is that it supports parallelism (Yes, I got it right!!).

Asynchronous scenario executions

Here I set the "Synchronous / Asynchronous" option to "Asynchronous Mode" for my two scenarios, so the two sessions will start at the same time. By adding an OdiWaitForChildSessions step, I can wait for the end of all the sessions before doing anything else. This step also defines in which case you want to report an error. By default, it's when all the child sessions are in error. But I usually change that parameter to zero, so any failure will cause my package execution to fail as well.

getPrevStepLog()

Just a short note: be careful when using the method getPrevStepLog() from the substitution API in a step after executing a scenario. That method retrieves information about the previous step in the current session, i.e., about the OdiStartScen command execution, and not about the execution of the scenario itself.

 

Summary

Here is a small recap table comparing both approaches:

comparison table

In conclusion, development is generally more robust when using scenarios in packages. There is more control over what is executed and over the parallelism. The only good reason to use mappings directly is to keep everything within the same transaction.

Also keep in mind that Load Plans might be a better alternative to packages, with better parallelism, error handling and restartability, unless you need loops or a persistent session… A comparison between Load Plans and Packages can be found in the Oracle blog post introducing Load Plans.

 

More tips and core principles demystification coming soon so stay tuned. And follow @mRainey, @markrittman and myself – @JeromeFr – on twitter to be informed of anything happening in the ODI world.

Categories: BI & Warehousing

Some changes to be aware of, as Oracle Application Express 5 nears...

Joel Kallman - Tue, 2015-02-10 06:25
As the release of Oracle Application Express 5 gets closer, I thought it's worth pointing out some changes that customers should be aware of, and how an upgrade to Oracle Application Express 5 could impact their existing applications.


  1. As Trent Schafer (@trentschafer) noted in his latest blog post, "Reset an Interactive Report (IR)", there have been numerous customer discussions and blog posts which show how to directly use the gReport JavaScript object to manipulate an Interactive Report.  The problem?  With the massive rewrite to support multiple Interactive Reports in Oracle Application Express 5, gReport no longer exists.  And as Trent astutely points out, gReport isn't documented.  And that's the cautionary tale here - if it's not documented, it's not considered supported or available for use and is subject to change, effectively without notice.  While I appreciate the inventiveness of others to do amazing things in their applications, and share that knowledge with the Oracle APEX community, you must be cautious in what you adopt.
  2. In the rewrite of Interactive Reports, the IR component was completely revamped from top to bottom.  The markup used for IRs in APEX 5 is dramatically improved:  fewer tables, much smaller and more efficient markup, better accessibility, etc.  However, if you've also followed this blog post from Shakeeb Rahman (@shakeeb) from 2010, and directly overridden the CSS classes used in Interactive Reports, that will no longer work in IRs in APEX 5.  Your custom styling using these classes will not have any effect.
  3. As the Oracle Application Express 5 Beta documentation enumerates, there is a modest list of deprecated features and a very small list of features which are no longer supported.  "Deprecated" means "will still work in APEX 5, but will go away in a future release of APEX, most likely the next major release of APEX".  In some cases, like the deprecated page attributes for example, if you have existing applications that use these attributes, they will still function as in earlier releases of APEX, but you won't have the ability to set them for new pages.  Personally, I'm most eager to get rid of all uses of APEX_PLSQL_JOB - customers should use SYS.DBMS_SCHEDULER - it's far richer in functionality (see the sketch after this list).
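As a hedged illustration of the DBMS_SCHEDULER route (the job name, schedule and PL/SQL block here are all made up):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_CLEANUP',              -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN my_cleanup_proc; END;',  -- hypothetical procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',          -- run daily at 02:00
    enabled         => TRUE);
END;
/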
Please understand that we have carefully considered all of these decisions - even labored for days, in some cases.  And while some of these changes could be disruptive for existing customers, especially if you've used something that is internal and not documented, we would rather have the APEX Community be made aware of these changes up front, rather than be silent about it and hope for the best.

Announcement: Presentation of the Best Thesis Award

Jean-Philippe Pinte - Tue, 2015-02-10 03:12
Oracle France awarded M. Gérald Patterson (ISEP / 2nd intake of the Master Cloud Computing) the Oracle trophy for his thesis entitled Improving Cloud Computing availability with Openstack Enhanced Performance.
http://www.cloud-computing-formation.fr/rencontre/


It was also an opportunity to present the … to the ISEP students.

Exception from executeScript in Alfresco Share

Yann Neuhaus - Tue, 2015-02-10 03:00


I haven't had the opportunity to post a new entry about Alfresco on this blog for a long time, so let me fix that! In this blog entry, I will talk about a bug I encountered a few months ago. I resolved it but had, so far, not found the time to share my knowledge with you.
 

I. Description of the issue


This bug appears no matter which version of Alfresco is used and regardless of the components that are installed, and so on... So what is this bug? In fact, this bug doesn't block anything. Actually, it has no impact on the daily work; however, it fills up the Alfresco log files very quickly, which can be problematic if you are an administrator searching for information in these log files! Indeed, each time a user accesses a page of Alfresco, between 10 and 50 Java exceptions are generated (always the same ones); this creates gigabytes of log files in minutes/hours. Here is the exception I'm talking about:
 

...
Jul 08, 2014 10:42:16 AM org.apache.catalina.startup.Catalina start
INFO: Server startup in 95898 ms
2013-07-08 10:45:02,300 INFO [web.site.EditionInterceptor] [http-apr-8080-exec-1] Successfully retrieved license information from Alfresco.
2013-07-08 10:45:02,417 ERROR [extensions.webscripts.AbstractRuntime] [http-apr-8080-exec-3] Exception from executeScript - redirecting to status template error: 06080001 Unknown method specified to remote store API: has
  org.springframework.extensions.webscripts.WebScriptException: 06080001 Unknown method specified to remote store API: has
  at org.alfresco.repo.web.scripts.bean.BaseRemoteStore.execute(BaseRemoteStore.java:326)
  at org.alfresco.repo.web.scripts.RepositoryContainer$3.execute(RepositoryContainer.java:426)
  at org.alfresco.repo.transaction.RetryingTransactionHelper.doInTransaction(RetryingTransactionHelper.java:433)
  at org.alfresco.repo.web.scripts.RepositoryContainer.transactionedExecute(RepositoryContainer.java:495)
  at org.alfresco.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:533)
  at org.alfresco.repo.web.scripts.RepositoryContainer.executeScript(RepositoryContainer.java:276)
  at org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:377)
  at org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:209)
  at org.springframework.extensions.webscripts.servlet.WebScriptServlet.service(WebScriptServlet.java:118)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
  at org.alfresco.web.app.servlet.GlobalLocalizationFilter.doFilter(GlobalLocalizationFilter.java:61)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
  at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
  at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002)
  at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)
  at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1813)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
  at java.lang.Thread.run(Thread.java:722)
...

 

The first time I encountered this exception, it was on an Alfresco v4.x installation that had been up and running for some years with a lot of extensions/customizations (mostly .AMP files). If you need more information about AMPs (Alfresco Module Packages), they are the best way to extend Alfresco. Please take a look at some of my old blogs for information about how to create this kind of stuff!
 

I saw this exception on more than one Alfresco server and, because of that, I first thought that it came from an AMP... Therefore, I went through all these extensions, but despite hours of research, I found nothing.


II. How to replicate the issue


I tried to replicate this issue with a fresh installation of Alfresco, same version, same extensions, and so on... but at first I wasn't able to do so. Finally, one day, I found out that there was something strange about the Alfresco servers on which the Java exceptions appeared: the "Sites" folder wasn't there. Indeed, after installing a new Alfresco server, deleting the default site ("Sample: Web Site Design Project") and deleting the "Sites" folder from the Repository browser, the exception appeared magically in the Alfresco log files... Now that we know where this issue comes from, it's quite easy to replicate it:

  1. Install a new Alfresco server with the same version from the bundle executable/binaries (quicker)
  2. Start the Alfresco server and open the Alfresco Share UI (http://localhost:8080/share) using the admin account
  3. Navigate to the Sites finder (http://localhost:8080/share/page/site-finder)
  4. Click on "Search" to display all existing sites (only the default one is present: "Sample: Web Site Design Project")
  5. Click on "Delete" to delete the swsdp site
  6. Navigate to the Repository (http://localhost:8080/share/page/repository)
  7. Remove the "Sites" folder on the Repository page (/Company Home/Sites)
  8. Refresh the page and take a look at your logs


After doing that, you should be able to see a lot of exceptions like the one described above. Issue replicated!


III. How to solve the issue


Being able to replicate an issue is good, but knowing how to solve it is better!

If the "Sites" folder was deleted in the first place, it was because the Alfresco Sites weren't used at all. Therefore, the simplest solution to this issue is to get the "Sites" folder back. But it's not that easy, because this folder has a particular type, some particular attributes, and so on... You can't just create a new folder, rename it "Sites" and hope that it will work ;). Starting from here, what you can do to solve this issue is:

  1. Restore the "Sites" folder using a backup
  2. Replicate the "Sites" folder from another Alfresco server


If you don't have any way to restore the "Sites" folder, as was my case (after some months, no backup left), here is what you can do to fix the issue:
 

Let's say that the Alfresco server where the "Sites" folder doesn't exist anymore is named "A". Please take a look at the end of this blog entry for some screenshots that may help you.

  1. Install a new Alfresco server with the same version as "A" from the bundle executable/binaries. This can be on your local machine. Let's name this Alfresco server "B"
  2. Start the Alfresco server "B" and open the Alfresco Share UI (http://localhost:8080/share) using the admin account
  3. Navigate to the Sites finder (http://localhost:8080/share/page/site-finder)
  4. Click on "Search" to display all existing sites (only the default one is present: "Sample: Web Site Design Project")
  5. Click on "Delete" to delete the swsdp site
  6. Navigate to the Repository (http://localhost:8080/share/page/repository) (DON'T delete the "Sites" folder)
  7. Configure a replication target on "B" to point to "A" (take a look at the Alfresco doc: http://docs.alfresco.com/4.1/tasks/adminconsole-replication-transfertarget.html)
  8. Enable the replication:
    1. Add the Alfresco Share url and the RepositoryId of "A" into the share-config-custom.xml file of "B" (take a look at the Alfresco doc: http://docs.alfresco.com/4.1/tasks/adminconsole-replication-lockedcontent.html)
    2. Add the "replication.enabled=true" into the alfresco-global.properties file of "B" (take a look at the Alfresco doc: http://docs.alfresco.com/4.1/tasks/replication-share.html)
    3. Restart "B" for the changes to be taken into account by Alfresco
  9. Configure a replication job on "B" to replicate the "Sites" folder from "B" to "A" (http://localhost:8080/share/page/console/admin-console/replication-jobs)
  10. Run the replication job on "B"


Configure the Replication Target on B (step 7 - create a folder named "TransfertToA" and edit its permissions):

CreateReplicationTarget.png


Find the Repository ID of A (step 8.1):

FindRepositoryId.png


Configure the share-config-custom.xml file of B (step 8.1):

EnableTheReplication.png


Once the replication job has run on "B", the exceptions will disappear from the log files of "A". I didn't dig deeper, so I don't really know if you can create new sites using this newly imported "Sites" folder, but if you removed this folder in the first place, I would guess that you don't really need it ;).
 

Thank you for reading this post and I hope it will help. If you need more information, don't hesitate to leave a little comment below. See you soon for more blogs!
 

 

A Sneak Preview of e-Literate TV at ELI

Michael Feldstein - Tue, 2015-02-10 00:58

By Michael FeldsteinMore Posts (1013)

Phil and I will be chatting with Malcolm Brown and Veronica Diaz about our upcoming e-Literate TV series on personalized learning in a featured session at ELI tomorrow. We'll be previewing short segments of video case studies that we've done on an elite New England liberal arts college, an urban community college, and a large public university. Audience participation in the discussion is definitely encouraged. It will be tomorrow at 11:45 AM in California C for those of you who are here at the conference, and also webcast for those of you registered for the virtual conference.

We hope to see you there.

The post A Sneak Preview of e-Literate TV at ELI appeared first on e-Literate.

Fronting Oracle Maven Repository with Artifactory

Steve Button - Mon, 2015-02-09 22:44
The JFrog team announced this week the release of Artifactory 3.5.1, a minor update that now works with the Oracle Maven Repository.

http://www.jfrog.com/confluence/display/RTF/Artifactory+3.5.1

I spent a little while yesterday having a look at it, working through the configuration of a remote repository and testing it with a maven project to see how it worked.

Once I'd downloaded it and started it up -- much love for simple and obvious bin/*.sh scripts -- it was a very simple process:

1. Since we live behind a firewall, first add a proxy configuration to point at our proxy server.



2. Add a new remote repository and point it at the Oracle Maven Repository, specifying its URL and using my OTN credentials as username and password.


The Artifactory 3.5.1 documentation states that the Advanced Settings > Lenient host authentication and Enable cookie management options must be checked when accessing the Oracle Maven Repository.


The Test button is handy to verify that the server settings have been entered correctly.

3. Use the Home tab > Client Settings > Maven Settings link to generate and save a settings.xml file that uses the Artifactory server; a rough sketch of what it generates follows.
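The generated file is essentially a mirror entry that routes requests through the local Artifactory instance; a hedged approximation of its core (the repository key, host and port depend on your installation):

<settings>
  <mirrors>
    <mirror>
      <id>artifactory</id>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/artifactory/repo</url>
    </mirror>
  </mirrors>
</settings>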



With the repository running and configured and the settings.xml saved, it's then possible to try it out with an existing Maven project such as https://github.com/buttso/weblogic-with-arquillian.

I also nuked my local repository to force/verify that the dependencies were fetched through the specified Artifactory server.

$ rm -fr ~/.m2/repository/com/oracle
$ mvn -s artifactory-settings.xml test

Viewing the output of the mvn process and the running Artifactory server, you can see that Maven is downloading dependencies from http://localhost:8081/artifactory and, correspondingly, Artifactory is downloading the requested artifacts from https://maven.oracle.com.


Once the maven process has completed and all the requested artifacts have been downloaded, Artifactory will have cached them locally for future use.
 
Using the Search functionality of the Artifactory Web UI you can search for weblogic artifacts.


Using the Repository Browser functionality of the Artifactory Web UI you can view and navigate around the contents of the remote Oracle Maven Repository.

Nice one, JFrog Artifactory team - thanks for the quick support of our repository.

One further thing I'd look at doing is enabling the Configure Passwords Encryption option in the Security settings to encrypt your OTN password, so that it's not stored in cleartext in the etc/artifactory.config.latest.xml file.


Goldengate – start replicat ATSCN or AFTERSCN ?

Michael Dinh - Mon, 2015-02-09 21:36

When using GoldenGate to instantiate a target database from an Oracle source database, the replicat process can be started to coincide with the extract, based on the method used for instantiation, e.g. RMAN or Data Pump.

ATSCN is used to start the replicat if RMAN was used to instantiate the target.
From the Database Backup and Recovery Reference: UNTIL SCN specifies an SCN as an upper limit.
RMAN restores or recovers up to, but not including, the specified SCN.

AFTERSCN is used to start the replicat if Data Pump was used to instantiate the target.
The export operation performed is consistent as of FLASHBACK_SCN.
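For concreteness, note that in GGSCI the two options are spelled ATCSN and AFTERCSN (CSN = commit sequence number, which maps to the SCN for an Oracle source). A sketch with a made-up group name and SCN:

START REPLICAT rep1, ATCSN 6488359      -- target restored with RMAN UNTIL SCN 6488359
START REPLICAT rep1, AFTERCSN 6488359   -- target exported with expdp FLASHBACK_SCN=6488359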

Hope this helps clear up when to use ATSCN versus AFTERSCN.

Reference:
Oracle GoldenGate Best Practices: Instantiation from an Oracle Source Database –  Doc ID 1276058.1


JavaScript Stored Procedures and Node.js Applications with Oracle Database 12c

Kuassi Mensah - Mon, 2015-02-09 20:44
JavaScript Stored Procedures and Node.js Applications with Oracle Database 12c

Kuassi Mensah
db360.blogspot.com | @kmensah | https://www.linkedin.com/in/kmensah
Keywords: JavaScript, Node.js, Java, JVM, Nashorn, Avatar.js

Introduction

Node.js and server-side JavaScript are hot and trendy; per the latest "RedMonk Programming Languages Rankings"[1], JavaScript and Java are the top two programming languages. For most developers building modern Web, mobile, and cloud based applications, the ability to use the same language across all tiers (client, middle, and database) feels like Nirvana, but the IT landscape is not a green field; enterprises have invested a lot in Java (or other platforms for that matter), therefore the integration of JavaScript with it becomes imperative. WebSockets and RESTful services enable loose integration; however, the advent of JavaScript engines on the JVM (Rhino, Nashorn, DynJS) and of Node.js APIs on the JVM (Avatar.js, Nodyn, Trireme) makes it possible and very tempting to co-locate Java and Node applications on the same JVM.
This paper describes the steps for running JavaScript stored procedures[2] directly on the embedded JVM in Oracle database 12c, and the steps for running Node.js applications on the JVM against Oracle database 12c, using Avatar.js, JDBC and UCP.
JavaScript and the Evolution of Web Applications Architecture

At the beginning, once upon a time, long ago, JavaScript was a browser-only thing, while business logic, back-end services and even presentation were handled/produced in middle tiers using Java or other platforms and frameworks. Then JavaScript engines (Google's V8, Rhino) left the browsers and gave birth to server-side JavaScript frameworks and Node.js.

Node Programming Model

Node.js and similar frameworks bring ease of development, rapid prototyping, and an event-driven, non-blocking programming model[3] to JavaScript. This model is praised for its scalability and good-enough performance; however, unlike Java, Node lacks standardization in many areas such as database access (i.e., a JDBC equivalent) and may lead, without discipline, to the so-called "callback hell[4]".
Nonetheless, Node is popular and has a vibrant community and a large set of frameworks[5].

Node Impact on Web Applications Architecture

With the advent of Node, REST and WebSockets, the architecture of Web applications has evolved into (i) plain JavaScript on browsers (mobiles, tablets, and desktops); (ii) server-side JavaScript modules (i.e., Node.js, ORM frameworks) interacting with Java business logic and databases. The new proposal for Web applications architecture is the integration of Node.js and Java on the JVM. Let's discuss the enabling technologies, the JavaScript engine on the JVM and the Node API on the JVM, and describe typical use cases with Oracle database 12c.

JavaScript on the JVM

Why implement a JavaScript engine and run JavaScript on the JVM? For starters, I highly recommend Mark Swartz's http://moduscreate.com/javascript-and-the-jvm/ and Steve Yegge's http://steve-yegge.blogspot.com/2008/06/rhinos-and-tigers.html blog posts. In summary, the JVM brings (i) portability; (ii) manageability; (iii) Java tools; (iv) Java libraries/technologies such as JDBC and Hadoop; and (v) the preservation of investments in Java.
There are several implementations/projects of Java based JavaScript engines, including Rhino, DynJS and Nashorn.

Rhino

Rhino was the first JavaScript engine entirely written in Java; started at Netscape in 1997, it then became an open-source Mozilla project[6]. It was for quite some time the default JavaScript engine in Java SE, now replaced by Nashorn in Java SE 8.

DynJS

DynJS is another open-source JavaScript engine for the JVM. Here is the project homepage: http://dynjs.org/.

Nashorn

Introduced in Java 7 but "production" in Java 8[7], the goal of project Nashorn (JEP 174) is to enhance the performance and security of the Rhino JavaScript engine on the JVM. It integrates with the javax.script API (JSR 223) and allows seamless interaction between Java and JavaScript (i.e., invoking Nashorn from Java and invoking Java from Nashorn).

To illustrate the reach of Nashorn on the JVM and the interaction between Java and JavaScript, let's run some JavaScript directly on the database-embedded JVM in Oracle database 12c.

JavaScript Stored Procedures with Oracle Database 12c Using Nashorn
Why would anyone run JavaScript in the database? For the same reasons you'd run Java in Oracle database. Then you might ask: why run Java in the database in the first place? As discussed in my book[8], the primary motivations are: (i) reuse skills and code, i.e., which programming languages are your new hires knowledgeable of or willing to learn; (ii) avoid data shipping[9], i.e., in-place processing of billions of data items/documents; (iii) combine SQL with foreign libraries to achieve new database capability, thereby extending SQL and the reach of the RDBMS, e.g., Web Services callout, in-database container for Hadoop[10]. Some developers/architects prefer a tight separation between the RDBMS and applications, therefore no programming language in the database[11], but there are many pragmatic developers/architects who run code near the data whenever it is more efficient than shipping data to external infrastructure.
Co-locating functions with data on the same compute engine is shared by many programming models such as Hadoop. With the surge and prevalence of Cloud computing, RESTful service based architecture is the new norm. Data-bound services can be secured and protected by the REST infrastructure, running outside the RDBMS. Typical use case: a JavaScript stored procedure service would process millions/billions of JSON documents in the Oracle database and return the result sets to the service invoker.
To conclude, running Java, JRuby, Python, JavaScript, Scala, or another programming language on the JVM in the database is a sound architectural choice. The best practices consist in: (i) partitioning applications into data-bound and compute-bound modules or services; (ii) running data-bound services in the database; (iii) understanding DEFINER's vs INVOKER's rights[12] and granting only the necessary privileges and/or permissions.

The Steps
The following steps implement a JavaScript stored procedure running in the Oracle database; they represent an enhancement over the ones presented at JavaOne and OOW 2014 -- which consisted in reading the JavaScript from the database file system; that approach required granting extra privileges to the database schema for reading from the RDBMS file system, something not recommended from a security perspective. Here is a safer approach:

1. Nashorn is part of Java 8, but early editions can be built for Java 7; the embedded Java VM in Oracle database 12c supports Java 6 (the default) or Java 7. For this proof of concept, install Oracle database 12c with Java SE 7.[13]

2. (i) Build a standard Nashorn.jar[14]; (ii) modify the Shell code to interpret the given script name as an OJVM resource; this consists mainly in invoking getResourceAsStream() on the current thread's context class loader; (iii) rebuild Nashorn.jar with the modified Shell.

3. Load the modified Nashorn jar into an Oracle database schema, e.g., HR:

loadjava -v -r -u hr/ nashorn.jar

4. Create a new dbms_javascript package for invoking Nashorn's Shell with a script name as parameter:
create or replace package dbms_javascript as
  procedure run(script varchar2);
end;
/
create or replace package body dbms_javascript as
  procedure run(script varchar2) as
  language java name 'com.oracle.nashorn.tools.Shell.main(java.lang.String[])';
end;
/
Then call dbms_javascript.run('myscript.js') from SQL, which will invoke the Nashorn Shell to execute the previously loaded myscript.js.

5. Create a custom role, which we will name NASHORN, as follows, connected as SYSTEM:
SQL> create role nashorn;
SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.lang.RuntimePermission', 'createClassLoader', '' );
SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.lang.RuntimePermission', 'getClassLoader', '' );
SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.util.logging.LoggingPermission', 'control', '' );

Best practice: insert those statements in a nash-role.sql file and run the script as SYSTEM.

6. Grant the NASHORN role created above to the HR schema as follows (connected as SYSTEM):

SQL> grant NASHORN to HR;

7. Insert the following JavaScript code in a file, e.g., database.js, stored on your client machine (i.e., a machine from which you will invoke loadjava as explained in the next step).
This script illustrates using JavaScript and Java together, as it uses the server-side JDBC driver to execute a PreparedStatement that retrieves the first and last names from the EMPLOYEES table.

var Driver = Packages.oracle.jdbc.OracleDriver;
var oracleDriver = new Driver();
var url = "jdbc:default:connection:";   // server-side JDBC driver
var query ="SELECT first_name, last_name from employees";
// Establish a JDBC connection
var connection = oracleDriver.defaultConnection();
// Prepare statement
var preparedStatement = connection.prepareStatement(query);
// execute Query
var resultSet = preparedStatement.executeQuery();
// display results
     while(resultSet.next()) {
     print(resultSet.getString(1) + "== " + resultSet.getString(2) + " " );
     }
// cleanup
resultSet.close();
preparedStatement.close();
connection.close();

8. Load database.js into the database as a Java resource (not a vanilla class):

loadjava -v -r -u hr/ database.js

9. To run the loaded script:

sqlplus hr/
SQL> set serveroutput on
SQL> call dbms_java.set_output(80000);
SQL> call dbms_javascript.run('database.js');

The Nashorn Shell reads the 'database.js' script, stored as a Java resource, from an internal table; the JavaScript in its turn invokes JDBC to execute a PreparedStatement, and the result set is displayed on the console. The message "ORA-29515: exit called from Java code with status 0" is due to the invocation of java.lang.Runtime.exitInternal; status 0 means a normal exit (i.e., no error). The fix is to remove that call from Nashorn.

Node.js on the JVM

As discussed earlier, Node.js is becoming the man-in-the-middle between Web application front ends and back-end legacy components, and since companies have invested a lot in Java, it is highly desirable to co-locate Node.js and Java components on the same JVM for better integration, thereby eliminating communication overhead. There are several projects re-implementing the Node.js APIs on the JVM, including: Avatar.js, Nodyn, and Trireme. This paper will only discuss Oracle's Avatar.js.

Project Avatar.js[15]

The goal of project Avatar.js is to furnish "Node.js on the JVM"; in other words, an implementation of the Node.js APIs which runs on top of Nashorn and enables the co-location of Node.js programs and Java components. It has been open-sourced by Oracle under a GPL license[16]. Many Node frameworks and/or applications have been certified to run unchanged, or slightly patched, on Avatar.js.
There are binary distributions for Oracle Enterprise Linux, Windows and MacOS (64-bit). These builds can be downloaded from https://maven.java.net/index.html#welcome. Search for avatar-js.jar and the platform-specific libavatar-js libraries (.dll, .so, .dylib). Get the latest, and rename the jar and the platform-specific native library accordingly. For example: on Linux, rename the library to avatar-js.so; on Windows, rename the dll to avatar-js.dll and add its location to your PATH (or use -Djava.library.path=).
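With the jar and native library in place, a Node script can then be launched on the JVM along these lines (paths and script name are illustrative):

java -Djava.library.path=lib -jar avatar-js.jar app.js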
RDBMSes in general, and Oracle database in particular, remain the most popular persistence engines, and there are RDBMS-specific Node drivers[17] as well as ORM frameworks. However, as we will demonstrate in the following section, with Avatar.js we can simply reuse existing Java APIs, including JDBC and UCP, for database access.
Node Programming with Oracle Database using Avatar.js, JDBC and UCP

The goal of this proof of concept is to illustrate the co-location of a Node.js application, the Avatar.js library, the Oracle JDBC driver and the Oracle Universal Connection Pool (UCP) on the same Java 8 VM. The sample application is a Node.js application which performs the following actions:

(i) Request a JDBC-Thin connection from the Java pool (UCP)
(ii) Create a PreparedStatement object for "SELECT FIRST_NAME, LAST_NAME FROM EMPLOYEES"
(iii) Execute the statement and return the ResultSet in a callback
(iv) Retrieve the rows and display them in a browser on port 4000
(v) Perform all the steps above in a non-blocking fashion – this is Node.js's raison d'être.

The demo also uses the Apache ab load generator to simulate concurrent users running the same application in the same/single JVM instance. For the Node application to scale in the absence of asynchronous JDBC APIs, we need to turn synchronous calls into non-blocking ones and retrieve the result set via a callback.

Turning Synchronous JDBC Calls into Non-Blocking Calls

We will use the following wrapper functions to turn any JDBC call into a non-blocking call, i.e., post the JDBC call to a thread pool and free up the Node event loop thread.

var makeExecutecallback = function(userCallback) {
 return function(name, args){
      ...
      userCallback(undefined, args[1]);
  }
}

function submit(task, callback, msg) {
  var handle = evtloop.acquire();           // keep the event loop alive while the task is pending
  var r = function() {                      // runnable wrapper executed on a pool thread
    try {
      var ret = task();
      evtloop.post(new EventType(msg, callback, null, ret));   // post the result back to the event loop
    } catch (e) {
      evtloop.post(new EventType(msg, callback, e, null));     // post the error back instead
    }
  };
  evtloop.submit(r);
}
Let's apply these wrapper functions to the executeQuery JDBC call, to illustrate the concept:

exports.connect = function(userCallback) {..}   // JDBC and UCP settings

Statement.prototype.executeQuery = function(query, userCallback) {
  var statement = this._statement;
  var task = function() {
    return statement.executeQuery(query);
  };
  submit(task, makeExecutecallback(userCallback), "jdbc.executeQuery");
}

Similarly, the same technique is applied to the other JDBC statement APIs:

Connection.prototype.getConnection = function() {…}
Connection.prototype.createStatement = function() {..}
Connection.prototype.prepareCall = function(storedprocedure) {..}
Statement.prototype.executeUpdate = function(query, userCallback) {..}

Returning the Query ResultSet through a Callback

The application code fragment hereafter shows how, for every HTTP request: (i) a connection is requested, (ii) the PreparedStatement is executed, and (iii) the result set is printed on port 4000.

...
var ConnProvider = require('./connprovider').ConnProvider;
var connProvider = new ConnProvider(function(err, connection){.. });

var server = http.createServer(function(request, response) {
  connProvider.getConn(function(name, data){..});
  connProvider.prepStat(function(resultset) {
    while (resultset.next()) {
      response.write(resultset.getString(1) + " --" + resultset.getString(2));
      response.write('<br>');
    }
    response.write('');
    response.end();
  });
});

server.listen(4000, '127.0.0.1');
Using Apache ab, we were able to scale to hundreds of simultaneous invocations of the Node application. Each instance grabs a Java connection from the Universal Connection Pool (UCP), executes the SQL statement through JDBC, then returns the result set via a callback on port 4000.

Conclusions

Through this paper, I discussed the rise of JavaScript for server-side programming and how Java supports that evolution; then, something we set out to demonstrate, I furnished step by step details for implementing and running JavaScript stored procedures in Oracle database 12c using Nashorn, as well as running Node.js applications using Avatar.js, Oracle JDBC and UCP against Oracle database 12c. As server-side JavaScript (typified by Node.js) gains in popularity, it will have to integrate with existing components (COBOL is still alive!!). Developers and architects will have to look into co-locating JavaScript with Java, across middle and database tiers.



[1] http://redmonk.com/sogrady/2015/01/14/language-rankings-1-15/
[2] I'll discuss the rationale for running programming languages in the database later in this paper.
[3] Requests for I/O and resource-intensive components run in separate processes, then invoke a callback in the main/single Node thread when done.
[4] http://callbackhell.com/
[5] Search the web for "Node.js frameworks"
[6] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino
[7] Performance being one of the most important aspects
[8] http://www.amazon.com/exec/obidos/ASIN/1555583296
[9] Rule of thumb: when processing more than ~20-25% of target data, do it in-place, where the data resides (i.e., function shipping).
[10] The in-database container for Hadoop is not available as of this writing.
[11] Other than the database's specific procedural language, e.g., Oracle's PL/SQL
[12] I discuss this in chapter 2 of my book; see also the Oracle database docs.
[13] See Multiple JDK Support in http://docs.oracle.com/database/121/JJDEV/E50793-03.pdf
[14] Oracle does not furnish a public download of Nashorn.jar for Java 7; search "Nashorn.jar for Java 7".
[15] https://avatar-js.java.net/
[16] https://avatar-js.java.net/license.html
[17] The upcoming Oracle Node.js driver was presented at OOW 2014.

EMEA Exadata, Manageability & Hardware Partner Forum

Oracle EMEA Exadata, Manageability, Servers & Storage Partner Community Forum ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Maven Repository - Viewing Contents in Eclipse

Steve Button - Mon, 2015-02-09 16:18
With the Oracle Maven Repository now accessible, one way to explore its contents is to use the Maven Repositories viewer feature available in most development tools. I've seen the repository contents displayed easily in NetBeans, so I decided to take a look at what it looks like in Eclipse as well.

I had to make a few minor setting changes to get it to work, so I decided to document them here. If you've gotten it to work with fewer setting changes, let me know!

As initial setup, I configured my local Maven environment to support access to the Oracle Maven Repository. This is documented at https://maven.oracle.com/doc.html. I also installed maven-3.2.5, which includes the updated Wagon module that supports authentication.

Next I downloaded and used the new network installer that the Oracle Eclipse team has published on OTN to install the latest version of Oracle Enterprise Pack for Eclipse.



This network installer lets developers select the version of Eclipse to install and the set of Oracle extensions -- WebLogic, GlassFish and other stuff -- to add to it.

Once Eclipse is installed, you can add the Maven Repository viewer by selecting Window > Show View > Other > Maven Repositories from the Eclipse toolbar.



I also added a Console > Maven viewer to see what was happening under the covers and arranged them so they were visible at the same time:


With the Maven views ready to go, expand the Global Repositories node. This will show Maven Central (plus any other repositories you may have configured) and the Oracle Maven Repository, if you have configured it correctly in the settings.xml file.

The initial state of the Oracle Maven Repository doesn't show any contents, indicating that its index hasn't been downloaded.

Right-clicking on it and selecting the Rebuild Index option causes an error to be shown in the console output, indicating that the index could not be accessed.


To get it to work, I made the following changes to my environment.

Configure Eclipse to Use Maven 3.2.5

Using the Eclipse > Preferences > Maven > Installation dialog, configure Eclipse to use Maven 3.2.5. This is the preferred version of Maven for accessing the Oracle Maven Repository, since it automatically includes the necessary version of the Wagon HTTP module that supports the required authentication configuration and request flow.


Configure Proxy Settings in Maven Settings File

** If you don't need a proxy to access the Internet then this step won't be needed **

If you sit behind a firewall and need to use a proxy server to access public repositories, then you need to configure a proxy setting inside the Maven settings file.

Interestingly, for command-line Maven use and for NetBeans, a single proxy configuration in settings.xml was enough to allow the Oracle Maven Repository to be successfully accessed and its index and artifacts used.

However, with Eclipse this setting alone didn't allow the Oracle Maven Repository to be accessed. Looking at the repository URL for the Oracle Maven Repository you can see it's HTTPS based -- https://maven.oracle.com -- and it appears that Eclipse requires a specific HTTPS proxy setting to access HTTPS based repositories, along the lines of the sketch below.
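A hedged sketch of the relevant settings.xml fragment (host and port are placeholders for your own proxy; Maven picks the proxy whose protocol matches the requested URL):

<proxies>
  <proxy>
    <id>http-proxy</id>
    <active>true</active>
    <protocol>http</protocol>
    <host>proxy.example.com</host>
    <port>80</port>
  </proxy>
  <proxy>
    <id>https-proxy</id>
    <active>true</active>
    <protocol>https</protocol>
    <host>proxy.example.com</host>
    <port>80</port>
  </proxy>
</proxies>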


Rebuild Index Success

With the settings in place, the Rebuild Index operation succeeds and the contents of the Oracle Maven Repository are displayed in the repository viewer.



Have your say ...

Tim Dexter - Mon, 2015-02-09 15:25

Another messaging exchange last week with Leslie ...

OK, so we practised it a bit after our first convo and things got a little cheesy but hopefully you get the message.

Hit this link and you too can give some constructive feedback on the Oracle doc for BI (not just BIP). I took the survey; it's only eight questions, or more if you want to share more of your input. Please take a couple of minutes to help us shape the documentation of the future.

Categories: BI & Warehousing