
Feed aggregator

Cartesian join

Jonathan Lewis - Wed, 2015-04-15 11:40

Some time ago I pulled off the apocryphal “from 2 hours to 10 seconds” trick for a client using a technique that is conceptually very simple but, like my example from last week, falls outside the pattern of generic SQL. The problem (with some camouflage) is as follows: we have a data set with 8 “type” attributes which are all mandatory columns. We have a “types” table with the same 8 columns together with two more columns that are used to translate a combination of attributes into a specific category and “level of relevance”. The “type” columns in the types table are, however, allowed to be null although each row must have at least one column that is not null – i.e. there is no row where every “type” column is null.

The task is to match each row in the big data set with all "sufficiently similar" rows in the types table and then pick the most appropriate of the matches – i.e. the match with the largest "level of relevance". The data table had 500,000 rows in it; the types table had 900 rows. Here's a very small data set representing the problem client data (cut down from 8 type columns to just 4 type columns):


create table big_table(
	id		number(10,0)	primary key,
	v1		varchar2(30),
	att1		number(6,0),
	att2		number(6,0),
	att3		number(6,0),
	att4		number(6,0),
	padding		varchar2(4000)
);

create table types(
	att1		number(6,0),
	att2		number(6,0),
	att3		number(6,0),
	att4		number(6,0),
	category	varchar2(12)	not null,
	relevance	number(4,0)	not null
);

insert into big_table values(1, 'asdfllkj', 1, 1, 2, 1, rpad('x',4000));
insert into big_table values(2, 'rirweute', 1, 3, 1, 4, rpad('x',4000));

insert into types values(   1, null, null, null, 'XX',  10);
insert into types values(   1, null, null,    1, 'YY',  20);
insert into types values(   1, null,    1, null, 'ZZ',  20);

commit;

A row from the types table is similar to a source row if it matches on all the non-null columns. So if we look at the first row in big_table, it matches the first row in types because att1 = 1 and all the other attN columns are null; it matches the second row because att1 = 1 and att4 = 1 and the other attN columns are null, but it doesn’t match the third row because types.att3 = 1 and big_table.att3 = 2.

Similarly, if we look at the second row in big_table, it matches the first row in types, doesn’t match the second row because types.att4 = 1 and big_table.att4 = 4, but does match the third row. Here’s how we can express the matching requirement in SQL:


select
	bt.id, bt.v1,
	ty.category,
	ty.relevance
from
	big_table	bt,
	types		ty
where
	nvl(ty.att1(+), bt.att1) = bt.att1
and	nvl(ty.att2(+), bt.att2) = bt.att2
and	nvl(ty.att3(+), bt.att3) = bt.att3
and	nvl(ty.att4(+), bt.att4) = bt.att4
;
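Given the two sample rows, this query should return four rows, one for each of the matches just described, something like this:

	ID  V1        CATEGORY  RELEVANCE
	--  --------  --------  ---------
	 1  asdfllkj  XX               10
	 1  asdfllkj  YY               20
	 2  rirweute  XX               10
	 2  rirweute  ZZ               20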

You’ll realise, of course, that essentially we have to do a Cartesian merge join between the two tables. Since there’s no guaranteed matching column that we could use to join the two tables we have to look at every row in types for every row in big_table … and we have 500,000 rows in big_table and 900 in types, leading to an intermediate workload of 450,000,000 rows (with, in the client case, 8 checks for each of those rows). Runtime for the client was about 2 hours, at 100% CPU.

When you have to do a Cartesian merge join there doesn't seem to be much scope for reducing the workload; however, I didn't actually know what the data really looked like, so I ran a couple of queries to analyse it. The first was a simple "select count (distinct)" query to see how many different combinations of the 8 attributes existed in the client's data set. It turned out to be slightly less than 400.
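Against the cut-down example that analysis query might look something like this (the real one, of course, covered all 8 type columns):


select	count(*)
from	(
	select	distinct att1, att2, att3, att4
	from	big_table
	);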

Problem solved – get a list of the distinct combinations, join that to the types table to translate to categories, then join the intermediate result set back to the original table. This, of course, is just applying two principles that I’ve discussed before: (a) be selective about using a table twice to reduce the workload, (b) aggregate early if you can reduce the scale of the problem.

Here’s my solution:


with main_data as (
	select
		/*+ materialize */
		id, v1, att1, att2, att3, att4
	from
		big_table
),
distinct_data as (
	select
		/*+ materialize */
		distinct att1, att2, att3, att4
	from	main_data
)
select
	md.id, md.v1, ty.category, ty.relevance
from
	distinct_data	dd,
	types		ty,
	main_data	md
where
	nvl(ty.att1(+), dd.att1) = dd.att1
and	nvl(ty.att2(+), dd.att2) = dd.att2
and	nvl(ty.att3(+), dd.att3) = dd.att3
and	nvl(ty.att4(+), dd.att4) = dd.att4
and	md.att1 = dd.att1
and	md.att2 = dd.att2
and	md.att3 = dd.att3
and	md.att4 = dd.att4
;

And here’s the execution plan.


---------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                            |    12 |  2484 |    11  (10)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION |                            |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D6619_8FE93F1 |       |       |            |          |
|   3 |    TABLE ACCESS FULL       | BIG_TABLE                  |     2 |   164 |     2   (0)| 00:00:01 |
|   4 |   LOAD AS SELECT           | SYS_TEMP_0FD9D661A_8FE93F1 |       |       |            |          |
|   5 |    HASH UNIQUE             |                            |     2 |   104 |     3  (34)| 00:00:01 |
|   6 |     VIEW                   |                            |     2 |   104 |     2   (0)| 00:00:01 |
|   7 |      TABLE ACCESS FULL     | SYS_TEMP_0FD9D6619_8FE93F1 |     2 |   164 |     2   (0)| 00:00:01 |
|*  8 |   HASH JOIN                |                            |    12 |  2484 |     6   (0)| 00:00:01 |
|   9 |    NESTED LOOPS OUTER      |                            |     6 |   750 |     4   (0)| 00:00:01 |
|  10 |     VIEW                   |                            |     2 |   104 |     2   (0)| 00:00:01 |
|  11 |      TABLE ACCESS FULL     | SYS_TEMP_0FD9D661A_8FE93F1 |     2 |   104 |     2   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS FULL      | TYPES                      |     3 |   219 |     1   (0)| 00:00:01 |
|  13 |    VIEW                    |                            |     2 |   164 |     2   (0)| 00:00:01 |
|  14 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6619_8FE93F1 |     2 |   164 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   8 - access("MD"."ATT1"="DD"."ATT1" AND "MD"."ATT2"="DD"."ATT2" AND
              "MD"."ATT3"="DD"."ATT3" AND "MD"."ATT4"="DD"."ATT4")
  12 - filter("DD"."ATT1"=NVL("TY"."ATT1"(+),"DD"."ATT1") AND
              "DD"."ATT2"=NVL("TY"."ATT2"(+),"DD"."ATT2") AND
              "DD"."ATT3"=NVL("TY"."ATT3"(+),"DD"."ATT3") AND
              "DD"."ATT4"=NVL("TY"."ATT4"(+),"DD"."ATT4"))

Critically I've taken a Cartesian join that had a source of 500,000 rows and a target of 900 possible matches, and reduced it to a join between the 400 distinct combinations and the 900 possible matches. Clearly we can expect this to take something like one twelve-hundredth (400/500,000) of the work of the original join – bringing 7,200 seconds down to roughly 6 seconds. Once this step is complete we have an intermediate result set which is the 4 non-null type columns combined with the matching category and relevance columns – and can use this in a simple and efficient hash join with the original data set.

Logic dictated that the old and new results would be the same – but we did run the two hour query to check that the results matched.

Footnote: I was a little surprised that the optimizer produced a nested loops outer join rather than a Cartesian merge join in the plan above – but that's probably an artefact of the very small data sizes in my test. There's presumably little point in transferring the data into the PGA when the volume is so small.

Footnote 2: I haven't included the extra steps in the SQL to reduce the intermediate result to just "the most relevant" match – but that's just an inline view with an analytic function. (The original code actually selected the data with an order by clause and used a client-side filter to eliminate the excess!).
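For what it's worth, here's a sketch of that shape against the small example above. It assumes the rule is simply "keep the single highest-relevance match per id", with ties broken arbitrarily; that rule is my assumption, not necessarily the client's:


with matches as (
	select
		bt.id, bt.v1, ty.category, ty.relevance
	from
		big_table	bt,
		types		ty
	where
		nvl(ty.att1(+), bt.att1) = bt.att1
	and	nvl(ty.att2(+), bt.att2) = bt.att2
	and	nvl(ty.att3(+), bt.att3) = bt.att3
	and	nvl(ty.att4(+), bt.att4) = bt.att4
)
select	id, v1, category, relevance
from	(
	select
		m.*,
		row_number() over (
			partition by id
			order by relevance desc	-- assumed tie-break: arbitrary
		)	rn
	from	matches m
	)
where	rn = 1
;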

Footnote 3: The application was a multi-company application – and one of the other companies had not yet gone live on the system because they had a data set of 5 million rows to process and this query had never managed to run to completion in the available time window.  I’ll have to get back to the client some day and see if the larger data set also collapsed to a very small number of distinct combinations and how long the rewrite took with that data set.

 


APEX 5.0 will be released today

Denes Kubicek - Wed, 2015-04-15 11:14
Great news. Just finished watching the Google hangout with the APEX team. They confirmed that APEX 5.0 will be released today. The download should be made available soon. Stay tuned.

Categories: Development

Faster Download of PeopleSoft Images

Duncan Davies - Wed, 2015-04-15 10:32

With the advent of Selective Adoption, many more people will be downloading the huge PeopleSoft Images every 10 weeks or so. They’re large (circa 35GB) and that’s going to take a while even with a decent connection.

What makes matters worse is that the default method (clicking the Download All button) runs the downloads in serial. Even on a 1MB/sec connection that’s going to take ~10 hours to download all 35GB.

Download All

In addition, the download seems to be throttled somewhat; I'm not sure why. The speed reported in the above window varied wildly between 100KB/s and 500KB/s. Even at the top end of that range, downloading only one file at a time means it's going to take almost 24 hours for the complete set.

An alternative is to run the downloads in parallel. Instead of clicking Download All, click WGET Options and download the script instead. After a little modification so that it ran on Windows, I was able to run 4 copies of the script side-by-side, giving gains not only from downloading in parallel, but also because each individual download ran faster:

Download All parallel wget

You can click for a bigger version, but basically the screenshot is of four downloads with a combined download speed of over 4MB/s! All downloads completed in a touch over 2 hours (and this is on a home broadband connection).

 


Webcast: Adaptive Case Management as a Service

WebCenter Team - Wed, 2015-04-15 09:50

Oftentimes, organizations find that they have to change the way they manage cases to conform to a new system, rather than the system being open and flexible enough to accommodate their specific needs. Case Managers do not want to be obligated to follow a one-size-fits-all process for case management. They require open, flexible capabilities for handling their cases that can adapt to fit their needs. Join Sofbang and Oracle to learn more about how Adaptive Case Management as a Service (CMaaS) can provide caseworkers and their clients with an adaptable, flexible, configurable platform-based way of managing cases with robust, yet easy-to-use, mobile capabilities.

In this webcast you will:
  • Learn what Adaptive Case Management as a Service (CMaaS) is
  • Understand how to reduce the rigidity of a typical system and streamline the approach to case management
  • Discover mobile, UI friendly approaches to managing cases
  • Find out the Sofbang and Oracle Approach to Adaptive CMaaS
  • See a live demo of CMaaS in action
Duration: 45 minutes for Presentation + 15 minutes for Q&A

About Sofbang

Founded in 2000, Sofbang is an Oracle Platinum Partner specialized in providing Oracle Fusion Middleware, Mobile & Oracle Cloud solutions to clients in the Government, Education and Utilities sectors, as well as the mid-market commercial space. Sofbang provides customers with dynamic business process extensions, enterprise mobility and cloud solutions which extend, integrate and simplify enterprise business applications across an organization. We design solutions with Scalability, Flexibility and Extendibility in mind. We call this concept Designed for Change. Our solutions help organizations reduce costs, increase revenue, enhance end-user experience, promote transparency and improve productivity. Our company was founded with the passion that comes from seeing clients achieve strategic success.

Sofbang has received awards and recognitions for developing innovative solutions and delivering outstanding value, including being recognized by CIO Review as one of the 20 Most Promising Campus Technology Providers of 2014 and the winner of the BPM.com and WfMC 2014 Global Award for Excellence in Business Process Management and Workflow for the Chicago Park District.

As an Oracle Platinum Partner, Sofbang is proud to have achieved the following specialization designations from the Oracle Partner Network, recognizing Sofbang’s continued focus in the Oracle Fusion Middleware stack for over a decade, beginning with BEA Systems Inc. Specializations are achieved through competency development, business results, expertise and proven success.

Oracle Service-Oriented Architecture
Oracle WebLogic Server 12c
Oracle Unified Business Process Management 11g
Oracle Enterprise Manager 12c
Oracle Application Grid

Sofbang is headquartered in Chicago, Illinois and is minority owned. To find out more visit: www.sofbang.com.


Register Now

Live Webcast: April 28, 2015
11:30 am CST | 12:30 pm EST

Featured Speakers:

Danny Asnani, Sofbang

Vivek Ahuja, Sofbang

Mitchell Palski, Oracle

Ellucian Buys Helix LMS, But Will It Matter?

Michael Feldstein - Wed, 2015-04-15 09:14

By Phil Hill

At this year’s Ellucian users’ conference #elive15, one of the two big stories has been that Ellucian acquired the Helix LMS, including taking on the development team. I have previously described the Helix LMS in “Helix: View of an LMS designed for competency-based education” as well as the subsequent offer for sale in “Helix Education puts their competency-based LMS up for sale”. The emerging market for CBE-based learning platforms is quickly growing, at least in terms of pilot programs and long-term potential, and Helix is one of the most full-featured, well-designed systems out there.

The Announcement

From the announcement:

Ellucian has acquired Helix Education’s competency-based education LMS and introduced a 2015 development partner program to collaborate with customers on the next-generation, cloud-only solution.

As the non-traditional student stands to make up a significant majority of learners by 2019, Ellucian is investing in technologies that align with priorities of colleges and universities it serves. CBE programs offer a promising new way for institutions to reduce the cost and time of obtaining a high-quality degree that aligns with the skills required by today’s employers.

I had been surprised at the announcement of intent-to-sell in December, noting:

The other side of the market effect will be determined by which company buys the Helix LMS. Will a financial buyer (e.g. private equity) choose to create a standalone CBE platform company? Will a traditional LMS company buy the Helix LMS to broaden their reach in the quickly-growing CBE space (350 programs in development in the US)? Or will an online service provider and partial competitor of Helix Education buy the LMS? It will be interesting to see which companies bid on this product line and who wins.

And I am surprised at the answer – a private equity owned ERP vendor. Throughout the mid 2000s there was talk about the ERP vendors like SunGard Higher Education (SGHE) (which combined with Datatel in 2011 and renamed as Ellucian in 2012) and Oracle entering the LMS market by acquisition, yet this did not materialize beyond the dreaded strategic partnership . . . until perhaps this week. But the Helix LMS was designed specifically for CBE programs, not general usage, so is this really a move into the broader LMS market?

When I interviewed Helix Education about the LMS last summer, they stated several times that the system could be used for non-CBE programs, but there is no evidence that this has actually occurred. I’ll admit that it is more likely to expand a CBE system into general usage than it is to convert a course-based traditional LMS into a CBE system, but it is not clear that the end result of such an expansion would remain a compelling product with user experience appreciated by faculty and students. The path is not risk-free.

Based on briefings yesterday at #elive15, there is evidence that:

  • Ellucian plans to expand the Helix LMS (which will be renamed) beyond CBE; and
  • Ellucian understands that there is development still remaining for this broader usage[1].

Ellucian LMS

Courtesy Ryan Schwiebert:

Support for broad set of delivery models: CBE, Online, Hybrid, Blended, Traditional, CE/WFD

One Challenge: Strategy

But there are already signs that Ellucian is not committed to delivering an LMS with "support for broad set of delivery models". As described at Inside Higher Ed:

At its user conference in New Orleans, Ellucian announced the acquisition of Helix Education’s learning management system. The company will “blend” the software, which supports nontraditional methods of tracking student progress, into its student information system, said Mark Jones, chief product officer at Ellucian. While he stressed that the company is not planning to become a major learning management system provider, Ellucian will make the system available to departments interested in offering competency-based education.

“The initial goal and focus is on enabling competency-based education programs to flourish,” Jones said. “In terms of being a broader L.M.S. solution, if our customers find value… we will certainly have that conversation.”

I asked Jim Ritchey, president of Delta Initiative, who is attending the conference, for his reaction to Ellucian's strategy. Jim noted that the reaction at the conference to the news "seemed to be more of a curiosity than interest", and then added:

To me, one of the key questions is how Ellucian will “blend” the software. Do they mean that schools will be able to post the results of the competency based courses to the SIS, or are they talking about leveraging other products within the LMS? For example, some of the capabilities of Pilot could be leveraged to deliver additional capabilities to the LMS. The concern I would have is that tying the LMS to other products will cause the LMS development to be dependent on the roadmaps of the other products. Ellucian will need to find the right level of independence for the LMS so it can grow as a solution while using other products to enhance capabilities. Will the LMS get lost?

In addition, there is the differing nature of the products to consider. The Helix LMS is centered on the learner and the learner's schedule, while Banner, Colleague, and PowerCampus are centered on academic terms and courses. These differing design concepts could cause the blending process to remove some of the unique value of the LMS.

Another Challenge: Execution

On paper, this deal seems significant. The company with arguably the greatest number of US higher ed clients now owns an LMS that not only has a modern design but also is targeted at the new wave of CBE programs. The real question, however, is whether Ellucian can pull this off based on their own track record.

Since the 2011 acquisition of SGHE by the private equity firm Hellman & Friedman, Ellucian has endured wave after wave of layoffs and cost cutting measures. I described in 2011 how the SGHE acquisition could pay for itself.

If Hellman & Friedman can achieve reasonable efficiencies by combining SGHE with Datatel, this investment could potentially justify itself in 5 – 7 years by focusing on cash flow operating income, even without SGHE finding a way to reverse its decline in revenue.

Add to this Ellucian's poor track record of delivering on major product upgrades. The transition from Banner 8 to Banner 9, or later to Banner XE, was described in 2008, promised in 2010, re-promised in 2011, and updated in 2012 / 2013. Banner XE is actually a strategy and not a product. To a degree, this is more a statement about the administrative systems / ERP market in general than about Ellucian in particular, but the point is that this is a company in a slow-moving market. Workday's entry into the higher education ERP market has shaken up the current vendors – primarily Ellucian and Oracle / PeopleSoft – and I suspect that many of Ellucian's changes are in direct response to Workday's new market power.

Ellucian has bought itself a very good LMS and a solid development team. But will Ellucian have the management discipline to finish the product development and integration that hits the sweet spot for at least some customers? Furthermore, will the Ellucian sales staff sell effectively into the academic systems market?

A related question is why Ellucian is trying to expand into this adjacent market. It seems that Ellucian is suffering from having too many products, and an LMS addition that requires a new round of development from the outset could be a distraction. As Ritchey described after the 2012 conference (paraphrasing what he heard from other attendees):

The approach makes sense, but the hard decisions have not been made. Supporting every product is easy to say and not easy to deliver. At some point in time, they will finalize the strategy and that is when we will begin to learn the future.

In The End . . .

The best argument I have read for this acquisition was provided by Education Dive.

Ellucian is already one of the largest providers of cloud-based software and this latest shift with Banner and Colleague will allow its higher education clients to do even more remotely. Enterprise resource planning systems help colleges and universities increase efficiency with technology. Ellucian touts its ERPs as solutions for automating admissions, creating a student portal for services as well as a faculty portal for grades and institutional information, simplifying records management, managing records, and tracking institutional metrics. The LMS acquisition is expected to take the data analytics piece even further, giving clients more information about students to aid in retention and other initiatives.

But these benefits will matter if and only if Ellucian can overcome its history and deliver focused product improvements. The signals I’m getting so far are that Ellucian has not figured out its strategy and has not demonstrated its ability to execute in this area. Color me watchful but skeptical.

  1. See the “development partner program” part of the announcement.

The post Ellucian Buys Helix LMS, But Will It Matter? appeared first on e-Literate.

Monitoring Page Load Time on ADF UI Client Side

Andrejus Baranovski - Wed, 2015-04-15 03:26
In certain situations, it might be useful to monitor ADF page load time. This is pretty easy to achieve with the Navigation Timing API and JavaScript. The Navigation Timing API is supported by modern browsers and allows us to retrieve the client-side load time. It takes into account data transfer time and actual rendering in the browser - the real time it took for a user to see the content.

We can use an ADF clientListener with type load to identify when the ADF UI is loaded. This listener should be added to the ADF UI document tag and will be invoked at the end of page rendering. Through the clientListener we can invoke a custom JavaScript method, where we calculate the page load time on the ADF UI client side:


The most important thing here is to get the page load start time. This value is retrieved from the Navigation Timing API (mentioned above) - performance.timing.navigationStart. The rest is easy - we can calculate the load time:
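The screenshots contain the actual code; as a rough sketch of the same idea (the function name and the clientListener wiring below are my own guesses, not taken from the sample application), the JavaScript could look like this:


// Assumed wiring in the page, not copied from the sample application:
// <af:document ...>
//   <af:clientListener method="monitorPageLoadTime" type="load"/>
// </af:document>

function monitorPageLoadTime(event) {
    // Navigation Timing API: timestamp taken when the page load was requested
    var startTime = performance.timing.navigationStart;
    // Time elapsed until ADF finished rendering the page, in seconds
    var loadTime = (new Date().getTime() - startTime) / 1000;
    console.log("Page load time: " + loadTime + " sec");
}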


This is how it looks at runtime. When I recompile the ADF application and redeploy it on the server, the first load is obviously slower. The ADF UI is rendered on the client side (starting from the page load request) in 10 seconds (look at the top right corner):


The second access is much faster - the page load on the client side happens in 1 second:


You can test it yourself - download the sample application (redsam/welcome1): ADFAltaApp_v2.zip.

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Tue, 2015-04-14 23:50

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Tue, 2015-04-14 23:50

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Tue, 2015-04-14 23:50

Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2015-04-14 23:50

I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2015-04-14 23:50

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2015-04-14 23:50

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2015-04-14 23:50

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2015-04-14 23:50

It has been a few weeks since my last blog post, but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Deploying a simple Meteor Application To IBM Bluemix

Pas Apicella - Tue, 2015-04-14 23:25
In this post I show what is necessary to deploy a simple Meteor application to the IBM Bluemix public instance. In this example we already have a simple Meteor application that we have tested and verified using "meteor" itself, running on localhost at port 3000.

1. Let's remove the local DB files. Be careful, as this will remove the local DB, so you should only do this when you're ready to deploy to Bluemix.

pas@pass-mbp:~/ibm/software/meteor/myfirst_app$ meteor reset
Project reset.

2. Create a manifest.yml file for the deployed application. The ENV variable ROOT_URL is required, and a buildpack which supports the Meteor runtime is specified so it is used when the application is pushed.

applications:
 - name: pas-meteor-firstapp
   memory: 256M
   instances: 1
   path: .
   host: pas-meteor-firstapp
   domain: mybluemix.net
   buildpack: https://github.com/jordansissel/heroku-buildpack-meteor.git
env:
   ROOT_URL: http://pas-meteor-firstapp.mybluemix.net/

3. Push the application as shown below.

pas@pass-mbp:~/ibm/software/meteor/myfirst_app$ cf push -f manifest.yml
Using manifest file manifest.yml

Creating app pas-meteor-firstapp in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Using route pas-meteor-firstapp.mybluemix.net
Binding pas-meteor-firstapp.mybluemix.net to pas-meteor-firstapp...
OK

Uploading pas-meteor-firstapp...
Uploading app files from: .
Uploading 3.7K, 11 files
Done uploading
OK

Starting app pas-meteor-firstapp in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (4.0K)
Cloning into '/tmp/buildpacks/heroku-buildpack-meteor'...

-----> Moving app source into a subdirectory
       Node engine:         0.10.36
       Npm engine:          unspecified
       Start mechanism:     none
       node_modules source: none
       node_modules cached: false
       NPM_CONFIG_PRODUCTION=true
       NODE_MODULES_CACHE=true
       PRO TIP: Use 'npm init' and 'npm install --save' to define dependencies
       See https://devcenter.heroku.com/articles/nodejs-support
       PRO TIP: Include a Procfile, package.json start script, or server.js file to start your app
       See https://devcenter.heroku.com/articles/nodejs-support#runtime-behavior
-----> Installing binaries
       Downloading and installing node 0.10.36...
-----> Building dependencies
       Skipping dependencies (no source for node_modules)
-----> Checking startup method
-----> Finalizing build
       Creating runtime environment
       Exporting binary paths
       Cleaning up build artifacts
       Build successful!
       /tmp/staged/app
       └── (empty)
-----> Fetching Meteor 1.0.3.2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 72.4M  100 72.4M    0     0  14.3M      0  0:00:05  0:00:05 --:--:-- 15.6M
-----> Unpacking Meteor 1.0.3.2
       Meteor 1.0.3.2 is installed
-----> Building Meteor App Bundle
-----> Installing App's NPM Dependencies
       npm WARN package.json meteor-dev-bundle@0.0.0 No description
       npm WARN package.json meteor-dev-bundle@0.0.0 No repository field.
       npm WARN package.json meteor-dev-bundle@0.0.0 No README data
       > fibers@1.0.1 install /tmp/staged/app/build/bundle/programs/server/node_modules/fibers
       `linux-x64-v8-3.14` exists; testing
       Binary is fine; exiting
       underscore@1.5.2 node_modules/underscore
       semver@4.1.0 node_modules/semver
       eachline@2.3.3 node_modules/eachline
       └── type-of@2.0.1
       chalk@0.5.1 node_modules/chalk
       ├── ansi-styles@1.1.0
       ├── escape-string-regexp@1.0.3
       ├── supports-color@0.2.0
       ├── has-ansi@0.1.0 (ansi-regex@0.2.1)
       └── strip-ansi@0.3.0 (ansi-regex@0.2.1)
       source-map-support@0.2.8 node_modules/source-map-support
       └── source-map@0.1.32 (amdefine@0.1.0)
-----> Uploading droplet (82M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-meteor-firstapp was started using this command `node build/bundle/main.js`

Showing health and status for app pas-meteor-firstapp in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: pas-meteor-firstapp.mybluemix.net
last uploaded: Wed Apr 15 05:13:01 +0000 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-04-15 03:18:18 PM   0.1%   149.2M of 256M   351.1M of 1G

4. Verify the application is running

pas@pass-mbp:~/ibm/software/meteor/myfirst_app$ cf app pas-meteor-firstapp
Showing health and status for app pas-meteor-firstapp in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: pas-meteor-firstapp.mybluemix.net
last uploaded: Wed Apr 15 05:13:01 +0000 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-04-15 03:18:18 PM   0.0%   149.2M of 256M   351.1M of 1G

5. Access as shown below.

http://pas-meteor-firstapp.mybluemix.net/


Finally, if you need to use a data store (which most applications will), you need to define an ENV variable for your MongoDB store as shown below. In this example I am using a MongoDB service from the Bluemix catalog:

   MONGO_URL: mongodb://zzzzzzzzz:yyyyyyyyyyy


More Information

https://www.meteor.com/install
Categories: Fusion Middleware

Good Singapore Maths Students Would Likely Make Good Oracle DBAs (Problems)

Richard Foote - Tue, 2015-04-14 19:46
An interesting mathematics-based question from a Singapore high school exam has been doing the internet rounds in the past few days. Considering it's aimed at high school students, it's a tricky one and obviously designed to filter out the better students, in a country with a very good reputation for churning out mathematically gifted […]
Categories: DBA Blogs

Cedar’s exciting new office

Duncan Davies - Tue, 2015-04-14 17:39

Today I got my first glimpse of Cedar's new UK headquarters in use. I'd seen it during the renovations; however, now that the work is complete and we've started to move in properly, it's really exciting to see people actually using it for real.

Main Office (Travis, Pete, Hardik and Kirti making themselves at home)

I’m particularly pleased that we’ve managed to purchase an office that’s still got the same great location as our previous premises, but is ours to decorate as we wish (witness the fantastic mural in the background of the above photo).

The full details, including some more photos are available here:

http://www.cedarconsulting.co.uk/news-details/April-2015-Cedar-Moves-to-New-UK-Head-Office/index.html


Upcoming Webinar on April 15th about Node-oracledb driver for Node.js

Christopher Jones - Tue, 2015-04-14 16:24

Update:
   Watch the recording on the Oracle Database Development Web Series YouTube channel

Tomorrow I'll be giving a webinar covering node-oracledb, the Node.js driver for Oracle database.

Date: Wednesday, April 15th
Time: 9am (San Francisco time)
Webex - No need to register. Session will be recorded.
US Toll Free Audio (1-866-682-4770), with international numbers available (Meeting ID: 8232385# & PIN: 123456#)
Speaker: Christopher Jones
Topic:

Introduction to node-oracledb: the new Node.js driver for Oracle Database

Want to write highly scalable, event driven applications? Node.js lets you do just that.

After a quick introduction to Node.js, this session dives into the features of node-oracledb, the new, open source Node.js driver for Oracle Database, which is under active development.

To join the webinar, go to Webex

The webinar is part of an ongoing weekly series of developer sessions on a variety of topics, the Oracle Database Development Web Series - it's worth keeping an eye on the schedule.

A quick link to the node-oracledb homepage is http://ora.cl/wHu

April 2015 Critical Patch Update Released

Oracle Security Team - Tue, 2015-04-14 14:17

Hello, this is Eric Maurice.

Oracle today released the April 2015 Critical Patch Update. The predictable nature of the Critical Patch Update program is intended to provide customers the ability to plan for the application of security fixes across all Oracle products. Critical Patch Updates are released quarterly in the months of January, April, July, and October. Unfortunately, Oracle continues to periodically receive reports of active exploitation of vulnerabilities that have already been fixed by Oracle in previous Critical Patch Update releases. In some instances, malicious attacks have been successful because customers failed to apply Critical Patch Updates. The “Critical” in the designation of the Critical Patch Update program is intended to highlight the importance of the fixes distributed through the program. Oracle highly recommends that customers apply these Critical Patch Updates as soon as possible. Note that Critical Patch Updates are cumulative for most Oracle products. As a result, the application of the most recent Critical Patch Update brings customers to the most recent security release, and addresses all previously-addressed security flaws for these products. The Critical Patch Update release schedule for the next 12 calendar months is published on Oracle’s Critical Patch Updates, Security Alerts and Third Party Bulletin page on Oracle.com.

The April 2015 Critical Patch Update provides 98 new fixes for security issues across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager, Oracle E-Business Suite, Oracle Supply Chain Suite, Oracle PeopleSoft Enterprise, Oracle JDEdwards EnterpriseOne, Oracle Siebel CRM, Oracle Industry Applications, Oracle Java SE, Oracle Sun Systems Products Suite, Oracle MySQL, and Oracle Support Tools.

Out of these 98 new fixes, 4 are for the Oracle Database. None of the database vulnerabilities are remotely exploitable without authentication. The most severe of the database vulnerabilities (CVE-2015-0457) has received a CVSS Base Score of 9.0, but only on Windows and for Database versions prior to 12c. This Base Score is 6.5 for Database 12c on Windows and for all versions of the Database on Linux, Unix and other platforms. This vulnerability is related to the presence of the Java Virtual Machine in the database.

17 of the vulnerabilities fixed in this Critical Patch Update are for Oracle Fusion Middleware. 12 of these Fusion Middleware vulnerabilities are remotely exploitable without authentication, and the highest reported CVSS Base Score is 10.0. This CVSS 10.0 Base Score is for CVE-2015-0235 (a.k.a. GHOST, which affects the GNU libc library) affecting the Oracle Exalogic Infrastructure.

This Critical Patch Update also delivers 14 new security fixes for Oracle Java SE. 11 of these Java SE fixes are client-only (i.e., these vulnerabilities can be exploited only through sandboxed Java Web Start applications and sandboxed Java applets). Two apply to JSSE client and server deployments and 1 to Java client and server deployments. The highest CVSS Base Score reported for these vulnerabilities is 10.0, and this score applies to 3 of the Java vulnerabilities (CVE-2015-0469, CVE-2015-0459, and CVE-2015-0491).

For Oracle Applications, this Critical Patch Update provides 4 new fixes for Oracle E-Business Suite, 7 for Oracle Supply Chain Suite, 6 for Oracle PeopleSoft Enterprise, 1 for Oracle JDEdwards EnterpriseOne, 1 for Oracle Siebel CRM, 2 for the Oracle Commerce Platform, 2 for Oracle Retail Industry Suite, and 1 for Oracle Health Sciences Applications.

Finally, this Critical Patch Update provides 26 new fixes for Oracle MySQL. 4 of the MySQL vulnerabilities are remotely exploitable without authentication and the maximum CVSS Base Score for the MySQL vulnerabilities is 10.0.

As stated at the beginning of this blog, Oracle recommends that customers consistently apply Critical Patch Updates as soon as possible. The security fixes provided through the Critical Patch Update program are thoroughly tested to ensure that they do not introduce regressions across the Oracle stack. Extensive documentation is available on the My Oracle Support site, and customers are encouraged to contact Oracle Support if they have questions about how to best deploy the fixes provided through the Critical Patch Update program.

For More Information:

The April 2015 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2015-2365600.html

The Critical Patch Updates, Security Alerts and Third Party Bulletin page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/overview/index.html. Oracle’s vulnerability handling policies and practices are described at http://www.oracle.com/us/support/assurance/vulnerability-remediation/introduction/index.html

QlikView – Load data in QlikView / QlikSense – Best practices and tricks

Yann Neuhaus - Tue, 2015-04-14 07:45

In this blog, I will give you some best practices and tricks for loading tables to generate your data. But first, I will review the different ways to load data into a QlikView or QlikSense report.


1. How to retrieve data in QlikView

You have 2 different ways to load your data into your report:

- Database connector

- File connector

Picture1.png

   a) Database connector:

     If your data is located in a database, you must use this kind of connector.

To connect to a database:

Open "Script Editor"

Picture2.png

Click on “Tab” and “Add Tab…”

Picture3.png

Give a name to the new Tab and click "OK"

Picture4.png

Select the data source

Picture5.png 

Select your connection (for OLE DB or ODBC, connections should be created in the Windows ODBC administrator tool)

Remark: You can also use a special connection to QlikView's ETL tool, QlikView Expressor

Picture6.png

For this example, I want to connect to an Oracle database:

Select the connection and click “OK”

Picture7.png

Select a table (1), then select the fields you want to see in your report (2) and click "OK" (3)

Picture8.png

TRICK 1: If you use the “Connect” option and you add the User ID and the password in the connection interface, they will be put in the SQL script in an encrypted format

Picture28.png

Picture9.png

   b) Data file:

You have 4 different options:

Picture11.png

(1) Table Files: you can select the following kinds of files

Picture12.png

(2) QlikView File: You can load a .qvw file (QlikView file)

(3) Web File: You can load a file coming from a website

(4) Field data: you can load specific rows coming from a field located in a database

In this example, we select a .qvd file using option 1 (Table Files)

Picture13.png

You have 2 options:

Click "Next" and "Next" again: you access the column selection interface

Remark: To remove a column, click on the cross. Then click "Next".

Picture14.png

Check the SQL. If it’s ok, click on “Finish”

Picture15.png

Click on “Reload” to load the data

Picture16.png

Best Practices: create a variable path

If you must load data from files located in a specific folder, and this folder is going to change after the report is published to different environments, it is recommended to create a variable to define the folder path.

Go to the Main tab and create the variable "FilePath". Don't forget the ";" at the end

Picture17.png

On the other tabs where you load data from files located in that folder, add the variable before the name of the file.

Picture18.png

After deployment to another environment, you just have to update the variable and, of course, reload the data.
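A minimal sketch of the pattern in the load script (the folder path, table name and file name are placeholders, not the values from the screenshots):

// Main tab: define the folder once (note the trailing backslash and the semicolon)
SET FilePath = 'C:\QlikView\Data\';

// Any other tab: reference the variable in front of the file name
MyTable:
LOAD *
FROM [$(FilePath)my_data.qvd] (qvd);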

 

2. Optimize data retrieval in QlikView / QlikSense

In this example, some tables are loaded just for one field. We can optimize this schema by using a mapping function. The goal is to limit the number of tables used directly in the schema.

Picture19.png

Warning: The mapping function can only be used to add a single field to a table.

In our example, we want to add the field "Product Group desc" to the table "ITEM_MATSER".

To create a mapping tab:

Add a new Tab just after the main tab

Picture20.png

Use the "MAPPING" function in the script as follows:

Picture21.png

In the destination table, add the field with the "ApplyMap" function, as follows:

Picture22.png

(1) Put the name of the Mapping table you have created

(2) Put the name of the key field

(3) Put the name of the field you want to show in your table

Don't forget to comment out the original load script for your mapped table.
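Here is a rough sketch of the whole pattern. The table, field and file names are illustrative rather than those from the screenshots, and the third ApplyMap() argument shown is an optional default for unmatched keys:

// Mapping table: exactly two columns - the key and the value to bring across
ProductGroup_Map:
MAPPING LOAD
	ProductGroupKey,
	ProductGroupDesc
FROM [$(FilePath)product_group.qvd] (qvd);

// Destination table: ApplyMap() adds the description as an extra field,
// so the lookup table no longer needs to be loaded into the schema
ITEM_MASTER:
LOAD
	*,
	ApplyMap('ProductGroup_Map', ProductGroupKey, 'Unknown') AS [Product Group desc]
FROM [$(FilePath)item_master.qvd] (qvd);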

After refreshing the data, you will see that the table has disappeared and the field has been added to the main table.

Picture23.png

 

3. How to add a Master Calendar table

You can automatically generate a master calendar if you need to have all the days in a given period. This calendar is generated in 3 different steps:

   a) Creation of the Min / Max date temporary table.

Create a new tab and add the following script:

Picture24.png

   b) Creation of the date temporary table.

Add the following script to create a temporary table with all the dates between the MIN and MAX dates you have defined, using the "AUTOGENERATE" function:

Picture25.png

Note that we drop the result of the MIN / MAX table at the end of the creation of the temporary table.

   c) Creation of the Master Calendar table.

After the generation of the temporary table, add the following script to create all the different date fields you need (year, month, week, …)

Picture26.png

Remark: to join your tables, you must give your newly generated field the same name as the field you used to create your calendar. The result should look like this:

Picture27.png
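As a condensed sketch of steps a) to c), assuming the fact table is called Orders and the linking date field OrderDate (both names are illustrative, not those from the screenshots):

// a) Min / Max date temporary table
MinMax_Temp:
LOAD
	Min(OrderDate) AS MinDate,
	Max(OrderDate) AS MaxDate
RESIDENT Orders;

LET vMinDate = Num(Peek('MinDate', 0, 'MinMax_Temp'));
LET vMaxDate = Num(Peek('MaxDate', 0, 'MinMax_Temp'));
DROP TABLE MinMax_Temp;

// b) Temporary table with every date between the MIN and MAX dates
Calendar_Temp:
LOAD
	Date($(vMinDate) + IterNo() - 1) AS OrderDate
AUTOGENERATE 1
WHILE $(vMinDate) + IterNo() - 1 <= $(vMaxDate);

// c) Master Calendar - the OrderDate field name is what links it to the fact table
MasterCalendar:
LOAD
	OrderDate,
	Year(OrderDate)		AS Year,
	Month(OrderDate)	AS Month,
	Week(OrderDate)		AS Week,
	Day(OrderDate)		AS Day
RESIDENT Calendar_Temp;

DROP TABLE Calendar_Temp;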

I hope that these best practices and tricks will help you!