The changes with Kuali are accelerating, and there are some big updates on the strategy.
Earlier this week the Kuali Foundation distributed an Information Update obtained by e-Literate on many of the details of the transition to Kuali 2.0 and the addition of the for-profit KualiCo. Some of the key clarifications:
- KualiCo will be an independent C Corporation with a board of directors. KualiCo will not be a subsidiary of Kuali Foundation. Capital structure, equity allocations, and business plans are confidential and will not be shared publicly for the same reasons these things are rarely shared by private companies. The board of directors will start out with three members and will move to five or seven over time. Directors will include the CEO and an equal number of educational administrators and outside directors. One of the educational administrators will be appointed by the Kuali Foundation. Outside directors will be compensated with equity. Educational administrators will not be compensated in any way and could only serve as a director with the explicit permission of their university administration with attention to all relevant institutional policies.
- KualiCo’s only initial equity investor is the Kuali Foundation. The Kuali Foundation will invest up to $2M from the Foundation’s cash reserves. [snip] For its equity investment, the Kuali Foundation will have the right to designate a director on the KualiCo Board of Directors. The Kuali Foundation, through its director, will have an exceptional veto right to block the sale of the company, an IPO of the company or a change to the open source license. This helps ensure that KualiCo will stay focused on marketplace-winning products and services rather than on flipping the company on Wall Street.
- The Kuali Foundation is not licensing the Kuali software code for Kuali products to KualiCo as Kuali software is already fully open source and could be used by anyone for any purpose — as is already being done today. No license transfer or grant is needed by KualiCo or anyone else.
- Copyright for the AGPL3 software will be held by KualiCo for the open source distribution that is available to everyone. It would very quickly become untenable to even try to manage multiple copyright lines as various sections of code evolve through the natural enhancement processes of an open source community.
One key point the document describes at length is the lack of financial interest from individuals in the Kuali Foundation and KualiCo, including the uncompensated director position, the lack of equity held by individuals outside of KualiCo, etc.
Two other key points that are particularly relevant to yesterday’s news:
- Each project board will decide if, when, to what extent, and for what term to engage with KualiCo. Project boards could decide to continue on as they currently do, to engage KualiCo in a limited way, or to allow KualiCo to help drive substantial change to the software approach to that product. If a project chooses not to engage KualiCo, KualiCo will have less initial funding to invest in enhancing the product, but will slowly build up those funds over time by hosting the product and enhancing the product for its customers. Choosing to engage with KualiCo in any fashion requires code to be reissued under the AGPL3 license (see Open Source section).
- KualiCo will be working with the Kuali community to make improvements to current Kuali products. In addition to enhancing the current codebase, KualiCo is beginning the re-write of Kuali products with a modern technology stack. The initial focus will be on Kuali Student and then HR. Complete rewrites of KFS and KC will likely not begin for 3-5 years.
With this in mind, yesterday the Kuali Student (KS) Project Board met and made the decision to sunset their current project and to transition to KualiCo development. Bob Cook, CIO at the University of Toronto and chair of the KS Project Board confirmed by email.
I can say that the Board adopted its resolution because it is excited about the opportunity that KualiCo presents for accelerating the delivery of high quality administrative services for use in higher education, and is eager to understand how to best align our knowledgeable project efforts to achieve that goal. [snip]
In recognition of the opportunity presented by the establishment of KualiCo as a new facet in the Kuali community, the Kuali Student Board has struck a working group to develop a plan for transitioning future development of Kuali Student by the KualiCo. The plan will be presented to the Board for consideration.
While Bob did not confirm the additional level of detail I asked about (“It would be premature to anticipate specific outcomes from a planning process that has not commenced”), my understanding is that it is safe to assume:
- Kuali Student will transition to AGPL license with KualiCo holding copyright;
- KualiCo will develop a new product roadmap based on recoding / additions for multi-tenant framework; and
- Some or all of the current KS development efforts will be shut down over the next month or two.
KS Project Director Rajiv Kaushik sent a note to the full KS team with more details:
KS Board met today and continued discussions on a transition to Kuali 2.0. That thread is still very active with most current investors moving in the Kuali 2.0 direction. In the meantime, UMD announced its intent to invest in Kuali 2.0 and to withdraw in 2 weeks from the current KS effort. Since this impacts all product streams, Sean, Mike and I are planning work over the next 2 weeks while we still have UMD on board. More to come on that tomorrow at the Sprint demo meeting.
I will update or correct this information as needed.
Kuali Student (KS) is the centerpiece of Kuali – it is the largest and most complex project and the one most central to higher education. KS was conceived in 2007. Unlike KFS, Coeus and Rice, Kuali Student was designed from the ground up. The full suite of modules within Kuali Student had been scheduled for release between 2012 and 2015 in a single-tenant architecture. With the transition, a new roadmap will be developed around a multi-tenant architecture and an updated technology stack.
Just how large has this project been? According to a financial analysis of 2009-2013 performed by instructional media + magic inc., Kuali Student had $30 million in expenditures in that 5-year span. The 2014 records are not yet available, nor are the 2007-8 records, but an educated guess puts the total closer to $40 million.
I mention this to show the scope of Kuali Student to date as well as the relative project size compared to other Kuali projects. I wrote a post on cloud computing around the LMS that might be relevant to the future KualiCo development, calling out how cloud technologies and services are driving down the cost of product development and time. In the case of the LMS, the difference has been close to an order of magnitude compared to the first generation:
Think about the implications – largely due to cloud technologies such as Amazon web services (which underpins Lore as well as Instructure and LoudCloud), a new learning platform can be designed in less than a year for a few million dollars. The current generation of enterprise LMS solutions often cost tens of millions of dollars (for example, WebCT raised $30M prior to 2000 to create its original LMS and scale to a solid market position, and raised a further $95M in 2000 alone), or product redesigns take many years to be released (for example, Sakai OAE took 3 years to go from concept to release 1.0). It no longer takes such large investments or extended timeframes to create a learning platform.
Cloud technologies are enabling a rapid escalation in the pace of innovation, and they are lowering the barriers to entry for markets such as learning platforms. Lore’s redesign in such a short timeframe gives a concrete example of how quickly systems can now be developed.
How will these dynamics apply to student information systems? Given the strong emphasis on workflow and detailed user functionality, I suspect that the differences will be less than for the LMS, but still significant. In other words, I would not see the redevelopment of Kuali Student to take anywhere close to $40 million or seven years, but I will be interested to see the new roadmap when it comes out.
This decision – moving Kuali Student to KualiCo – along with the foundation’s ability to hold on to the current community members (both institutions and commercial affiliates) will be the make-or-break bets that the Kuali Foundation has made with the move to Kuali 2.0. Stay tuned for more updates before the Kuali Days conference in November.
Say what you will about the move away from Community Source, Kuali is definitely not resting on its laurels or playing it safe. This redevelopment of Kuali Student with a new structure is bold and high-risk.
- Disclosure: Jim Farmer from im+m has been a guest blogger at e-Literate for many years.
- It’s probably more than that, but let’s use a conservative estimate to set general scope.
The post Kuali Student Sunsetting $40 million project, moving to KualiCo appeared first on e-Literate.
Most companies want to deploy features faster, and fix bugs more quickly—at the same time, a stable product that delivers what the users expected is crucial to winning and keeping the trust of those users. At face value, stability and speed appear to be in conflict; developers can either spend their time on features or on stability. In reality, problems delivering on stability as well as problems implementing new features are both related to a lack of visibility. Developers can’t answer a very basic question: What will be impacted by my change?
When incompatible changes hit the production servers as a result of bug fixes or new features, they have to be tracked down and resolved. Fighting these fires is unproductive, costly, and prevents developers from building new features.
The goal of Continuous Integration (CI) is to break out of the firefighting mentality: it gives developers more time to work on features by baking stability into the process through testing.

Sample Workflow
- Document the intended feature
- Write one or more integration tests to validate that the feature functions as desired
- Develop the feature
- Release the feature
This workflow doesn’t include an integration step: code goes out automatically when all the tests pass. Since all the tests can be run automatically by a testing system like Jenkins, a failure in any test, even one outside the developer's control, constitutes a break which must be fixed before continuing. Of course, in some cases users follow paths other than those designed and explicitly tested by developers, and bugs happen. New tests are required to validate that bugs are fixed, and these contribute to a library of tests which collectively increases confidence in the codebase. Most importantly, the library of tests limits the scope of any bug, which increases the confidence of developers to move faster.

Testing is the Secret Sauce
As the workflow illustrates, the better the tests, the more stable the application. Instead of trying to determine which parts of the application might be impacted by a change, the tests can prove that things still work as designed.
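As a minimal sketch of the workflow above, consider a tiny hypothetical feature and the integration test that guards it (the feature, function, and names here are invented purely for illustration):

```python
# Step 1: document the intended feature -- a discount calculation that
# reduces a price by a percentage and rejects nonsensical inputs.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100), rounded to 2 decimals."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Step 2: the integration test validates the feature works as desired.
# A CI system such as Jenkins runs this on every change; any failure
# blocks the release until it is fixed.
def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(50.0, 0) == 50.0
    try:
        apply_discount(10.0, 150)
        assert False, "expected ValueError for out-of-range percent"
    except ValueError:
        pass

test_apply_discount()
```

When a user later finds a path the tests missed, the bug fix arrives with a new test, growing the library that keeps future changes safe.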
Continuous Integration is just one of the many ways our DevOps group engages with clients. We also build clouds and solve difficult infrastructure problems. Does that sound interesting to you? Want to come work with us? Get in touch!
If you're generating SOAP proxies using Apache Axis2, you may find yourself hitting strange errors. What's even stranger is that you can generate proxies using JDeveloper, and it works fine in tooling like SOAPUI. Well, help is at hand.
The most common error is:
IWAB0399E Error in generating Java from WSDL: java.lang.RuntimeException: Unknown element _value
I'm not sure if this is a bug in Fusion Sales Cloud's base tech (ADF BC SDOs) or a bug in Apache Axis, but there is a workaround, and engineering are looking into this.
For a workaround, you have two options:
- Use the adb binding and set the flag -Eosv to turn off strict validation.
- Use JDK xjc command to generate the JAXB classes:
e.g. xjc -wsdl http://<salescloudsoapendpoint>/opptyMgmtOpportunities/OpportunityService?WSDL
Enjoy and let me know if this works for you :-)
- It’s a very exciting opportunity to experience the powerful Financial Reporting innovations in our Cloud offerings without disruption to your existing ERP investments.
- It’s a way to take advantage of the Simplified Financials Report Center, optimized for easy access to reports on your choice of mobile device.
- It includes the sunburst data visualization tool, which was my killer demo last week at OpenWorld (see screen shot)
- It’s a way to move to cloud in an incremental manner, realizing business benefits quickly without disruption to your existing business processes and systems.
- It has a companion E-Business Suite feature (available on 12.1.3 and 12.2.4) that will push all your setup and GL Balances to your cloud service and generate reports automatically for you, giving you a zero-configuration reporting solution for your EBS GL Balances data (watch out for more detailed posts on this soon).
- It has web services to load General Ledger data from PeopleSoft, JD Edwards or any other ERP system.
- It’s a way to get your hands on the Oracle Social Network which is part of the platform our Cloud offerings are built on.
That’s a decent list to start with, but there are a few things that it isn’t which I should call out
- It is not (yet) the Accounting Hub Integration Platform with all the rule-based accounting transformations provided by the Subledger Accounting Architecture (SLA).
- It is not a new name designed to confuse you when we already have Financials Accounting Hub and Fusion Accounting Hub.
Look out for future posts going into more detail, or you can look at the cloud service page, which has important details such as pricing.
In the past, most of my customers skipped R1 releases. That is, 8.1.7 -> 9.2 -> 10.2 -> 11.2. SAP does the same. For the very first time, SAP plans to go to 12.1.0.2 + some PSU in spring 2015. But only to avoid running out of support, and without any fancy feature like Multitenant or In-Memory.
#oracle 12.1.0.2 is the last patch set for Release 12.1.
— laurentsch (@laurentsch) October 9, 2014
12.1.0.2, which is not available on AIX yet, will be the last patchset of 12cR1. It is the first and only patchset for that release. It is actually more than a patchset, as it introduced the In-Memory database and JSON in the database.
The next release is expected at the beginning of 2016 on Linux. 12.1.0.2 patching ends January 2018.
Should I go to an already aborted release, or should I buy extended support for 11.2.0.4 until 2018?
Probably I will go both ways, depending on the applications.
Some software has been built. It generates revenue (or reduces cost) associated with sales, but the effect is not immediate. It could be the implementation of a process change that takes a little time to bed in, or the release of a new optional extra that not everyone will want immediately.
It is expected that when it is initially released there’ll be a small effect. Over the next 6 months there will be an accelerating uptake until it reaches saturation point and levels off.
Nothing particularly unusual about that plan. It probably describes a lot of small scale software projects.
Now let’s put some numbers against that.
At saturation point it’s expected to generate / save an amount equal to 2% of the total revenue of the business. It might be an ambitious number, but it’s not unrealistic.
The business initially generates £250k a month, and experiences steady growth of around 10% a year.
What does the revenue generation of that software look like over the first 12 months?
It’s pretty easy to calculate, plugging in some percentages that reflect the uptake curve:
| Period | Original Business Revenue | Software Revenue Generation | Additional Revenue |
|---|---|---|---|
| 1 | £250,000.00 | 0.2% | £500.00 |
| 2 | £252,500.00 | 0.5% | £1,262.50 |
| 3 | £255,025.00 | 1.1% | £2,805.28 |
| 4 | £257,575.25 | 1.6% | £4,121.20 |
| 5 | £260,151.00 | 1.9% | £4,942.87 |
| 6 | £262,752.51 | 2.0% | £5,255.05 |
| 7 | £265,380.04 | 2.0% | £5,307.60 |
| 8 | £268,033.84 | 2.0% | £5,360.68 |
| 9 | £270,714.18 | 2.0% | £5,414.28 |
| 10 | £273,421.32 | 2.0% | £5,468.43 |
| 11 | £276,155.53 | 2.0% | £5,523.11 |
| 12 | £278,917.09 | 2.0% | £5,578.34 |
| Total | | | £51,539.34 |
Or, shown on a graph:
So, here’s a question:
What is the opportunity cost of delaying the release by 2 months?
The initial thought might be that the effect isn’t that significant, as the software doesn’t generate a huge amount of cash in the first couple of months.
Modelling it, we end up with this:
| Period | Original Business Revenue | Software Revenue Generation | Additional Revenue |
|---|---|---|---|
| 1 | £250,000.00 | – | £– |
| 2 | £252,500.00 | – | £– |
| 3 | £255,025.00 | 0.2% | £510.05 |
| 4 | £257,575.25 | 0.5% | £1,287.88 |
| 5 | £260,151.00 | 1.1% | £2,861.66 |
| 6 | £262,752.51 | 1.6% | £4,204.04 |
| 7 | £265,380.04 | 1.9% | £5,042.22 |
| 8 | £268,033.84 | 2.0% | £5,360.68 |
| 9 | £270,714.18 | 2.0% | £5,414.28 |
| 10 | £273,421.32 | 2.0% | £5,468.43 |
| 11 | £276,155.53 | 2.0% | £5,523.11 |
| 12 | £278,917.09 | 2.0% | £5,578.34 |
| Total | | | £41,250.69 |
Let’s show that on a comparative graph, showing monthly generated revenue:
Or, even more illustrative, the total generated revenue:
By releasing 2 months later, we do not lose the first 2 months revenue – we lose the revenue roughly equivalent to P5 and P6.
When we release in P3, we don’t immediately get the P3 revenue we would have got. Instead we get something roughly equivalent to P1 (it’s slightly higher because the business generates a little more revenue overall in P3 than it did in P1).
This trend continues in P3 through to P8, where the late release finally reaches saturation point (2 periods later than the early release – of course).
Throughout the whole of P1 to P7 the late release has an opportunity cost associated. That opportunity cost is never recovered later in the software’s lifespan as the revenue / cost we could have generated the effect from is gone.
If the business was not growing, this would amount to a total equal to the last 2 periods of the year.
In our specific example, the total cost of delaying the release for 2 months amounts to 20% of the original expected revenue generation for the software project in the first year.
And this opportunity cost is solely related to the way in which the revenue will be generated; the rate at which the uptake comes in over the first 6 months.
Or to put it another way – in this example, if you were to increase or decrease the revenue of the business or the percentage generation at which you reach saturation point the cost will always be 20%.
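The model above can be sketched in a few lines. The figures (£250k starting revenue, roughly 1% monthly growth, the six-month uptake ramp from 0.2% to 2.0%) are taken from the tables in this post:

```python
# Monthly uptake ramp from the post: 0.2% -> 2.0% over six months,
# then flat at saturation for the rest of the year.
UPTAKE = [0.002, 0.005, 0.011, 0.016, 0.019] + [0.020] * 7  # 12 periods

def additional_revenue(delay_months: int = 0) -> float:
    """Total extra revenue in year one for a release delayed by N months."""
    total = 0.0
    for period in range(12):
        base = 250_000 * 1.01 ** period      # business grows ~1% per month
        ramp = period - delay_months         # uptake only starts at release
        pct = UPTAKE[ramp] if ramp >= 0 else 0.0
        total += base * pct
    return total

on_time = additional_revenue(0)   # ≈ £51,539, matching the first table
delayed = additional_revenue(2)   # ≈ £41,251, matching the second table
print(f"cost of delay: £{on_time - delayed:,.0f} "
      f"({(on_time - delayed) / on_time:.0%} of year-one generation)")
```

Running it reproduces the roughly 20% opportunity cost, and changing the starting revenue or saturation percentage confirms the point: the percentage lost depends only on the shape of the uptake curve and the length of the delay.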
So, when you’re thinking of delaying the release of software it’s probably worth taking a look, modelling your expected uptake and revenue generation to calculate just how much that will cost you…
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It uses a simple data model: Source => Channel => Sink.
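As an illustration of that data model, a minimal single-agent Flume configuration wires the three components together like this (the agent and component names a1, r1, c1, k1 are arbitrary):

```properties
# One agent (a1) with a netcat source, an in-memory channel, a logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: listens on a TCP port and turns each line into an event
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# Channel: buffers events in memory between producer and consumer
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Sink: drains the channel and logs each event
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
```

Note that a source can feed multiple channels (plural `channels`), while a sink drains exactly one (singular `channel`).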
It's a good time to introduce a good book about Flume: Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan (@harisr1234). It is organized into 8 chapters: basics of Apache Hadoop and Apache HBase, the idea of streaming data using Apache Flume, the Flume model (sources, channels, sinks), and more on interceptors, channel selectors, sink groups, and sink processors. Additionally, it covers getting data into Flume and planning, deploying, and monitoring Flume.
This book explains how to use Flume. It helpfully introduces Apache Hadoop and Apache HBase before starting on the Flume data flow model. Readers should know some Java, because they will find Java code examples in the book that make the material easy to understand. It's a good book for anyone who wants to deploy Apache Flume and build custom components.
The author dedicates a chapter to each part of the Flume data flow model, so readers can jump straight to the part they need: a reader who wants to know about sinks can read Chapter 5 on its own. In addition, Flume has a lot of features, and readers will find examples of them throughout the book. Each chapter has a references section that readers can use to find out more, which is especially quick to use in the ebook.
The illustrations in the book help readers see the big picture of using Flume and give ideas for developing it further in their own systems or projects.
So readers will learn how to configure, deploy, and monitor a Flume cluster, and how to customize the examples to develop Flume plugins and custom components for their specific use cases.
- Learn how Flume provides a steady rate of flow by acting as a buffer between data producers and consumers
- Dive into key Flume components, including sources that accept data and sinks that write and deliver it
- Write custom plugins to customize the way Flume receives, modifies, formats, and writes data
- Explore APIs for sending data to Flume agents from your own applications
- Plan and deploy Flume in a scalable and flexible way—and monitor your cluster once it’s running
Author: Hari Shreedharan

Written by: Surachart Opun http://surachartopun.com
We planned to do our full production upgrade over a weekend, and one part was the replacement of FAST by xPlore. As we had to index more than 10 million objects, we had to find a solution where the full repository would be indexed by Monday morning.
Since we used a clone of production to validate and test our upgrade process, functions, and performance, we decided to prepare the production fulltext servers by using them first against the clone repository. After the production repository upgrade, the prepared fulltext servers could then be used for production where only the gap of objects (new and modified objects) since the last refresh of the clone would be indexed.
- Creation of a clone
- ADTS installation
- D2-Client, D2-Config installation
- Installation of xPlore in High Availability on servers which will be used later on for production
- Full repository fulltext indexing
- Fulltext search
3) Production rollout preparation
Once the testing was finished and the go-live date defined, we first had to prepare the xPlore fulltext servers for reuse in production before the rollout weekend. We removed the index agent on both xPlore index servers using the GUI to get a clean installation where only the index server files/folders were kept.
4) Production rollout
Before doing the upgrade, we cleaned the objects related to the FAST fulltext indexing (dm_ftindex_agent-config, dm_ftengine_config, dm_fulltext_index_s) from the repository, unregistered the event related to both fulltext users (dm_fulltext_index_user, dm_fulltext_index_user_01) and at the end removed the old queue objects related to both fulltext index users.
With these steps the repository was ready, once upgraded, to install the xPlore Index agents.
So after the repository upgrade, we installed the Index Agents in an HA configuration and ran the ftintegrity tool on both index servers to get the list of document r_object_ids to be re-fed into the xPlore collection.
To resubmit the r_object_id listed in ObjectId-dctmOnly.txt and ObjectId-common-version-mismatch:
cp ObjectId-dctmOnly.txt /dctm/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_IndexagentDms/deployments/IndexAgent.war/WEB-INF/classes/ids.txt
Once the file has been processed, the ids.txt is renamed to ids.txt.done
cut -f1 ObjectId-common-version-mismatch.txt -d " " > /dctm/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_IndexagentDms/deployments/IndexAgent.war/WEB-INF/classes/ids.txt
Update the acls and group by running aclreplication_for_Dms.sh
On the standby server, update the aclreplication_for_Dms.sh script by adding in the java command:
Remove objects in fulltext DB if required by running:
With this approach, we had the fulltext database ready on Monday morning when people started to work with the system.
This week, I discovered an enhancement in SQL Server 2014 that is not hidden, but not really visible either. Among Hekaton and the other new features in SQL Server 2014, a little option appears in table creation.
On the MSDN website, we can see a new section in the "CREATE TABLE" syntax:
We can directly create an index in the "CREATE TABLE" query. Ok, then let's go!
In my example, I create a diver table with a default clustered index on the primary key (DiverId) and 2 other non-clustered indexes on diving level (OWD, AOWD, Level 1, Level 2, …) and diving organization (PADI, CMAS, SSI, …).
Prior to SQL Server 2014, you had to create indexes after setting up the table.
That means 3 statements to create a table and 2 non-clustered indexes.
In SQL 2014, it is very easy with just one query:
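Based on the description above, the single-statement form looks roughly like this (the column and index names are my own illustrative choices):

```sql
-- SQL Server 2014: table plus two non-clustered indexes in one statement
CREATE TABLE dbo.Diver
(
    DiverId            INT           NOT NULL PRIMARY KEY CLUSTERED,
    DiverName          NVARCHAR(100) NOT NULL,
    DivingLevel        NVARCHAR(20)  NOT NULL,  -- OWD, AOWD, Level 1, ...
    DivingOrganization NVARCHAR(20)  NOT NULL,  -- PADI, CMAS, SSI, ...
    -- Inline index definitions, new in SQL Server 2014:
    INDEX IX_Diver_Level        NONCLUSTERED (DivingLevel),
    INDEX IX_Diver_Organization NONCLUSTERED (DivingOrganization)
);
```

The two INDEX clauses replace the separate CREATE INDEX statements that were required before SQL Server 2014.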
I'm very happy to share with you new features that are not necessarily in the limelight!
You will find the MSDN "CREATE TABLE" reference for SQL Server 2014 here.
This is what happens when your abstract is selected!

Ready for some fun!? It's that time of year again, and the competition will be intense. The "call for abstracts" for a number of Oracle Database conferences is about to close.
The focus of this posting is how you can get a conference abstract accepted.
As a mentor, Track Manager and active conference speaker I've been helping DBAs get their abstracts accepted for many years. If you follow my 11 tips below, I'm willing to bet you will get a free pass to any conference you wish in any part of the world.
1. No Surprises!
Track Manager After A Surprise

The Track Manager wants no surprises, great content and a great presentation. Believe me when I say they are looking for ways to reduce the risk of a botched presentation, a cancellation or a no-show. Your abstract submission is your first way to show you are serious and will help make the track incredibly awesome.
Tip: In all your conference communications, demonstrate a commitment to follow through.
2. Creative Title.
The first thing everyone sees is the title. I can personally tell you, if the title does not pique my curiosity without sounding stupid, then unless I know the speaker is popular, I will not read the abstract. Why do I do this? Because as a Track Manager, I know conference attendees will do the same thing! And as a Track Manager, I want attendees to want to attend sessions in my track.
Tip: Find two people, read the title to them and ask what they think. If they say something like, "What are you going to talk about?" that's bad. Rework the title.
3. Tell A Story
The abstract must tell a compelling story. Oracle conferences are not academic conferences! There needs to be some problem along with a solution complete with drama woven into the story.
Tip: People forget bullet points, but they never forget a good story.
4. Easy To Read
The abstract must be easy to review. The abstract reviewers may have over a hundred abstracts to review. Make it a good quick read for the reviewers and your chances increase.
Tip: Have your computer read your abstract back to you. If you don't say, "Wow!" rework the abstract.
5. Be A Grown-Up
You can increase the perception that you will physically show up and put on a great show by NOT putting emoji, bullet points, your name and title, or a pitch for a product or service into your abstract. NEVER copy/paste from a PowerPoint outline into the abstract or outline. (I've seen people do this!)
Tip: Track Managers do not want to babysit you. They want an adult who will help make their track great.
6. Submit Introductory Level Abstracts
I finally figured this out a couple years ago. Not everyone is ready for a detailed understanding of cache buffer chain architecture, diagnosis, and solution development. Think of it from a business perspective. Your market (audience) will be larger if your presentation is less technical. If this bothers you, read my next point.
Tip: Submit both an introductory level version and advanced level version of your topic.
7. Topics Must Be Filled
Not even the Track Manager knows what people will submit. You do not know what the Track Manager is looking for, and you do not know what other people are submitting. Put this together, and it means you must submit more than one abstract. I know you really, really want to present on topic X. But would you rather not have an abstract accepted at all?
Tip: Submit abstracts on multiple topics. It increases your chances of being accepted.
8. Submit Abstract To Multiple Tracks
This is similar to submitting both an introductory and an advanced version of your abstract. Here's an example: if there is a DBA Bootcamp track and a Performance & Internals track, craft your Bootcamp version to have a more foundational/core feel, and craft your Performance & Internals version to feel more technical and advanced.
Do not simply change the title; the abstracts cannot be identical. If the conference managers or the Track Manager feel you are trying to game the conference, you present a risk to the conference and their track, and your abstracts will be rejected. So be careful and thoughtful.
Tip: Look for ways to adjust your topic to fit into multiple tracks.
9. Great Outline Shows Commitment
If the reviewers have read your title and abstract, they are taking your abstract seriously. Now is the time to close the deal by demonstrating you will put on a great show. And this means you already have in mind an organized and well thought out delivery. You convey this with a fantastic outline. I know it is difficult to create an outline, BUT the reviewers also know this, AND having a solid outline demonstrates to them that you are serious, that you will show up, and that you will put on a great show.
Tip: Develop your abstract and outline together. This strengthens both and develops a kind of package the reviewers like to see.
10. Learning Objectives Show Value
You show the obvious value of your topic through the learning objectives. Personally, I use these to help keep me focused on my listeners, not just on what I'm interested in at the moment. Because I love my work, I tend to think everyone else does too... not so. I must force myself to answer the question, "Why would a DBA care about this topic?"
Tip: Develop your learning objectives by asking yourself, "When my presentation is over, what do I want the attendees to remember?"
11. Submit About Problems You Solved
Submit on the topics you have personally explored and found fascinating. Every year, every DBA has had to drill deep into at least one problem. This concentrated effort means you know the topic very well. And this means you are qualified to tell others about it! People love to hear from people who are fascinated about something. Spread the good news resulting from a "bad" experience.
Tip: Submit on topics you have explored and are fascinated with.
How Many Abstracts Should I Submit?
It depends on the conference, but for a big North America conference like ODTUG, RMOUG and IOUG I suggest at least four.
Based on what I wrote above, pick three topics, perhaps create both an introductory and an advanced version, and look to see if it makes sense to submit to multiple tracks. That means you'll probably submit at least four abstracts. It's not as bad as it sounds, because you will only have perhaps three core abstracts. All the others are modifications to fit a specific need. Believe me, when you receive the acceptance email, it will all be worth it!
See you at the conference!
The innovation engine in the field of Business Intelligence and data visualization tools is certainly cranked up. QlikView, Tableau and TIBCO Spotfire introduced the new Data Visualization category in the field of Business Intelligence.
Now every vendor offers some form of Data Discovery. Oracle is also working on something similar adding to their confusing mix of OBIEE stack.
With the launch of the new InfoCaptor, you can perform ad-hoc data visualizations and build dashboards all within the browser. Now that is refreshing. The browser is the key here. Once you deploy on the server, users can simply log in, upload their datasets or point to an existing database connection. Before you know it, users are already slicing and dicing their datasets and swimming in a world of beautiful visualizations. Yes, the visualizations are absolutely stunning, and why shouldn't they be? They are based on the excellent d3js.org library.
The key here is that the browser is your canvas, and it is pretty huge: for example, the default size for the visuals takes up my entire browser screen real estate. I like big visuals, and if I am producing a Trellis chart then I can simply drag the corners and resize it. The visualization library is very comprehensive and offers around 30 visuals. It provides the bullet graph as well for KPI tracking.
Here are some screenshots from the website
InfoCaptor is also available in the cloud as a service, and there are a few live analyses to try out without logging in or installing anything.
I would say that with this release, small business owners have truly found their Tableau or QlikView alternative.
Go check out the new InfoCaptor Data Visualizer
Welcome to RDX. For our retail customers, the holiday season is a critical time of year for revenue generation. The increased activity can put additional stress on transactional databases.
Here are some best practice suggestions to ensure your databases are ready for the holiday season from RDX Director of Technical Sales, Katy Park:
- Put in a High Availability solution if you do not have one.
- Run a test of your DR plan to ensure you can meet your recovery time objectives.
- Ask your DBA for code tuning suggestions for queries that run often and utilize a lot of resources.
- Consider removing the reporting load from your transactional database if reports are currently running on the production server.
- Review object sizes and maximum server capacities.
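As a concrete starting point for the code-tuning suggestion above, a DBA might pull the heaviest statements from Oracle's `v$sql` view. This is just a sketch; the ordering column (here, logical I/O) and the cutoff will vary by environment:

```sql
-- Top 10 SQL statements by total buffer gets (logical I/O),
-- a common first pass when hunting for tuning candidates.
SELECT *
  FROM (SELECT sql_id,
               executions,
               buffer_gets,
               ROUND(elapsed_time / 1e6, 1) AS elapsed_sec,
               SUBSTR(sql_text, 1, 80)      AS sql_text_start
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;
```

Sorting by `elapsed_time` or `disk_reads` instead can surface a different set of candidates, so it is worth running a couple of variations before the holiday rush.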
Thanks for watching, and we'll see you next time!
The post How to keep databases up and running during the holidays [VIDEO] appeared first on Remote DBA Experts.
Time to clear up some confusion.
In the past 60 days, I have encountered the following:
- Two different customers who said they were told by Oracle Support that "APEX isn't supported."
- An industry analyst who asked "Is use of Oracle Application Express supported? There is an argument internally that it cannot be used for production applications."
- A customer who was told by an external non-Oracle consultant "Oracle Application Express is good for a development environment but we don't see it being used in production." I'm not even sure what that means.
- Oracle Application Express is considered a feature of the Oracle Database. It isn't classified as "free", even though there is no separate licensing fee for it. It is classified as an included feature of the Oracle Database, no differently than XML DB, Oracle Text, Oracle Multimedia, etc.
- If you are licensed and supported for your Oracle Database, you are licensed and supported (by Oracle Support) for Oracle Application Express in that database. Many customers aren't even aware that they are licensed for it.
- If you download a later version of Oracle Application Express made available for download from the Oracle Technology Network and install it into your Oracle Database, as long as you are licensed and supported for that Oracle Database, you are licensed and supported (by Oracle Support) for Oracle Application Express in that database.
- Oracle Application Express is listed in the Lifetime Support Policy: Oracle Technology Products document.
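To see this for yourself, you can confirm which Application Express version is installed in your licensed database with a quick query (a sketch; the `APEX_RELEASE` view ships with Application Express):

```sql
-- Shows the installed Oracle Application Express version
SELECT version_no
  FROM apex_release;
```

Run it as any user with access to the APEX views; the result is the version covered by your database's support contract.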
As for the customers who believed they were told directly by Oracle Support that Oracle Application Express isn't supported, there was a common misunderstanding. In their Service Requests to Oracle Support, they were told that Oracle REST Data Services (formerly called Oracle Application Express Listener, the Web front-end to Oracle Application Express) running in standalone mode isn't supported. This is expressed in the Oracle REST Data Services documentation. However, this does not pertain to the supportability of Oracle Application Express. Additionally, a customer can run Oracle REST Data Services in a supported fashion in specific versions of Oracle WebLogic Server, GlassFish Server, and Apache Tomcat. To reiterate: running Oracle REST Data Services in standalone mode is the one method which is not supported in production deployments, as articulated in the documentation; however, you can run it supported in Oracle WebLogic Server, GlassFish Server and Apache Tomcat.
Oracle Application Express has been a supported feature of the Oracle Database since 2004, since it first shipped as Oracle HTML DB 1.5 in Oracle Database 10gR1. Every subsequent version of Oracle Application Express has been supported by Oracle Support when run in a licensed and supported Oracle Database. Anyone who says otherwise is...confused.
Today, I'm simply putting up a link to the best Oracle press release on recent cloud announcements. The release touts the six new platform services for Oracle Cloud. You can find it here. This is the "sneak peek", made especially for those of you who think I'm too slow about writing things. Heck, I'm much faster than George R.R. Martin, but anything to keep y'all happy...
UPDATE: So the highlights for me all have to do with PaaS (Platform-as-a-Service)...30,000 devices, 400 petabytes of storage, 19 data centers around the globe...whew.
I had the opportunity to work hands-on with the Mobile Cloud, which puts development, deployment and administration onto one user interface (yup, it's the Oracle Alta UI). Built a mobile app in about 30 minutes. More on that in a subsequent post.
The Integration Cloud also looks exciting. Yes, there are other integration service providers (Boomi comes immediately to mind), but working on integration of Oracle products on an Oracle platform offers some pretty unique possibilities.
The Process Cloud looks promising, especially if we will eventually be able to extend Oracle packaged applications with custom, cloud-based business processes.
Those are my big three highlights. How about you?
Today, I have the chance to be in London for the amazing Alfresco Summit 2014 (7-9 October 2014)! Two weeks ago, the Alfresco Summit took place in San Francisco, but unfortunately I wasn't available at that time. This year, the Alfresco Summit in London is a three-day event: the first day is, as is often the case, a training day with a complete day course, and the two other days are composed of conferences (General, Business, Technical or Solution sessions).
So yesterday, I attended the Alfresco University training course with Rich McKnight (Principal Consultant of Alfresco) about "Creating Custom Content Models in Alfresco". It was a really good training, very well presented. I'm sure that a lot of people with basic knowledge of Alfresco now have a better understanding of the general concepts of Content Models in Alfresco.
The conferences started today with a brief status update on Alfresco by Doug Dennerline (CEO of Alfresco), with quite a lot of interesting figures. After that, Thomas DeMeo (VP of Product Management at Alfresco) presented the Alfresco Product Keynote... And here we are: the brand new Alfresco Enterprise major version (Alfresco One 5.0) was presented with all its new features. Of course, I already had the opportunity to test almost everything in this version thanks to Alfresco Community 5.0.a (there are often only small differences between Community and Enterprise versions), but it's always good to see a presentation of what's new in an Alfresco Enterprise version, especially a major version! Moreover, I can assure you that you will LOVE this new Alfresco version because of the following points:
- Alfresco 5.0 now uses Solr 4.9 instead of Solr 1.4 (used by Alfresco 4.x), which brings a lot of improvements
- Live Search feature: start typing anything in the search field and Alfresco will show you results as you type for documents, sites, people, wikis or blogs
- Search suggestions and spell check feature: while you type, Alfresco presents suggestions related to what you are writing, and if you mistype a word, Alfresco is able to suggest what the word should be (e.g. for 'Especialy', Alfresco will suggest 'Especially')
- Search by facets: a facet is a search filter based on predefined metadata such as document extension, creator, modifier, and so on
- Default search operator switched from OR to AND: as a consequence, there are generally fewer results to display (which means better performance) and these results are more relevant (better for user satisfaction)
- New document previewer (faster, better browser support) with a search feature to search directly within the document preview. The new previewer lets users crop videos or images directly from Alfresco to keep only the important data in the repository
- An improved WYSIWYG editor
- A new page for administrators to manage all sites in the same place, finally!
- Improved Outlook integration: directly from the Outlook interface, it's possible to create folders/documents, search documents, manage workflows, and so on
- Improved Microsoft Office integration: document creation, modification and upload can be done directly from MS Office, and moreover, you can specify the Type of the document directly from MS Office (even if it's a company-specific Type, e.g. dbi_Meetings_Summary). Once done, you can view and edit all Type-specific Properties on your document! All these elements are then sent to Alfresco when the document is saved
- Better integration between Alfresco & SharePoint: it's possible to use SharePoint as a UI (client) and Alfresco as the repository where documents are stored (server). With this solution, all new features of Alfresco 5.0 are available directly through SharePoint Sites (Live Search, search with facets, document previewer, and so on)
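To illustrate the default-operator change listed above, here is how the same search box input would be interpreted before and after the switch (a sketch in Alfresco's full-text search syntax; exact parsing details may vary):

```
annual report       Alfresco 5.0: annual AND report  (both terms required)
                    Alfresco 4.x: annual OR report   (either term matches)
annual OR report    explicit OR restores the old, broader 4.x behavior
```

Users who relied on the broad OR matching can still get it by writing the operator explicitly.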
This new major version of Alfresco brings new features, but some older ones will disappear:
- The Eclipse-specific SDK: now everything is built using Maven
- Liferay portlets don't exist anymore. Well, they were not very useful, so...
- The Alfresco Explorer client doesn't exist anymore! It's the turn of Alfresco Share to shine!
From what I've seen, there are a lot of other interesting things that will need your attention in the next few months like:
- New versions of the Alfresco mobile apps (currently iOS & Android) with profiles that completely change the way the app looks depending on your role in Alfresco (Business, Sales, Technical, Management, and so on)
- BI directly in Alfresco (using Dashlets)?
- The new Alfresco Activiti Enterprise release (a new separate product with its own UI), which lets business users write their own workflow logic and create the forms they want for all steps of the workflow using a very simple graphical interface
So the first day was very interesting AND exciting! I hope that Conferences Day 2 will be at least as cool! See you tomorrow for a new article with a lot of other interesting things (well, if I have time for it).
Invoice processing is time-consuming and expensive work. This is especially true if invoicing involves inefficient paper-based processes, manually-conducted approvals and manual data-entry into the financial system. These inefficient methods affect the bottom line by increasing costs, creating liability & accrual blind-spots and causing other cash management, reconciliation & reporting challenges.
Change the game with the Next-Generation of A/P Process Automation
- Automate up to 80% of invoice processing, eliminating paper, manual data entry and associated errors
- Streamline operations with automated invoice creation, 2/3-way matching and by easily connecting to a variety of leading ERP business applications
- Realize benefits faster, by leveraging new Cloud and on-premise deployment options & capabilities
Register Now: October 16, 2014, 10 a.m. PT / 1 p.m. ET
Speakers: Chris Preston (Oracle) and Nilay Banker (Founder & CEO, Inspyrus)
Copyright © 2014, Oracle Corporation and/or its affiliates.