- It’s a very exciting opportunity to experience the powerful Financial Reporting innovations in our Cloud offerings without disruption to your existing ERP investments.
- It’s a way to take advantage of the Simplified Financials Report Center, optimized for easy access to reports on your choice of mobile device.
- It includes the sunburst data visualization tool, which was my killer demo last week at OpenWorld (see screen shot)
- It’s a way to move to cloud in an incremental manner, realizing business benefits quickly without disruption to your existing business processes and systems.
- It has a companion E-Business Suite feature (available on 12.1.3 and 12.2.4) that will push all your setup and GL Balances to your cloud service and generate reports automatically for you, giving you a zero-configuration reporting solution for your EBS GL Balances data (watch for more detailed posts on this soon)
- It has web services to load General Ledger data from PeopleSoft, JD Edwards or any other ERP system.
- It’s a way to get your hands on the Oracle Social Network which is part of the platform our Cloud offerings are built on.
That’s a decent list to start with, but there are a few things that it isn’t which I should call out:
- It is not (yet) the Accounting Hub Integration Platform with all the rule-based accounting transformations provided by the Subledger Accounting Architecture (SLA)
- It is not a new name designed to confuse you when we already have Financials Accounting Hub and Fusion Accounting Hub.
Look out for future posts going into more detail, or you can look at the cloud service page, which has important details such as pricing.
In the past, most of my customers skipped R1 releases. That is, 8.1.7 -> 9.2 -> 10.2 -> 11.2. SAP does the same. For the very first time, SAP plans to go to 12.1.0.2 + some PSU in spring 2015. But only to avoid running out of support, and without any fancy feature like Multitenant or In-Memory.
#oracle 12.1.0.2 is the last patch set for Release 12.1.
— laurentsch (@laurentsch) October 9, 2014
12.1.0.2, which is not available on AIX yet, will be the last patchset of 12cR1. It is the first and only patchset for that release. It is actually more than a patchset, as it introduced the In-Memory database option and JSON in the database.
The next release is expected at the beginning of 2016 on Linux. 12.1.0.2 patching ends January 2018.
Should I go to an already aborted release, or should I buy extended support for 11.2.0.4 until 2018?
Probably I will go both ways, depending on the applications.
Some software has been built. It generates revenue (or reduces cost) associated with sales, but the effect is not immediate. It could be the implementation of a process change that takes a little time to bed in, or the release of a new optional extra that not everyone will want immediately.
It is expected that when it is initially released there’ll be a small effect. Over the next 6 months there will be an accelerating uptake until it reaches saturation point and levels off.
Nothing particularly unusual about that plan. It probably describes a lot of small scale software projects.
Now let’s put some numbers against that.
At saturation point it’s expected to generate / save an amount equal to 2% of the total revenue of the business. It might be an ambitious number, but it’s not unrealistic.
The business initially generates £250k a month, and experiences steady growth of around 10% a year.
What does the revenue generation of that software look like over the first 12 months?
It’s pretty easy to calculate, plugging in some percentages that reflect the uptake curve:
| Period | Original Business Revenue | Software Revenue Generation | Additional Revenue |
|---|---|---|---|
| 1 | £250,000.00 | 0.2% | £500.00 |
| 2 | £252,500.00 | 0.5% | £1,262.50 |
| 3 | £255,025.00 | 1.1% | £2,805.28 |
| 4 | £257,575.25 | 1.6% | £4,121.20 |
| 5 | £260,151.00 | 1.9% | £4,942.87 |
| 6 | £262,752.51 | 2.0% | £5,255.05 |
| 7 | £265,380.04 | 2.0% | £5,307.60 |
| 8 | £268,033.84 | 2.0% | £5,360.68 |
| 9 | £270,714.18 | 2.0% | £5,414.28 |
| 10 | £273,421.32 | 2.0% | £5,468.43 |
| 11 | £276,155.53 | 2.0% | £5,523.11 |
| 12 | £278,917.09 | 2.0% | £5,578.34 |
| Total | | | £51,539.34 |
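The whole model fits in a few lines of Python. The 1% monthly growth factor follows from the roughly 10% annual growth stated above, and the uptake percentages are the ones in the table:

```python
# Base revenue grows ~1% a month; software uptake ramps to 2% saturation
base = 250_000.00
uptake = [0.2, 0.5, 1.1, 1.6, 1.9] + [2.0] * 7  # percent of revenue, months 1-12

total = 0.0
for month, pct in enumerate(uptake):
    revenue = base * 1.01 ** month       # business revenue in this period
    additional = revenue * pct / 100     # revenue generated by the software
    total += additional
    print(f"Period {month + 1}: £{additional:,.2f}")
print(f"Total: £{total:,.2f}")
```

Running this reproduces the table, including the £51,539.34 first-year total.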
Or, shown on a graph:
So, here’s a question:
What is the opportunity cost of delaying the release by 2 months?
The initial thought might be that the effect isn’t that significant, as the software doesn’t generate a huge amount of cash in the first couple of months.
Modelling it, we end up with this:
| Period | Original Business Revenue | Software Revenue Generation | Additional Revenue |
|---|---|---|---|
| 1 | £250,000.00 | | £- |
| 2 | £252,500.00 | | £- |
| 3 | £255,025.00 | 0.2% | £510.05 |
| 4 | £257,575.25 | 0.5% | £1,287.88 |
| 5 | £260,151.00 | 1.1% | £2,861.66 |
| 6 | £262,752.51 | 1.6% | £4,204.04 |
| 7 | £265,380.04 | 1.9% | £5,042.22 |
| 8 | £268,033.84 | 2.0% | £5,360.68 |
| 9 | £270,714.18 | 2.0% | £5,414.28 |
| 10 | £273,421.32 | 2.0% | £5,468.43 |
| 11 | £276,155.53 | 2.0% | £5,523.11 |
| 12 | £278,917.09 | 2.0% | £5,578.34 |
| Total | | | £41,250.69 |
Let’s show that on a comparative graph, showing monthly generated revenue:
Or, even more illustrative, the total generated revenue:
By releasing 2 months later, we do not lose the first 2 months’ revenue – we lose revenue roughly equivalent to P5 and P6.
When we release in P3, we don’t immediately get the P3 revenue we would have got. Instead we get something roughly equivalent to P1 (it’s slightly higher because the business generates a little more revenue overall in P3 than it did in P1).
This trend continues in P3 through to P8, where the late release finally reaches saturation point (2 periods later than the early release – of course).
Throughout the whole of P1 to P7, the late release carries an opportunity cost. That opportunity cost is never recovered later in the software’s lifespan, as the revenue / cost saving we could have generated in that window is gone.
If the business was not growing, this would amount to a total equal to the last 2 periods of the year.
In our specific example, the total cost of delaying the release for 2 months amounts to 20% of the original expected revenue generation for the software project in the first year.
And this opportunity cost is solely related to the way in which the revenue will be generated: the rate at which the uptake comes in over the first 6 months.
Or to put it another way: in this example, if you increase or decrease the revenue of the business, or the percentage generation at which you reach saturation point, the cost will always be 20%.
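That claim is easy to check numerically: the delay simply shifts the uptake curve to the right, and both the base revenue and the saturation percentage scale the totals linearly, so the relative cost is unchanged. A sketch using the uptake curve from the tables:

```python
# Relative first-year cost of delaying release, for the uptake curve above
def first_year_total(base, ramp, delay=0, growth=1.01):
    # percent-of-revenue uptake per month, shifted right by `delay` months,
    # held at saturation (the last ramp value) once reached
    uptake = [0.0] * delay + ramp + [ramp[-1]] * 12
    return sum(base * growth ** m * uptake[m] / 100 for m in range(12))

ramp = [0.2, 0.5, 1.1, 1.6, 1.9, 2.0]
for base in (250_000, 1_000_000):
    on_time = first_year_total(base, ramp)
    late = first_year_total(base, ramp, delay=2)
    print(f"base £{base:,}: cost of delay = {1 - late / on_time:.1%}")
```

Both bases print the same roughly 20% cost, because scaling the business only scales both totals by the same factor.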
So, when you’re thinking of delaying the release of software it’s probably worth taking a look, modelling your expected uptake and revenue generation to calculate just how much that will cost you…
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It uses a simple data model: Source => Channel => Sink.
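That Source => Channel => Sink model maps directly onto a Flume agent's properties file. A minimal sketch, with illustrative agent and component names (not taken from the book):

```properties
# One agent: a netcat source feeding a logger sink through a memory channel
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

agent1.sources.src1.type = netcat
agent1.sources.src1.bind = localhost
agent1.sources.src1.port = 44444
agent1.sources.src1.channels = ch1

agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000

agent1.sinks.sink1.type = logger
agent1.sinks.sink1.channel = ch1
```

Events written to the netcat source are buffered in the memory channel and drained by the logger sink; swapping any component type is just a configuration change.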
It's a good time to introduce a good book about Flume: Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan (@harisr1234). It is organized into eight chapters: the basics of Apache Hadoop and Apache HBase, the idea of streaming data using Apache Flume, the Flume model (Sources, Channels, Sinks), and more on Interceptors, Channel Selectors, Sink Groups, and Sink Processors. Additionally, there are chapters on Getting Data into Flume and on Planning, Deploying, and Monitoring Flume.
This book is about how to use Flume. It gives a very good grounding in Apache Hadoop and Apache HBase before starting on the Flume data flow model. Readers should know some Java, because they will find Java code examples in the book that make it easy to understand. It's a good book for anyone who wants to deploy Apache Flume and build custom components.
The author dedicates a chapter to each part of the Flume data flow model, so readers can pick just the chapter covering the part they need: a reader who wants to know about Sinks can read Chapter 5 on its own. In addition, Flume has a lot of features, and readers will find examples for them throughout the book. Each chapter has a references section, which readers can use to find out more; it is very easy and quick to use in the ebook.
The illustrations in the book help readers see the big picture of using Flume and give ideas for developing it further in their own systems or projects.
So readers will be able to learn about operating Flume: how to configure, deploy, and monitor a Flume cluster, and how to customize the examples to develop Flume plugins and custom components for their specific use cases.
- Learn how Flume provides a steady rate of flow by acting as a buffer between data producers and consumers
- Dive into key Flume components, including sources that accept data and sinks that write and deliver it
- Write custom plugins to customize the way Flume receives, modifies, formats, and writes data
- Explore APIs for sending data to Flume agents from your own applications
- Plan and deploy Flume in a scalable and flexible way—and monitor your cluster once it’s running
Author: Hari Shreedharan
Written by: Surachart Opun http://surachartopun.com
We planned to do our full production upgrade in a weekend - and one part was the replacement of FAST by xPlore. As we had to index more than 10 million objects, we had to find a solution where the full repository would be indexed by Monday morning.
Since we used a clone of production to validate and test our upgrade process, functions, and performance, we decided to prepare the production fulltext servers by using them first against the clone repository. After the production repository upgrade, the prepared fulltext servers could then be used for production where only the gap of objects (new and modified objects) since the last refresh of the clone would be indexed.
- Creation of a clone
- ADTS installation
- D2-Client, D2-Config installation
- Installation of xPlore in High Availability on servers which will be used later on for production
- Full repository fulltext indexing
- Fulltext search
3) Production rollout preparation
Once the testing was finished and the go-live date defined, we first had to prepare the xPlore fulltext servers, before the rollout weekend, for reuse in production. We removed the index agent on both xPlore index servers using the GUI to get a clean installation where only the index server files/folders were kept.
4) Production rollout
Before doing the upgrade, we cleaned the objects related to the FAST fulltext indexing (dm_ftindex_agent_config, dm_ftengine_config, dm_fulltext_index_s) from the repository, unregistered the events related to both fulltext users (dm_fulltext_index_user, dm_fulltext_index_user_01) and, at the end, removed the old queue objects related to both fulltext index users.
With these steps the repository was ready, once upgraded, to install the xPlore Index agents.
So after the repository upgrade, we installed the Index Agents in an HA configuration and ran the ftintegrity tool on both index servers to get the list of document r_object_ids to be re-fed into the xPlore collection.
To resubmit the r_object_ids listed in ObjectId-dctmOnly.txt and ObjectId-common-version-mismatch.txt:
cp ObjectId-dctmOnly.txt /dctm/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_IndexagentDms/deployments/IndexAgent.war/WEB-INF/classes/ids.txt
Once the file has been processed, ids.txt is renamed to ids.txt.done.
cut -f1 ObjectId-common-version-mismatch.txt -d " " > /dctm/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_IndexagentDms/deployments/IndexAgent.war/WEB-INF/classes/ids.txt
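The two resubmission steps above can be scripted. This is a self-contained sketch using a temp directory with mock ftintegrity output (the real target path is the Index Agent's WEB-INF/classes/ids.txt, and in production you copy one batch, wait for ids.txt.done, then copy the next):

```shell
# Demo of preparing ids.txt batches from ftintegrity output (mock data)
workdir=$(mktemp -d)
cd "$workdir"
# mock ftintegrity output: ids only, and ids with version columns
printf '0900000180001234\n0900000180005678\n' > ObjectId-dctmOnly.txt
printf '0900000180009abc 1.0 2.0\n' > ObjectId-common-version-mismatch.txt
# the version-mismatch file carries extra columns; keep only field 1 (r_object_id)
cut -f1 -d ' ' ObjectId-common-version-mismatch.txt > mismatch-ids.txt
cat ObjectId-dctmOnly.txt mismatch-ids.txt > ids.txt
count=$(wc -l < ids.txt)
echo "ids to resubmit: $count"
```

The `cut` keeps only the object id column, exactly as in the command above; everything else is mock scaffolding.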
Update the ACLs and groups by running aclreplication_for_Dms.sh.
On the standby server, update the aclreplication_for_Dms.sh script by adding in the java command:
Remove objects in fulltext DB if required by running:
With this approach, we had the fulltext database ready on Monday morning when people started to work with the system.
This week, I discovered a new enhancement in SQL Server 2014 which is not hidden, but not really visible either. Among Hekaton and the other new features in SQL Server 2014, a little option appears in table creation.
On the msdn website, we can see a new section in the "CREATE TABLE" code:
We can directly create an index in the "CREATE TABLE" query. Ok, then let's go!
In my example, I create a Diver table with a default clustered index on a primary key (DiverId) and 2 other non-clustered indexes on Diving Level (OWD, AOWD, Level 1, Level 2, …) and Diving Organization (PADI, CMAS, SSI, …).
Prior to SQL Server 2014, you created indexes after setting up the table, like this:
We need to have 3 instructions to create a table and 2 non-clustered indexes.
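For the Diver table described above, that means three separate statements (the exact table and index names here are my own illustration):

```sql
CREATE TABLE dbo.Diver
(
    DiverId INT NOT NULL PRIMARY KEY CLUSTERED,
    DivingLevel VARCHAR(20) NOT NULL,
    DivingOrganization VARCHAR(20) NOT NULL
);

CREATE NONCLUSTERED INDEX IX_Diver_DivingLevel
    ON dbo.Diver (DivingLevel);

CREATE NONCLUSTERED INDEX IX_Diver_DivingOrganization
    ON dbo.Diver (DivingOrganization);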
In SQL 2014, it is very easy with just one query:
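With the new inline INDEX clause, the same schema collapses into a single statement (again, the names are illustrative):

```sql
CREATE TABLE dbo.Diver
(
    DiverId INT NOT NULL PRIMARY KEY CLUSTERED,
    DivingLevel VARCHAR(20) NOT NULL
        INDEX IX_Diver_DivingLevel NONCLUSTERED,
    DivingOrganization VARCHAR(20) NOT NULL
        INDEX IX_Diver_DivingOrganization NONCLUSTERED
);
```

The non-clustered indexes are declared right on the column definitions, so the table and its indexes are created together.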
I'm very happy to share with you new features that are not necessarily in the spotlight!
You will find the MSDN "CREATE TABLE" reference for SQL Server 2014 here.
This is what happens when your abstract is selected! Ready for some fun? It's that time of year again, and the competition will be intense. The "calls for abstracts" for a number of Oracle Database conferences are about to close.
The focus of this posting is how you can get a conference abstract accepted.
As a mentor, Track Manager and active conference speaker I've been helping DBAs get their abstracts accepted for many years. If you follow my 11 tips below, I'm willing to bet you will get a free pass to any conference you wish in any part of the world.
1. No Surprises!
Track Manager After A Surprise

The Track Manager wants no surprises, great content and a great presentation. Believe me when I say they are looking for ways to reduce the risk of a botched presentation, a cancellation or a no-show. Your abstract submission is your first way to show you are serious and will help make the track incredibly awesome.
Tip: In all your conference communications, demonstrate a commitment to follow through.
2. Creative Title.
The first thing everyone sees is the title. I can personally tell you, if the title does not pique my curiosity without sounding stupid, then unless I know the speaker is popular, I will not read the abstract. Why do I do this? Because as a Track Manager, I know conference attendees will do the same thing! And as a Track Manager, I want attendees to want to attend sessions in my track.
Tip: Find two people, read the title to them and ask what they think. If they say something like, "What are you going to talk about?" that's bad. Rework the title.
3. Tell A Story
The abstract must tell a compelling story. Oracle conferences are not academic conferences! There needs to be some problem along with a solution complete with drama woven into the story.
Tip: People forget bullet points, but they never forget a good story.
4. Easy To Read
The abstract must be easy to review. The abstract reviewers may have over a hundred abstracts to review. Make it a good quick read for the reviewers and your chances increase.
Tip: Have your computer read your abstract back to you. If you don't say, "Wow!" rework the abstract.
5. Be A Grown-Up
You can increase the perception that you will physically show up and put on a great show at the conference by NOT putting emoji, bullet points, your name and title, or a product or service pitch into your abstract. NEVER copy/paste from a PowerPoint outline into the abstract or outline. (I've seen people do this!)
Tip: Track Managers do not want to babysit you. They want an adult who will help make their track great.
6. Submit Introductory Level Abstracts
I finally figured this out a couple years ago. Not everyone is ready for a detailed understanding of cache buffer chain architecture, diagnosis, and solution development. Think of it from a business perspective. Your market (audience) will be larger if your presentation is less technical. If this bothers you, read my next point.
Tip: Submit both an introductory level version and advanced level version of your topic.
7. Topics Must Be Filled
Not even the Track Manager knows what people will submit. And you do not know what the Track Manager is looking for. And you do not know what other people are submitting. Mash this together and it means you must submit more than one abstract. I know you really, really want to present on topic X. But would you rather not have an abstract accepted?
Tip: Submit abstracts on multiple topics. It increases your chances of being accepted.
8. Submit Abstract To Multiple Tracks
This is similar to submitting both an introductory and an advanced version of your abstract. Here's an example: if there is a DBA Bootcamp track and a Performance & Internals track, craft your Bootcamp abstract to have a more foundational/core feel, and craft your Performance & Internals version to feel more technical and advanced.
Do not simply change the title; the abstracts cannot be identical. If the conference managers or the Track Manager feel you are trying to game the conference, you present a risk to the conference and their track, and your abstracts will be rejected. So be careful and thoughtful.
Tip: Look for ways to adjust your topic to fit into multiple tracks.
9. Great Outline Shows Commitment
If the reviewers have read your title and abstract, they are taking your abstract seriously. Now is the time to close the deal by demonstrating you will put on a great show. And this means you already have in mind an organized and well thought out delivery. You convey this with a fantastic outline. I know it is difficult to create an outline BUT the reviewers also know this AND having a solid outline demonstrates to them you are serious, you will show up, and put on a great show.
Tip: Develop your abstract and outline together. This strengthens both and develops a kind of package the reviewers like to see.
10. Learning Objectives Show Value
You show the obvious value of your topic through the learning objectives. Personally, I use these to help keep me focused on my listener, not just on what I'm interested in at the moment. Because I love my work, I tend to think everyone else does too... not so. I must force myself to answer the question, "Why would a DBA care about this topic?"
Tip: Develop your learning objectives by asking yourself, "When my presentation is over, what do I want the attendees to remember?"
11. Submit About Problems You Solved
Submit on the topics you have personally explored and found fascinating. Every year, every DBA has had to drill deep into at least one problem. This concentrated effort means you know the topic very well. And this means you are qualified to tell others about it! People love to hear from people who are fascinated about something. Spread the good news resulting from a "bad" experience.
Tip: Submit on topics you have explored and are fascinated with.
How Many Abstracts Should I Submit?
It depends on the conference, but for a big North America conference like ODTUG, RMOUG and IOUG I suggest at least four.
Based on what I wrote above, pick three topics, perhaps create both an introductory and an advanced version, and look to see if it makes sense to submit to multiple tracks. That means you'll probably submit at least four abstracts. It's not as bad as it sounds, because you will only have perhaps three core abstracts; all the others are modifications to fit a specific need. Believe me, when you receive the acceptance email, it will all be worth it!
See you at the conference!
The innovation engine in the field of Business Intelligence and data visualization tools is certainly cranked up. QlikView, Tableau and TIBCO Spotfire introduced the new Data Visualization category in the field of Business Intelligence.
Now every vendor offers some form of Data Discovery. Oracle is also working on something similar, adding to the confusing mix of its OBIEE stack.
With the launch of the new InfoCaptor, you can perform ad-hoc data visualizations and build dashboards all within the browser. Now that is refreshing. The browser is the key here. Once you deploy on the server, users can simply log in, upload their datasets or point to an existing database connection. Before you know it, users are already slicing and dicing their datasets and swimming in a world of beautiful visualizations. Yes, the visualizations are absolutely stunning, and why shouldn't they be? They are based on the excellent d3js.org library.
The key here is that the browser is your canvas, and it is pretty huge; for example, the default size for the visuals takes up my entire browser screen real estate. I like big visuals, and if I am producing a trellis chart I can simply drag the corners and resize it. The visualization library is very comprehensive and offers around 30 visuals, including a bullet graph for KPI tracking.
Here are some screenshots from the website
InfoCaptor is also available in the cloud as a service, and there are a few live analyses to try out without logging in or installing anything.
I would say that with this release, small business owners have truly found their Tableau or QlikView alternative.
Go check out the new InfoCaptor Data Visualizer
Welcome to RDX. For our retail customers, the holiday season is a critical time of year for revenue generation. The increased activity can put additional stress on transactional databases.
Here are some best practice suggestions to ensure your databases are ready for the holiday season from RDX Director of Technical Sales, Katy Park:
First, put in a high availability solution if you do not have one.
Secondly, run a test of your DR plan to ensure you can meet your time to recovery objectives.
Ask your DBA for code tuning suggestions for queries that are run often and utilize a lot of resources.
You should also consider removing the reporting load from your transactional database if reports are currently running on the production server.
And finally, review object sizes and maximum server capacities.
Thanks for watching, and we'll see you next time!
Time to clear up some confusion.
In the past 60 days, I have encountered the following:
- Two different customers who said they were told by Oracle Support that "APEX isn't supported."
- An industry analyst who asked "Is use of Oracle Application Express supported? There is an argument internally that it cannot be used for production applications."
- A customer who was told by an external non-Oracle consultant "Oracle Application Express is good for a development environment but we don't see it being used in production." I'm not even sure what that means.
- Oracle Application Express is considered a feature of the Oracle Database. It isn't classified as "free", even though there is no separate licensing fee for it. It is classified as an included feature of the Oracle Database, no differently than XML DB, Oracle Text, Oracle Multimedia, etc.
- If you are licensed and supported for your Oracle Database, you are licensed and supported (by Oracle Support) for Oracle Application Express in that database. Many customers aren't even aware that they are licensed for it.
- If you download a later version of Oracle Application Express made available for download from the Oracle Technology Network and install it into your Oracle Database, as long as you are licensed and supported for that Oracle Database, you are licensed and supported (by Oracle Support) for Oracle Application Express in that database.
- Oracle Application Express is listed in the Lifetime Support Policy: Oracle Technology Products document.
As far as the customers who believed they were told directly by Oracle Support that Oracle Application Express isn't supported, there was a common misunderstanding. In their Service Requests to Oracle Support, they were told that Oracle REST Data Services (formerly called Oracle Application Express Listener, the Web front-end to Oracle Application Express) running in stand-alone mode isn't supported. This is expressed in the Oracle REST Data Services documentation. However, this does not pertain to the supportability of Oracle Application Express. Additionally, a customer can run Oracle REST Data Services in a supported fashion in specific versions of Oracle WebLogic Server, Glassfish Server, and Apache Tomcat. To reiterate - running Oracle REST Data Services in standalone mode is the one method which is not supported in production deployments, as articulated in the documentation - however, you can run it supported in Oracle WebLogic Server, Glassfish Server and Apache Tomcat.
Oracle Application Express has been a supported feature of the Oracle Database since 2004, since it first shipped as Oracle HTML DB 1.5 in Oracle Database 10gR1. Every subsequent version of Oracle Application Express has been supported by Oracle Support when run in a licensed and supported Oracle Database. Anyone who says otherwise is...confused.
Today, I'm simply putting up a link to the best Oracle press release on recent cloud announcements. The release touts the six new platform services for Oracle Cloud. You can find it here. This is the "sneak peek", made especially for those of you who think I'm too slow about writing things. Heck, I'm much faster than George R.R. Martin, but anything to keep y'all happy...
UPDATE: So the highlights for me all have to do with PaaS (Platform-as-a-Service)... 30,000 devices, 400 petabytes of storage, 19 data centers around the globe... whew.
I had the opportunity to work hands-on with the Mobile Cloud, which puts development, deployment and administration onto one user interface (yup, it's the Oracle Alta UI). Built a mobile app in about 30 minutes. More on that in a subsequent post.
The Integration Cloud also looks exciting. Yes, there are other integration service providers (Boomi comes immediately to mind), but working on integration of Oracle products on an Oracle platform offers some pretty unique possibilities.
The Process Cloud looks promising, especially if we will eventually be able to extend Oracle packaged applications with custom, cloud-based business processes.
Those are my big three highlights. How about you?
Today, I have the chance to be in London for the amazing Alfresco Summit 2014 (7-9 October 2014)! Two weeks ago, the Alfresco Summit took place in San Francisco, but unfortunately I wasn't available at that time. This year, the Alfresco Summit in London is a three-day event: the first day is, as is often the case, a training day with a full-day course, and the other two days are composed of conferences (General, Business, Technical or Solution sessions).
So yesterday, I attended the Alfresco University training course with Rich McKnight (Principal Consultant of Alfresco) about "Creating Custom Content Models in Alfresco". It was a really good training, very well presented. I'm sure that a lot of people with basic knowledge of Alfresco now have a better understanding of the general concepts of Content Models in Alfresco.
The conferences started today with a brief status of Alfresco by Doug Dennerline (CEO of Alfresco), with quite a lot of interesting figures. After that, Thomas DeMeo (VP of Product Management at Alfresco) presented the Alfresco Product Keynote... And here we are: the brand new Alfresco Enterprise major version (Alfresco One 5.0) was presented with all its new features. Of course, I already had the opportunity to test almost everything in this version thanks to Alfresco Community 5.0.a (there are often small differences between Community and Enterprise versions), but it's always good to see a presentation of what's new in an Alfresco Enterprise version, especially a major version! Moreover, I can assure you that you will LOVE this new Alfresco version because of the following points:
- Alfresco 5.0 now uses Solr 4.9 instead of Solr 1.4 (used in Alfresco 4.x), which brings a lot of improvements
- Live Search feature: start typing anything in the search field and Alfresco will show you results as you type for documents, sites, people, wikis or blogs
- Search suggestions and spell check feature: when typing, Alfresco presents suggestions related to what you are writing, and if you mistyped a word, Alfresco is able to suggest what the word should be (e.g. for 'Especialy', Alfresco will suggest 'Especially')
- Search by facets: a facet is a search filter based on a pre-defined metadata field like document extension, creator, modifier, and so on
- Default search operator switched from OR to AND: a consequence is that there are generally fewer results to display (which means better performance) and these results are more relevant (better for user satisfaction)
- New document previewer (faster, better browser support) with a search feature to search directly inside the document preview. The new previewer lets users crop videos or images directly from Alfresco to keep only the important data in the repository
- An improved WYSIWYG editor
- A new page for administrators to manage all sites in the same place, finally!
- Improved Outlook integration: directly from the Outlook interface, it's possible to create folders/docs, search documents, manage workflows, and so on
- Improved Microsoft Office integration: document creation, modification and upload can be done directly from MS Office. Moreover, you can specify the Type of the document directly from MS Office (even if it's a company-specific Type, e.g. dbi_Meetings_Summary), and once done, you can view and edit all Type-specific properties on your document. All these elements are then sent to Alfresco when the document is saved
- Better integration between Alfresco & SharePoint: it's possible to use SharePoint as a UI (client) and Alfresco as a repository where documents are stored (server). With this solution, all new features of Alfresco 5.0 are available directly through the SharePoint sites (Live Search, search with facets, document previewer, and so on)
This new major version of Alfresco brings its new features but some others will disappear:
- The Eclipse-specific SDK: now everything is built using Maven
- Liferay portlets don't exist anymore. Well, they were not very useful, so...
- The Alfresco Explorer client doesn't exist anymore! It's the turn of Alfresco Share to shine!
From what I've seen, there are a lot of other interesting things that will need your attention in the next few months like:
- New versions of the Alfresco mobile apps (currently iOS & Android) with profiles that completely change the way the app looks depending on your role in Alfresco (Business, Sales, Technical, Management, and so on)
- BI directly in Alfresco (using Dashlets)?
- The new Alfresco Activiti Enterprise release (a new separate product with its own UI), which lets business users write their own workflow logic and create the forms they want for all steps of a workflow using a very simple graphical interface
So the first day was very interesting AND exciting! I hope that the Conferences Day 2 will be at least as cool! See you tomorrow for a new article with a lot of other interesting things (well, if I have time for it).
Invoice processing is time-consuming and expensive work. This is especially true if invoicing involves inefficient paper-based processes, manually-conducted approvals and manual data-entry into the financial system. These inefficient methods affect the bottom line by increasing costs, creating liability & accrual blind-spots and causing other cash management, reconciliation & reporting challenges.
Change the game with the Next-Generation of A/P Process Automation
- Automate up to 80% of invoice processing, eliminating paper, manual data entry and associated errors
- Streamline operations with automated invoice creation, 2/3-way matching and by easily connecting to a variety of leading ERP business applications
- Realize benefits faster, by leveraging new Cloud and on-premise deployment options & capabilities
Register Now: October 16, 2014, 10 a.m. PT / 1 p.m. ET

Speakers:
- Chris Preston, Oracle
- Nilay Banker, Founder & CEO, Inspyrus
I keep forgetting how to type é on Windows 8 (I used to use CTRL+ALT+E, but that's now often reserved for the Euro symbol).
I then tend to run a search on Google, and end up being pointed towards 8-year-old answers that point you to Character Map, options in old versions of Word, or the old way of typing the extended ASCII character code.
They all suck.
And then I remember - it's easy.
You start by pressing CTRL + a key that represents the accent, then type the letter you want accented.
For example, CTRL + ' followed by e gives you é.
The great thing about using this technique is that the characters you use (dead letters) are representative of the accents you want to type. This makes them much easier to remember than the seemingly random character codes.
Here are the ones I know about:
- CTRL + ' (acute): é
- CTRL + ` (grave): è
- CTRL + SHIFT + 6 / CTRL + ^ (circumflex): ê
- CTRL + , (cedilla): ç
- CTRL + ~ (tilde, perispomene): õ
- CTRL + SHIFT + 7 / CTRL + & (diphthongs / others): a = æ, o = œ, s = ß
It doesn't quite work with every app (Blogger on Chrome, for example), but it certainly covers Office 2013, including both Outlook and Word.
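For the curious, the characters these shortcuts produce are ordinary Unicode code points; a quick Python one-off (my own illustration, nothing Office-specific) prints their official names:

```python
import unicodedata

# Print each character the shortcuts above produce, with its code point
# and official Unicode name.
for ch in "éèêçõæœß":
    print(ch, hex(ord(ch)), unicodedata.name(ch))
```

Running it shows, for example, that é is U+00E9 "LATIN SMALL LETTER E WITH ACUTE", which is also the code you'd use with the old ALT+code-point method.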
Today’s blog post is part one of seven in a series dedicated to Deploying a Private Cloud at Home. In my day-to-day activities, I come across various scenarios where I’m required to do sandbox testing before proceeding further on the production environment—which is great because it allows me to sharpen and develop my skills.
My home network consists of an OpenFiler NAS, which also serves DNS, DHCP, iSCSI, NFS and Samba on my network. My home PC is a Fedora 20 workstation, where I do most of my personal activities. A KVM hypervisor running on CentOS 6.2 x86_64 hosts sandbox VMs for testing.
Recently I decided to turn this setup into a private cloud at home. There are plenty of open source cloud solutions available, but I decided to use OpenStack for two reasons.
- I am already running Red Hat-compatible distros (CentOS and Fedora), so I just need to install OpenStack on top of them to get started.
- Most of the clients I support run RHEL-compatible distros in their environments, so it makes sense to have RHEL-compatible distros to play around with.
Ideally, an OpenStack cloud consists of a minimum of three nodes, with at least two NICs on each node.
- Controller: As the name suggests, this is the controller node which runs most of the control services.
- Network: This is the network node which handles virtual networking.
- Compute: This is the hypervisor node which runs your VMs.
However, due to the small size of my home network, I decided to use legacy networking, which requires only the controller and compute nodes, each with a single NIC.
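As a rough sketch of what that two-node, legacy-networking install can look like with RDO's packstack installer (the answer-file keys vary between OpenStack releases, and the IP below is a placeholder, not my actual network):

```shell
# Install the packstack installer from the RDO repository (CentOS).
sudo yum install -y openstack-packstack

# Generate an answer file so every deployment option can be reviewed.
packstack --gen-answer-file=answers.txt

# For legacy (nova-network) networking on controller + compute, edit
# the answer file before applying it, for example:
#   CONFIG_NEUTRON_INSTALL=n           # skip the dedicated network node
#   CONFIG_COMPUTE_HOSTS=192.168.1.20  # placeholder compute-node IP
packstack --answer-file=answers.txt
```

The answer-file route is worth the extra step over `--allinone` precisely because it lets you split controller and compute onto separate boxes.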
Stay tuned for the remainder of my series, Deploying a Private Cloud at Home. In part two of seven, I will be demonstrating configuration and setup.
Last week I had the great pleasure of attending Oracle Open World (OOW) for the first time, presenting No Silver Bullets – OBIEE Performance in the Real World at one of the ODTUG user group sessions on the Sunday. It was a blast, as the saying goes, but the week before OOW I was more nervous about the event itself than my presentation. Despite having been to smaller conferences before, OOW is vast in its scale, and I felt like I did the week before going to university for the first time: full of uncertainty about what lay ahead, and worrying that everyone would know everyone else except me! So during the week I jotted down a few things that I’d have found useful to know ahead of going, which will hopefully help others going to OOW take it all in their stride from the very beginning.
Coming and going
I arrived on the Friday at midday SF time, and it worked perfectly for me. I was jetlagged, so I walked around like a zombie for the remainder of the day. On Saturday I had the chance to walk around SF and get my bearings geographically, culturally and climatically. Sunday is “day zero”, when all the user group sessions are held, along with the opening OOW keynote in the evening. I think if I’d arrived Saturday afternoon instead I’d have felt a bit thrust into it all straight away on the Sunday.
In terms of leaving, the last formal day is Thursday, and it’s a full day of sessions too. I left straight after breakfast on Thursday and felt I was leaving too early. But OOW is a long few days and nights, so chances are by Thursday you’ll be beat anyway; check the schedule and plan your escape around it.
Accommodation
Book in advance! Like, at least two months in advance. There are 60,000 people descending on San Francisco, all wanting some place to stay.
Get an Airbnb: a lot more for your money than a hotel. The wifi is generally going to be a lot better, and having a living space in which to exist is nicer than just a hotel room. Don’t fret about the “perfect” location. Anywhere walkable to Moscone (where OOW is held) is good because it means you can drop your rucksack off at the end of the day, but other than that the events are spread around, so you’ll end up walking further to at least some of them. Or get an Uber like the locals do!
Sessions
Go to Oak Table World (OTW), it’s great, and free. Non-marketing presentations from some of the most respected speakers in the industry. Cuts through the BS. It’s also basically on the same site as the rest of OOW, so easy to switch back and forth between OOW/OTW sessions.
Go and say hi to the speakers. In general they’re going to want to know that you liked it. Ask questions — hopefully they like what they talk about so they’ll love to speak some more about it. You’ll get more out of a five minute chat than two hours of keynote. And on that subject, don’t fret about dropping sessions — people tweet them, the slides are usually available, and in fact you could be sat at your desk instead of OOW and have missed the whole lot so just be grateful for what you do see. Chance encounters and chats aren’t available for download afterwards; most presentations are. Be strict in your selection of “must see” sessions, lest you drop one you really really did want to see.
Use the schedule builder in advance, but download it to your calendar (watch out for line-breaks in the exported file that will break the import) and sync it to your mobile phone so you can see rapidly where you need to head next. Conference mobile apps are rarely that useful and frequently bloated and/or unstable.
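If the exported calendar file does refuse to import because of those stray line-breaks, a few lines of scripting can patch it up. This is a rough sketch under my own heuristic (the function name and regex are mine, not part of any Oracle tooling): anything that doesn't look like an iCalendar property line or an RFC 5545 folded line gets re-attached to the line above it.

```python
import re

def repair_ics(text: str) -> str:
    """Re-join stray line-breaks in an exported .ics calendar file.

    A line that doesn't start like a property (NAME: or NAME;PARAM=...)
    and isn't a folded continuation (leading space/tab) is assumed to be
    a broken fragment of the previous line's value, and is re-attached
    as an escaped literal newline per RFC 5545.
    """
    prop = re.compile(r"^[A-Za-z0-9-]+[;:]")
    out = []
    for line in text.splitlines():
        if out and not prop.match(line) and not line.startswith((" ", "\t")):
            out[-1] += "\\n" + line.strip()
        else:
            out.append(line)
    return "\r\n".join(out) + "\r\n"
```

The heuristic can misfire on a value line that happens to begin with `Something:`, so eyeball the result before importing it.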
Don’t feel you need to book every waking moment of every day to sessions. It’s not slacking if you go to half as many but are twice as effective from not being worn out!
Dress
Dress-wise, jeans and a polo are fine, with a company polo or a shirt for delivering presentations. Day wear is fine for evenings too; no need to dress up. Some people do wear shorts, but they’re in the great minority. There are lots of suits around, given it is a customer/sales conference too.
Socialising
The sessions and random conversations with people during the day are only part of OOW — the geek chat over a beer (or soda) is a big part too. Look out for the Pythian blogger meetup, meetups from your country’s user groups, companies you work with, and so on.
Register for the evening events that you get invited to (ODTUG, Pythian, etc) because often if you haven’t pre-registered you can’t get in if you change your mind, whereas if you do register but then don’t go that’s fine as they’ll bank on no-shows. The evening events are great for getting to chat to people (dare I say, networking), as are the other events that are organised like the swim in the bay, run across the bridge, etc.
Sign up for stuff like the swim in the bay; it’s good fun, and I can’t even really swim. The run and the bike ride across the bridge are two other organised events. Hang around on Twitter for details; people like Yury Velikanov and Jeff Smith are usually in the know, if not doing the actual organising.
General
When the busy days and long evenings start to take their toll, don’t be afraid to duck out and decompress. Grab a shower, get a coffee, do some sightseeing. Don’t forget to drink water as well as the copious quantities of coffee and soda.
Get a data package for your mobile phone in advance of going, e.g. £5 per day unlimited data. Conference wifi is just about OK at best, and often flaky. Trying to organise short-notice meetups with other people by IM/Twitter/email gets frustrating if you only get online half an hour after the time they suggested meeting!
Don’t pack extra clothes ‘just in case’. Pack minimally because (1) you are just around the corner from Market Street, with Gap, Old Navy etc., so can pick up more clothes cheaply if you need to, and (2) you’ll get t-shirts from exhibitors and events (e.g. the swim in the bay), and you’ll need the suitcase space to bring them all home. Bring a suitcase with space in it, or one that expands; don’t arrive with a suitcase that’s already at capacity.
Food
So much good food and beer. Watch out for some of the American beers; they seem to start at about 5% ABV and go upwards, compared to around 3.6% ABV here in the UK. Knocking them back at the same rate will get messy.
In terms of food you really are spoilt for choice; some of my favourites were:
- Lori’s Diner (map): As a Brit, I loved this American diner, and great food, yum yum. 5-10 minutes’ walk from Moscone.
- Mel’s Drive-In (map): Just round the corner from Moscone; very busy but lots of seats. Great American breakfast experience! Yum.
- Grove (map) : Good place for breakfast if you want somewhere a bit less greasy than a diner (WAT!)