- OASIS Cloud Application Management for Platforms (CAMP).
- OASIS Cloud Authorization (CloudAuthZ).
- The AWS GovCloud (US) Region
Is CloudAuthZ the next OAuth?
With all the buzz around cloud, I am curious to see where we are going.
How are all these new initiatives going to impact us on the Web while using Dropbox, Evernote, Google+, LinkedIn, Apple and the like?
Where is the trust going to be on the Web?
I'd like to keep center stage in this role-playing game and own my identity, keeping my tokens secured in my wallet as I do today. I am not sure I want to trust commercial entities to do this for me, free of charge.
Can players like Adobe, Microsoft, Oracle, RedHat and the telcos get into this game and provide the unified platform for everyone to standardize on?
- Define data-config.xml (or whatever you name your data configuration file):
- This file defines how to read data from your RDBMS into the document to be indexed. So, define your SQL for the full import as well as for subsequent partial imports (called delta imports) in this file.
- It also defines how the data read gets mapped to fields: map your database columns to Solr fields here.
- Make sure that you test your SQL using your favorite RDBMS client.
- solrconfig.xml: register the request handler and data-config.xml in solrconfig.xml.
- For example, if your db import is defined as dbimport in data-config.xml, you can define a request handler, specify the request's URL, and map it to data-config.xml.
- schema.xml should contain all the fields that are defined in the document in data-config.xml. The Solr config specifies how those fields should be dealt with when adding documents to the index.
- You can define your datasource either in data-config.xml or in solrconfig.xml.
- You can index the data by an HTTP invocation of http://host:port/solr/dbimport?command=full-import (please note: use whatever path you specified for 'dbimport' in your request handler).
- Please make sure that the appropriate JDBC driver is in the lib path of Solr.
- You can monitor the progress / status as : http://host:port/solr/admin/stats.jsp
- To look inside the index, use the web version of Luke added as a Solr plugin: http://host:port/solr/admin/luke BTW the perfect way to look into indexes would be to install Luke and point it to the data dir.
- Cleanup / re-index: you can either clean up Solr indexes by issuing a clean-up command on your dbimport handler, or you can simply wipe out the contents of the data directory. However, make sure that you really want to do it.
- You can debug indexing (very minimally) by specifying debug=true in your dbimport command. However, make sure that you add commit=true as well, since debug mode does not commit documents by default.
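To make the steps above concrete, here is a minimal sketch of a data-config.xml and the matching request-handler registration in solrconfig.xml. The driver, table, column and field names are all hypothetical placeholders; substitute your own schema:

```xml
<!-- data-config.xml: read from a hypothetical "items" table -->
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="solr" password="secret"/>
  <document>
    <entity name="item"
            query="SELECT id, name, updated_at FROM items"
            deltaQuery="SELECT id FROM items
                        WHERE updated_at &gt; '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT id, name, updated_at FROM items
                              WHERE id = '${dataimporter.delta.id}'">
      <field column="id"   name="id"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
```

```xml
<!-- solrconfig.xml: register the handler under /dbimport -->
<requestHandler name="/dbimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>
```

With this in place, the full-import and delta-import commands hit the /dbimport path registered above.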
Anyhow, here are the steps for reclaiming the space. Disclaimer: as you know I am not a DBA, but I have to do what I have to do:
1. Take a sqldump of the entire db
2. Shut down mysqld
3. Delete (at the filesystem level) ibdata1, ib_logfile0 and ib_logfile1
4. Edit my.cnf (/etc/my.cnf): add innodb_file_per_table
With this parameter, table data will be stored in separate per-table files and only metadata will reside in ibdata1
5. Start mysqld
6. Reload the data dump.
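The steps above can be sketched as a shell session. The paths, service name and options below are assumptions for a typical Linux install, not a tested procedure; keep copies of anything you delete:

```shell
# 1. Dump the entire database
mysqldump -u root -p --all-databases > /backup/all-databases.sql

# 2. Shut down mysqld
sudo /etc/init.d/mysqld stop

# 3. Remove the shared tablespace and redo logs (after copying them aside)
sudo rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# 4. In /etc/my.cnf, under [mysqld], add:
#      innodb_file_per_table

# 5. Start mysqld (it recreates ibdata1 and the log files at a small size)
sudo /etc/init.d/mysqld start

# 6. Reload the dump
mysql -u root -p < /backup/all-databases.sql
```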
It has been a long time since I last used soapUI - see my 2007 post - when I was using version 2.1.x. Today, I got 4.5.1, from the same location: http://www.soapui.org/
I wanted to test a simple WS using a WS-Security username token and ran into an interesting issue: the invocations were failing with an HTTP error, 401 Unauthorized.
The solution was simple - switch the HTTP authentication to use preemptive mode (see documentation).
Changes to Request Setup
Here are 2 common HTTP error codes:
- HTTP/1.1 403 Forbidden: the username/password you provided is not valid.
- HTTP/1.1 401 Unauthorized: the server is sending back an auth challenge that may be ignored by the HTTP client library; the hint in the response is WWW-Authenticate: BASIC realm="owsm".
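As an aside, the preemptive-versus-challenge behavior is easy to observe from the command line with curl (host, path and credentials below are placeholders; soapUI's own preemptive setting lives in its preferences):

```shell
# curl sends the Basic Authorization header preemptively, on the very first request:
curl -u user:secret http://host:port/myservice

# With --anyauth, curl instead waits for the server's WWW-Authenticate challenge
# before picking a scheme and retrying:
curl --anyauth -u user:secret http://host:port/myservice
```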
And 1 common soap fault:
<faultstring>GenericFault : generic error</faultstring>
The key here is ns0: the GenericFault is related to policy enforcement. This is the only hint you will get from the server. For the reason of the failure, you will have to look at the server-side logs. For obvious security reasons, the true root cause is not returned, to limit potential exploits.
Regarding WS-Security setup, it's all here in the soapUI documentation.
More on this to come in some follow-up post.
Anyway, since I was doing this as a POC, I could afford a small downtime. BTW, the process for increasing the volume size is not that difficult. Here are the instructions from another blogger: http://www.e-zest.net/blog/simple-steps-to-change-size-of-ebs-volume-in-ec2-of-aws-using-aws-console/
From the documentation, it's pretty clear - the Help menu.
Enable JDev Extension - section 9 : http://docs.oracle.com/cd/E26098_01/install.1112/e17074/ojdig.htm#BDCBECBH
The issue was to find this menu - until I finally found the empty entry to the right of the Window menu. It's the standard location for this entry on Mac OS X, except that it was not rendered, due to the number of tools I have coming in from the far right end.
As it took me some time to figure it out, I'll share it here in case you are running into the same small hurdle.
The Help menu sits to the right of Window, but without its display name when space is limited. Once you have the extension installed, here is a good blog post that walks you through the next steps: https://blogs.oracle.com/bwb/resource/JUnit_Testing/Unit_Testing_with_JUnit_and_JDeveloper_11g.html
Last weekend I completed 15 years at Oracle, all of those years spent in Applications Product Development, specifically Financials. That may sound like a long time to do the same thing, but I have been through a lot of big shifts in the IT industry and within Oracle during that time. It’s always been a challenging time, and the only constant is change.
I started out in the UK for 2 years before moving out to California in 2000 where I have been ever since, in the 300 building you see on the left.
The earliest Applications release I started out working on was 10.4, and I think the database version was 7. I still have a promotional VHS cassette that all employees were sent when the 8i database was launched, and a license plate holder with the tag line ‘the internet changes everything’ which I got on my first day starting at Oracle HQ. I also have more polo shirts, hats, bags and vests with the Oracle logo on them than I care to count.
As I said earlier, the only constant has been change, and I’ve seen a lot of interesting changes, shifts and fads come and go. Here’s a list of the things I can recall, in somewhat chronological order:
- Character mode Apps
- Client Server GUI Apps
- Browser based Apps
- The Millennium Bug
- Providing 24 hour cover over New Year 2000, when nothing happened
- The emergence of the internet
- The dot com boom
- The dot com bust
- The Network Computer (including Larry showing a network computer on Oprah)
- Oracle starting consolidation in the Enterprise Applications space
- Web 2.0
- Social Apps
- The Cloud
So there are a few of my favorite things; feel free to add yours or comment on mine in the comments section below.
If you want to get some of the revolutionary reporting capabilities that the embedded Essbase General Ledger cube brings to Fusion Accounting Hub, whilst continuing to leverage the investment you have in your Oracle eBusiness Suite Financials, then Oracle Fusion Accounting Hub coexistence is the path to take. This is what Oracle has done to not only improve its reporting capabilities, but also to enable it to move forward with a new Global Chart of Accounts and reduce the time spent on its month-end close process. Oracle is now one of a number of customers live on Fusion Accounting Hub and realizing real business benefits, as discussed by Corey West, Senior Vice President, Oracle’s Corporate Controller and Chief Accounting Officer, in this Blog Post.
I’m very excited about this, and at the General Ledger Special Interest Group meeting at Oracle OpenWorld this year I will outline the functional and technical architecture of the system and some details of the implementation. I’ll also make sure to leave plenty of time for questions. Watch this space for details on timings and location, or follow me on twitter where I will of course post details as we get closer.
I first blogged on this topic back in early 2010, then later introduced it in a guest post on Steve Miranda’s Oracle Applications blog, and last month I did another guest post on the subject. The latest post discusses an Olympic cycling event where pertinent information was available but was not relayed to the cyclists in a timely manner, so they could not act upon it when they needed to; this was part of what cost them medals (there may have been other factors too, but that is a long story).
Timeliness is one of the three key attributes required for BI to be considered embedded BI (the other two are relevant and actionable), and I wanted to give a business example of where timeliness matters. Let’s imagine I am approving an expense report for one of my overseas employees’ home broadband costs for June, July and August. I don’t know if the amount they are claiming is reasonable, and I don’t recall if they already submitted an expense for June. So the embedded BI I want here is the average cost of broadband in that location (from external sources, or maybe the average of all employee expense claims in that location) and some information on previous expense reports; it would also be good to see the amount in US dollars. All this information is available in my Enterprise Application and I can get it quickly. I should not, for example, have to wait until the amount is transferred to the General Ledger and converted into US dollars at my month-end rate published by my Treasury team; an approximation based on a rate from an external web service, or last month’s rate, will work for this purpose. If I get this information in a timely manner, it is there when I review the expense report; I can approve right away, save the time spent navigating to other places to research this, and also avoid any late fees from the credit card firm arising from a delay in my approval.
The example above does not require aggregation of huge amounts of data in an offline data warehouse, nor does it require an eye catching 3D animated Chart; these exciting things do have their place and they are very impressive in demos but I find the simple use cases that bring real improvements to the way people work are the ones that resonate really well when I talk to Users.
Now it’s your turn to agree or disagree with my assessment of things, the comments section is right below and I would love to hear your thoughts.
What more is there to say? This event is going to be fantastic and there’s still time to register!
While installing Oracle Forms and Reports 11gR2 (11.1.2.0.0) from a Mac (OS X Mountain Lion), the following error occurred executing the runInstaller installation script:
$ ./runInstaller
Starting Oracle Universal Installer…
Checking Temp space: must be greater than 270 MB. Actual 40478 MB. Passed
Checking swap space: must be greater than 500 MB. Actual 4094 MB. Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before continuing with the installation. Continue? (y/n) [n] y
I have a remote session from my Mac using Terminal. To export the display, I typed ssh -Y user@servername. I had upgraded my OS to Mountain Lion a couple of days ago, so I thought that might be the cause. I tried to manually start X11 and received a message: clicking on the continue button redirected me to the following URL, About X11 and OS X Mountain Lion, explaining that X11 is no longer part of the OS and that I should use XQuartz from now on. Download and install XQuartz and there you go! You can now continue with the rest of the install via the installer GUI.
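For reference, the display-forwarding setup amounts to something like the following. The user and host names are placeholders, and the exact DISPLAY value will vary:

```shell
# From the Mac (with XQuartz installed), forward X11 over ssh:
ssh -Y oracle@servername

# On the remote server, verify the display is exported before re-running the installer:
echo $DISPLAY                # should print a forwarded display such as localhost:10.0
/usr/bin/xdpyinfo | head -5  # should list display capabilities rather than fail
```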
Last week was busy… making travel arrangements for this week’s trip to New York (technically Jersey) and some light analysis of AWR reports from exadata RAT runs and some heavy troubleshooting of a Solaris x86 RAC cluster with random node reboots. (I think I finally traced the node reboots to a kernel CPU/scheduling problem). I really did thoroughly enjoy my time in Africa despite being nowhere near Oracle software – but it feels good to be working on challenging cluster problems again!
Before I completely forget the details from my work in Africa I want to wrap up my article about high-level lessons learned earlier this week. By the way, I’m not just stretching obscure aspects of my work in Africa to get stuff that sounds good. I view these cultural lessons which we learned together as the most central and most important technical aspect of my work at the hospital. And it might be surprising, but it’s true: the same cultural adjustments are important and oft-missed here in corporate America.
The first two lessons were to (1) understand the fundamentals and (2) avoid unjustifiable complexity. The remaining two lessons I want to talk about are slightly less technical but equally important.
- People first, Technology second
I’m using the word “people” here to sum up three major components of our accomplishments at the hospital: organizational policy, user education and technical training.
First, a fun technical story:
On day 1 after our arrival at the hospital, several issues required immediate attention – I discussed one example in the previous article. A second issue involved the network links to the outside world. In particular, sites like gmail were often completely unreachable. I worked hard on this one. Eventually I got the reproducible test case by connecting to an AWS instance very close to the other end of their link. I compared low-level packet traces from both sides to see what was happening.
I could initiate a very large download and throttle the connection on our end, causing TCP window full messages to cascade all the way up to the AWS source server – normal behavior for throttling connections. But I did notice that the source server sent a very large chunk of data before it started throttling to the same rate that I was demanding.
Next I opened a second connection with a different protocol on a different port. The throttled connection wasn’t even using a fraction of our contracted bandwidth – but every single packet on the second port took a consistent 60 seconds or so to get through. If I killed the download then things would zoom at normal high speeds again.
The killer was that the TLS handshake in HTTPS connections needed a response to the initial packet within 30 seconds in order to continue. Small file downloads had no impact – but if a download was large enough, then SSL quit working completely – although HTTP would still (slowly) connect.
My best guess was that this network provider (not an African company BTW) was running some enormous cache in the middle combined with a single packet queue per customer – no fair queuing by TCP connection. It doesn’t make complete sense but I haven’t yet thought of anything better. Maybe those other packets were just stuck in line on a huge cache throttled by packets in front of them? I actually didn’t think it was possible to configure network equipment to be this stupid… but maybe? Whatever the cause, the end result was that a large download – even throttled – generally trashed our whole network connection.
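For anyone wanting to reproduce this kind of diagnosis, the low-level traces were the sort you can collect with standard tools; a sketch, where the interface name and test host are placeholders:

```shell
# Capture the conversation on each end of the link:
sudo tcpdump -i eth0 -w trace.pcap host my-aws-test-instance

# Open both trace.pcap files in Wireshark and compare timing, looking for
# "TCP Window Full" / "TCP ZeroWindow" events and long delays on other ports.
```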
I did pursue a ticket with the network provider, but it was a lot of effort to keep the issue moving. We discussed technical solutions on our end. Block large downloads? Sometimes they’re needed for the business. I spent quite a bit of time researching various QOS solutions – and this was when I learned a very important lesson. I learned especially from an ebook called How To Accelerate Your Internet (with several good African case studies) and from some slides by Christian Benvenuti (International Centre for Theoretical Physics).
Communication and people can do things that technology can’t.
We weren’t able to resolve this issue in a “technical” way – so how did we solve it? We had something that I’ll call our PEOPLE strategy – it addressed this challenge and many others too. Here are a few elements of the strategy:
- We wrote a computer policy for the hospital, approved and enforceable at the top.
- Made sure a few tools were in place to identify violations.
- Set up a new wiki for the entire hospital community to use for any purpose.
- Designed the network so that every new device was redirected to a wiki page with the policy.
- Held extensive discussions with all staff and compound residents on several technical subjects.
- Hand-picked a few users for special training on several topics.
We kept the policy short and simple and easy to remember – it was three rules that I could recite on my fingers. But it covered our needs well, protecting us from bandwidth problems and from malware.
I worked to catalyze broad user education. Lots of conversations with non-technical people. Basic conversations, not fancy ones! It began to cultivate a new culture around technology.
I briefly mentioned a new wiki up there – this was also a big part of our user education strategy. Except for a small restricted section, the wiki was 100% open to be updated by anybody. Part of our cultural change included training everyone at the hospital to use this wiki as a central repository for processes and information. With a high frequency of arrivals and departures, with people often rotating into and out of well-defined functions, the usefulness of the wiki was immediately apparent.
The third important part of our people strategy was the special training. There were three unique qualities of this training:
- Documentation: Much like the work I’ve done for Oracle RAC Attack, we worked very hard to boil down tough concepts and processes into extremely verbose step-by-step instructions on the wiki. Whenever possible, I taught people to start practicing step-by-step documentation for everything they do.
- Reproducibility: Enabled by the growing documentation library on the wiki, we expected that everyone who received training would be able to train someone else on the same thing. When we switched to a new internet access system, I trained a few people on setting up accounts and had those few people handle the rest of the compound.
- Functional Selection (rather than technical selection): also enabled by the documentation library, we started choosing the most logically positioned people for jobs instead of the most technical people. As one example, several non-IT people helped set up the new internet accounts. As a second example, the special projects coordinator – despite not being an IT person at all – followed extensive documentation to practice a bare-metal server recovery… and built our new server in the process! As a previous director, she was logically positioned for training which could potentially provide access to any data at the hospital.
In Corporate America, we typically have more financial resources, more technology at our disposal, and much longer staff retention. Even if you have the same problems that we had in Africa, your situation will require different strategies. But the main point here is universally, absolutely crucial: there are two sides to every project, the technical side and the people side.
Even if you’re working with outside partners to provide technical expertise, there is an in-house “people” component to your project. Even if it’s being called an appliance, it’s gonna be your baby to use & maintain & retire someday. Even if it’s a cloud-based service, you have to deal with the upgrades & functionality changes & workarounds for buggy, uncommon use cases. Every new technology you start working with will require somebody in-house to start learning about it. Never underestimate the people side of technology! It’s always there and it’s always important.
I think that we easily get immersed in the technology side of our projects (myself included) and we often need a reminder to keep the people side in view.
- Adapt the Process instead of Customizing the Product
Finally, I have one quick word about processes and products. Now I don’t want to overstate this point; obviously there’s a cost-benefit analysis that happens in each case, and often changing business processes is not an easy thing. But I do believe that as technologists we have a tendency to favor customization a little more than we should. And in general – technologists and managers and executives alike – I think we tend not to investigate many possible process changes because we assume they’re not really plausible. If we start asking, we might be surprised how willing people sometimes are to change the way they work in order to have the business as a whole work better. If we’re getting some good new tools, then people can totally understand changes that better accommodate those tools.
When I was working at the hospital, there were two specific places where this discussion happened. The first was around a new pharmacy system that we were putting in place. The present system is heavily paper-based and for many reasons the hospital is pushing ahead with a new computerized system. It’s the classic conversation about software customization and frankly I didn’t do an awful lot besides convince hospital staff to work more closely with the company who writes the pharmacy software. (Great company – and eager to help – and they totally understand this conversation.)
The second conversation was a little more unusual: server administration. Specifically, around NTFS file permissions and administrator access to user home directories. One requirement of the hospital was the ability to audit contents of redirected user home directories. By default these directories are created so that even Administrators cannot peer into them.
Of course none of us were experienced Windows server administrators. With some fiddling we managed to get folder redirection to work. I had found that many folder redirections on the old network didn’t work because of incorrect permissions on home-directory folders. We could have done more testing to figure out what permissions worked – but we were running short on time. So this was where I encouraged the staff to “go light on changing things”. In the end we found another very good solution which didn’t require any NTFS permission changes.
I often aim for environments to be as close as possible to the engineers who are building the software. I’ll ask people from my vendors, “what do your engineers mostly develop on?” With some companies it’s hard to get a straight answer, but I like to ask!
When it comes to customization in Oracle databases, of course underscore parameters come to mind right away. Those little buggers can be life-savers sometimes… but their indiscriminately global effects can also be killers. Be careful!
Well, I hope you can see a bit of why I enjoyed the work in Africa. Even though I wasn’t working for a Fortune 500 company with millions to invest in bleeding-edge technology, there were still interesting and challenging projects. It was a great opportunity to hone my skills at helping people find the best ways to use technology in a real-life business. And to be honest, I think that’s the one thing I’m most passionate about.
I hope that I’ve challenged your thinking a little bit around how technology serves your company. I hope that these new ideas push you to the next level in your professional career. And with any luck I might have even convinced a few people to help out in the non-profit world.
It has been nine months since I’ve written here. Needless to say, a lot has happened!
First, my family was living in Africa for three months earlier this year while I did some tech work at an NGO hospital. Second, upon our return I decided to join the good people at Pythian. I’m not moving to Canada, although I will travel a decent bit as part of the company’s consulting group.
If you’re interested in the Africa trip, look at the Africa page. I wasn’t working with Oracle technology but it was still a very interesting, challenging and engaging project.
I thought I’d briefly share a few high-level insights. You might be surprised how well these lessons apply almost anywhere (even Oracle-related projects)!
Four Lessons from IT in Rural Africa
- Understand the Fundamentals
Two fun and important projects at the hospital in Africa:
- protecting assets (both physical equipment and their data) from very unreliable electricity, and
- doing some field testing to see how radio signals from wifi routers would behave in a large compound with concrete walls and tin roofs.
How much more fundamental does it get than copper wires and radio signals? And in our industry, it doesn’t matter what you do – these are also the fundamentals underneath what you’re building.
In the days of [everything]-as-a-service and engineered-[anything]-appliances, we’re building at a higher level than ever before. You might think that building on clouds (or any abstracted/virtualized platform) means we can leave the platform implementation details to specialists.
But smart companies and experienced engineers still pay a lot of attention to the fundamentals. There’s no magic or voodoo in computer systems. An experienced engineer can understand how a particular stack works from top to bottom – and you should be wary of anyone who won’t explain at some level of detail how their piece works.
Do you remember that youtube video where the data center guy shouts into a bank of hard drives? Even if you’re buying pre-engineered, pre-packaged systems that come by the rack and fill half a room, you still need to ask the same basic environmental questions that you would with any other deployment into your datacenter.
Bottom line: everything generally comes down to the same few basic things – for example processors and I/O and memory/storage hierarchies. Ninety percent of what you need to know, even for very complex systems, is in chapter one of my computer systems college textbook. Know your fundamentals and find them even in your complex systems.
- Avoid Unjustifiable Complexity
When I arrived in Africa, there were a number of issues crying out for immediate attention. For example: the head of finance couldn’t log in to his workstation unless it was unplugged from the network. Every morning, this friendly Canadian guy unplugged his desktop, logged in to his domain account, then re-connected the network cable so he could access his network shares.
This problem had existed for almost a year. A sudden power outage had corrupted a virtual server running as a domain controller. To get systems back up, overseas IT support had directed local employees to restore a previous backup image of the VM. This did restore that server’s file shares, but it caused havoc in the multi-master DC setup – which was never totally resolved.
A simple workaround beyond the unplug-and-login trick was not forthcoming. But now – after discussion with the director of the hospital – we made a key decision and changed course. A major upgrade was on the wish list… so rather than diving into a complex debug & repair operation on the multi-master Windows domain, we put all our effort into the upgrade. And we re-architected the system so that this particular problem could never happen again.
When I say “re-architected” … I mean we started at the very beginning. Did you pay attention during the requirements engineering section of your software engineering college class?
- Functional Requirements: What does this hospital use IT for in the first place? How would we prioritize these functions? … (e.g. communication/email, collaboration/file sharing/printing, internet access for guests & residences, specialized programs like accounting & pharm…)
- Non-Functional Requirements: Assuming that we must have IT, what specific qualities are important to us? How would we prioritize these qualities? … (e.g. data protection after >24hr power loss or theft or emergency evacuation, data retention after mistakes or theft or evacuation or catastrophe, ability for short-term IT staff to become productive quickly, can be supported remotely in lieu of local IT staff, protection from malware and viruses, etc.)
Asking these simple questions led to two very important findings:
- There’s high turnover in the nonprofit world. We might get people for a few months or a year… but a lot of the IT support has to be do-able from overseas.
- There is no need for 99.99% uptime. If there’s a serious problem and we’re offline for a day then that’s really not a big deal at all.
We realized that it was much better to have a simple system which would nonetheless prove reliable and easy to maintain, instead of a complex system which was hard to fix if something ever went wrong. The result was – as I wrote in my technical summary of the trip – we completely got rid of virtualization and the multi-master domain controller “cluster”. We migrated from four operating systems on two servers to a single server with a single operating system. But we also added a few things: on-disk encryption, RAID mirroring, improved & thoroughly tested backups, a printed & tested DR plan, a test server/domain exactly like the production server/domain, and a wiki.
The most important word here is actually not complexity but rather unjustifiable. We justified everything in the architecture by showing traceability to this organization’s unique requirements.
Here in the States, I spend a lot of time working with clusters and I really enjoy it. But I can’t count how many times I’ve thought someone forgot these simple questions. Do you know your requirements?
Maybe you’re buying pre-engineered or pre-packaged systems that come by the rack and fill half a room. You already learned from my first lesson, and you have engineers who understand the fundamentals of these systems. But the most critical part is this step, the second one: now you need to justify those fundamental architectural characteristics by connecting them to the unique requirements of your organization.
Big job? Yes. Somebody else’s job? I don’t buy it. No matter where you are in your organization, you can start asking questions and learning. Don’t assume it’s not your problem just because you’re not the decision-maker. If you become an expert on both your business and your technology, then before long everyone will be specifically asking for your input!
If it’s worthwhile for a non-profit hospital in the middle of Africa, then how much more will it be beneficial for you?
I do have four lessons, but I think I’ll save the other two for another article. (This got long. <g>) Hope it’s thought-provoking and helpful.
Update: Lessons from Africa, Part 2