Fishbowl Solutions recently held a webinar about our newest product, ControlCenter for Oracle WebCenter Content. Product manager Kim Negaard discussed ControlCenter’s unique features and advantages for controlled document management. If you missed the webinar, you can now view it on YouTube:
Below is a summary of questions Kim answered during the webinar. If you have any unanswered questions, or would just like to learn more, feel free to contact Fishbowl by emailing firstname.lastname@example.org.
Is this a custom component that I can apply to current documents and workflow processes, or are there additional customizations that need to be done?
ControlCenter is installed as a custom component and works with current documents. You can identify which document types and workflows you want to be accessible through ControlCenter, and additional customizations are not necessary.
Does the metadata sync with the header information in both Microsoft Word and Adobe PDF documents?
The metadata synchronization works with PDF and Word documents.
Do you need to have a specific template in order to synchronize?
Not really. There are two ways you can insert metadata into a document:
- You can use the header or footer area and replace anything existing in the current header with a standard header.
- You can use properties fields: for example, anytime the Microsoft Office properties field “document ID” occurs in a document, the dDoc name value could be inserted into that field.
In either of these cases, the formatting standards wouldn’t be rigid, but you would ideally want to know before rolling this out which approach you’d like to take.
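As a rough illustration of the properties-field approach described above: wherever a known field occurs, the corresponding metadata value is substituted. The field value and the `{...}` placeholder syntax below are hypothetical (real Word property fields work through field codes, not text placeholders); this is just a sketch of the substitution idea.

```python
# Hypothetical sketch of property-field substitution: wherever a known
# properties field occurs in the document text, insert the metadata value.
metadata = {"document ID": "D12345"}  # e.g. the dDocName value (made-up ID)

def fill_fields(text, fields):
    # Replace "{field name}" placeholders with their metadata values.
    for name, value in fields.items():
        text = text.replace("{" + name + "}", value)
    return text

print(fill_fields("Controlled copy of {document ID}", metadata))
```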
What version of WebCenter do you need for this?
This is supported on WebCenter Content version 11.1.1.8 or above.
Is it completely built on component architecture as a custom component?
Does ControlCenter require the Site Studio component to be enabled in order to work?
No, Site Studio doesn’t have to be enabled.
Does ControlCenter require Records Management to be installed on top of WCC?
Does the custom component require additional metadata specific to ControlCenter?
Not exactly. We’ve made it pretty flexible; for example, with the scheduled reviews, we don’t force you to create a field called “review date”. We allow you to pick any date field you want to use for the scheduled reviews, so that if you already have something in place you could use it.
Where do you put ControlCenter if you don’t already have an existing server?
You do need to have an Oracle WebCenter Content server to run ControlCenter. If you don’t have a server, you’ll need to purchase a license for WebCenter Content. However, you don’t need any additional servers besides your WebCenter Content server.
Does the notification have to go to a specific author, or could you send it to a group or list in case that author is no longer with the organization?
The notification system is very flexible in terms of who you can send documents to. You can send it to a group, like an entire department or group of team leads, or it can be configured to send to just one person, like the document author or owner.
How does this work with profiles?
ControlCenter fully supports profiles. When you view content information for a document, it will display the metadata using the profile. If you check in a document using a check-in profile, then all of the metadata and values from that profile will be respected and enforced within ControlCenter. I should also mention that ControlCenter does support DCLs, so if you’re using DCLs those will be respected, both from a check in perspective but also in the metadata on the left. So as you are creating a browse navigation in ControlCenter, it will recognize your DCLs and allow you to filter with the proper relationships.
Does it integrate with or support OID (Oracle Internet Directory)/OAM (Oracle Access Manager)?
ControlCenter will use whatever authentication configuration you already have set up. So if you’re using OAM with WebCenter Content, then that’s what ControlCenter will use as well.
Does it support any custom metadata that has already been created?
Yes, if you have custom metadata fields that are already created, any of those can be exposed in ControlCenter.
Does it support any other customizations that have already been defined in the WebCenter Content instance?
It will depend on the nature of the customization. In general, if you have customized the WebCenter UI, those customizations would not show up in ControlCenter because ControlCenter has a separate UI; however, customizations on the back end, like workflow or security, would likely carry over into ControlCenter.
Does ControlCenter integrate with URM?
The ControlCenter interface isn’t specifically integrated with URM.
In case of a cluster environment, does ControlCenter need to be installed on both WebCenter Content servers?
Yes, if you have a clustered WebCenter Content environment, you would need to install ControlCenter on both/all nodes of the clustered environment.
Does it change anything within core WebCenter?
Not really. The only change to the core UI is an additional button in the Browse Content menu that will take you to the ControlCenter interface. But other than that, ControlCenter doesn’t change or prevent you from using the regular Content Server interface.
Can you customize the look and feel (icons, colors, etc.)?
Yes. We will work with you to customize the look and feel, widgets, etc. The architecture that we used when we created this supports customization.
The post ControlCenter for WebCenter Content: Controlled Document Management for any Device appeared first on Fishbowl Solutions' C4 Blog.
I attended the Oracle HCM Cloud Partner Enablement Summit near Milan, Italy to explain the Oracle Applications User Experience (OAUX) enablement strategy of using Oracle Platform as a Service (PaaS) to extend the Oracle Applications Cloud. We enable partners to offer their customers even more: a great user experience across the Software as a Service (SaaS) applications portfolio. We call this PaaS4SaaS.
The central part of my charter is to drive an OAUX PaaS4SaaS strategy that resonates with the business needs of the Oracle PartnerNetwork (OPN) and our own sales enablement worldwide, but with the EMEA region as focus.
We have a great team that delivers Oracle PaaS and SaaS enablement and direct deal support, scaling our outreach message and running events so that the proven resources to win more business get into the hands of our partners and our sales teams.
The OAUX team's PaaS4SaaS enablement is based on a rapid development kit (RDK) strategy of simple development, design, and business materials. After a few hours, partners walk away from one of our events with cloud solutions they can sell to customers.
Let me explain more broadly why our PaaS4SaaS approach is a partner differentiator and a competitive must-have, and how you can get in on the action!
During the event in Italy, I deployed live a tablet-first Oracle Applications Cloud simplified UI from the RDK to the Oracle Java Cloud Service - SaaS Extension (JCS-SX), demonstrating that our apps are not only simple to use, but easy to build, and therefore easy for partners to sell.
Make no mistake; PaaS4SaaS is the partner differentiator when it comes to competing in the cloud. Our enablement means partners can:
- Build Oracle Applications Cloud simplified UIs productively using Oracle PaaS and SaaS.
- Offer customization and integration confidence to customers in the cloud: they’ll get the same great user experience that Oracle delivers out of the box.
- Identify new reusable business opportunities in the cloud and win more deals.
- Accelerate innovation and SaaS adoption and increase the range of value-add PaaS solutions offered to customers.
- Sharpen sales and consulting strategies using the user experience message, and take your position in the partner world to a new level.
But don’t just take it from me; check out the wisdom of the cloud, and what our partners, the press, and Oracle’s leadership team have to say about PaaS and SaaS:
- PaaS and SaaS Perfect: Partners bring innovation to the Oracle ecosystem and run it on Oracle’s Cloud (Debra Lilley [@debralilley], Vice President Certus Cloud Services, Certus Solutions)
- "PaaS for SaaS will emerge as the de-facto way to extend Oracle Applications in a safe, manageable, and cost-effective way" (Debra Lilley)
- By Oracle OpenWorld 2015 (October) 95% of Oracle products will be in the Cloud (Mark Hurd [@markvhurd], co-CEO Oracle Corporation)
- PaaS opportunity even bigger than SaaS (Forbes OracleVoice [@forbes])
- On the Oracle Cloud—"an out-of-the-box, standardized environment to start building the application or deploying the application or data in an hour" (Mike Lehman, Vice President, Product Management, Oracle Corporation)
- Partners can better tailor their applications to customers’ needs and, in general, make them better and faster (Steve Miranda [@stevenrmiranda], Executive Vice President, Applications Development, Oracle Corporation)
Here are the partner requirements to start that conversation with you about OAUX enablement:
- Do you have use cases for PaaS and the Oracle Applications Cloud?
- Do you want user experience (UX) as the partner differentiator?
- Are you an Oracle Applications Cloud (ERP, HCM, Sales) partner who wants to lead and influence?
- Do you have Oracle ADF, Oracle Fusion Middleware, SOA, Cloud or other development skills in-house or by way of an alliance?
- Are you willing to participate jointly in Oracle outreach and communications about your enablement and the outcome?
For more information on our PaaS4SaaS enablement, check out these links:
- Oracle.com/UsableApps > Build a simplified UI
- Oracle UX PaaS4SaaS Enablement with Certus Solutions
- Making the Cloud Your Own (AppsConnect recording, access required)
- OPN Oracle Fusion (Cloud) Applications UX Specialization (Access required)
- OPN Oracle Cloud Applications UX Extensibility Specialization
I’m trying to compare two types of database servers and it looks like one has a faster CPU than the other. But, the benchmark I have used runs a complicated variety of SQL so it is hard to really pin down the CPU performance. So, I made up a simple query that eats up a lot of CPU and does not need to read from disk.
First I created a small table with five rows:
create table test (a number);
insert into test values (1);
insert into test values (1);
insert into test values (1);
insert into test values (1);
insert into test values (1);
commit;
Then I ran a query Cartesian joining that table to itself multiple times:
select sum(t1.a)+ sum(t2.a)+ sum(t3.a)+ sum(t4.a)+ sum(t5.a)+
       sum(t6.a)+ sum(t7.a)+ sum(t8.a)+ sum(t9.a)+ sum(t10.a)
from test t1, test t2, test t3, test t4, test t5,
     test t6, test t7, test t8, test t9, test t10;
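As a quick sanity check on what this query computes: each of the ten sums is evaluated over the full Cartesian product of 5^10 rows, and every row contributes 1 to each sum. A small Python sketch (my own illustration, not part of the original test) reproduces the expected result:

```python
# Each of the 10 tables has 5 rows with value 1, so the 10-way Cartesian
# join produces 5**10 rows, and each sum(tN.a) adds 1 per row.
rows = 5 ** 10        # rows in the Cartesian product
per_sum = rows * 1    # value of each individual sum(tN.a)
total = 10 * per_sum  # sum(t1.a) + ... + sum(t10.a)
print(rows, total)    # 9765625 97656250
```

All of that work is row counting in memory, which is why the query burns CPU without needing to read from disk.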
Then I used one of my profile scripts to extract the CPU. Here is a typical output:
SUBSTR(TIMESOURCE,1,30)        SECONDS    PERCENTAGE
------------------------------ ---------- ----------
TOTAL_TIME                             32        100
CPU                                    32        100
I edited the output to make it fit. The profile shows the time that the query spent on the CPU in seconds.
I tried multiple runs of the same query and kept adding tables to the join to make the query longer.
This zip includes the sql scripts that I ran and my spreadsheet with the results: zip
I was comparing an Itanium and a Xeon processor and the test query ran in about half the time on the Xeon. I realize that this is not a complete benchmark, but it is some information. My other testing is not targeted specifically to CPU but I also saw a significant CPU speed-up there as well. So, this simple query adds to the evidence that the Xeon processor that I am evaluating is faster than the Itanium one.
I wrote a blog post some time ago about using a file share witness with a minimal Windows failover cluster configuration that consists of two cluster nodes. In that post, I said I was reluctant to use a witness in this case because it introduces a weakness into the availability process. Indeed, the system is not able to adjust node weights in this configuration, but that does not mean we don’t need a witness in this case, and this is what I want to clarify here. I admit I was wrong on this subject for some time.
Let’s set the scene with a pretty simple Windows failover cluster architecture that includes two nodes with dynamic quorum but without a configured witness. The node vote configuration is as follows:
At this point the system randomly assigns the vote to one of the currently available nodes. For instance, in my context the vote is assigned to the SQL143 node, but there is a weakness in this configuration. Let’s first say the node SQL141 goes down in an unplanned scenario. In this case the cluster keeps functioning because the node SQL143 has the vote (last man standing). Now, let’s say this time the node SQL143 goes down in an unplanned scenario. In this case the cluster will lose the quorum because the node SQL141 doesn’t have the vote to survive. You will find related entries in the cluster event log, as shown in the next picture, with two specific event IDs (1135 and 1177).
However, if the node SQL143 is gracefully shut down, the cluster is able to remove the vote from the node SQL143 and give it to the node SQL141. But you know, I’m a follower of Murphy’s law: anything that can go wrong will go wrong, and it is particularly true in the IT world.
So we have no choice here. To protect against unplanned failures with two nodes, we should add a witness, and at this point you may use either a disk or a file share witness. My preference is to promote the disk quorum type first, but it is often not suitable for customers, especially for geo-cluster configurations. In this case a file share witness is very useful, but it introduces some important considerations about quorum resiliency. First of all, I want to exclude scenarios where the cluster resides in one datacenter. There are no real considerations here, because the loss of the datacenter implies the unavailability of the entire cluster (and surely other components).
Let’s talk about geo-location clusters, often used with SQL Server availability groups, where important considerations must be made about the placement of the file share witness. Indeed, most of my customers are dealing with only two datacenters, and in this case the $100 question is: where do we place it? Most of the time, we will place the witness in the location of what we can call the primary datacenter. If the connectivity is lost between the two datacenters, the service stays functioning in the primary datacenter. However, a manual activation will be required in the event of a full primary datacenter failure.
Another scenario consists of placing the witness in the secondary datacenter. Unlike our first scenario, a network failure between the two datacenters will trigger an automatic failover of the resources to the secondary datacenter, but in the event of a complete failure of the secondary datacenter, the cluster will lose the quorum (as a reminder, the remaining node is not able to survive on its own).
As you can see, each of the aforementioned scenarios has its advantages and drawbacks. A better situation would be to have a third datacenter to host the witness. Indeed, in the event of a network failure between the two datacenters that host the cluster nodes, the vote will this time be assigned to the node which first successfully locks the file share witness.
Keep in mind that even in this third case, losing the witness, whether because of a network failure between the two main datacenters and the third datacenter or because the file share used by the witness is accidentally deleted by an administrator, can compromise the availability of the entire cluster in the case of a failure of the node that holds the vote. So be sure to monitor this critical resource correctly.
So, I will finish with a personal thought. I always wondered why, in the case of a minimal configuration (only 2 cluster nodes and a FSW), the cluster was not able to perform weight adjustment. Until now, I haven’t had an answer from Microsoft, but after some time, I think this weird behavior is quite normal. Let’s imagine the scenario where your file share witness resource is in a failed state and the cluster is able to perform weight adjustment. Which of the nodes should it choose? The primary or the secondary? In fact it doesn’t matter, because in both cases the next failure of the node which has the vote will also shut down the cluster. Finally, it is just delaying an inevitable situation…
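The two-node reasoning above can be captured in a toy model. This is my own sketch of the behavior described in this post, assuming a two-node cluster with dynamic quorum and no witness; it is not Microsoft's actual quorum algorithm.

```python
# Toy model (not the real cluster algorithm): two nodes, dynamic quorum,
# no witness. A single "vote" is held by one node. A graceful shutdown lets
# the cluster move the vote to the survivor; an unplanned failure does not.

def survives(vote_holder, failed_node, graceful):
    """Return True if the remaining node keeps quorum after the failure."""
    if failed_node != vote_holder:
        return True   # last man standing: the vote holder is still up
    return graceful   # the vote can only be transferred on graceful shutdown

# SQL143 holds the vote, SQL141 crashes: the cluster stays up.
assert survives("SQL143", "SQL141", graceful=False)
# SQL143 holds the vote and crashes: quorum is lost (events 1135/1177).
assert not survives("SQL143", "SQL143", graceful=False)
# SQL143 is shut down gracefully: the vote moves to SQL141, cluster stays up.
assert survives("SQL143", "SQL143", graceful=True)
```

Whichever node holds the vote, an unplanned failure of that node takes the cluster down, which is why a witness is needed to protect a two-node cluster.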
Happy clustering!
From Oracle’s Linux Blog: Friday Spotlight: Oracle Linux 7 - Optimizing Deployment Flexibility and Increasing ROI.
And from the same source: The April 2015 Oracle Linux Newsletter is Now Available
From Fusion Applications Performance Issues, How to Increase Performance With Business Events in Fusion Applications
And from Fusion Applications Developer Relations: JDeveloper Versions Revisited
Less Noisy NetBeans, from Geertjan’s Blog.
Oracle Database Mobile Server 12c (22.214.171.124.0) Released, from Oracle Partner Hub: ISV Migration Center Team.
Implementing Coherence and OWSM out-of-process on OSB, from the SOA & BPM Partner Community Blog.
What Does Facebook’s API Change Mean?, from Oracle Social Spotlight.
From Oracle’s MySQL Blog: Importing and Backing Up with mysqldump
From the Oracle Demantra blog: The table_reorg procedure ran for SALES_DATA errored due to lack of tablespace. Can I delete the RDF segments?
From the Oracle E-Business Suite Support blog:
Critical Update to the EBS Procurement Approvals Analyzer (Version 200.3)
So really why should I apply the March 2015 12.1.3: Procurement Family Update - Rollup Patch?
From the Oracle E-Business Suite Technology blog:
High Availability Configuration for Integrated SOA Gateway R12.2
Database Migration Using 12cR1 Transportable Tablespaces Now Certified for EBS 12.1.3
Best Practices for Optimizing Oracle XML Gateway Performance
Now you can listen to Wikipedia. No, not sounds on Wikimedia, but the sounds of edits being entered in various languages on Wikipedia itself. There’s a list at the bottom to add and remove languages from the stream.
This works great, and you can simplify the connection strings that you use. Vadim wired this into code completion, so we can now complete a connection string via key, reuse a connection string you have used before, or set up a new one using the net command.
The first technical preview of the next version of Windows Server has been available since last October (here), and a second one with more new features should be available in May.
The final version, which was originally expected in 2015, has recently been postponed to 2016. It will be the first time that the client and server releases are decoupled.
This new version of Windows Server will include:
- new and changed functionalities for Hyper-V
- improvements for Remote Desktop Services
- new and updated functionalities for Failover Clustering
- significant new features with PowerShell 5.0
- directory services, Web application proxy and other features
According to Microsoft employee Jeffrey Snover, the next version of Windows Server has been deeply refactored to really build a cloud-optimized server: a server deeply refactored for cloud scenarios.
The goal is to let you scope a deployment to only the required components.
On top of this cloud-optimized core, the full server will be built: the same server that we have today, compatible with what we have now, but with two application profiles:
- the first application profile will target the existing set of server APIs
- the second will be a cloud-optimized subset of APIs
Microsoft is also working to further clarify the difference between Server and Client, to avoid mixing client APIs and server APIs, for example.
Microsoft will also introduce Docker containers in the new Windows Server 2016! A container is a compute environment, also called a compute container.
We will have two flavors of compute containers:
- one for application compatibility (server running in a container)
- a second optimized for the cloud (cloud-optimized server)
The goal of Docker is to embed an application into a virtual container. Via the container, the application can be executed without any problem on Windows or Linux servers. This technology facilitates the deployment of applications and is offered as open source under the Apache license by an American company called Docker.
A container is very lightweight as it does not contain its own operating system. In fact, it uses the host machine’s kernel to handle all of its system calls.
Migration of Docker containers is easier because they are small.
The bigger cloud providers, like Amazon with AWS, Microsoft with Azure, and Google with Google Compute, have already integrated this technology. Docker containers give you the possibility to migrate from one cloud to another easily.
In addition, the Docker container technology which will come with Windows Server 2016 will be part of a set of application deployment services, called Nano Server.
According to an internal Microsoft presentation published by WZor, Nano Server is presented as “The future of Windows Server”.
Nano Server will be a zero-footprint model: server roles and optional features will reside outside of it. There will be no binaries or metadata in the image; everything will ship as standalone packages.
Hyper-V, Clustering, Storage, Core CLR, ASP.NET V.Next, PaaS v2, containers will be part of the new roles and features.
The goal is also to change the mentality of server management, tending toward remote management and process automation via Core PowerShell and WMI. To facilitate remote management, local tools like Task Manager, Registry Editor, and Event Viewer will be replaced by web-based tools accessible via a remote connection.
This new solution will also be integrated into Visual Studio.
In conclusion, WZor summarized Nano Server as “a nucleus of next-gen cloud infrastructure and applications”. This shows the direction that Microsoft wants to give Windows Server 2016: even better integration with the cloud, optimization for new distributed applications, and easier management.
- Partner Webcast - Oracle Mobile Application Framework 2.1: Update Overview (Oracle Partner Hub: ISV Migration Center Team)
via Oracle Partner Hub: ISV Migration Center Team https://blogs.oracle.com/imc/
- Getting Started with RCU 12c for SOA 12c (Oracle Partner Hub: ISV Migration Center Team)
The first post in this series explained how to get PPAS installed on a Linux system. Now that the database cluster is up and running, we should immediately take care of backup and recovery. For this I'll use another system where I'll install and configure bart. So, the system overview for now is:

server      ip address        purpose
ppas        192.168.56.243    ppas database cluster
ppasbart    192.168.56.245    backup and recovery server
As bart requires the Postgres binaries, I'll just repeat the PPAS installation on the bart server. Check the first post on how to do that.
Tip: there is a "--extract-only" switch which only extracts the binaries without bringing up a database cluster.
After that just install the bart rpm:
yum localinstall edb-bart-1.0.2-1.rhel6.x86_64.rpm
All the files will be installed under:
ls -la /usr/edb-bart-1.0/
total 20
drwxr-xr-x.  4 root root    44 Apr 23 13:41 .
drwxr-xr-x. 14 root root  4096 Apr 23 13:41 ..
drwxr-xr-x.  2 root root    17 Apr 23 13:41 bin
drwxr-xr-x.  2 root root    21 Apr 23 13:41 etc
-rw-r--r--.  1 root root 15225 Jan 27 15:24 license.txt
Having a dedicated user for bart is a good idea:
# groupadd bart
# useradd -g bart bart
# passwd bart
Changing password for user bart.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
As backups need some space a top level directory for all the bart backups needs to be created:
# mkdir /opt/backup
# chown bart:bart /opt/backup
# chmod 700 /opt/backup
# mkdir -p /opt/backup/ppas94/archived_wals
Now everything is in place to start the bart configuration. A minimal configuration file would look like this:
cat /usr/edb-bart-1.0/etc/bart.cfg

[BART]
bart-host = email@example.com
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = enterprisedb
description = "PPAS 94 server"
The [BART] section is the global section, while the following sections are specific to the database clusters to back up and restore. As bart requires passwordless ssh authentication between the bart host and the database host to be backed up, let's set this up. On the bart host (ppasbart):
su - bart
ssh-keygen -t rsa
On the host where database runs ( ppas ):
su -
cd /opt/PostgresPlus/9.4AS
mkdir .ssh
chown enterprisedb:enterprisedb .ssh/
chmod 700 .ssh/
su - enterprisedb
ssh-keygen -t rsa
As the public keys are now available we'll need to make them available on each host. On the ppas host:
cat .ssh/id_rsa.pub > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
Add the public key from the bart host to the authorized_keys file created above. Example: get the public key on the bart host:
[bart@ppasbart ~]$ id
uid=1001(bart) gid=1001(bart) groups=1001(bart) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[bart@ppasbart ~]$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local
Copy/paste this key into the authorized_keys file for the enterprisedb user on the database host, so that the file looks similar to this:
cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local
[bart@ppasbart ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQZWeegLpqVB20c3cIN0Bc7pN6OjFM5pBsunDbO6SQ0+UYxZGScwjnX9FSOlmYzqrlz62jxV2dOJBHgaJj/mbFs5XbmvFw6Z4Zj224aBOXAfej4nHqVnn1Tpuum4HIrbsau3rI+jLCNP+MKnumwM7JiG06dsoG4PeUOghCLyFrItq2/uCIDHWoeQCqqnLD/lLG5y1YXQCSR4VkiQm62tU0aTUBQdZWnvtgskKkHWyVRERfLOmlz2puvmmc5YxmQ5XBVMN5dIcIZntTfx3JC3imjrUl10L3hkiPkV0eAt3KtC1M0n9DDao3SfHFfKfEfp5p69vvpZM2uGFbcpkQrtN enterprisedb@ppas.local
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local
Make the file the same on the bart host and test whether you can connect without a password:
[bart@ppasbart ~]$ hostname
ppasbart.local
[bart@ppasbart ~]$ ssh bart@ppasbart
Last login: Thu Apr 23 14:24:39 2015 from ppas
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
[bart@ppasbart ~]$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:24:47 2015 from ppas
-bash-4.2$ logout
Connection to ppas closed.
Do the same test on the ppas host:
-bash-4.2$ hostname
ppas.local
-bash-4.2$ ssh bart@ppasbart
Last login: Thu Apr 23 14:22:07 2015 from ppasbart
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
-bash-4.2$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:22:18 2015 from ppasbart
-bash-4.2$ logout
Connection to ppas closed.
-bash-4.2$
Once this works, we need to set up a replication user in the database being backed up. So create the user in the database running on the ppas host (I'll do that with the enterprisedb user instead of the postgres user, as we'll need to adjust the pg_hba.conf file right after creating the user):
[root@ppas 9.4AS]# su - enterprisedb
Last login: Thu Apr 23 14:25:50 CEST 2015 from ppasbart on pts/1
-bash-4.2$ . pgplus_env.sh
-bash-4.2$ psql -U enterprisedb
psql.bin (126.96.36.199)
Type "help" for help.

edb=# CREATE ROLE bart WITH LOGIN REPLICATION PASSWORD 'bart';
CREATE ROLE
edb=# exit
-bash-4.2$ echo "host all bart 192.168.56.245/32 md5" >> data/pg_hba.conf
Make sure that the IP matches your bart host. Then adjust the bart.cfg file on the bart host to match your configuration:
cat /usr/edb-bart-1.0/etc/bart.cfg

[BART]
bart-host = firstname.lastname@example.org
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = bart
remote-host = email@example.com
description = "PPAS 94 remote server"
Another requirement is that the bart database user must be able to connect to the database without being prompted for a password. Thus we create the .pgpass file on the bart host, which is used for reading the password:
[bart@ppasbart ~]$ cat .pgpass
192.168.56.243:5444:*:bart:bart
[bart@ppasbart ~]$ chmod 600 .pgpass
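For reference, each .pgpass line has the form hostname:port:database:username:password, where * acts as a wildcard. The toy matcher below (my own illustration of the documented lookup rule, not libpq code) shows why the entry above lets the bart user reach any database on 192.168.56.243:5444:

```python
# Toy illustration of how a libpq-style client matches a .pgpass line
# (format: hostname:port:database:username:password, '*' matches anything).
def pgpass_lookup(lines, host, port, db, user):
    for line in lines:
        h, p, d, u, pw = line.split(":")
        if all(pat in ("*", val) for pat, val in
               ((h, host), (p, str(port)), (d, db), (u, user))):
            return pw
    return None

entry = "192.168.56.243:5444:*:bart:bart"
print(pgpass_lookup([entry], "192.168.56.243", 5444, "edb", "bart"))  # bart
```

Note that this sketch ignores details like escaped colons in real .pgpass files; it only shows the field-by-field wildcard matching.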
As a last step, we need to enable WAL archiving on the database that should be backed up. The following parameters need to be set in the postgresql.conf file:
wal_level = archive          # or higher
archive_mode = on
archive_command = 'scp %p firstname.lastname@example.org:/opt/backup/ppas94/archived_wals/%f'
max_wal_senders = 1          # or higher
Once done restart the database cluster:
su -
service ppas-9.4 restart
Let's see if bart can see anything on the bart server:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-SERVERS -s PPAS94
Server name         : ppas94
Host name           : 192.168.56.243
User name           : bart
Port                : 5444
Remote host         : email@example.com
Archive path        : /opt/backup/ppas94/archived_wals
WARNING: xlog-method is empty, defaulting to global policy
Xlog Method         : fetch
Tablespace path(s)  :
Description         : "PPAS 94 remote server"
Looks fine. So let's do a backup:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg BACKUP -s PPAS94
INFO:  creating backup for server 'ppas94'
INFO:  backup identifier: '1429795268774'
WARNING: xlog-method is empty, defaulting to global policy
56357/56357 kB (100%), 1/1 tablespace
INFO:  backup checksum: 6e614f981902c99326a7625a9c262d98
INFO:  backup completed successfully
Cool. Let's see what is in the backup catalog:
[root@ppasbart tmp]# ls -la /opt/backup/
total 0
drwx------. 3 bart bart 19 Apr 23 15:02 .
drwxr-xr-x. 4 root root 38 Apr 23 13:49 ..
drwx------. 4 bart bart 46 Apr 23 15:21 ppas94
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/
total 4
drwx------. 4 bart bart   46 Apr 23 15:21 .
drwx------. 3 bart bart   19 Apr 23 15:02 ..
drwx------. 2 bart bart   36 Apr 23 15:21 1429795268774
drwx------. 2 bart bart 4096 Apr 23 15:21 archived_wals
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/1429795268774/
total 56364
drwx------. 2 bart bart       36 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-rw-r--. 1 bart bart       33 Apr 23 15:21 base.md5
-rw-rw-r--. 1 bart bart 57710592 Apr 23 15:21 base.tar
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/archived_wals/
total 81928
drwx------. 2 bart bart     4096 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-------. 1 bart bart 16777216 Apr 23 15:10 000000010000000000000002
-rw-------. 1 bart bart 16777216 Apr 23 15:13 000000010000000000000003
-rw-------. 1 bart bart 16777216 Apr 23 15:20 000000010000000000000004
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000005
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000006
-rw-------. 1 bart bart      304 Apr 23 15:21 000000010000000000000006.00000028.backup
Use the SHOW-BACKUPS switch to get an overview of the available backups:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-BACKUPS
Server Name   Backup ID       Backup Time           Backup Size
ppas94        1429795268774   2015-04-23 15:21:23   55.0371 MB
ppas94        1429795515326   2015-04-23 15:25:18   5.72567 MB
ppas94        1429795614916   2015-04-23 15:26:58   5.72567 MB
A backup without a restore proves nothing, so let's try to restore one of the backups to a different directory on the ppas server:
[root@ppas 9.4AS]# mkdir /opt/PostgresPlus/9.4AS/data2
[root@ppas 9.4AS]# chown enterprisedb:enterprisedb /opt/PostgresPlus/9.4AS/data2
On the ppasbart host do the restore:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2
INFO:  restoring backup '1429795614916' of server 'ppas94'
INFO:  restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO:  base backup restored
INFO:  archiving is disabled
INFO:  backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
Looks good. Let's see what is in the data2 directory on the ppas host:
[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2
backup_label  dbms_pipe  pg_clog      pg_hba.conf    pg_log      pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    PG_VERSION  postgresql.auto.conf
base          global     pg_dynshmem  pg_ident.conf  pg_logical  pg_notify     pg_serial    pg_stat       pg_subtrans  pg_twophase  pg_xlog     postgresql.conf
[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2/pg_xlog
000000010000000000000008  archive_status
Looks good, too. As this is all on the same server we need to change the port before bringing up the database:
-bash-4.2$ grep port postgresql.conf | head -1
port = 5445                             # (change requires restart)
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:01:30 CEST FATAL:  data directory "/opt/PostgresPlus/9.4AS/data2" has group or world access
2015-04-23 16:01:30 CEST DETAIL:  Permissions should be u=rwx (0700).
Ok, fine. Change it:
-bash-4.2$ chmod 700 /opt/PostgresPlus/9.4AS/data2
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:02:00 CEST LOG:  redirecting log output to logging collector process
2015-04-23 16:02:00 CEST HINT:  Future log output will appear in directory "pg_log".
Seems ok, let's connect:
-bash-4.2$ psql -p 5445 -U bart
Password for user bart:
psql.bin (188.8.131.52)
Type "help" for help.

edb=> \l
                                         List of databases
   Name    |    Owner     | Encoding |   Collate   |    Ctype    | ICU |       Access privileges
-----------+--------------+----------+-------------+-------------+-----+-------------------------------
 edb       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     |
 postgres  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     |
 template0 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
 template1 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
(4 rows)
Cool, it works. But note: archiving is disabled, and you'll need to enable it again. This is the default behavior of BART, as it adds "archive_mode=off" to the end of postgresql.conf. Take care to adjust the archive_command parameter as well, because otherwise the restored cluster would scp its archived WALs to the same directory on the ppasbart server as the original database did. Can we do a point-in-time recovery? Let's try (I'll destroy the restored database cluster and reuse the same data2 directory):
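For reference, re-enabling archiving on the restored cluster might look like the following postgresql.conf fragment. This is a sketch, not from the original post: the archive target directory name is a hypothetical example, chosen to be distinct from the source server's archive path so the two WAL streams don't mix.

```
# re-enable archiving on the restored cluster (overrides the
# "archive_mode=off" that BART appended to postgresql.conf)
archive_mode = on
# hypothetical target directory; must differ from the original server's
# /opt/backup/ppas94/archived_wals to avoid mixing the two WAL streams
archive_command = 'scp %p bart@ppasbart:/opt/backup/ppas94_restored/archived_wals/%f'
```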
-bash-4.2$ pg_ctl -D data2 stop -m fast
waiting for server to shut down.... done
server stopped
-bash-4.2$ rm -rf data2/*
-bash-4.2$
Let's try the restore to a specific point in time:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2 -g '2015-04-03 15:23:00'
INFO:  restoring backup '1429795614916' of server 'ppas94'
INFO:  restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO:  base backup restored
INFO:  creating recovery.conf file
INFO:  archiving is disabled
INFO:  backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
Seems ok, but what is the difference? When specifying a point in time, a recovery.conf file is created for the restored database cluster:
-bash-4.2$ cat data2/recovery.conf
restore_command = 'scp -o BatchMode=yes -o PasswordAuthentication=no firstname.lastname@example.org:/opt/backup/ppas94/archived_wals/%f %p'
recovery_target_time = '2015-04-03 15:23:00'
Let's start the database (after changing the port again in postgresql.conf):
-bash-4.2$ pg_ctl -D data2 start
server starting
-bash-4.2$ 2015-04-23 16:16:12 CEST LOG:  redirecting log output to logging collector process
2015-04-23 16:16:12 CEST HINT:  Future log output will appear in directory "pg_log".
Are we able to connect?
-bash-4.2$ psql -U bart -p 5445
Password for user bart:
psql.bin (184.108.40.206)
Type "help" for help.

edb=>
Works, too. So now we have a central backup server for our PostgreSQL infrastructure from which backups and restores can be executed. Combine this with backup software (such as NetBackup) that picks up the backups from the BART server, and you should be fine. In the next post we'll set up a hot standby database server.
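If a backup tool picks the files up from the BART server, the BART invocations themselves can simply be scheduled. A hypothetical crontab entry on the bart host (the schedule and log path are assumptions, not from this post):

```
# nightly base backup of the ppas94 server at 02:00
0 2 * * * /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg BACKUP -s PPAS94 >> /var/log/bart_backup.log 2>&1
```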
Enhancements in AD and TXK Delta 6
4. New and Changed Features

Oracle E-Business Suite Technology Stack and Oracle E-Business Suite Applications DBA contain the following new or changed features in R12.AD.C.Delta.6 and R12.TXK.C.Delta.6.

4.1 Support for single file system development environments
Support for Single File System Development Environments
A normal Release 12.2 online patching environment requires two application tier file systems, one for the run edition and another for the patch edition. This dual file system architecture is fundamental to patching of Oracle E-Business Suite Release 12.2, and is necessary both for production environments and test environments that are intended to be representative of production. This feature makes it possible to create a development environment with a single file system, where custom code can be built and tested. The code should then always be tested in a standard dual file system test environment before being applied to production.
You can set up a single file system development environment by installing Oracle E-Business Suite Release 12.2 in the normal way, and then deleting the $PATCH_BASE directory with the command:

$ rm -rf $PATCH_BASE

A limited set of adop phases and modes are available to support patching of a single file system development environment. These are:

· apply phase in downtime mode
· cleanup phase

Specification of any other phase or mode will cause adop to exit with an error.
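For illustration, the two permitted operations might be invoked like this (the patch number is a hypothetical placeholder):

```
$ adop phase=apply apply_mode=downtime patches=12345678
$ adop phase=cleanup
```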
The following restrictions apply to using a single file system environment:
· You can only use a single file system environment for development purposes.
· You cannot use online patching on a single file system environment.
· You can only convert an existing dual file system environment to a single file system: you cannot directly create a single file system environment via Rapid Install or cloning.
· There is no way to convert a single file system environment back into a dual file system.
· You cannot clone from a single file system environment.
Oracle is progressively moving its products to an exciting new user experience called Oracle Alta. This new interface style optimizes the user interface for both desktop and mobile platforms with a unified user experience. The features of the new interface are too numerous to mention, but here is a summary of the Oracle Alta implementation in Oracle Utilities Application Framework V220.127.116.11.1 and above:
- The user interface is clearer, with a more modern look and feel. An example is shown below:
- The implementation of Oracle Alta for Oracle Utilities uses the Oracle Jet version of the Alta interface which was integrated into the Oracle Utilities Application Framework rendering engine.
- For easier adoption, the existing product screens have been converted to Alta with as few changes as possible. This minimizes training needs and helps existing customers adopt the new user interface more quickly. Over subsequent releases, new user experiences will be added to existing screens, or new screens will be introduced, to take full advantage of the user experience.
- There are a few structural changes on the screens to improve the user experience as part of the Alta adoption:
- The fly-out menu on the left in past releases has been replaced with a new menu toolbar. The buttons that appear on the toolbar depend on the services the user has access to through their security definition. An example of the toolbar is shown below:
- User preferences and common user functions are now on a menu attached to the user. For example:
- Portals and zones now have page actions attached in the top right of their user interfaces (the example at the top of this article illustrates this behavior). The buttons displayed are dynamic and will vary from zone to zone, portal to portal, and user to user, depending on the available functions and the user's security authorizations.
- In query portals, searches can now be saved as named views. In past releases, it was only possible to change the default view to an alternative. It is now possible to alter the criteria, column sequencing, column sorting, and column view for a query view and save it as a named search to jump to. Multiple different views of the same query zone can be made available from a context menu. End users can build new views, alter existing views, or remove views as necessary. All of this functionality is security controlled, so sites can define what individual users can and cannot do. Views can also be inherited from template users, in line with bookmarks, favorites, etc. An example of the saved view context menu is shown below:
- Menus have changed. In the past, a menu item supported the Search action (the default action) or "+" to add a new record. In Alta, these are now separate submenu items. For example:
- Page titles have been moved to the top of zones to improve usability. The example at the top of this article illustrates this point. The User page title used to be centered above the zone; now it is at the top left of the portal or zone.
- Bookmarking has been introduced. This is akin to browser bookmarking where the page and the context for that page are stored with the bookmark for quick traversal. The Bookmark button will appear on pages that can be bookmarked.
- The new user interface allows Oracle Utilities products to support a wide range of browsers and client platforms including mobile platforms. Refer to the Installation Guides for each product to find the browsers and client platforms supported at the time of release.
This article is just a summary of the user interface changes. There will be other articles in the future covering user interface aspects, and other enhancements, in more detail.
The Technical Best Practices and Batch Best Practices whitepapers have been updated with new and changed advice for Oracle Utilities Application Framework V18.104.22.168.1. Advice for previous versions of Oracle Utilities Application Framework has been included as well.
The whitepapers are available from My Oracle Support at the following document ids:
I came across an Evodesk Standing Desk review.
I could not resist the temptation to reach out to @TreadmillDesker:
You bark loud but how is ThermoDesk ELITE better EVODESK other than motor? $477=1333-886 is a lot for a motor. Let’s see pic.
Here is the response I got back.
Thanks for the question! lab test of the Evo’s base concluded: slower speed, louder motors, and instability at taller heights.
Notice the response was very vague. Slower speed compared to what? Louder motors compared to what? Instability compared to what?
Keep in mind that @TreadmillDesker recommends the ThermoDesk ELITE, with which it has an affiliation, so I wonder if there is a bias here.
In my opinion, if a website is going to review and critique other products, it should provide sufficient data.
Videos and pictures would be great.
Here is another marketing gimmick from @TreadmillDesker:
Did You Know Office Fitness Can Be Tax Deductible?
Looks like I pinched another nerve, so I responded:
@TreadmillDesker you need to be clear that tax deductible is not the same as tax deduction. please don’t use tax deductible to lure people.
An item that is tax deductible means it may be included in the expenses for a possible tax deduction, but it does not guarantee a tax deduction.
First, the individual would need to itemize. Next, only expenses exceeding 2% of AGI qualify.
The likelihood of one getting a tax deduction is less than 10%, and it's not truly a full deduction.
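To make the 2% floor concrete, here is a quick sketch with hypothetical numbers: an $80,000 AGI and some assumed miscellaneous expenses alongside the $886 desk price mentioned above.

```shell
# Hypothetical illustration of the 2%-of-AGI floor on miscellaneous
# itemized deductions: only the amount above the floor is deductible.
agi=80000
floor=$((agi * 2 / 100))   # 2% of AGI = 1600
desk=886                   # Evodesk price from the exchange above
other=1000                 # other miscellaneous expenses, assumed
total=$((desk + other))
if [ "$total" -gt "$floor" ]; then
  echo "deductible: $((total - floor))"   # 286 -> far less than the desk price
else
  echo "deductible: 0"
fi
```

So even when the floor is cleared, only a fraction of the purchase price could ever be deducted.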
You might ask what qualifies me to make this assessment: I am a retired tax advisor with 19 years of experience.
Don’t get me wrong, I really like ThermoDesk ELITE and in all fairness, the review was “perhaps most entertaining”
If the components from the two companies were compatible, I would buy components from both to build the desk.
Lastly, here is a price comparison for the two desks,
The ThermoDesk ELITE carries a 50+% price premium, but is there a 50% increase in performance, product, or quality?
Desktop Size: 30×72
ThermoDesk ELITE – Electric 3D-Adjustable Desk with 72″ Tabletop
Disclaimer: I do not have any affiliation with either company, nor am I compensated in any way for this post.
Bringing in a couple of keys
But don't touch my bags if you please
Mister Customs Man --From Arlo Guthrie's "Coming Into Los Angeles"
As I write this, I’m on the road again…Los Angeles. It’s my good fortune to be attending some collaboration sessions on designs for new Oracle Cloud Applications. Can’t talk about the apps being developed…sorry. But the attendees include Oracle Development, the Oracle User Experience team, several Oracle customers, and a few people from my firm. What I can talk about is some observations about the interaction.
The customers in this group are pretty vocal…a great thing when you’re looking for design feedback. They’re not a shy bunch. What’s interesting to me is their focus of interests. Simply put, they’re not interested in the technology of how the applications work. In the words of one customer addressing Oracle: “that’s your problem now.”
These customers are focused first on outcomes: this is what is important to my organization in this particular subject area, so show how you'll deliver the outcome we need. And, even more interesting, tell us about final states we have yet to consider that may make our organization better. In both cases, what are the explicit metrics that show us we've achieved that end state?
Secondly, they care about integration. How will this new offering integrate with what we already have? And who will maintain those integrations going forward?
Third, show us what information we'll get that will help us make better decisions. Much of this discussion has revolved around the context of the information obtained, rather than simply delivering a batch of generic dashboards. This is where the social aspect of enterprise software comes into play, because it provides context.
From these observations, I personally drew four conclusions:
- If this group of customers is fairly representative of enterprise software customers generally, the evolution of customer concerns from technology to quantifiable outcomes is well underway.
- Integration matters. For the moment, customers seem more interested in best-of-breed solutions than in purchasing entire platforms, so stitching applications together really matters. I suspect that, as SaaS continues to evolve, customers will begin to consider enterprise software on a platform basis rather than going with best-of-breed point solutions, but it does not appear that we're there yet.
- Business intelligence, analytics, big data, whatever…it’s of limited value without context. Customers…at least, these customers, are very interested in learning about their own customer personas and the historical data from those personas in order to predict future behavior.
- User experience, while not explicitly mentioned during these sessions, has been an implicit requirement. Good UX - attractive, easy to use, elegant applications - is no longer optional. All the customers here expect a great UX and, quite frankly, would not even engage in a product design review without seeing a great UX first.
So now you know what I think I'm seeing and hearing. Thoughts? Opinions? Comments?
I need to change a view and an index on an active production system. I'm concerned that the change will fail with an "ORA-00054: resource busy" error because I'm changing things that are in use. I engaged in a Twitter conversation with @FranckPachot and @DBoriented, and they gave me the idea of using DDL_LOCK_TIMEOUT with a short timeout to sneak my changes onto our production system. Really, I'm more worried about backing out the changes, since I plan to make the change at night when things are quiet. If the changes cause a problem, it will be during the middle of the next day, and then I'll need to sneak in and make the index invisible or drop it, and put the original view text back.
I tested setting DDL_LOCK_TIMEOUT to one second at the session level. This is the most conservative setting:
alter session set DDL_LOCK_TIMEOUT=1;
I created a test table with a bunch of rows in it and ran a long updating transaction against it like this:
update /*+ index(test testi) */ test set blocks=blocks+1;
Then I tried to alter the index invisible with the lock timeout:
alter index testi invisible
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
Same error as with the default setting: the update of the entire table took a lot longer than one second, so the one-second wait expired.
Next I tried the same thing with a shorter running update:
update /*+ index(test testi) */ test set blocks=blocks+1 where owner='SYS' and table_name='DUAL';
commit;
update /*+ index(test testi) */ test set blocks=blocks+1 where owner='SYS' and table_name='DUAL';
commit;
... lots more of these so the script will run for a while ...
With the default setting of DDL_LOCK_TIMEOUT=0, my alter index invisible statement usually exited with an ORA-00054 error, although eventually I could get it to work. With DDL_LOCK_TIMEOUT=1, in my testing, the alter almost always worked. I guess in some cases my transaction exceeded the one second, but usually it did not.
Here is the alter with the timeout:
alter session set DDL_LOCK_TIMEOUT=1;
alter index testi invisible;
Once I made the index invisible the update started taking 4 seconds to run. So, to make the index visible again I had to bump the timeout up to 5 seconds:
alter session set DDL_LOCK_TIMEOUT=5;
alter index testi visible;
So, if I have to back out these changes at a peak time setting DDL_LOCK_TIMEOUT to a small value should enable me to make the needed changes.
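Putting this together, the backout script I'd keep ready for peak hours might look like the sketch below. The view name and text are hypothetical stand-ins for the real objects being changed:

```
-- set a short DDL wait so the session errors out quickly instead of
-- hanging if the index or view stays busy; rerun until it sneaks in
alter session set DDL_LOCK_TIMEOUT=5;
alter index testi invisible;
-- restore the original view definition (hypothetical name and text)
create or replace view test_v as select owner, table_name, blocks from test;
```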
Here is a zip of my scripts if you want to recreate these tests: zip
You need Oracle 11g or later to use DDL_LOCK_TIMEOUT.
These tests were all run on Oracle 22.214.171.124.
Also, I verified that I studied DDL_LOCK_TIMEOUT for my 11g OCP test. I knew it sounded familiar but I have not been using this feature. Either I just forgot or I did not realize how helpful it could be for production changes.