
Feed aggregator

On Demand Webcast: Driving a Connected, More Productive Next-Gen Workplace

WebCenter Team - Thu, 2016-05-26 08:25
Over the past two decades, technological change has transformed business processes, and among the biggest changes is mobile ubiquity.
Finding ways to engage with customers in a mobile world is increasingly difficult for outdated systems to handle.
Discover how a cloud platform can unite content, social, and mobile engagement with simplified line-of-business automation, as well as microsites and communities, for an integrated next-generation workplace in this on-demand webcast. View today!

Using Enterprise Manager to manage cloud services

Pat Shuff - Thu, 2016-05-26 07:18
Yesterday we talked about the virtues of Enterprise Manager. To be honest, the specific monitoring tool is not important; the fact that you have one is. One of the virtues that VMware touts for vSphere is that you can manage instances on your own server as well as instances in vCloud, and that is worth exploring. Having the same tool manage your on-premise instances and your cloud instances is powerful. Unfortunately, vCloud only lets you allocate virtual machines and the storage associated with them, so you have an IaaS option of compute only. You can't allocate just storage. You can't deploy a database server unless you have a deployed database that you want to clone. You need to start with an operating system and build from there. There are benefits to PaaS and SaaS that you will never see in the vCloud implementation.

Oracle Enterprise Manager provides the same universal management interface for on-premise and cloud services. Amazon falls short here. First, they don't have on-premise instances, so their tools don't monitor anything in your data center, only in their cloud. Microsoft has monitoring tools with plugins for looking at Azure services, but it is important to note that you need a gateway server in the Azure cloud to aggregate the data, ship the telemetry back, and report it in the monitoring tool. There is a good blog detailing the cost of IaaS monitoring in Azure. The blog points out that the outbound data transfer for monitoring can cost up to $17/month/server, so this is not something that comes for free.

Today we are going to look at using Enterprise Manager as a management tool for on-premise systems, the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. We are going to cheat a little and use a VirtualBox instance of Enterprise Manager 13c rather than walking through the installation process; the books and blogs that we referenced yesterday detail how to do that. Fortunately, a VirtualBox instance is available from edelivery.oracle.com. We are not going to use that instance, though, but one available internal to Oracle for demo purposes only. The key difference between the two systems is that the edelivery instance is a 21 GB download that expands to provide an OEM 13c instance for testing, while the internal system (retriever.us.oracle.com) has 12c and 11g databases installed and is a 39.5 GB download (expanding to almost 90 GB when uncompressed). Given the size of the instance I really can't provide external access to it. You can recreate it by downloading the edelivery system, installing an 11g database instance, installing a 12c database instance, and configuring OEM to include data from those instances to replicate the screen shots that we are including.

If we look at the details of the VirtualBox instance we notice that we need at least 2 cores and 10 GB of memory to run it. The system is unusable at 8 GB of RAM. We really should bump this up to 12 GB of RAM, but given that it is for demo and training purposes it is ok if it runs a little slow. If we were running this in production the recommendation is to grow it to 4 cores and 16 GB of memory, and also not to use a downloaded VirtualBox instance at all but to install from scratch.
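If you need to adjust the resources, VirtualBox can do this from the command line while the virtual machine is powered off. A minimal sketch, assuming the imported image is named "OEM13c" (your machine name will differ):

VBoxManage modifyvm "OEM13c" --memory 12288 --cpus 2
VBoxManage showvminfo "OEM13c" | grep -i -E "memory|cpu"

The first command sets 12 GB of memory and 2 cpus; the second verifies the change.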

The key thing that we are going to do is walk through what it takes to add a monitoring agent onto the service that we are trying to monitor and manage. If we look at the architecture of Enterprise Manager we notice that there are three key components: the Oracle Management Repository (OMR), the Oracle Management Service (OMS), and the Oracle Management Agent (OMA). The OMR is basically a database that keeps a history of all telemetry actions as well as reports and analytics for the systems being monitored. The OMS is the heart of Enterprise Manager and runs on a WebLogic server. The code is written in Java and presents the primary user interface to the administrators as well as being the gateway between the OMR and the agents, or OMAs. The agents are installed on the target systems and collect operating system data, database data, WebLogic data, and all other log data to ship back to the OMR for analysis by the users.

It is important to note at this point that most PaaS and SaaS providers do not allow you to install an Enterprise Manager agent or any other management agent on their instances. They want to manage the services for you and force you to use their tools to manage their instance. SalesForce, for example, only gives you access to your customer relationship data. You can export your contact lists to a csv file to back up your data, but you can't correlate the contact list to the documents that you have shared with these users. Amazon RDS does not provide file system access, system access to the database, or access to the operating system, so you cannot install the management agent. You must use their tools to monitor services provided on their sites. Unfortunately, this inhibits you from looking at important things like workload repository reports or SQL tuning guides to see if something is running slow or waiting on a lock. Your only choice is to deploy the desired PaaS or SaaS as a manual or bundled install on IaaS, forcing you to manually manage things like backups and patching on your own.

The first thing that we need to do in Enterprise Manager is to log in and click on the Setup button on the top right. We need to define named credentials since we are going to connect to the cloud service using public and private ssh keys. We need to follow the Security pull down to Named Credentials.

We click on the Create icon in the top left and add credentials with public and private keys. If we don't have an ssh key to access the service we can generate one using ssh-keygen, which produces a public and private key, and upload the public key using the SSH Access pull down in the hamburger menu. Once we upload the ssh key we can use ssh -i keyname.ppk opc@ip_address to reach our database server. We will use this keyname.ppk to connect with Enterprise Manager and have all telemetry traffic transferred via the ssh protocol.
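If you do not have a key pair yet, the generation step looks roughly like this; the file names and address below are placeholders, not values from the screen shots:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/oemcloud -N ""
ssh -i ~/.ssh/oemcloud opc@129.152.x.x

The ssh-keygen command writes the private key to ~/.ssh/oemcloud and the public key to ~/.ssh/oemcloud.pub; the public key is the one you upload through the SSH Access pull down.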

Once we have valid credentials in the cloud account we can create the ssh access through Enterprise Manager. To do this we go to Setup at the top right, then Security, then Named Credentials. We then click on the Create button in the middle left to start entering data about the credentials. The name in the screen shot below failed because it began with a number, so we switched from 2017ssh to ssh2017 to satisfy the naming convention. We want host access via ssh, which is configured through the pull down menu definitions. The system defaults to host scope, but we need to change from host to global, which does not tie our credentials to one ip address. We upload our public and private key and associate them with the opc user since that user has sudo rights. We can verify the credentials by looking at the bottom of the list. This should allow us to access our cloud host via ssh and deploy an agent to our cloud target.

Note that we created two credentials because we had a step fail later. We created credentials for the opc user and for the oracle user. The opc credentials are called ssh2017, as shown in the screen shots. The oracle credentials are called oracle2017 and are not shown. The steps are the same; only the username and the name of the credentials change.

If we want to install the management agent onto our instance we need to know the ip address of the service that we are going to monitor as well as an account that can sudo to root or run elevated admin services. We go to the Enterprise Manager splash screen, log in, select the Setup button in the top right, and drill down to Add Target and Add Target Manually. This takes us to the Add Target screen where we can Install Agent on Host. To get rid of the warnings, we added our cloud target ip address to the /etc/hosts file and used a fully qualified and a short name associated with the ip address. We probably did not add the right external dns name, but it works with Enterprise Manager. When we add the host we use the fully qualified host name. We can find this by logging into the cloud target and looking at the /etc/hosts file on that server, which gives us the local ip address and a fully qualified host name. Once we have this we can enter a directory to upload the agent software to. We had to create an agent directory under the /u01/app/oracle directory. We select the oracle2017 credentials that we defined in the previous step (the screen shots use ssh2017, but this generates an error later) and start uploading the agent software and configuring the host as a target.

Note that we could have entered the ip address rather than adding it to /etc/hosts; we would simply have received a warning with the ip address. The entry we added looked roughly like the sketch below.
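The address and host names here are placeholders; copy the values from your cloud target's own /etc/hosts:

129.152.x.x   prs12cHP.compute-xxxx.oraclecloud.internal   prs12cHP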

When we first tried this we got an error during the initialization phase because opc did not own the /u01/app/oracle directory, so we had to create an agent directory and change its ownership. Fortunately, we could easily resubmit and enter a new directory without having to reenter all of the other information. The deployment takes a while because Enterprise Manager needs to upload the agent binaries, extract them, and install them. The process is updated with status so that you can see the progress and restart when errors happen. After we changed the ownership, the installation failed at a later step stating that opc did not have permission to add the agent to the inventory. We corrected this by installing as oracle and setting the /u01/app/oracle/agent directory to be owned by oracle.
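The preparation on the cloud target amounted to something like the following sketch, assuming the default opc (sudo) and oracle users on an Oracle Compute image:

[opc@prs12cHP ~]$ sudo mkdir -p /u01/app/oracle/agent
[opc@prs12cHP ~]$ sudo chown oracle:oinstall /u01/app/oracle/agent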

When we commit the ip address or host name as well as the ssh credentials, we can track progress as the management server deploys the agent. We get to a point where we note that the oracle user does not have ssh capabilities and we will need to run some steps manually from the opc account.

At this point we should have an enterprise manager connection to a cloud host. To get this working from my VirtualBox behind my AT&T Uverse wireless router I first had to configure a route on my broadband connection and set the ip address of the Enterprise Manager VirtualBox image to a static ip address. This allows the cloud instance to talk back to the OMS and store data in the OMR.
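On an Oracle Linux guest the static address is typically pinned in the interface configuration file; a sketch with placeholder addresses, assuming eth0 is the bridged interface:

[root@em13c ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
[root@em13c ~]# service network restart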

The next step is to discover the database instances. This is done by going through a guided discovery on the host that we just provisioned. It took a few minutes to sync up with the OMS but we could verify this with the emctl status agent command on the target host. We add the target manually using the guided discovery and select database services to look for on the target.
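The verification on the target host looks roughly like this; the agent_inst path is an assumption based on the install directory we chose earlier, and you are looking for the line "Agent is Running and Ready" in the output:

[oracle@prs12cHP ~]$ /u01/app/oracle/agent/agent_inst/bin/emctl status agent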

At this point we should have a database, listener, and host connected to our single pane of management glass. We should see a local database (em12c) and a cloud based database (prs12cHP). We can look at the host characteristics as well as dive into SQL monitoring, database performance, and database management like backup and restore options or adding users to the repository. We could add a Java Cloud Service as well as link these two systems together and trace a web page request down to a SQL read and look at which component has the longest latency. We can figure out if the network, the Java memory allocation, or the database disk is causing the slowest response. We can also look at SQL tuning recommendations to get suggestions on changing our SQL code or execution plans using the AWR report and SQL tuning utilities in Enterprise Manager.
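If you prefer the command line to the Enterprise Manager screens, the same workload repository data can be pulled directly from the monitored database with the standard AWR report script (this assumes you are licensed for the Diagnostics Pack):

SQL> @?/rdbms/admin/awrrpt.sql

The script prompts for a report format and a pair of snapshot ids and writes the report to the current directory.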

In summary, we can connect to an on-premise server as well as a cloud server. We can't connect to an Amazon RDS instance because we don't get file system level access to push a client to, or a root user to change the agent permissions. We do get this with IaaS on Oracle, Compute servers on Azure, and EC2 on Amazon. We also get this with PaaS on Oracle and potentially even Force.com from SalesForce. No one gives you this ability with SaaS; it is assumed that you will take the SaaS solution as is and not need to look under the covers. Having a single pane of glass for monitoring and provisioning services is important. The tool should do more than tell you how full a disk is or how much of a cpu is loaded or available. It should dive into the application and let you look at where bottlenecks are and help troubleshoot issues. We could spend weeks diving into Enterprise Manager and the different management packs, but we are on a journey to look at PaaS options from Amazon, Microsoft, and Oracle.

How to clear MDS Cache

Darwin IT - Thu, 2016-05-26 06:06
In the answer to a question on community.oracle.com, I found the following great tip: http://www.soatutor.com/2014/09/clear-mds-cache.html.

In 12cR2 this looks like:
1. Start the System MBean Browser: in the Domain home, pull down the WebLogic Domain menu and choose 'System MBean Browser':


2. Browse for the Application Defined Beans:



3. Expand it and navigate to oracle.mds.lcm, then choose the server (AdminServer or SOAServer1).
4. Navigate to the node Application em (AdminServer) or soa-infra (SOAServer) -> MDSAppRuntime -> MDSAppRuntime. Click on the Operations tab, and then on clearCache.


5. Click on Invoke:


Then the following confirmation is shown:


By the way, a redeploy of the SOA Composite that calls the WSDL or other artefact often helps, but unfortunately not always.

My "Must See" ADF/MAF Sessions at KScope 16

Scott Spendolini - Thu, 2016-05-26 06:00
Yes, you read that right - it's not a typo, nor did one of my kids or wife gain access to my laptop.  It's part of a "blog hop" - where a number of experts made recommendations about KScope sessions that are "must attend" and are not in their core technology.  I picked ADF/MAF, as I don't have any practical experience in either technology, but they are at least similar enough that I would not be totally lost.

In any case, the following sessions in the ADF/MAF track are worth checking out at Kscope 16 this year:

How to Use Oracle ALTA UI to Create a Smashing UI for Web and Mobile
Luc Bors, eProseed NL
When: Jun 28, 2016, Session 12, 4:45 pm - 5:45 pm

I've always liked UI, and Oracle ALTA is a new set of templates that we'll be seeing quite a bit of across a number of new technologies.

Three's Company: Going Mobile with Oracle APEX, Oracle MAF, and Oracle MCS
Frederic Desbiens, Oracle Corporation
When: Jun 27, 2016, Session 6, 4:30 pm - 5:30 pm

I'll admit - anytime there's a comparison of APEX and other similar technologies, it's always interesting to witness the discussion.  If nothing else, there will be a good healthy debate as a result of this session!

Introduction to Oracle JET: JavaScript Extension Toolkit
Shay Shmeltzer, Oracle Corporation
When: Jun 28, 2016, Session 7, 8:30 am - 9:30 am

Oracle JET is a lot more than just charts, and there's a lot of momentum behind this technology.  I'm very interested to learn more and perhaps even see a thing or two that you can do with it, as well as the various integration points that are possible with other technologies.

Build a Mobile App in 60 Minutes with MAF
John King, King Training Resources
When: Jun 27, 2016, Session 5, 3:15 pm - 4:15 pm

Native mobile applications are something that APEX doesn't do, so it would be nice to see how this would be possible, should the need ever arise.

Thanks for attending this ODTUG blog hop! Looking for some other juicy cross-track sessions to make your Kscope16 experience more educational? Check out the following session recommendations from fellow experts!

Oracle CURRENT_TIMESTAMP Function with Examples

Complete IT Professional - Thu, 2016-05-26 05:00
The CURRENT_TIMESTAMP function is one of the many time and date-related functions in Oracle. Learn how to use it and see some examples in this article. Purpose of the Oracle CURRENT_TIMESTAMP Function The Oracle CURRENT_TIMESTAMP function will display the current date and time. It’s displayed in the session time zone, which is the timezone of […]
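As a quick illustration of the behavior described above (my own sketch, not taken from the article), the returned value follows whatever the session time zone is set to:

SQL> ALTER SESSION SET TIME_ZONE = 'US/Pacific';
SQL> SELECT CURRENT_TIMESTAMP FROM dual;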
Categories: Development

Storing Date Values As Characters (What’s Really Happening)

Richard Foote - Thu, 2016-05-26 02:00
For something that’s generally considered an extremely bad idea, I’ve lost count of the number of times I’ve come across applications that insist on storing date values as characters within the database. We’ve all seen them … I recently got called in to assist a customer who was having issues with a POC in relation […]
Categories: DBA Blogs

The river floes break in spring...

Greg Pavlik - Wed, 2016-05-25 18:37

Alexander Blok, "The river floes break in spring..." (March 1902)
Translation by Greg Pavlik

The river floes break in spring,
And for the dead I feel no sorrow -
Toward new summits I am rising,
Forgetting crevasses of past striving,
I see the blue horizon of tomorrow.

What regret, in fire and smoke,
What agony of Aaron’s rod,
With each hour, with each stroke -
Or instead - the heavens’ gift stoked,
From the Bush of Moses, the Mother of God!

Original:

Весна в реке ломает льдины,
И милых мертвых мне не жаль:
Преодолев мои вершины,
Забыл я зимние теснины
И вижу голубую даль.

Что сожалеть в дыму пожара,
Что сокрушаться у креста,
Когда всечасно жду удара
Или божественного дара
Из Моисеева куста!

Март 1902

Please, use HTTPS for your APEX apps

Dimitri Gielis - Wed, 2016-05-25 16:07
Why use HTTPS?

When you Google this question you get many different answers, but this answer from Google Developers sums it up for me (click the link for more details):
  • HTTPS protects the integrity of your website/APEX app
  • HTTPS protects the privacy and security of your users
  • HTTPS is the future of the web; many new technologies only work with https (for example Service Workers; you can read more about Service Workers and APEX in my presentation)
Industry going to HTTPS

Websites used to have an HTTP portion and an HTTPS portion that became active when you logged in to the site, but nowadays everything is under HTTPS. Google will actually rank your site higher when it's using HTTPS. Look at the sites you visit; many of them now use HTTPS by default.

HTTPS on localhost

If you're developing locally, you don't really need HTTPS on localhost, but I still like to have it.
Here are the steps I took in Chrome on my Mac (OS X) to get the nice green lock when developing locally (this also works with APEX Front-End Boost; if you don't have a self-signed certificate yet, see the sketch after these steps):
  • In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says "Certificate Information."
  • Click and drag the certificate image to your desktop. 
  • Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
  • Be sure you add the certificate to the System keychain, NOT the login keychain. 
  • After it has been added, double-click it. 
  • Expand the "Trust" section and set "When using this certificate" to "Always Trust".
  • Close Keychain Access and restart Chrome, and your self-signed certificate should be recognized now by the browser.
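The steps above assume you already have a self-signed certificate for localhost. If you don't, you can generate one with openssl; a minimal sketch, with placeholder file names:

openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout localhost.key -out localhost.crt -subj "/CN=localhost"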
HTTPS on your own server

For years I've been using SSL certificates ordered from GoDaddy, but depending on the certificate you get, it might not be that cheap. The APEX R&D website uses a multi-site certificate; the same certificate is used for the APEX Office Print website.

But there's some good news... you can get SSL for free too (and it's very easy to do!), thanks to Let's Encrypt. I used Let's Encrypt to protect the Euro2016challenge.eu APEX app/website, for example.
Here's the Getting Started Guide from Let's Encrypt. This is the command I used (after installing the package):

./letsencrypt-auto certonly --webroot -w /var/www/euro2016 -d euro2016challenge.eu -d www.euro2016challenge.eu
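Keep in mind that Let's Encrypt certificates expire after 90 days, so they need to be renewed periodically; as far as I know the same client handles renewal:

./letsencrypt-auto renew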


If you're not yet on https with your APEX app/site, I would definitely recommend looking into it :)

Categories: Development

Are Zero Days or Bugs Fixed by CPU The Worst?

Pete Finnigan - Wed, 2016-05-25 15:50

I spoke yesterday about compartmentalising Oracle Security and one element that comes out of this is the need to consider what you are trying to achieve; secure actual data and also secure the platform. In general applying security patches will....[Read More]

Posted by Pete On 25/05/16 At 12:51 PM

Categories: Security Blogs

Compartmentalised Oracle Security

Pete Finnigan - Wed, 2016-05-25 15:50

I have been teaching security classes about Oracle Security for many years and they are very popular and I teach many classes per year around the world; mostly in the UK and EEC but I also venture to the Middle....[Read More]

Posted by Pete On 24/05/16 At 12:43 PM

Categories: Security Blogs

New Oracle Security Paper on Non-Production and Delphix

Pete Finnigan - Wed, 2016-05-25 15:50

I was asked by Delphix earlier this year to review their product with a particular focus on Oracle security of course. I wrote two papers; the first about Data Masking and Delphix and the second about securing data in non-production....[Read More]

Posted by Pete On 23/05/16 At 11:23 AM

Categories: Security Blogs

Oracle Security And Delphix Paper and Video Available

Pete Finnigan - Wed, 2016-05-25 15:50

I did a webinar with Delphix on 30th March 2016 on USA time. This was a very good session with some great questions at the end from the attendees. I did a talk on Oracle Security in general, securing non-production....[Read More]

Posted by Pete On 01/04/16 At 03:43 PM

Categories: Security Blogs

3 Days of Oracle Security Training In York, UK

Pete Finnigan - Wed, 2016-05-25 15:50

I have just updated the public Oracle Security training dates on our Oracle Security training page to remove the public trainings that have already taken place this year and to add a new training in York for 2016. After the....[Read More]

Posted by Pete On 31/03/16 At 01:53 PM

Categories: Security Blogs

Oracle Data Masking and Secure Test Databases

Pete Finnigan - Wed, 2016-05-25 15:50

My daily work is helping my customers secure their Oracle databases. I do this in many ways from performing detailed security audits of key databases to helping in design of secure lock down policies to creating audit trails to teaching....[Read More]

Posted by Pete On 14/03/16 At 08:45 AM

Categories: Security Blogs

BOF: A Sample Application For Testing Oracle Security

Pete Finnigan - Wed, 2016-05-25 15:50

In my Oracle security training classes I use a couple of sample applications for various demonstrations. I teach people how to perform security audits of Oracle databases, secure coding in PL/SQL, designing audit trail solutions and locking down Oracle. We....[Read More]

Posted by Pete On 10/03/16 At 11:07 AM

Categories: Security Blogs

Two New Oracle Security Presentations Available

Pete Finnigan - Wed, 2016-05-25 15:50

I attended the UKOUG conference last week Monday to Wednesday in Birmingham. This is the first year for three years that it has been back at the ICC in the center of Birmingham. The last two years have seen the....[Read More]

Posted by Pete On 14/12/15 At 08:54 PM

Categories: Security Blogs

Log Buffer #475: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-05-25 13:32

This Log Buffer Edition goes through various blogs, and selects some of the top posts from Oracle, SQL Server and MySQL.

Oracle:

MOS Note:136697.1 – New HCHECK.SQL for Oracle Database 12c

ORAchk / EXAchk questions.

Cloud control won’t start!

ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices.

ORA-56841: Master Diskmon cannot connect to a CELL.

Oracle BITAND Function with Examples.

 

SQL Server:

Natively Compiled Stored Procedures: What they are all about

Considerations around validation errors 41305 and 41325 on memory optimized tables with foreign keys

Taking Azure SQL Data Warehouse for a Test-Drive.

Persistent PowerShell: The PowerShell Profile.

SQL Server Always On Endpoint Encryption Algorithm Compatibility Error.

 

MySQL:

Fixing MySQL scalability problems with ProxySQL or thread pool.

Installing a Web, Email & MySQL Database Cluster on Debian 8.4 Jessie with ISPConfig 3.1

Planets9s – Download the new ClusterControl 1.3 for MySQL, MongoDB & PostgreSQL

AWS Aurora Benchmark – Choose the right tool for the job

Where is the MySQL 5.7 root password?

Categories: DBA Blogs

SharePlex Replication Between Two Instances On The Same Host

Pythian Group - Wed, 2016-05-25 09:12

Several days ago I was asked a question about SharePlex and needed to verify the behaviour before providing the answer. I had one linux VM with two databases and needed to set up replication between them. One of them (orcl) was a target 12c EE database while the second one was a source 10g SE. Accordingly, I had to use two different versions of SharePlex to mimic the behaviour in question, which meant two different SharePlex instances talking to each other on the same host machine. This works well with, for example, GoldenGate, where you just set up different ports for the manager processes. However, in SharePlex all the instances participating in a configuration should use the same port. The SharePlex documentation states:
“Important! The SharePlex port number must be the same one on all machines in the replication configuration so that they can communicate through TCP/IP connections.”

Of course you cannot use the same port on the same network interface for two independent SharePlex processes working from different homes. In addition, SharePlex does not ask you for a hostname either during installation or when you start it.
In my case I had 2 interfaces in different subnets on my VM, and I could use them for my replication. But how do you tell SharePlex to use one or the other? The answer is simple: you use the "SP_SYS_HOST_NAME" parameter as an environment variable in your shell. Here is how I've done that.
I added two new hostnames for those interfaces to my /etc/hosts file to be used for my SharePlex instances:

[root@sandbox ~]$ cat /etc/hosts | grep splex
10.177.130.58	splexhost
172.16.128.10	splexstor
[root@sandbox ~]$ 

I unzipped SharePlex 8.6.3 for Oracle 10 and prepared the installation. By default the installer picks up the hostname automatically and uses it during installation.
Here is my hostname and it is not what I want to use for my first SharePlex instance:

[oracle@sandbox ~]$ hostname
sandbox.localdomain
[oracle@sandbox ~]$ ping sandbox.localdomain
PING sandbox.localdomain (127.0.0.1) 56(84) bytes of data.
64 bytes from sandbox.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from sandbox.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.061 ms
^C
--- sandbox.localdomain ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1468ms
rtt min/avg/max/mdev = 0.058/0.059/0.061/0.007 ms
[oracle@sandbox ~]$

What you need to do is set the environment variable SP_SYS_HOST_NAME. I wanted the first SharePlex instance listening and working on hostname splexhost using the default port 2100. I also had to define the proper Oracle home and Oracle database SID during installation, and used the standard Oracle utility oraenv to set up the Oracle variables. Here is how I started the installation for the source:

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexhost
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ?
The /u01/app/oracle/product/10.2.0/sehome_1/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
[oracle@sandbox ~]$ cd /u01/distr/SharePlex
[oracle@sandbox SharePlex]$ ll
total 96736
-rwxr-xr-x. 1 oracle oinstall 99056391 Jan 11 21:56 SharePlex-8.6.3-b171-oracle100-rh-40-amd64-m64.tpm
[oracle@sandbox SharePlex]$ ./SharePlex-8.6.3-b171-oracle100-rh-40-amd64-m64.tpm
Unpacking ..................................................................
  ..........................................................................

During the installation I provided directories for program and variable files, the port number, and information about ORACLE_SID and ORACLE_HOME. Everything went smoothly. I finished the installation by running the "ora_setup" utility, which creates the necessary schema and objects in the source database.
For the target I used the same strategy, setting up the Oracle variables with "oraenv" and explicitly exporting SP_SYS_HOST_NAME=splexstor. The installation went without any problems and I ended up with two SharePlex instances installed in different directories.
To start a SharePlex home for a certain IP address and interface you need to explicitly set SP_SYS_HOST_NAME to the appropriate value. Let's see how it is done for the source.

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexhost
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ?
The /u01/app/oracle/product/10.2.0/sehome_1/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
[oracle@sandbox ~]$ cd /u01/sp10/bin
[oracle@sandbox bin]$ telnet splexhost 2100
Trying 10.177.130.58...
telnet: connect to address 10.177.130.58: Connection refused

[oracle@sandbox bin]$ nohup /u01/sp10/bin/sp_cop -usp10 &
[1] 2023
[oracle@sandbox bin]$ nohup: ignoring input and appending output to `nohup.out'

[oracle@sandbox bin]$ telnet splexhost 2100
Trying 10.177.130.58...
Connected to splexhost.
Escape character is '^]'.
^]
telnet> q
Connection closed.
[oracle@sandbox bin]$

For target you have to adjust your environment variables accordingly and do the same.

[oracle@sandbox ~]$ export SP_SYS_HOST_NAME=splexstor
[oracle@sandbox ~]$ . oraenv
ORACLE_SID = [test] ? orcl
The Oracle base has been set to /u01/app/oracle
[oracle@sandbox ~]$cd /u01/sp12/bin
[oracle@sandbox bin]$ nohup /u01/sp12/bin/sp_cop -usp12 &
[1] 2066
[oracle@sandbox bin]$ nohup: ignoring input and appending output to `nohup.out'

[oracle@sandbox bin]$ 

As a result we have two SharePlex instances running on the same host and talking to each other. Now we can create a sample replication. In the "splex" database schema created by the "ora_setup" utility we have the DEMO_SRC and DEMO_DEST tables. SharePlex uses those tables for a demo replication, and we can use them too to see how it works in our case. We can either modify the default pre-created sample configuration "ORA_config" or create a new one. I created a new config "sample" on my source SharePlex:

sp_ctrl (splexhost:2100)> list config

File   Name                                         State       Datasource
--------------------------------------------------  ----------  ---------------
ORA_config                                          Inactive    o.SOURCE_SID
Last Modified At: 17-May-16 11:17    Size: 151

sp_ctrl (splexhost:2100)> create config sample

The command opens the default editor where you can write your configuration.
Here is what I put in my "sample" config.

datasource:o.test

#source tables      target tables           routing map

splex.demo_src      splex.demo_dest             splexstor@o.orcl

Now we can activate the config.

sp_ctrl (splexhost:2100)> activate config sample

sp_ctrl (splexhost:2100)> list config

File   Name                                         State       Datasource
--------------------------------------------------  ----------  ---------------
ORA_config                                          Inactive    o.SOURCE_SID
Last Modified At: 17-May-16 11:17    Size: 151

sample                                              Active      o.test
Last Modified At: 17-May-16 11:30    Size: 134     Internal Name: .conf.1

sp_ctrl (splexhost:2100)>

Now we can see all the processes running:

On the source:

sp_ctrl (splexhost:2100)> lstatus

Detailed Status for splexhost
Process          State                             PID     Running   Since
---------------  ------------------------------  --------  --------------------
Cop              Running                             2023  17-May-16 11:24:39
Capture          Running                             2250  17-May-16 11:30:53
  Data/Host:   o.test
Read             Running                             2279  17-May-16 11:30:53
  Data/Host:   o.test
Export           Running                             2304  17-May-16 11:30:56
  Data/Host:   splexstor
  Queue Name:  splexhost
Cmd & Ctrl       Running                             2581  17-May-16 11:40:39
  Data/Host:   splexhost

On the target:

sp_ctrl (splexstor:2100)> lstatus

Detailed Status for splexstor
Process          State                             PID     Running   Since
---------------  ------------------------------  --------  --------------------
Cop              Running                             2066  17-May-16 11:26:23
Import           Running                             2305  17-May-16 11:30:56
  Data/Host:   splexhost
  Queue Name:  splexhost
Post             Running                             2306  17-May-16 11:30:56
  Data/Host:   o.test-o.orcl
  Queue Name:  splexhost
Cmd & Ctrl       Running                             2533  17-May-16 11:38:18
  Data/Host:   splexstor

Let's insert a row on the source:

test>  insert into splex.demo_src values ('JIM', '8001 Irvine Center Drive', '949-754-8000');

1 row created.

test> commit;

Commit complete.

test>

And we can see the row was successfully replicated to target:

orcl> select * from splex.demo_dest;

NAME							     ADDRESS							  PHONE#
------------------------------------------------------------ ------------------------------------------------------------ ------------
JIM							     8001 Irvine Center Drive					  949-754-8000

orcl>

As you can see, we were able to use one box to replicate data between two different databases using two different SharePlex installations. The idea was simple and clear from the start, but I couldn't find enough information in the installation guide for the SP_SYS_HOST_NAME parameter except documentation about configuring it for a cluster installation. I expected the parameter to be documented in the reference section of the documentation, but in reality it was not.
It would be even better if we could place the parameter inside the "paramdb" and not think about setting shell variables, but unfortunately setting the parameter there didn't work for me. Even so, you still need to set the environment variable SP_SYS_HOST_NAME=your_host_name for a non-default hostname. I hope this article may help somebody save a bit of time.

Categories: DBA Blogs

Maker Faire 2016

Oracle AppsLab - Wed, 2016-05-25 08:13

Another year, another amazing time at the Maker Faire.

I’ve attended my fair share of Maker Faires these years, so the pyrotechnic sculptures, 3D printing masterpieces, and handmade artisan marketplaces were of no particular surprise. But somehow, every time I come around to the San Mateo fairgrounds, the Faire can’t help but be so aggressively fresh, crazy, and novel. This year, a host of new and intriguing trends kept me on my toes as I ventured through the greatest show and tell on Earth.

Young makers came out in full force this year. Elementary school maker clubs showed off their circuit projects, middle schoolers explained how they built their little robots, and high school STEM programs presented their battle robots. It's pleasing to see how Maker education has blossomed these past years, and how products and startups like LittleBits and Adafruit have made major concepts in electronics and programming so simple and inexpensive that any kid can pick them up and start exploring. Also wonderful is seeing young teams traveling out to the Bay Area from Texas, Oregon, and other states, a testament to the growth of the Maker movement beyond Silicon Valley.

Robot battle demo for kids

Speaking of young makers' participation, Arduino creator Massimo Banzi talked about Arduino as an educational tool for kids to play and tinker with, even though he never planned to make kids' toys in his early years. The maker movement has engaged curious minds of all ages, getting them to start playing with electronics, making robots, and learning programming as a new language.

While the maker movement has made things very accessible to individuals, the essence of creation and innovation has also made an impact on the large enterprise. On the "Maker Pro" stage, our GVP, Jeremy Ashley (@jrwashley), talked about new trends in large enterprise application design, and how the OAUX group is driving the change toward simpler, yet more effective and more engaging, enterprise applications.

Jeremy talks on Maker Pro stage

Drones were also a trending topic this year, with a massive Drone Racing tent set up with events going on the whole weekend. Everything was being explored: new shapes for efficient and quick flight; new widgets and drone attachment modules; new methods of interaction with the drone. One team had developed a smart glove that responded to gyroscopic motion and gestures to control the flight of a quadcopter, and had the machine dance around its wearer, an interesting and novel marriage of wearable tech and flight.

Flight of the bumble drone

Personally, I've got a soft spot for art and whimsy, and the Faire had whimsy by the gallon. The artistry of the creators from around the country and the globe can't be overstated.

Lotus boat

Blueberry muffin

Flying dragon blowing fire

Maker Faire never disappoints. We brought friends along who had never been to a Faire, and it’s always fun to watch them get blown off their feet literally and figuratively the first time a flamethrower blasts open from the monolithic Crucible. Or their grins of delight when they see a cupcake shaped racecar zoom past them… and another… and another. Or the spark of amazement when they witness some demo that’s out of any realm of imagination.

Electrifying experience


Managing servers and instances in the cloud

Pat Shuff - Wed, 2016-05-25 06:59
Managing servers and instances has been an ongoing issue since the introduction of the first computer. Recently, with the advent of virtualization, the idea of a management console to control which processors are running which services and what storage is allocated to which operating system has gained popularity. Many people are familiar with VMware vSphere, where you get a view of a server and can see the virtual images deployed on it. We can see how well the resources (memory, cpu, and disk) are being utilized. We can allocate more or fewer resources, since allocation is dynamic, and make sure that we are not over allocating resources and wasting them or under allocating them and causing applications to run slower.

In this example we can see that we have two processors, 2 GB of memory, and just under 300 GB of disk on this computer. We have five virtual machines running on this computer and can dive into each operating system to see what is installed and how the limited resources are allocated and utilized. What we can't see is what applications are installed and how the applications are running. For example, is the Windows Home Server 2011 running an Apache web server, and how many hits did the web server get in the past four days? Monitoring tools beg the question of what you are monitoring. If you are managing limited resources and making sure that you have not over or under allocated services, tools like vSphere are excellent. Unfortunately, you will need other tools to dive deeper. EMC, for example, has a storage manager that lets you look not only at the logical unit level but also at the controller and disk level. It understands VMware and lets you look at how disks are related to virtualization engines and how they are consuming resources.

Again, this is a very good tool to look at how well a disk is performing, how well data is laid out across spindles, and how well data is being transmitted between disk and server on your storage network. We can see hot spots. We can see disks that are over and under utilized. We can manage a scarce resource and make sure that it is properly utilized.

When we talk about monitoring we need to shift our thought process. Yes, it is important to manage compute, memory, and storage resources but it is also important to realize that these resources are commodities. If we run low, we get more. If we use too much we are wasting resources. We should be able to automate allocation of resources and size up or size down resources without manual monitoring. What we are really interested in is how well is our company running. If we are a university we might be interested in the latency of delivering online video classes. We might be interested in how many classes are being added to a student schedule during registration. If we are a ticket retailer we might be interested in how many tickets were requested and paid for on a minute by minute basis. Note that we are not talking about how well a disk drive is allocated or if we have enough processors allocated to a virtual machine, we are talking in term of business terms. We are looking at tying revenue generating services back to computer resources and trying to figure out what is causing a problem. In the online video classroom example, we might have our processors allocated properly, storage tuned to the last IOP, and memory allocated to buffer data and reduce disk reads. If we are on the same network as the athletic department and our basketball team made it to the elite eight during March madness and the athletic department live streams the game on the same network as our classroom servers our classes will be offline due to demand to watch the basketball game. Tools from EMC and VMware will show that everything is working fine and life is good. Meanwhile the help desk is getting calls from students off campus that can't access their assignments during midterms and their Thursday class is not available. What we need is a monitoring system that can look at systems and incorporate more than just processor and disk. What we need is a tool that can look at systems and services and not just resources. We would like to look at the video distribution system and be able to dive into the disk, network, or processor and see what the bottleneck is and fix it quickly.

Oracle released a tool years ago called Enterprise Manager. It started out as a database monitoring tool that allowed you to dive into sql calls and figure out why they were taking longer than necessary. With acquisitions of companies like BEA and Sun Microsystems, the tool expanded to look at how Java was performing inside a WebLogic server and how the disk drives serving requests for the database and WebLogic server were performing. Acquisitions of companies like JD Edwards and PeopleSoft drove the monitoring tools in the opposite direction, and suddenly screens showing how many purchase orders were being processed on an hourly basis were available. You could look at what the bottleneck was in closing your books for the end of month reconciliation. Was it a manual process waiting on a report to drop into a directory, or a sql statement taking minutes rather than seconds to complete? You could start looking at a process like purchase orders and dive into a database to see if a table was reaching storage limits, or discover that someone recently patched the database, causing an index to miss a newly created column; searches now go against this column, so select statements do a full table scan rather than using an index to return answers quickly. Adding more storage in this case would be a waste of time. Yes, we are running out of storage on a table, but the real issue is that we need to re-index the database or generate a new sql execution plan. Below is a screen shot of how well a database is performing, with links to look at all the sub-components of the database.

Books have been written on Enterprise Manager. We are not going to cover everything needed to make you an expert on the subject in this blog.

There are also a number of blogs related to Enterprise Manager

This is a partial list of blogs returned by a Google search. I am sure I missed a few. Note that the list of books and blogs is not a short list. There are classes offered by Oracle University that you can take virtually or in a classroom (both cost money).

The way that Oracle Enterprise Manager is paid for is simple. The base system is free and you pay for the options that you want to use. Unfortunately, the Technology Price Guide is not very clear about what is and is not Enterprise Manager and what is an option on the database. For example, on page 7, most of the management packs are listed. If you want diagnostics for the database you need to license your database, not Enterprise Manager, at $7,500 per processor. You can also license at $150 per named user, but the licensing metric for your database needs to match the licensing for your management pack. You could have a two processor license for production and a 25 named user license for development and testing, so you will need to blend these licenses into Enterprise Manager with the management packs. Diagnostics is specifically confusing because you enable or disable this feature in Enterprise Manager and not in the database. The telemetry data is collected in the database, but the reporting on the results of the analysis is not done there. You could turn on the reporting in Enterprise Manager without involving the DBA, thus incurring an additional license fee that you had not paid for. There is no license key or email sent to Oracle saying that you enabled the license; it is a simple checkbox in Enterprise Manager that turns on diagnostic reporting. In recent versions a warning screen pops up telling you that this is not a free feature. In OEM 10g the feature was turned on by default and you had to turn it off; this has changed in recent releases. If you try to turn this feature on when connecting to an Enterprise Edition database in the Oracle Public Cloud you will get a feature not available message. You need to go with the High Performance or Extreme Performance edition of the database to get diagnostics enabled.
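As a rough worked example using the list prices above (illustrative only, since the actual licensing rules have nuances): licensing the Diagnostics Pack for a two processor production database comes to 2 x $7,500 = $15,000, while a 25 named user license for development and testing comes to 25 x $150 = $3,750, and the metric you choose for the pack has to match the metric on the underlying database license.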

There are also management packs for Oracle Applications, and the pricing for these products can be found in the Oracle Applications Price List. You need to search for the word "packs" to find the price of the management packs in this list. You can get a list of all the management packs from the Oracle Tech Network page for Enterprise Manager.

It is important to note that the Enterprise Manager that runs in your data center monitoring your servers and Oracle hardware and software products is the same tool that you can use to monitor and manage PaaS and IaaS resources in the Oracle Public Cloud. You can connect to the instance in the cloud using ssh and read the telemetry from the cloud instance as if it were installed on one of your servers. You can use extensions to the latest version of Enterprise Manager, 13c, to clone a pluggable database instance from your on site installation to a cloud instance.

You can also setup reporting and self service requests to have end users ask for a new service to be provisioned either on site or in the cloud. Below is a screen shot of how to do this for a database. We could do something similar for a WebLogic server, an Apache Web server, a PeopleSoft instance for dev/test, or any layer of the Oracle stack.

In summary, the selection of a management tool is important. Tools are good to understand and properly use. At some point you need to step back and ask what questions you need answers to. Am I diving too deep trying to optimize something that is not worth deep analysis? Could I automate this and not have to monitor it at all? If I run out of processing power, does it make sense to automatically scale up the number of processors? Should I scale out by spinning up more web servers? Do I need to re-architect my network topology to isolate disk traffic from client traffic? If I generate a report, who will consume the results? Is the report for someone in IT? Purchasing? The process owner? Is it a technology or a financial report? Products like Enterprise Manager allow you to generate all of these reports using different management extensions. My suggestion is to look at some of the introductory videos on the Oracle Tech Network to get an introduction to the problem that you are trying to solve, then figure out how much it will cost to measure what is important to you.