Pythian Group

Official Pythian Blog - Love Your Data

The First PASS Summit Bloggers’ Meetup

Fri, 2014-10-31 13:02

We are stoked to announce the first ever PASS Summit Bloggers Meetup!

What: PASS Summit Bloggers Meetup 2014
When: Thursday, November 6th, 5pm – 7pm
Where: Sports Restaurant & Bar, 140 4th Ave North, Suite 130, Seattle, WA 98109
How: Please comment with “COUNT ME IN” if coming — we need to know attendance numbers.

We’re excited to meet old friends, and make new ones in Seattle this year. Pythian will sponsor munchies and drinks. There will be a networking contest with some cool prizes, plus everyone will get their very own PASS Summit Bloggers Meetup shirt! Meetups are a great way to make new friends in the community, so come on out — all bloggers are welcome!

If you are planning to attend, please comment here with the phrase “COUNT ME IN” so we have the attendance numbers right and can arrange enough room, food, and drinks. Please provide your blog URL with your comment — it’s a Bloggers Meetup after all!

The location is perfect for getting ready for the appreciation event — just a few minutes’ walk from the EMP Museum! Enjoy snacks and drinks before the big event and mingle with fellow bloggers. What could be better?

Of course, do not forget to blog and tweet about this year’s bloggers meetup using #Summit14 #sqlpass. See you there!

Categories: DBA Blogs

Speaking at the Spanish Virtual PASS Chapter

Fri, 2014-10-31 10:09

Title: Recuperación de desastres y soluciones de alta disponibilidad con SQL Server
Event link: http://globalspanish.sqlpass.org/Inicio.aspx?EventID=1846
Event description: “This presentation covers disaster recovery and high availability solutions with SQL Server, and offers creative scenarios for using those solutions for reporting, BI, and data warehousing.”

Please feel free to register!

Categories: DBA Blogs

Log Buffer #395, A Carnival of the Vanities for DBAs

Fri, 2014-10-31 07:44

This Log Buffer Edition ventures out into the fields of Oracle, SQL Server, and MySQL and picks some of the coolest blog posts of the week.

Oracle:

In Oracle, if we add a column to a table which is NOT NULL, we are allowed to do it directly, in a single statement, as long as we supply a DEFAULT value to populate any pre-existing rows.
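
For example, a single statement like the following works (table and column names are illustrative):

SQL> alter table employees add (status varchar2(10) default 'ACTIVE' not null);

Table altered.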

There have been rumblings from the HPC community indicating a general suspicion of and disdain for Big Data technology which would lead one to believe that whatever Google, Facebook and Twitter do with their supercomputers is not important enough to warrant seriousness—that social supercomputing is simply not worthy.

Work-around Instance Migration Limits of BPM Suite 11g.

Oracle Event Processing 12c: java errors when deploying a new OEP project.

Creating a WebLogic 12c Data Source Connection to Pivotal GemFireXD 1.3.

SQL Server:

It sounds simple enough. Either your column will always have a value or it may not. Yet somehow such a seemingly simple decision can become a never-ending debate where database schema begins to resemble superstition and designing effective tables seems more contentious than you expected it to be.

There’s a joke doing the rounds at SQL conferences and seminars: three DBAs walk into a NoSQL bar and leave when they can’t find a table.

Stairway to AlwaysOn Level 3: Infrastructure 101.

How to compare two databases and email the results to every one who needs to know.

Free eBook: Troubleshooting SQL Server: A Guide for the Accidental DBA.

MySQL:

Set up an SSL-encrypted connection between Sphinx and MySQL.

Sometimes users ask for something that, at first glance, doesn’t really make sense. But then you start asking questions and realize that the user was right, you were wrong, and it is, actually, a perfectly logical and valid use case.

PECL/mysqlnd_ms needs updates for MySQL Group Replication.

Why should you migrate from MySQL to MariaDB?

The Perfect Server – CentOS 7 (Apache2, Dovecot, ISPConfig 3).

Categories: DBA Blogs

Pythian at PASS 2014

Thu, 2014-10-30 11:40

Join us in Seattle for SQL PASS Summit 2014—planned by and for the Microsoft SQL Server community—where some of our very own will be presenting. You’ll notice that PASS has a funny way of scheduling speaking sessions, so both of our experts are presenting at the same time.

Also be sure to visit us at booth #226 for a chance to win a Sonos PLAY:1. The Exhibit Hall will be open all day Wednesday and Thursday, and on Friday morning.

IMPORTANT UPDATE: We are proud to announce that we’re hosting the first ever PASS Summit Bloggers Meetup! Join us Thursday night to meet with old friends, and make new ones. RSVP here.

 

Configuring SharePoint 2013 as a Business Intelligence Platform by Edwin Sarmiento
Wednesday November 5 — 1:30-2:45 PM
Room 608

Edwin Sarmiento, a Microsoft MVP, Certified Master, and Principal Consultant in Pythian’s Advanced Technology Group, will be presenting a session called Configuring SharePoint 2013 as a Business Intelligence Platform.

With SharePoint as the business collaboration platform of choice for the enterprise, it has become an important piece in the overall Microsoft Business Intelligence (BI) stack. However, deploying and configuring it to support Microsoft BI solutions has become a cumbersome task for IT experts.

In this session, Edwin will demonstrate what it takes to successfully architect and design SharePoint 2013 as a BI platform. He will cover multiple configuration scenarios and the different technologies necessary to build the infrastructure. Attendees will walk away with the confidence to run Power Pivot, Power View, and Reporting Services in their SharePoint 2013 farms.

 

The Use Cases for In-Memory OLTP by Warner Chaves
Wednesday November 5 — 1:30-2:45 PM
Room 3AB

Warner Chaves, a Microsoft Certified Master and Principal Consultant in Pythian’s SQL Server practice, will be presenting a session called The Use Cases for In-Memory OLTP.

In this session, he will do a short introduction to the In-Memory OLTP feature before diving straight into the use cases where the new lockless/latchless concurrency control and native compilation really shine.

Demos will cover PAGELATCH contention (or lack thereof), use of non-persistent in-memory tables for ETL, and in-memory tables as “shock absorbers” for high throughput environments.

For each case, Warner will do a comparison of “classic” versus in-memory, what gains the audience can expect, and what patterns will yield the biggest benefits.

 

BONUS MATERIAL! Since its release back in April, SQL Server experts across the globe have been getting familiar with the top features in Microsoft SQL Server 2014—the In-Memory OLTP engine, the AlwaysOn enhancements, and more. Warner couldn’t help but notice that a few features weren’t getting the same attention, so he filmed a video series sharing the most underrated features of SQL Server 2014. We’ll be publishing that series during PASS, so follow @Pythian on Twitter to receive our updates.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Microsoft SQL Server expertise.

Categories: DBA Blogs

Pythian at Percona Live London 2014

Wed, 2014-10-29 14:29

Percona Live London takes place next week from November 3-4 where Pythian is a platinum sponsor—visit us at our booth during the day on Tuesday, or at the reception in the evening. Not only are we attending, but we’re taking part in exciting speaking engagements, so be sure to check out our sessions and hands-on labs. Find those details down below.

 

MySQL Break/Fix Lab by Miklos Szel, Alkin Tezuysal, and Nikolaos Vyzas
Monday November 3 — 9:00AM-12:00PM
Cromwell 3 & 4

Miklos, Alkin, and Nikolaos will present a hands-on lab demonstrating how to evaluate operational errors and issues in MySQL 5.6 and recover from them. They will cover instance crashes and hangs, troubleshooting and recovery, and significant performance issues. Find out more about the speakers below.

About Miklos: Miklos Szel is a Senior Engineer at Pythian, based in Budapest. With more than 10 years of experience in system and network administration, he has also worked for Walt Disney International as its main MySQL DBA. Miklos specializes in MySQL-based high availability solutions, performance tuning, and monitoring, and has significant experience working with large-scale websites.

About Alkin: Alkin Tezuysal has extensive experience in enterprise relational databases, working in various sectors for large corporations. With more than 19 years of industry experience, he has been able to take large projects from the ground up to production. In recent years, he has been focusing on eCommerce, SaaS, and MySQL technologies.

About Nikolaos: Nik Vyzas is a Lead Database Consultant at Pythian, and an avid open source engineer. He began his career as a software developer in South Africa, then moved into technology consulting for various European and US-based companies. He specializes in MySQL, Galera, Redis, Memcached, and MongoDB on many OS platforms.

 

Setting up Multi-Source Replication in MariaDB 10 by Derek Downey
Monday November 3 — 2:00-5:00PM
Cromwell 3 & 4

For a long time, replication in MySQL was limited to only a single master. When MariaDB 10.0 became generally available, the ability to allow multiple masters became a reality. This has opened up the door to previously impossible architectures. In this hands-on tutorial, Derek will discuss some of the features in MariaDB 10.0, demonstrate establishing a four-node environment running on participants’ computers using Vagrant and VirtualBox, and even discuss some limitations associated with 10.0. Check out Derek’s blog post for more detailed info about his session.

About Derek: Derek began his career as a PHP application developer, working out of Knoxville, Tennessee. Now a Principal Consultant in Pythian’s MySQL practice, Derek is sought after for his deep knowledge of Galera and for diagnosing replication issues.

 

Understanding Performance Through Measurement, Benchmarking, and Profiling by René Cannaò
Monday November 3 — 2:00-5:00PM
Orchard 2

It is essential to understand how your system performs under different workloads in order to measure the impact of changes and growth, and to understand how that impact will manifest. Measuring the performance of current workloads is not trivial, and creating a staging environment where different workloads can be tested has its own set of challenges. Capacity planning, exploring concerns about scalability and response time, and evaluating new hardware or software configurations are all operations requiring measurement and analysis in an environment representative of your production setup. To find bottlenecks, performance needs to be measured both at the OS layer and at the MySQL layer: an analysis of OS and MySQL benchmarking and monitoring/measuring tools will be presented. Various benchmark strategies will be demonstrated for real-life scenarios, as well as tips on how to avoid common mistakes.

About René: René has 10 years of working experience as a System, Network, and Database Administrator, mainly on Linux/Unix platforms. In recent years, he has focused mainly on MySQL, previously working as a Senior MySQL Support Engineer at Sun/Oracle and now as a Senior Operational DBA at Pythian (formerly Blackbird, acquired by Pythian).

 

Low-Latency SQL on Hadoop — What’s Best for Your Cluster? by Danil Zburivsky
Tuesday November 4 — 11:20AM-12:10PM
Cromwell 3 & 4

Low-latency SQL is the Holy Grail of Hadoop platforms, enabling new use cases and better insights. A number of open-source projects have sprung up to provide fast SQL querying; but which one is best for your cluster? This session will present the results of Danil’s in-depth research and benchmarks of Facebook Presto, Cloudera Impala, and Databricks Shark. Attendees will look at performance across multiple storage formats, query profiles, and cluster configurations to find the best engine for a variety of use cases. This session will help you pick the right query engine for a new cluster or get the most out of your existing Hadoop deployment.

About Danil: Danil Zburivsky is a Big Data Consultant/Solutions Architect at Pythian. Danil has been working with databases and information systems since his early years in university, where he received a Master’s Degree in Applied Math. Danil has 7 years of experience architecting, building and supporting large mission-critical data platforms using various flavors of MySQL, Hadoop and MongoDB. He is also the author of the book Hadoop Cluster Deployment.

 

Scaling MySQL in Amazon Web Services by Mark Filipi and Laine Campbell
Tuesday November 4 — 5:30-6:20PM
Cromwell 3 & 4

Mark Filipi, MySQL Team Lead at Pythian, will explain the options for running MySQL at high volumes on Amazon Web Services, exploring database-as-a-service options, hosted instances/storage, and all appropriate availability, performance, and provisioning considerations. He will be using real-world examples from companies like Call of Duty, Obama for America, and many more.

Laine will demonstrate how to build highly available, manageable, and performant MySQL environments that scale in AWS—how to maintain them, grow them, and deal with failure.

About Mark: With years of experience as a MySQL DBA, Mark Filipi has direct experience administering everything from multinational corporations to tiny web start-ups. He leads a global team of talented DBAs to identify performance bottlenecks and provide consistent daily operations.

About Laine: Laine is currently the Co-Founder and Associate Vice President of Pythian’s open source database practice—the result of the acquisition of Blackbird.io by Pythian in June 2014. Blackbird.io itself was the product of a merger that involved PalominoDB, a company that Laine founded in January 2006. Prior to that, Laine spent her career working in various corporate environments, including working at Travelocity for nearly a decade building out their database team. Laine is passionate about supporting members of underserved populations to gain experience, skills, and jobs in technology.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s MySQL expertise.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 7

Wed, 2014-10-29 08:09

Welcome to part 7, the final blog post in my series, Deploying a Private Cloud at Home, where I will be sharing the scripts to configure the controller and compute nodes. In my previous post, part six, I demonstrated how to configure the controller and compute nodes.

Kindly update the scripts with the passwords you want and then execute them. I am assuming here that this is a fresh installation and no service is configured on the nodes.

The script below configures the controller node and has two parts:

  1. Pre compute node configuration
  2. Post compute node configuration

“config-controller.sh -pre” will run the pre compute node configuration and prepare the controller node and OpenStack services. “config-controller.sh -post” will run the post compute node configuration of the controller node, as these services are dependent on compute node services.

config-controller.sh

#!/bin/bash
#Configure controller script v 4.4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Admin user password
ADMIN_PASS=YOUR_PASSWORD
#Demo user password
DEMO_PASS=YOUR_PASSWORD
#Keystone database password
KEYSTONE_DBPASS=YOUR_PASSWORD
#Admin user Email
ADMIN_EMAIL=YOUR_EMAIL
#Demo user Email
DEMO_EMAIL=YOUR_EMAIL
#Glance db user pass
GLANCE_DBPASS=YOUR_PASSWORD
#Glance user pass
GLANCE_PASS=YOUR_PASSWORD
#Glance user email
GLANCE_EMAIL=YOUR_EMAIL
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#Nova user Email
NOVA_EMAIL=YOUR_EMAIL
#Neutron db user pass
NEUTRON_DBPASS=YOUR_PASSWORD
#Neutron user pass
NEUTRON_PASS=YOUR_PASSWORD
#Neutron user email
NEUTRON_EMAIL=YOUR_EMAIL
#Metadata proxy pass
METADATA_SECRET=YOUR_PASSWORD
#IP to be declared for controller
MY_IP=192.168.1.140
#FQDN for controller hostname or IP
CONTROLLER=controller
#MYSQL root user pass
MYSQL_PASS=YOUR_PASSWORD
#Heat db user pass
HEAT_DBPASS=YOUR_PASSWORD
#Heat user pass
HEAT_PASS=YOUR_PASSWORD
#Heat user email
HEAT_EMAIL=YOUR_EMAIL
#IP range for VM Instances
RANGE=192.168.1.16\\/28
#Secure MySQL
MYSQL_ROOT_PASSWORD=YOUR_PASSWORD
#Current MySQL root password leave blank if you have not configured MySQL
CURNT_PASS=""



# Get versions:
SCRIPT_VER="v4.4"
if [ "$1" = "--version" -o "$1" = "-v" ]; then
	echo "`basename $0` script version $SCRIPT_VER"
  exit 0
elif [ "$1" = "" ] || [ "$1" = "--help" ]; then
  echo "Configures controller node with pre compute and post compute deployment settings"
  echo "Usage:"
  echo "       `basename $0` [--help | --version | -pre | -post]"
  exit 0

elif [ "$1" = "-pre" ]; then

echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "controller configuration started"
echo "============================================="

echo "Installing MySQL packages"
yum install -y mysql mysql-server MySQL-python
echo "Installing RDO OpenStack repo"
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
echo "Installing openstack keystone, qpid Identity Service, and required packages for controller"
yum install -y yum-plugin-priorities openstack-utils mysql mysql-server MySQL-python qpid-cpp-server openstack-keystone python-keystoneclient expect


echo "Modification of qpid config file"
perl -pi -e 's,auth=yes,auth=no,' /etc/qpidd.conf
chkconfig qpidd on
service qpidd start


echo "Configuring mysql database server"
cat > /etc/my.cnf <<EOF
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
bind-address=$MY_IP
default-storage-engine=innodb
innodb_file_per_table

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
EOF
service mysqld start
chkconfig mysqld on

echo "Set the MySQL root password"
if [ -n "$CURNT_PASS" ]; then
  mysqladmin -u root -p"$CURNT_PASS" password "$MYSQL_PASS"
else
  mysqladmin -u root password "$MYSQL_PASS"
fi

echo "Create the keystone database and user"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE keystone;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';"

echo "Configure the keystone Identity Service"
ADMIN_TOKEN=$(openssl rand -hex 10)
openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:$KEYSTONE_DBPASS@$CONTROLLER/keystone
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl /var/log/keystone
chmod -R o-rwx /etc/keystone/ssl

echo "Create the database tables for the Identity Service"
su -s /bin/sh -c "keystone-manage db_sync" keystone

service openstack-keystone start
chkconfig openstack-keystone on

echo "Schedule an hourly flush of expired keystone tokens"
(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone

echo "Define users, tenants, and roles"
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://$CONTROLLER:35357/v2.0

echo "keystone admin creation"
keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
keystone role-create --name=admin
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin


echo "keystone demo creation"
keystone user-create --name=demo --pass=$DEMO_PASS --email=$DEMO_EMAIL
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo
keystone tenant-create --name=service --description="Service Tenant"

echo "Create a service entry for the Identity Service"
keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://$CONTROLLER:5000/v2.0 \
--internalurl=http://$CONTROLLER:5000/v2.0 \
--adminurl=http://$CONTROLLER:35357/v2.0

echo "Verify Identity service installation"
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
echo "Request a authentication token by using the admin user and the password you chose for that user"
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-auth-url=http://$CONTROLLER:35357/v2.0 token-get
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-tenant-name=admin --os-auth-url=http://$CONTROLLER:35357/v2.0 \
  token-get

cat > /root/admin-openrc.sh <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

source /root/admin-openrc.sh
echo "keystone token-get"
keystone token-get
echo "keystone user-list"
keystone user-list
echo "keystone user-role-list --user admin --tenant admin"
keystone user-role-list --user admin --tenant admin

echo "Install the Image Service"
yum install -y openstack-glance python-glanceclient
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance

echo "configure glance database"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE glance;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';"

echo "Create the database tables for the Image Service"
su -s /bin/sh -c "glance-manage db_sync" glance

echo "creating glance user"
keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
keystone user-role-add --user=glance --tenant=service --role=admin


echo "glance configuration"
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone


echo "Register the Image Service with the Identity service"
keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ image / {print $2}') \
  --publicurl=http://$CONTROLLER:9292 \
  --internalurl=http://$CONTROLLER:9292 \
  --adminurl=http://$CONTROLLER:9292
  
echo "Start the glance-api and glance-registry services"
service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

echo "Testing image service"
echo "Download the cloud image"
wget -q http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O /root/cirros-0.3.2-x86_64-disk.img
echo "Upload the image to the Image Service"
source /root/admin-openrc.sh
glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
--container-format bare --is-public True \
--progress  < /root/cirros-0.3.2-x86_64-disk.img

echo "Install Compute controller services"
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
source /root/admin-openrc.sh

echo "Configure compute database"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova

echo "configuration keys to configure Compute to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

source /root/admin-openrc.sh

echo "Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options"
echo "to the management interface IP address of the $CONTROLLER node"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP

echo "Create a nova database user"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE nova;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';"

echo "Create the Compute service tables"
su -s /bin/sh -c "nova-manage db sync" nova

echo "Create a nova user that Compute uses to authenticate with the Identity Service"
keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
keystone user-role-add --user=nova --tenant=service --role=admin

echo "Configure Compute to use these credentials with the Identity Service running on the controller"
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Register Compute with the Identity Service"
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --internalurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --adminurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s
  
echo "Start Compute services and configure them to start when the system boots"
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on  

echo "To verify your configuration, list available images"
echo "nova image-list"
sleep 5
source /root/admin-openrc.sh
nova image-list

fi


if [ "$1" = "-post" ]; then
#set variables used in the configuration

source /root/admin-openrc.sh
############OpenStack Networking start here##############
echo "configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova 

echo "Restart the Compute services"
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

echo "Create the network"
source /root/admin-openrc.sh
nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 $RANGE

echo "Verify creation of the network"
nova net-list

############OpenStack Legacy ends##############
echo "Install the dashboard"
yum install -y mod_wsgi openstack-dashboard

echo "Configure openstack dashborad"
sed -i 's/horizon.example.com/\*/g' /etc/openstack-dashboard/local_settings
echo "Start the Apache web server and memcached"
service httpd start
chkconfig httpd on

fi

Below is the config-compute.sh script, which configures the compute node.

config-compute.sh

#!/bin/bash
#Configure compute script v4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#NEUTRON user pass
NEUTRON_PASS=YOUR_PASSWORD
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
FLAT_INTERFACE=eth0
PUB_INTERFACE=eth0
#FQDN for $CONTROLLER hostname or IP
CONTROLLER=controller
#IP of the compute node
MY_IP=192.168.1.142


echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "compute configuration started"
echo "============================================="

echo "Install the MySQL Python library"
yum install -y MySQL-python


echo "Install the Compute packages"
yum install -y openstack-nova-compute openstack-utils

echo "Edit the /etc/nova/nova.conf configuration file"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Configure the Compute service to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

echo "Configure Compute to provide remote console access to instances"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://$CONTROLLER:6080/vnc_auto.html

echo "Specify the host that runs the Image Service"
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host $CONTROLLER

echo "Start the Compute service and its dependencies. Configure them to start automatically when the system boots"
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on

echo "kernel networking functions"
perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' /etc/sysctl.conf
perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
sysctl -p

echo "Install legacy networking components"
yum install -y openstack-nova-network openstack-nova-api
sleep 5
echo "Configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br0
openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface $FLAT_INTERFACE
openstack-config --set /etc/nova/nova.conf DEFAULT public_interface $PUB_INTERFACE

echo "Start the services and configure them to start when the system boots"
service openstack-nova-network start
service openstack-nova-metadata-api start
chkconfig openstack-nova-network on
chkconfig openstack-nova-metadata-api on

echo "Now restart networking"
service network restart

echo "Compute node configuration competed"
echo "Now you can run config-congroller.sh -post on the controller node"
echo "To complete the OpenStack configuration"

Categories: DBA Blogs

MariaDB 10.0 Multi-source Replication at Percona Live UK 2014

Mon, 2014-10-27 12:12

Percona Live UK is upon us and I have the privilege to present a tutorial on setting up multi-source replication in MariaDB 10.0 on Nov 3, 2014.

If you’re joining me at PLUK14, we will go over setting up two different topologies that incorporate the new features in MariaDB 10.0. The first is a mirrored topology:

Replication Topologies – Mirrored

This basically makes use of an existing DR environment by setting it up so that either master can be written to. Be advised that this is normally not recommended, due to the complexity of making your application resolve the conflicts and data-sync issues that can arise from writing to multiple masters.
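
If you do write to both masters, one common safeguard (though not a substitute for application-level conflict handling) is to stagger auto-increment values so the two masters cannot generate colliding primary keys. A minimal sketch, assuming exactly two masters:

-- On master 1:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;

-- On master 2:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;

The same settings would also go into each server’s my.cnf so they survive a restart.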

The second topology is a basic fan-in topology:

Replication Topologies – Fan-in

This use case is more common, especially for unrelated datasets that can be gathered onto a single machine for reporting purposes or as part of a backup strategy. It was previously available in MySQL only through external tools such as Tungsten Replicator.
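
For reference, here is a minimal sketch of what a fan-in setup can look like on the aggregating slave in MariaDB 10.0 (hostnames, credentials, and connection names are illustrative):

-- One named replication connection per upstream master:
CHANGE MASTER 'master01' TO
  MASTER_HOST='db1.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_passw0rd',
  MASTER_USE_GTID=slave_pos;

CHANGE MASTER 'master02' TO
  MASTER_HOST='db2.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_passw0rd',
  MASTER_USE_GTID=slave_pos;

START ALL SLAVES;
SHOW ALL SLAVES STATUS\G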

As promised in the description of the tutorial, I am providing a Vagrantfile for the tutorial, which can be downloaded/cloned from my PLUK14 repository.

The vagrant environment requires at least Vagrant 1.5 to make use of Vagrant Cloud.

I hope to see you next week!

Categories: DBA Blogs

The Power of the Oracle Database Proxy Authenticated Connections

Mon, 2014-10-27 08:46

We recently received this inquiry from a client:

“Can an Oracle database account support two passwords at once so we can roll out updated credentials to application servers gradually rather than having to change them all at the same time? Then once all of the application servers have been configured to use the new/second password we can change or remove the first one?”

The short answer is no. As with most computer technologies, an Oracle database account has only one password that is valid at any given time. However, a very powerful and under-appreciated feature of the Oracle database can be used in this case: proxy authentication.

 

How proxy authentication works

Introduced with Oracle Database 10g, proxy authentication allows a user with the required permission to connect to the Oracle database using their own credentials, but proxy into another user in the database. To put it more plainly: connect as USER_A but using the password of USER_B!

The proxy permission is granted through the “CONNECT THROUGH” privilege. Interestingly, it is granted through an ALTER USER command, as it is really an “authorization” and a property of the user, not a privilege in the traditional sense:

SQL> connect / as sysdba
Connected.
SQL> alter user USER_A grant connect through USER_B;

User altered.

SQL>

 

Now USER_B, who may not know the password for USER_A, can connect as USER_A by specifying the proxy account in square brackets in the connection string:

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> show user
USER is "USER_A"
SQL>

 

The password specified was the one for USER_B, not USER_A.  Hence the credentials for USER_B were used but the end result is that the session is connected as USER_A!

Specifically when a proxy authenticated connection is made the USERENV namespace parameters are updated as follows:

  • The “SESSION_USER” becomes USER_A
  • The “SESSION_SCHEMA” also becomes USER_A
  • The “PROXY_USER” remains USER_B, who initiated the connection and whose credentials were used

Since sys_context(‘USERENV’,’PROXY_USER’) remains unchanged, the connection is properly audited, and information on who made the initial connection can still be recorded in audit records.  However, for all other purposes, USER_B has effectively connected to the database as USER_A without having to know USER_A’s password.

So back to the original question: a possible approach would be to create a second account, USER_B, that has permission to proxy into the application user account APP_USER.  They could then gradually roll out the credential change, using the new USER_B and proxying into APP_USER, to all of their app servers.  Once all app servers have been updated, it would be safe to change the password on the base application account APP_USER.
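
A minimal sketch of that rollout, using illustrative names and passwords:

SQL> create user USER_B identified by "new_passw0rd";

User created.

SQL> grant create session to USER_B;

Grant succeeded.

SQL> alter user APP_USER grant connect through USER_B;

User altered.

Each app server can then be reconfigured, one at a time, to connect as USER_B[APP_USER] with USER_B’s password while the old APP_USER password keeps working; once every server has been migrated, the APP_USER password can safely be changed.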

 

A similar feature is the ability to change the current session’s schema. For example as USER_B issuing:

alter session set current_schema = USER_A;

 

This is a very quick and simple approach, but it isn’t quite the same. Doing this only changes “CURRENT_SCHEMA”, the currently active default schema; any queries issued without specifying a schema name will resolve against it. But there are many cases when actually connecting as the other user is required. For example, if the DBA needs to drop and re-create a database link, the “current_schema” approach will not suffice, but the proxy authenticated connection alternative will work perfectly.
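
To illustrate, since a private database link can only be created in the owning schema, a proxy connection makes the re-creation straightforward (the link and TNS names here are illustrative):

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> drop database link remote_db;

Database link dropped.

SQL> create database link remote_db connect to remote_user identified by "remote_passw0rd" using 'REMOTE_TNS';

Database link created.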

Another case where the “current_schema” approach may be an issue is if the application is user aware.  What I mean by this is that the application may have some logic such as “if user = USER_A then do stuff“.  If you connect as USER_B and simply change the current schema, the boolean logic of this condition will evaluate to FALSE.  However, if you use a proxy authenticated connection as user USER_A, the condition will evaluate to TRUE.

Previously if the DBA needed to connect to the database as a specific user (maybe to re-create a DB link for example) they might employ the old trick of temporarily changing the user’s password, quickly connecting, and then quickly changing it back using the extracted/saved password hash.  However there are numerous serious problems with this approach:

  1. The schema may be locked
  2. The password may be controlled by a PROFILE that may also need to be adjusted.
  3. Account intrusion detection tools may detect the connection.
  4. The connection may not be properly audited via Oracle or external auditing tools.
  5. The application may unsuccessfully try to connect while the password is temporarily changed causing an application failure!

Hence that approach should never be used. The proxy authenticated connection alternative doesn’t have any of those issues and is perfectly safe.

 

A simple example

Putting it all together into a small example to show how the userenv properties are affected:

SQL> alter user USER_A grant connect through USER_B;

User altered.

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> alter session set current_schema = SCOTT;

Session altered.

SQL> select sys_context('USERENV','SESSION_USER') as session_user,
  2  sys_context('USERENV','SESSION_SCHEMA') as session_schema,
  3  sys_context('USERENV','CURRENT_SCHEMA') as current_schema,
  4  sys_context('USERENV','PROXY_USER') as proxy_id,
  5  user
  6  from dual;

SESSION_USER   SESSION_SCHEMA CURRENT_SCHEMA PROXY_ID       USER
-------------- -------------- -------------- -------------- ------------
USER_A         SCOTT          SCOTT          USER_B         USER_A

SQL>

 

As can be seen above, for all intents and purposes the connection has been made to USER_A, but using the credentials of USER_B.  USER_A’s password did not need to be known, nor was the USER_A account affected or adjusted in any way.

FAQs

What if USER_A’s account is locked or its password expired?

The answer is that the connection will still report the same error as it would have if a direct connection to USER_A was made:

SQL> connect / as sysdba
Connected.
SQL> alter user USER_A account lock;

User altered.

SQL> connect USER_B[USER_A]/passw0rd
ERROR:
ORA-28000: the account is locked


Warning: You are no longer connected to ORACLE.
SQL>

Can this be used with other tools such as Data Pump?

(Importing as the actual user instead of a DBA user was necessary with Oracle 10g under specific circumstances, such as importing JOBs and REFRESH GROUPs.)  The answer is yes, it works with Data Pump and other similar tools:

$ impdp dumpfile=temp.dmp nologfile=y include=JOB

Import: Release 11.2.0.4.0 - Production on Wed Oct 15 19:10:13 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: USER_B[USER_A]/passw0rd

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER_A"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER_A"."SYS_IMPORT_FULL_01":  USER_B[USER_A]/******** dumpfile=temp.dmp nologfile=y include=JOB
Processing object type SCHEMA_EXPORT/JOB
Job "USER_A"."SYS_IMPORT_FULL_01" successfully completed at Wed Oct 15 19:10:22 2014 elapsed 0 00:00:01

$

Notice that the Data Pump master table is created in the USER_A schema even though we connected using the USER_B credentials.

Is proxy authentication supported by JDBC/JDBC thin driver?

Yes, it works through almost any OCI connection including JDBC connections.

What about Oracle Wallets?

The answer again is yes, they can support it too! See below for an example using an Oracle Wallet:

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential ORCL USER_A passw0rd
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1
$ mkstore -wrl "/u01/app/oracle/wallet" -listCredential
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
1: ORCL USER_A
$

$ sqlplus /@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 15 13:45:04 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show user
USER is "USER_A"
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

$ sqlplus [app_user]/@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 15 13:45:14 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show user
USER is "APP_USER"
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$

 

Reporting and Revoking

Finally, how can we report on the proxy authentication authorizations we’ve granted when we need to clean them up (revoke them)?  Or perhaps we just need to report on or audit what’s out there?  Fortunately, it’s as simple as querying a catalog view to see what’s been set, and we can remove/revoke through another simple ALTER USER command:

SQL> select * from PROXY_USERS;

PROXY        CLIENT       AUT FLAGS
------------ ------------ --- -----------------------------------
USER_B       USER_A       NO  PROXY MAY ACTIVATE ALL CLIENT ROLES

SQL> alter user USER_A revoke connect through USER_B;

User altered.

SQL> select * from PROXY_USERS;

no rows selected

SQL>

 

Conclusion

Since the introduction of the GRANT ANY OBJECT PRIVILEGE system privilege with Oracle9i, the number of times that the DBA actually needs to connect as another user has been reduced.  However, there are still some distinct situations, such as those mentioned in the examples above, when connecting as another user may be absolutely necessary.

Thanks to the proxy authenticated connection capabilities introduced with Oracle Database 10g, connecting as another user when you don’t know the other account’s password has become a breeze.  And even if you do know the password, connecting through proxy authentication can still add value with the additional audit information.

Have any other situations where connecting as another user is absolutely necessary? Share them in the comments section below.

 

References

http://docs.oracle.com/cd/E25054_01/network.1111/e16543/authentication.htm
http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions184.htm

Categories: DBA Blogs

Log Buffer #394, A Carnival of the Vanities for DBAs

Mon, 2014-10-27 08:22

This week’s log buffer edition collects some of the insightful blog posts from Oracle, SQL Server, and MySQL.

Oracle:

Oracle StorageTek T10000D Tape Drive achieves FIPS 140-2 Validation.

Oracle is a Leader in IDC MarketScape for Global Trade Management.

The Benefits of Integrating a Google Search Appliance with an Oracle WebCenter or Liferay Portal.

Maintenance Windows is too small? Autotask Jobs fail.

SOA Suite 12c: Querying LDAP directories using the LDAP Adapter.

SQL Server:

Regaining access to SQL server after changing the domain.

NuGet has transformed the ease of getting and installing the latest version of .NET packages, tools and frameworks.

The Mindset of the Enterprise DBA: Delegating Work.

Stairway to SQL PowerShell Level 8: SQL Server PowerShell Provider.

Finding a table name from a page ID.

MySQL:

A few weeks ago, Gharieb received an interesting Galera Cluster support case from one of our customers: the application was not working well, and they were facing a lot of trouble in their Galera Cluster setup.

Resources for Database Clusters: 9 DevOps Tips, ClusterControl 1.2.8 Release, HAProxy Webinar Replay & More.

Refactoring replication topology with Pseudo GTID.

Improvements to STRICT MODE in MySQL.

Monitoring progress and temporal memory usage of Online DDL in InnoDB.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 6

Thu, 2014-10-23 07:35

Today’s blog post is part six of seven in a series dedicated to Deploying a Private Cloud at Home, where I will be demonstrating how to configure the controller node with legacy networking and the OpenStack dashboard for a web GUI. Feel free to check out part five, where we configured the compute node with OpenStack services.

  1. First load the admin variables admin-openrc.sh
    source /root/admin-openrc.sh
  2. Enable legacy networking
    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
  3. Restart the Compute services
    service openstack-nova-api restart
    service openstack-nova-scheduler restart
    service openstack-nova-conductor restart
  4. Create the IP pool which will be assigned to the instances we will launch later. My network is 192.168.1.0/24. I took a subpool of that range and am using it to assign IPs to the VMs. As the VMs will be on my shared network, I want the IPs in the same range as the other systems on the network.
    Here I am using the subnet 192.168.1.16/28 (16 addresses: 192.168.1.16–192.168.1.31)
  5. Create a network
    nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 192.168.1.16/28
  6. Verify networking by listing the network
    nova net-list
  7. Install the dashboard. The dashboard gives you a web UI to manage OpenStack instances and services. As we will be using the default configuration, I am not going into detail on this.
    yum install -y mod_wsgi openstack-dashboard
  8. Update ALLOWED_HOSTS in local_settings to include the addresses from which you wish to access the dashboard. I am running these on my intranet, so I allowed every host on my network, but you can specify which hosts you want to grant access.
    ALLOWED_HOSTS = ['*']
  9. Start and enable Apache web server
    service httpd start
    chkconfig httpd on
  10. You can now access the dashboard at http://controller/dashboard

 

This completes the configuration of our OpenStack private cloud. We could use the same guide for Rackspace Private Cloud, as it too is based on OpenStack Icehouse, but that is for another time.

Now that we have a working IaaS cloud, we can configure any PaaS or SaaS on top of it, but that will require another series altogether.

Stay tuned for part seven, our final post in the series Deploying Private Cloud at Home, where I will be sharing scripts that will automate the installation and configuration of controller and compute nodes.

Categories: DBA Blogs

MySQL: Troubleshooting an Instance for Beginners

Wed, 2014-10-22 09:17

As you may know, my new position involves the MySQL world, so I’m on a mission to pick up the language and whereabouts of this DBMS. My teammate Alkin Tezuysal (@ask_dba on Twitter) has a very cool break/fix lab, which you should check out if you are going to Percona Live London 2014. He will be running this lab, so be sure not to miss out.

The first thing I tried was to bring up the service, but to my surprise the mysql user didn’t exist, so the first thing I did was create the user.

Note: Whenever you see “…”, it is to shorten the output.

[user-lab@ip-10-10-10-1 ~]$ service mysqld start
touch: cannot touch ‘/var/log/mysqld.log’: Permission denied
chown: invalid user: ‘mysql:mysql’
chmod: changing permissions of ‘/var/log/mysqld.log’: Operation not permitted
mkdir: cannot create directory ‘/var/lib/msql’: Permission denied
[user-lab@ip-10-10-10-1 ~]$ id mysql
id: mysql: no such user
[user-lab@ip-10-10-10-1 ~]$ sudo useradd mysql

So now that the user exists, I try to bring the service up, and we are back at square one, as an initial configuration variable in the .cnf file is incorrect. And there is another problem: there is more than one .cnf file.

[user-lab@ip-10-10-10-1 ~]$ sudo su -
Last login: Thu Jul 31 11:37:21 UTC 2014 on pts/0
Last failed login: Tue Oct 14 05:45:47 UTC 2014 from 60.172.228.40 on ssh:notty
There were 1269 failed login attempts since the last successful login.
[root@ip-10-10-10-1 ~]# service mysqld start
Initializing MySQL database: Installing MySQL system tables...
141014 17:05:46 [ERROR] /usr/libexec/mysqld: unknown variable 'tmpd1r=/var/tmp'
141014 17:05:46 [ERROR] Aborting

141014 17:05:46 [Note] /usr/libexec/mysqld: Shutdown complete

Installation of system tables failed! Examine the logs in
/var/lib/msql for more information.

...

 [FAILED]

In the Oracle world this is easier to troubleshoot; here in the MySQL world, the best way to see which .cnf files are being read is with an strace command.

[root@ip-10-10-10-1 ~]# strace -e trace=open,stat /usr/libexec/mysqld
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libaio.so.1", O_RDONLY|O_CLOEXEC) = 3
...
stat("/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=255, ...}) = 0
open("/etc/my.cnf", O_RDONLY)           = 3
stat("/etc/mysql/my.cnf", 0x7fffe4b38120) = -1 ENOENT (No such file or directory)
stat("/usr/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=25, ...}) = 0
open("/usr/etc/my.cnf", O_RDONLY)       = 3
stat("/root/.my.cnf", {st_mode=S_IFREG|0644, st_size=33, ...}) = 0
open("/root/.my.cnf", O_RDONLY)         = 3
...
141014 17:12:05 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!

So now I can see that /usr/etc/my.cnf is the one with the misspelled variable, so we modify it to the correct spelling.

[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
[mysqld]
tmpd1r=/var/tmp
[root@ip-10-10-10-1 ~]# sed -i -e 's/tmpd1r/tmpdir/' /usr/etc/my.cnf
[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
[mysqld]
tmpdir=/var/tmp

Another try, but again the same result, and even worse this time, as there is no output at all. After digging around, I found that the place to look is /var/log/mysqld.log, and the problem was that some files belonged to the root user instead of the mysql user.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
[root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
141014 17:16:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/msql
141014 17:16:33 [Note] Plugin 'FEDERATED' is disabled.
/usr/libexec/mysqld: Table 'mysql.plugin' doesn't exist
141014 17:16:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
141014 17:16:33 InnoDB: The InnoDB memory heap is disabled
141014 17:16:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:16:33 InnoDB: Compressed tables use zlib 1.2.7
141014 17:16:33 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/var/tmp/ib1rikjr' (Errcode: 13)
141014 17:16:33  InnoDB: Error: unable to create temporary file; errno: 13
141014 17:16:33 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:16:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:16:33 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:16:33 [ERROR] Aborting

141014 17:16:33 [Note] /usr/libexec/mysqld: Shutdown complete
[root@ip-10-10-10-1 ~]# perror 13
Error code 13: Permission denied
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/mysql/plugin.*
-rw-rw---- 1 root root 8586 Mar 13  2014 /var/lib/mysql/mysql/plugin.frm
-rw-rw---- 1 root root    0 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYD
-rw-rw---- 1 root root 1024 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYI
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql/mysql/

So I think, yay, I’m set and it will come up! I give it one more shot and, you guessed it, same result, different error :( This time the problem seemed to be that the configured InnoDB buffer pool is larger than the memory available on the machine, so we change it.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
141014 17:36:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
141014 17:36:15 [Note] Plugin 'FEDERATED' is disabled.
141014 17:36:15 InnoDB: The InnoDB memory heap is disabled
141014 17:36:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:36:15 InnoDB: Compressed tables use zlib 1.2.7
141014 17:36:15 InnoDB: Using Linux native AIO
141014 17:36:15 InnoDB: Initializing buffer pool, size = 100.0G
InnoDB: mmap(109890764800 bytes) failed; errno 12
141014 17:36:15 InnoDB: Completed initialization of buffer pool
141014 17:36:15 InnoDB: Fatal error: cannot allocate memory for the buffer pool
141014 17:36:15 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:36:15 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:36:15 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:36:15 [ERROR] Aborting
[root@ip-10-10-10-1 ~]# grep 100 /etc/my.cnf
innodb_buffer_pool_size=100G
[root@ip-10-10-10-1 ~]# sed -i -e 's/100G/256M/' /etc/my.cnf
[root@ip-10-10-10-1 ~]# grep innodb_buffer_pool_size /etc/my.cnf
innodb_buffer_pool_size=256M

Now, I’m not even expecting this instance to come up, and I am correct: this time the data files have the wrong ownership.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
[root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
...
141014 17:37:15 InnoDB: Initializing buffer pool, size = 256.0M
141014 17:37:15 InnoDB: Completed initialization of buffer pool
141014 17:37:15  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
141014 17:37:15 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/ibdata1
-rw-rw---- 1 27 27 18874368 Mar 13  2014 /var/lib/mysql/ibdata1
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql
total 83980
-rw-rw---- 1    27    27 18874368 Mar 13  2014 ibdata1
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile0
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile1
drwx------ 2 mysql mysql     4096 Mar 13  2014 mysql
drwx------ 2 root  root      4096 Mar 13  2014 performance_schema
drwx------ 2 root  root      4096 Mar 13  2014 test
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql

Now, I wasn’t even expecting the service to come up, but to my surprise it came up!

[root@ip-10-10-10-1 ~]# service mysqld start
Starting mysqld:                                           [  OK  ]

So now, what I wanted to do was connect and start working, but again, there was another error! I saw that it was related to the socket file mysql.sock, so I changed it to the correct value in our .cnf file (the corrected snippet is shown after the session below):

[root@ip-10-10-10-1 ~]# mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
[root@ip-10-10-10-1 ~]# service mysql status
mysql: unrecognized service
[root@ip-10-10-10-1 ~]# service mysqld status
mysqld (pid  5666) is running...
[root@ip-10-10-10-1 ~]# ls -l /tmp/mysql.sock
ls: cannot access /tmp/mysql.sock: No such file or directory
[root@ip-10-10-10-1 ~]# grep socket /var/log/mysqld.log | tail -n 1
Version: '5.5.36'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
[root@ip-10-10-10-1 ~]# lsof -n | grep mysqld | grep unix
mysqld    5666    mysql   12u     unix 0xffff880066fbea40       0t0     981919 /var/lib/mysql/mysql.sock
[root@ip-10-10-10-1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql

innodb_data_file_path=ibdata1:18M
innodb_buffer_pool_size=256M
innodb_log_file_size=32M
sort_buffer_size=60M

[client]
socket=/tmp/mysql.sock
[root@ip-10-10-10-1 ~]# vi /etc/my.cnf
[root@ip-10-10-10-1 ~]# service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@ip-10-10-10-1 ~]# mysql -p
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
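For reference, the vi edit above isn’t shown, but based on the socket path reported in the log and by lsof, it presumably pointed the client at the socket the server actually creates, something like this in /etc/my.cnf:

[client]
socket=/var/lib/mysql/mysql.sock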

Conclusion

As you can see, there are different ways to troubleshoot the startup of a MySQL instance. I hope this helps you out on your journey when you are starting to use this DBMS, and if you know of another way, let me know in the comments section below.

Please note that this blog post was originally published on my personal blog.

Categories: DBA Blogs

High System Time on Windows Running in a VM

Wed, 2014-10-22 07:59

Recently I’ve seen an issue with CPU usage on a server running Windows Server 2003 in a VMware virtual machine. This is a small VM with just 2 cores allocated (which are possibly mapped to “threads” at the host level, but I don’t know the details). For some reason, very high System CPU time was reported in a Statspack report.

Here is how it looks in a 1-hour Statspack report:

Host CPU  (CPUs: 2  Cores: 2  Sockets: 0)
~~~~~~~~              Load Average
                      Begin     End      User  System    Idle     WIO     WCPU
                    ------- -------   ------- ------- ------- ------- --------
                                         3.04    8.77   88.19

Note that the System CPU time is more than twice the User CPU time on average (remember the averages could be misleading sometimes). This caught my attention as usual. Although the average CPU used is not really high, this server is somewhat sluggish even for a one hop RDP connection over the VPN.
I have tried to find out some details about what is going on. Since I’m not a Windows guy, I did not know what kind of tools could be used to track down places in the OS kernel that take too much time. On Linux this is relatively easy, starting with the strace/pstack/perf utilities and other command-line tools. Windows is different.

I started to search for the options available, and the first thing to find is of course Perfmon, which allows you to track and visualize different OS-related metrics (counters in Perfmon terminology) at the system, CPU, or process level. I used it to capture a few key metrics such as User Time, System Time (which is apparently called Privileged Time on Windows), Queue Length, and Context Switches per second. From a graph of the CPU usage the issue is visible:

Here the white line represents Privileged (or System) CPU, and the yellow line is Total CPU. It’s clear that almost all of the used CPU is accounted to the Privileged part.
By the way, it is actually very easy to see a similar picture in the standard Performance tab of Task Manager: just select View, then Show Kernel Times, and the Privileged part of the used CPU will be displayed in red.
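For repeatability, the same counters can also be captured from the command line with typeperf, which ships with Windows; the counter paths below are standard, while the 5-second interval and the sample count are just illustrative choices:

typeperf "\Processor(_Total)\% Privileged Time" "\Processor(_Total)\% User Time" ^
         "\System\Processor Queue Length" "\System\Context Switches/sec" ^
         -si 5 -sc 120 -o cpu.csv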

After that, I searched for details on why Privileged CPU time could be so high. A good article that I found is here. Although it is relatively old, it fits my case, as the OS is a 32-bit Windows Server 2003. The article points to a tool called KernRates. This is a command-line tool with a very easy interface: you run it, wait for some time, and stop it with Ctrl-C. After that, the tool prints a profile of kernel time by module. Here is what I saw:

C:\Program Files\KrView\Kernrates>Kernrate_i386_Win2000.exe
 /==============================\
<         KERNRATE LOG           >
 \==============================/
Date: 2014/09/03   Time: 12:39:21
Machine Name: ***
Number of Processors: 2
PROCESSOR_ARCHITECTURE: x86
PROCESSOR_LEVEL: 6
PROCESSOR_REVISION: 1706
Physical Memory: 3072 MB
Pagefile Total: 6996 MB
Virtual Total: 2047 MB
PageFile1: \??\C:\pagefile.sys, 4080MB
OS Version: 5.2 Build 3790 Service-Pack: 2.0
WinDir: C:\WINDOWS

Kernrate User-Specified Command Line:
Kernrate_i386_Win2000.exe

Kernel Profile (PID = 0): Source= Time,
Using Kernrate Default Rate of 25000 events/hit
Starting to collect profile data

***> Press ctrl-c to finish collecting profile data
===> Finished Collecting Data, Starting to Process Results

------------Overall Summary:--------------

P0     K 0:00:03.703 ( 8.7%)  U 0:00:00.734 ( 1.7%)  I 0:00:38.046 (89.6%)  DPC 0:00:00.031 ( 0.1%)  Interrupt 0:00:00.406 ( 1.0%)
       Interrupts= 22840, Interrupt Rate= 538/sec.

P1     K 0:00:02.343 ( 5.5%)  U 0:00:00.656 ( 1.5%)  I 0:00:39.484 (92.9%)  DPC 0:00:00.000 ( 0.0%)  Interrupt 0:00:00.281 ( 0.7%)
       Interrupts= 20017, Interrupt Rate= 471/sec.

TOTAL  K 0:00:06.046 ( 7.1%)  U 0:00:01.390 ( 1.6%)  I 0:01:17.531 (91.2%)  DPC 0:00:00.031 ( 0.0%)  Interrupt 0:00:00.687 ( 0.8%)
       Total Interrupts= 42857, Total Interrupt Rate= 1009/sec.

Total Profile Time = 42484 msec

                                       BytesStart          BytesStop         BytesDiff.
    Available Physical Memory   ,       372678656,       363945984,        -8732672
    Available Pagefile(s)       ,      3285475328,      3281805312,        -3670016
    Available Virtual           ,      2131267584,      2130219008,        -1048576
    Available Extended Virtual  ,               0,               0,               0

                                  Total      Avg. Rate
    Context Switches     ,        61247,         1442/sec.
    System Calls         ,       305201,         7184/sec.
    Page Faults          ,        58440,         1376/sec.
    I/O Read Operations  ,         3496,         82/sec.
    I/O Write Operations ,         2637,         62/sec.
    I/O Other Operations ,        29567,         696/sec.
    I/O Read Bytes       ,     59649045,         17062/ I/O
    I/O Write Bytes      ,      2653894,         1006/ I/O
    I/O Other Bytes      ,    624604436,         21125/ I/O

-----------------------------

Results for Kernel Mode:
-----------------------------

OutputResults: KernelModuleCount = 109
Percentage in the following table is based on the Total Hits for the Kernel

Time   33235 hits, 25000 events per hit --------
 Module                                Hits   msec  %Total  Events/Sec
intelppm                              30310      42486    91 %    17835286
ntkrnlpa                               2337      42486     7 %     1375158
hal                                     271      42486     0 %      159464
mfehidk01                                74      42486     0 %       43543
Ntfs                                     58      42486     0 %       34128
mfehidk                                  52      42486     0 %       30598
mfeapfk                                  47      42486     0 %       27656
mfeavfk01                                17      42486     0 %       10003
tcpip                                    13      42486     0 %        7649
win32k                                   12      42486     0 %        7061
mfeavfk                                  10      42486     0 %        5884
fltmgr                                    6      42486     0 %        3530
CLASSPNP                                  3      42486     0 %        1765
SCSIPORT                                  3      42486     0 %        1765
RDPDD                                     2      42486     0 %        1176
afd                                       2      42486     0 %        1176
Npfs                                      2      42486     0 %        1176
NDIS                                      2      42486     0 %        1176
symmpi                                    2      42486     0 %        1176
TDTCP                                     1      42486     0 %         588
rdbss                                     1      42486     0 %         588
netbt                                     1      42486     0 %         588
mfetdi2k                                  1      42486     0 %         588
ipsec                                     1      42486     0 %         588
termdd                                    1      42486     0 %         588
TDI                                       1      42486     0 %         588
vmxnet                                    1      42486     0 %         588
KSecDD                                    1      42486     0 %         588
atapi                                     1      42486     0 %         588
volsnap                                   1      42486     0 %         588
ftdisk                                    1      42486     0 %         588

================================= END OF RUN ==================================
============================== NORMAL END OF RUN ==============================

The default output contains some basic information about the system, CPU usage, memory, and context switching. The kernel modules profile is the most interesting part here. It lists modules by their internal names along with the profile data: the number of samples during which each module was running, which is the most important information. So in my case intelppm was the top running kernel module.
I searched again, now for intelppm, and found a few posts describing similar symptoms. Apparently intelppm is a CPU driver. Sometimes it causes issues such as BSODs or high CPU usage, especially if it is a cloned VM and the CPU architecture changed in between. It was not clear if this is something that can be disabled, but there were posts suggesting that stopping this service (which is not listed in Services) helped a few people. So I recommended that the client try to disable this driver with the following commands:

sc config intelppm start=disabled
sc stop intelppm

Theoretically this should disable the Intel CPU driver, and Windows should try to use another one if available. When we tried to run it, the 2nd command (to stop the driver) failed with the following message:

[SC] ControlService FAILED 1052:

The requested control is not valid for this service.

So it is not possible to stop the driver online, and a Windows restart is necessary.
We restarted the VM. After that, the situation was a bit different: the CPU time was somewhat reduced, but the privileged part was still quite high, with hal (Hardware Abstraction Layer) on top instead of intelppm:

Time   95865 hits, 25000 events per hit --------
 Module                                Hits   msec  %Total  Events/Sec
hal                                   82669     125183    86 %    16509629
ntkrnlpa                              11788     125183    12 %     2354153
mfehidk                                 474     125183     0 %       94661
mfeapfk                                 224     125183     0 %       44734
Ntfs                                    207     125183     0 %       41339
vmmemctl                                155     125183     0 %       30954
mfeavfk                                  92     125183     0 %       18373
tcpip                                    85     125183     0 %       16975
win32k                                   54     125183     0 %       10784
fltmgr                                   14     125183     0 %        2795
mfetdi2k                                 11     125183     0 %        2196
TDI                                      10     125183     0 %        1997
RDPWD                                     9     125183     0 %        1797
PartMgr                                   9     125183     0 %        1797
KSecDD                                    7     125183     0 %        1397
SCSIPORT                                  7     125183     0 %        1397
afd                                       6     125183     0 %        1198
symmpi                                    6     125183     0 %        1198
RDPDD                                     5     125183     0 %         998
ipsec                                     5     125183     0 %         998
NDIS                                      5     125183     0 %         998
CLASSPNP                                  5     125183     0 %         998
mfebopk                                   4     125183     0 %         798
Npfs                                      3     125183     0 %         599
termdd                                    3     125183     0 %         599
vmxnet                                    2     125183     0 %         399
volsnap                                   2     125183     0 %         399
ndisuio                                   1     125183     0 %         199
mrxsmb                                    1     125183     0 %         199
rdbss                                     1     125183     0 %         199
atapi                                     1     125183     0 %         199

But in terms of Oracle performance everything changed: everything now ran much faster, including simple queries in SQL*Plus. A particular query started to run 3 times faster on average:

-- stats before
SQL> @sqlstats cp9jr3hp1jupk
                  Elapsed     Ela/exec                            User IO     Rows per   Versi           Share  Avg hard
       Execs            s            s      CPU, s    Gets/exec         s         exec     ons   Loads Mem, MB parse, ms PX Exec
------------ ------------ ------------ ----------- ------------ --------- ------------ ------- ------- ------- --------- -------
         135      170.093        1.260      155.31          835      4.29            1       1       1     .02    350.99       0

-- stats after
SQL> @sqlstats cp9jr3hp1jupk
                  Elapsed     Ela/exec                            User IO     Rows per   Versi           Share  Avg hard
       Execs            s            s      CPU, s    Gets/exec         s         exec     ons   Loads Mem, MB parse, ms PX Exec
------------ ------------ ------------ ----------- ------------ --------- ------------ ------- ------- ------- --------- -------
         604      212.151         .351      154.75        1,013     31.79            1       1       1     .02      8.34       0

It looks like the change helped, but there is no sign of it helping at the OS level. This makes me think that such an improvement in performance may be attributed to something else: the OS, the hypervisor, or some combination of them and Oracle. In any case, high system time is not good, and it usually indicates that something is wrong.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 5

Mon, 2014-10-20 13:05

Today’s blog post is part five of seven in a series dedicated to Deploying a Private Cloud at Home, where I will be demonstrating how to configure the compute node and the OpenStack services that run on it. We already installed the MySQL Python library on the compute node in a previous post.

  1. Install OpenStack compute packages on the node
    yum install -y openstack-nova-compute openstack-utils
  2. Configure Nova compute service
    openstack-config --set /etc/nova/nova.conf database connection mysql://nova:Your_Password@controller/nova
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password Your_Password
    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip Your_Compute_node_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address Your_Compute_node_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html
    openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller
  3. Start the Compute service and its dependencies. Configure them to start automatically when the system boots
    service libvirtd start
    service messagebus start
    service openstack-nova-compute start
    chkconfig libvirtd on
    chkconfig messagebus on
    chkconfig openstack-nova-compute on
  4. Enable IP forwarding
    perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' /etc/sysctl.conf
    perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' /etc/sysctl.conf
    echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
    sysctl -p
  5. Install legacy networking components and Flat DHCP
    yum install -y openstack-nova-network openstack-nova-api

    We are using legacy networking and a single NIC on both the controller and compute nodes. The flat and public interfaces will be the same in the configuration below. In this case it is eth0; replace it with the interface you have on your system.

    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
    openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
    openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
    openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
    openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
    openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
    openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
    openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
    openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
    openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br0
    openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth0
    openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eth0
  6. Start the services and configure them to start when the system boots
    service openstack-nova-network start
    service openstack-nova-metadata-api start
    chkconfig openstack-nova-network on
    chkconfig openstack-nova-metadata-api on
  7. Restart networking
    service network restart
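As a quick sanity check that is not part of the original steps, you can verify from the controller node that the new compute node registered itself; assuming the admin-openrc.sh credentials file created in part three:

    source /root/admin-openrc.sh
    nova service-list

The nova-compute service should appear against the compute node’s hostname with its state reported as up.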

 

This completes the configuration of the compute node. Stay tuned for part six, where we will configure the network services on the controller node.

Categories: DBA Blogs

Avro MapReduce Jobs in Oozie

Mon, 2014-10-20 07:56

Normally when using Avro files as input or output to a MapReduce job, you write a Java main() method to set up the Job using AvroJob. That documentation page does a good job of explaining where to use AvroMappers, AvroReducers, and the AvroKey and AvroValue wrappers (N.B. if you want a file full of a particular Avro object, not key-value pairs of two Avro types, use AvroKeyOutputFormat as the OutputFormat, AvroKey as the key and NullWritable as the value).

Sometimes (like if you’re using Oozie), you need to set everything up without using AvroJob as a helper. The documentation is less clear here, so here’s a list of Hadoop keys and the appropriate values (for MRv2):

  • avro.schema.output.key - The JSON representation of the output key’s Avro schema. For large objects you may run afoul of Oozie’s 100,000 character workflow limit, in which case you can isolate your Avro job in a subflow
  • avro.schema.output.value – Likewise, if you’re emitting key-value pairs instead of using AvroKeyOutputFormat, put your value’s JSON schema here
  • avro.mapper - your mapper class that extends AvroMapper. You can also use a normal Mapper (with the normal Mapper configuration option), but you’ll have to handle converting the AvroKey/AvroValue yourself
  • avro.reducer - likewise, a class that extends AvroReducer
  • mapreduce.job.output.key.class - always AvroKey
  • mapreduce.job.output.value.class – AvroValue or NullWritable, as above
  • mapreduce.input.format.class - if you’re reading Avro files as input, you’ll need to set this to an Avro input format (typically AvroKeyInputFormat)
  • mapreduce.map.output.key.class - AvroKey, if you’re using a subclass of AvroMapper. If you write your own Mapper, you can pick whatever key class suits your job
  • mapreduce.map.output.value.class - AvroKey or NullWritable, unless you write a Mapper without subclassing AvroMapper
  • io.serializations  – AvroJob sets this value to the following:

org.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerialization, org.apache.avro.hadoop.io.AvroSerialization

With these configuration options you should be able to set up an Avro job in Oozie, or any other place where you have to set up your MapReduce job manually.
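As an illustration, here is a minimal sketch of how a few of these keys might sit inside an Oozie map-reduce action; the action name, the mapper class, and the ${jobTracker}/${nameNode} variables are placeholders rather than anything from the original post:

<action name="avro-mr">
    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>avro.mapper</name>
                <!-- hypothetical AvroMapper subclass -->
                <value>com.example.MyAvroMapper</value>
            </property>
            <property>
                <name>mapreduce.job.output.key.class</name>
                <value>org.apache.avro.mapred.AvroKey</value>
            </property>
            <property>
                <name>mapreduce.job.output.value.class</name>
                <value>org.apache.hadoop.io.NullWritable</value>
            </property>
        </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
</action>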

Categories: DBA Blogs

An Introduction to Extended Data Types in Oracle 12c

Mon, 2014-10-20 07:55

One of the lesser-known new features that comes as a boon to many developers and DBAs is the implicit handling of large data strings using scalar data types like VARCHAR2 and RAW.

When creating tables, each column must be assigned a data type, which determines the nature of the values that can be inserted into the column. Common data types include number, date, and varchar2. These data types are also used to specify the nature of arguments for PL/SQL programs like functions and procedures.

When choosing a data type, you must carefully consider the data you plan to store and the operations you may want to perform upon it. Making good decisions at the table design stage reduces the potential negative downstream impact on space utilization and performance. Space is a consideration since some data types occupy a fixed length, consuming the same number of bytes no matter what data is actually stored in them.

In pre-12c databases, long character strings of more than 4000 bytes had to be handled using creative solutions, including CLOB or LONG data types and multiple columns or variables. These approaches led to inefficient, unnecessarily complex designs and added processing overheads.

12c introduced the MAX_STRING_SIZE system parameter, which allows string data types to be much larger when the parameter is changed from its default value of STANDARD to EXTENDED. The VARCHAR2 data type stores variable-length character data of 1 to 4000 bytes if MAX_STRING_SIZE=STANDARD, or up to 32767 bytes if MAX_STRING_SIZE=EXTENDED.

RAW and NVARCHAR2 data types are affected in similar ways.
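For reference, here is a minimal sketch of making the switch on a non-CDB database, following the documented procedure (the table at the end is illustrative, and note the change is one-way):

-- the parameter can only be raised while the database is in upgrade mode
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql    -- converts existing objects to the extended sizes
SHUTDOWN IMMEDIATE;
STARTUP;

-- extended declarations are now legal
CREATE TABLE big_strings (note VARCHAR2(32767));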

Potential issues to consider:

  • Internally, extended data types are stored out-of-line using LOBs, but these cannot be manipulated using the DBMS_LOB interface.
  • When changing the MAX_STRING_SIZE parameter, objects may be updated, invalidating dependent objects, so ideally change this parameter during a maintenance window on your important databases.
  • List partitioning on EDT columns may potentially exceed the 4096 byte limit for the partition bounds. The DEFAULT partition may be used for data values that exceed the 4096 byte limit or a hash function may be used on the data to create unique identifiers smaller than 4096 bytes.
  • Indexing EDT columns may fail with “maximum key length exceeded” errors. For example, databases with an 8k default block size support a maximum key length of approximately 6400 bytes. A suggested work-around is to use a virtual column or function-based index to effectively shorten the index key length, as in the sketch below.
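A minimal sketch of that work-around, with a made-up table and an arbitrary 1000-byte prefix:

-- indexing the full 32K column would exceed the key-length limit,
-- so index a shortened expression instead
CREATE TABLE docs (id NUMBER, body VARCHAR2(32767));
CREATE INDEX docs_body_ix ON docs (SUBSTR(body, 1, 1000));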

This feature will no doubt be improved, and the shortcomings will be dealt with in future releases. For now, it offers a clean and elegant mechanism for handling large character data within existing applications, requiring minimal code changes.

Categories: DBA Blogs

Debugging High CPU Usage Using Perf Tool and vmcore Analysis

Fri, 2014-10-17 08:08

There are several tools and technologies available to debug deeper into high CPU utilization in a system; perf, sysrq, oprofile, vmcore, and more. In this post, I will narrate the course of debugging a CPU utilization issue using technologies like perf and vmcore.

The following sar output is from a system experiencing high %system usage.

[root@prod-smsgw1 ~]# sar 1 14
Linux 2.6.32-431.20.5.el6.x86_64 (xxxxx) 08/08/2014 _x86_64_ (8 CPU)

05:04:57 PM CPU %user %nice %system %iowait %steal %idle
05:04:58 PM all 2.90 0.00 15.01 0.38 0.00 81.72
05:04:59 PM all 2.02 0.00 10.83 0.13 0.00 87.03
05:05:00 PM all 3.27 0.00 13.98 0.76 0.00 81.99
05:05:01 PM all 9.32 0.00 16.62 0.25 0.00 73.80

From ‘man sar’.

%system
Percentage of CPU utilization that occurred while executing at the system level (kernel). Note
that this field includes time spent servicing hardware and software interrupts.

This means that the system is spending considerable time executing kernel code. The system runs a Java application which is showing high CPU usage.

perf, the performance analysis tool for Linux, is a good place to start in these kinds of scenarios.

The ‘perf record’ command captures the system state for all CPUs into a perf.data file. -g enables call-graph collection, and -p allows profiling a single process.

The ‘perf report’ command displays the report.
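The exact commands used aren’t shown in the post; typical invocations for this scenario would look something like the following, where the PID is the java process from the report below and the 30-second window is arbitrary:

perf record -a -g -- sleep 30       # profile all CPUs with call graphs for 30 seconds
perf record -g -p 3284 -- sleep 30  # or attach to the java process only
perf report                         # browse the recorded samples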

Samples: 18K of event ‘cpu-clock’, Event count (approx.): 18445, Thread: java(3284), DSO: [kernel.kallsyms]
58.66% java [k] _spin_lock
31.82% java [k] find_inode
2.66% java [k] _spin_unlock_irqrestore
2.44% java [k] mutex_spin_on_owner

Here we can see that considerable time is spent in the spinlock and find_inode code for the java application.

While the investigation was going on, the system crashed and dumped a vmcore. A vmcore is a memory dump of the system captured by tools like kdump.

I downloaded the debuginfo file and extracted the vmlinux to analyse the vmcore.

# wget http://debuginfo.centos.org/6/x86_64/kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm
# rpm2cpio kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm |cpio -idv ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux

Then I ran the following command.

# crash ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux /var/crash/127.0.0.1-2014-08-07-17\:56\:19/vmcore

KERNEL: ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2014-08-07-17:56:19/vmcore [PARTIAL DUMP]
CPUS: 8
DATE: Thu Aug 7 17:56:17 2014
UPTIME: 1 days, 13:08:01
LOAD AVERAGE: 91.11, 91.54, 98.02
TASKS: 1417
NODENAME: xxxxx
RELEASE: 2.6.32-431.20.5.el6.x86_64
VERSION: #1 SMP Fri Jul 25 08:34:44 UTC 2014
MACHINE: x86_64 (2000 Mhz)
MEMORY: 12 GB
PANIC: “Oops: 0010 [#1] SMP ” (check log for details)
PID: 11233
COMMAND: “java”
TASK: ffff88019706b540 [THREAD_INFO: ffff880037a90000]
CPU: 6
STATE: TASK_RUNNING (PANIC)

From the vmcore I see that the dtracedrv module was loaded and unloaded (possibly for running dtrace). This resulted in several warnings (the first warning, from ftrace, is expected), and then the kernel panicked as memory got corrupted. The instruction pointer is corrupted, which points to memory corruption. It looks like the panic was triggered by the dtrace module.

/tmp/dtrace/linux-master/build-2.6.32-431.20.5.el6.x86_64/driver/dtrace.c:dtrace_ioctl:16858: assertion failure buf->dtb_xamot != cached
Pid: 8442, comm: dtrace Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Pid: 3481, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Call Trace:
[] ? dump_cpu_stack+0x3d/0x50 [dtracedrv]
[] ? generic_smp_call_function_interrupt+0x90/0x1b0
[] ? smp_call_function_interrupt+0x27/0x40
[] ? call_function_interrupt+0x13/0x20
[] ? _spin_lock+0x1e/0x30
[] ? __mark_inode_dirty+0x6c/0x160
[] ? __set_page_dirty_nobuffers+0xdd/0x160
[] ? nfs_mark_request_dirty+0x1a/0x40 [nfs]
[] ? nfs_updatepage+0x3d2/0x560 [nfs]
[] ? nfs_write_end+0x152/0x2b0 [nfs]
[] ? iov_iter_copy_from_user_atomic+0x92/0x130
[] ? generic_file_buffered_write+0x18a/0x2e0
[] ? nfs_refresh_inode_locked+0x3e1/0xbd0 [nfs]
[] ? __generic_file_aio_write+0x260/0x490
[] ? __put_nfs_open_context+0x58/0x110 [nfs]
[] ? dtrace_vcanload+0x20/0x1a0 [dtracedrv]
[..]
BUG: unable to handle kernel paging request at ffffc90014fb415e
IP: [] 0xffffc90014fb415e
PGD 33c2b5067 PUD 33c2b6067 PMD 3e688067 PTE 0
Oops: 0010 [#1] SMP
last sysfs file: /sys/devices/system/node/node0/meminfo
CPU 6
Modules linked in: cpufreq_stats freq_table nfs fscache nfsd lockd nfs_acl auth_rpcgss sunrpc exportfs ipv6 ppdev parport_pc parport microcode vmware_balloon sg vmxnet3 i2c_piix4 i2c_core shpchp ext4 jbd2 mbcache sd_mod crc_t10dif vmw_pvscsi pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: dtracedrv]

Pid: 11233, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
RIP: 0010:[] [] 0xffffc90014fb415e
RSP: 0018:ffff880037a91f70 EFLAGS: 00010246
RAX: 0000000000000001 RBX: 0000000000000219 RCX: ffff880037a91d40
RDX: 0000000000000001 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 00007fba9a67f4c0 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 00000000000003ff R12: 000000000001d4c0
R13: 0000000000000219 R14: 00007fb96feb06e0 R15: 00007fb96feb06d8
FS: 00007fb96fec1700(0000) GS:ffff880028380000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffffc90014fb415e CR3: 000000031e49e000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process java (pid: 11233, threadinfo ffff880037a90000, task ffff88019706b540)
Stack:
0000000000000000 0000000000002be1 ffffffff8100b072 0000000000000293
000000000000ebe6 0000000000002be1 0000000000000000 0000000000000007
00000030692df333 000000000001d4c0 0000000000000001 00007fb96feb06d8
Call Trace:
[] ? system_call_fastpath+0x16/0x1b
Code: Bad RIP value.
RIP [] 0xffffc90014fb415e
RSP
CR2: ffffc90014fb415e
crash>

This allowed me to have a look at the CPU usage issue happening in the system. Another way to capture a vmcore is to manually panic the system using sysrq + c, as sketched below.
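A minimal sketch of that manual trigger, to be used only on a box where kdump is configured, since it deliberately crashes the kernel:

echo 1 > /proc/sys/kernel/sysrq    # make sure sysrq is enabled
echo c > /proc/sysrq-trigger       # force a panic; kdump then writes the vmcore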

None of the runnable and uninterruptible-sleep processes had been running for a long time.

Looking at the oldest D state process..

crash> bt 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
#0 [ffff88027f3dfd88] schedule at ffffffff815287f0
#1 [ffff88027f3dfe50] __mutex_lock_killable_slowpath at ffffffff8152a0ee
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
#5 [ffff88027f3dff80] system_call_fastpath at ffffffff8100b072
RIP: 00000030692a90e5 RSP: 00007fa0586c51e0 RFLAGS: 00000206
RAX: 000000000000004e RBX: ffffffff8100b072 RCX: 00007fa0cd2cf000
RDX: 0000000000008000 RSI: 00007fa0bc0de9a8 RDI: 00000000000001f6
RBP: 00007fa0bc004cd0 R8: 00007fa0bc0de9a8 R9: 00007fa0cd2fce58
R10: 00007fa0cd2fcaa8 R11: 0000000000000246 R12: 00007fa0bc004cd0
R13: 00007fa0586c5460 R14: 00007fa0cd2cf1c8 R15: 00007fa0bc0de980
ORIG_RAX: 000000000000004e CS: 0033 SS: 002b

Looking at its stack..

crash> bt -f 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
[..]
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
ffff88027f3dfec8: ffff88027f3dfed8 ffff8801401e1600
ffff88027f3dfed8: ffff88027f3dff28 ffffffff8119f834
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
ffff88027f3dfee8: ffff88027f3dff08 ffffffff81196826
ffff88027f3dfef8: 00000000000001f6 00007fa0bc0de9a8
ffff88027f3dff08: ffff8801401e1600 0000000000008000
ffff88027f3dff18: 00007fa0bc004cd0 ffffffffffffffa8
ffff88027f3dff28: ffff88027f3dff78 ffffffff8119f9f9
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
ffff88027f3dff38: 00007fa0bc0de9a8 0000000000000000
ffff88027f3dff48: 0000000000008000 0000000000000000
ffff88027f3dff58: 00007fa0bc0de980 00007fa0cd2cf1c8
ffff88027f3dff68: 00007fa0586c5460 00007fa0bc004cd0
ffff88027f3dff78: 00007fa0bc004cd0 ffffffff8100b072

crash> vfs_readdir
vfs_readdir = $4 =
{int (struct file *, filldir_t, void *)} 0xffffffff8119f7b0
crash>

crash> struct file 0xffff8801401e1600
struct file {
f_u = {
fu_list = {
next = 0xffff88033213fce8,
prev = 0xffff88031823d740
},
fu_rcuhead = {
next = 0xffff88033213fce8,
func = 0xffff88031823d740
}
},
f_path = {
mnt = 0xffff880332368080,
dentry = 0xffff8802e2aaae00
},

[..]

crash> mount|grep ffff880332368080
ffff880332368080 ffff88033213fc00 nfs nanas1a.m-qube.com:/vol/test /scratch/test/test.deploy/test/test-internal

The process was waiting while reading from the above NFS mount.

The following process seems to be the culprit.

crash> bt 9104
PID: 9104 TASK: ffff8803323c8ae0 CPU: 0 COMMAND: “java”
#0 [ffff880028207e90] crash_nmi_callback at ffffffff8102fee6
#1 [ffff880028207ea0] notifier_call_chain at ffffffff8152e435
#2 [ffff880028207ee0] atomic_notifier_call_chain at ffffffff8152e49a
#3 [ffff880028207ef0] notify_die at ffffffff810a11ce
#4 [ffff880028207f20] do_nmi at ffffffff8152c0fb
#5 [ffff880028207f50] nmi at ffffffff8152b9c0
[exception RIP: _spin_lock+30]
RIP: ffffffff8152b22e RSP: ffff88001d209b88 RFLAGS: 00000206
RAX: 0000000000000004 RBX: ffff88005823dd90 RCX: ffff88005823dd78
RDX: 0000000000000000 RSI: ffffffff81fd0820 RDI: ffffffff81fd0820
RBP: ffff88001d209b88 R8: ffff88017b9cfa90 R9: dead000000200200
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88005823dd48
R13: ffff88001d209c68 R14: ffff8803374ba4f8 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
— —
#6 [ffff88001d209b88] _spin_lock at ffffffff8152b22e
#7 [ffff88001d209b90] _atomic_dec_and_lock at ffffffff81283095
#8 [ffff88001d209bc0] iput at ffffffff811a5aa0
#9 [ffff88001d209be0] dentry_iput at ffffffff811a26c0
#10 [ffff88001d209c00] d_kill at ffffffff811a2821
#11 [ffff88001d209c20] __shrink_dcache_sb at ffffffff811a2bb6
#12 [ffff88001d209cc0] shrink_dcache_parent at ffffffff811a2f64
#13 [ffff88001d209d30] proc_flush_task at ffffffff811f9195
#14 [ffff88001d209dd0] release_task at ffffffff81074ec8
#15 [ffff88001d209e10] wait_consider_task at ffffffff81075cc6
#16 [ffff88001d209e80] do_wait at ffffffff810760f6
#17 [ffff88001d209ee0] sys_wait4 at ffffffff810762e3
#18 [ffff88001d209f80] system_call_fastpath at ffffffff8100b072

From upstream kernel source..

/**
* iput – put an inode
* @inode: inode to put
*
* Puts an inode, dropping its usage count. If the inode use count hits
* zero, the inode is then freed and may also be destroyed.
*
* Consequently, iput() can sleep.
*/
void iput(struct inode *inode)
{
if (inode) {
BUG_ON(inode->i_state & I_CLEAR);

if (atomic_dec_and_lock(&inode->i_count, &inode->i_lock))
iput_final(inode);
}
}
EXPORT_SYMBOL(iput);

#include
/**
* atomic_dec_and_lock – lock on reaching reference count zero
* @atomic: the atomic counter
* @lock: the spinlock in question
*
* Decrements @atomic by 1. If the result is 0, returns true and locks
* @lock. Returns false for all other cases.
*/
extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
#define atomic_dec_and_lock(atomic, lock) \
__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))

#endif /* __LINUX_SPINLOCK_H */

It looks like the process was trying to shrink the dentry cache and was holding the spinlock while dropping an inode associated with it. This resulted in other processes waiting on the spinlock, causing the high %system utilization.

When the system again showed high %sys usage, I checked and found a large slab cache.

[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 4505788 kB
SReclaimable: 4313672 kB
SUnreclaim: 192116 kB

Checking the slabs on a running system using slabtop, I saw that nfs_inode_cache was the top consumer.

ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
[..]
2793624 2519618 90% 0.65K 465604 6 1862416K nfs_inode_cache

I ran ‘sync’ and then ‘echo 2 > /proc/sys/vm/drop_caches’ to drop the dcache, which fixed the high %sys usage in the system.

[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:04:45 AM CPU %user %nice %system %iowait %steal %idle
11:04:46 AM all 1.51 0.00 13.22 0.50 0.00 84.76
11:04:47 AM all 1.25 0.00 12.55 0.13 0.00 86.07
11:04:48 AM all 1.26 0.00 8.83 0.25 0.00 89.66
11:04:49 AM all 1.63 0.00 11.93 0.63 0.00 85.80
^C
[root@xxxxx ~]# sync
[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:23 AM CPU %user %nice %system %iowait %steal %idle
11:05:24 AM all 1.50 0.00 13.03 0.75 0.00 84.71
11:05:25 AM all 1.76 0.00 9.69 0.25 0.00 88.30
11:05:26 AM all 1.51 0.00 9.80 0.25 0.00 88.44
11:05:27 AM all 1.13 0.00 10.03 0.25 0.00 88.60
^C
[root@xxxxx ~]# echo 2 > /proc/sys/vm/drop_caches
[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 67660 kB

[root@prod-smsgw4 ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:58 AM CPU %user %nice %system %iowait %steal %idle
11:05:59 AM all 1.64 0.00 1.38 0.13 0.00 96.86
11:06:00 AM all 2.64 0.00 1.38 0.38 0.00 95.60
11:06:01 AM all 2.02 0.00 1.89 0.25 0.00 95.84
11:06:02 AM all 2.03 0.00 1.39 4.68 0.00 91.90
11:06:03 AM all 8.21 0.00 2.27 2.65 0.00 86.87
11:06:04 AM all 1.63 0.00 1.38 0.13 0.00 96.86
11:06:05 AM all 2.64 0.00 1.51 0.25 0.00 95.60

From kernel documentation,

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches

The Java application was traversing the NFS mount and accessing a large number of files, resulting in a large number of nfs_inode_cache entries and hence a large dcache.

Tuning vm.vfs_cache_pressure would be a persistent solution for this.

From kernel documentation,

vfs_cache_pressure
——————

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a “fair” rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.
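A minimal sketch of that tuning; the value 200 is purely illustrative, as anything above the default of 100 simply biases the kernel towards reclaiming dentries and inodes:

# apply immediately
sysctl -w vm.vfs_cache_pressure=200
# persist across reboots
echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf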

Categories: DBA Blogs

NZOUG14 Beckons

Fri, 2014-10-17 07:50

New Zealand is famous for Kiwis, pristine landscapes, and the New Zealand Oracle User Group (NZOUG) conference. New Zealand is the location of choice when it comes to making Lord of the Rings and making Oracle the Lord of the Databases.

NZOUG 2014 will be held 19–21 November in the Owen G. Glenn Building at the University of Auckland. The main conference will be held on the 20th and 21st, preceded by a day of workshops on the 19th. It’s one of the premier Oracle conferences in the Southern Hemisphere.

Where there is Oracle, there is Pythian. Pythian will be present in full force in NZOUG 2014.

The following are the Pythian sessions at NZOUG14:

12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management
Fahd Mirza Chughtai

Everyone Talks About DR – But Why So Few Implement It
Francisco Munoz Alvarez

DBA 101: Calling All New Database Administrators
Gustavo Rene Antunez

My First 100 Days with an Exadata
Gustavo Rene Antunez

Do You Really Know the Index Structures?
Deiby Gómez

Oracle Exadata: Storage Indexes vs Conventional Indexes
Deiby Gómez

Oracle 12c Test Drive
Francisco Munoz Alvarez

Why Use OVM for Oracle Database
Francisco Munoz Alvarez

Please check the full agenda of NZOUG14 here.

Categories: DBA Blogs

Log Buffer #393, A Carnival of the Vanities for DBAs

Fri, 2014-10-17 07:47

Bloggers connect both databases and their readers through their blogs, acting as a bridge between the two. Log Buffer extends this nexus through the Log Buffer Edition.

Oracle:

MS Sharepoint and Oracle APEX integration.

Just a couple of screenshots of sqlplus+rlwrap+cygwin+console.

Say “Big Data” One More Time (I dare you!).

Update OEM Harvester after 12.1.0.4 Upgrade.

Insight in the Roadmap for Oracle Cloud Platform Services.

SQL Server:

Troubleshoot SQL P2P replication doesn’t replicate DDL schema change.

Set-based Constraint Violation Reporting in SQL Server.

Where do you start fixing a SQL Server crash when there isn’t a single clue?

A permission gives a principal access to an object to perform certain actions on or with the object.

When you can’t get to your data because another application has it locked, a thorough knowledge of SQL Server concurrency will give you the confidence to decide what to do.

MySQL:

MySQL 5.7.5- More variables in replication performance_schema tables.

Multi-source replication for MySQL has been released as a part of 5.7.5-labs-preview downloadable from labs.mysql.com.

How to install multiple MySQL instances on a single host using MyEnv?

Percona Toolkit for MySQL with MySQL-SSL Connections.

InnoDB: Supporting Page Sizes of 32k and 64k.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 4

Thu, 2014-10-16 09:11

Today’s blog post is part four of seven in a series dedicated to Deploying a Private Cloud at Home, where I will be demonstrating how to configure the Image and Compute services on the controller node. See my previous blog post, where we began configuring the Keystone Identity Service.

  1. Install the Imaging service
    yum install -y openstack-glance python-glanceclient
  2. Configure Glance (Imaging Service) to use MySQL database
    openstack-config --set /etc/glance/glance-api.conf database connection \
    mysql://glance:Your_Password@controller/glance
    openstack-config --set /etc/glance/glance-registry.conf database connection \
    mysql://glance:Your_Password@controller/glance
  3. Create Glance database user by running below queries on your MySQL prompt as root
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'Your_Password';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Your_Password';
  4. Create the database tables for the Image Service
    su -s /bin/sh -c "glance-manage db_sync" glance
  5. Create Glance user to communicate to OpenStack services and Identity services
    keystone user-create --name=glance --pass=Your_Password --email=Your_Email
    keystone user-role-add --user=glance --tenant=service --role=admin
  6. Configuration of Glance config files
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password Your_Password
    openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password Your_Password
    openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
  7. Register the Image Service with the Identity service
    keystone service-create --name=glance --type=image --description="OpenStack Image Service"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ image / {print $2}') \
      --publicurl=http://controller:9292 \
      --internalurl=http://controller:9292 \
      --adminurl=http://controller:9292
  8. Start the Glance-api and Glance-registry services and enable them to start at startup
    service openstack-glance-api start
    service openstack-glance-registry start
    chkconfig openstack-glance-api on
    chkconfig openstack-glance-registry on
  9. Download CirrOS cloud image which is created for testing purpose
    wget -q http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img \
    -O /root/cirros-0.3.2-x86_64-disk.img
  10. Upload the image to Glance using admin account
    source /root/admin-openrc.sh
    glance image-create --name "cirros-0.3.2-x86_64" \
    --disk-format qcow2 \
    --container-format bare \
    --is-public True \
    --progress < /root/cirros-0.3.2-x86_64-disk.img
  11. Install Compute controller service on controller node
    yum install -y openstack-nova-api openstack-nova-cert \
    openstack-nova-conductor openstack-nova-console \
    openstack-nova-novncproxy openstack-nova-scheduler \
    python-novaclient
  12. Configure compute service database
    openstack-config --set /etc/nova/nova.conf database connection mysql://nova:Your_Password@controller/nova
  13. Configure compute service configuration file
    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip Controller_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen Controller_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address Controller_IP
  14. Create nova database user by running below queries on your MySQL prompt as root
    CREATE DATABASE nova;
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Your_Password';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Your_Password';
  15. Create Compute service tables
    su -s /bin/sh -c "nova-manage db sync" nova
  16. Create a nova user that Compute uses to authenticate with the Identity Service
    keystone user-create --name=nova --pass=Your_Password --email=Your_Email
    keystone user-role-add --user=nova --tenant=service --role=admin
  17. Configure Compute to use these credentials with the Identity Service running on the controller
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password Your_Password
  18. Register Compute with the Identity Service
    keystone service-create --name=nova --type=compute --description="OpenStack Compute"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
      --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
      --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
      --adminurl=http://controller:8774/v2/%\(tenant_id\)s
  19. Now start Compute services and configure them to start when the system boots
    service openstack-nova-api start
    service openstack-nova-cert start
    service openstack-nova-consoleauth start
    service openstack-nova-scheduler start
    service openstack-nova-conductor start
    service openstack-nova-novncproxy start
    chkconfig openstack-nova-api on
    chkconfig openstack-nova-cert on
    chkconfig openstack-nova-consoleauth on
    chkconfig openstack-nova-scheduler on
    chkconfig openstack-nova-conductor on
    chkconfig openstack-nova-novncproxy on
  20. You can verify your configuration and list available images
    source /root/admin-openrc.sh
    nova image-list
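    If everything is wired up correctly, the output should look roughly like this (the image ID below is illustrative and will differ on your system):

    +--------------------------------------+---------------------+--------+--------+
    | ID                                   | Name                | Status | Server |
    +--------------------------------------+---------------------+--------+--------+
    | acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE |        |
    +--------------------------------------+---------------------+--------+--------+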

 

This concludes the initial configuration of the controller node before the configuration of the compute node. Stay tuned for part five, where I will demonstrate how to configure the compute node.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 3

Tue, 2014-10-14 14:59

Today’s blog post is part three of seven in a series dedicated to Deploying Private Cloud at Home, where I will demonstrate how to configure OpenStack Identity service on the controller node. We have already configured the required repo in part two of the series, so let’s get started on configuring Keystone Identity Service.

  1. Install keystone on the controller node.
    yum install -y openstack-keystone python-keystoneclient

    OpenStack uses a message broker to coordinate operations and status information among services. The message broker service typically runs on the controller node. OpenStack supports several message brokers, including RabbitMQ, Qpid, and ZeroMQ. I am using Qpid as it is available on most distros.

  2. Install Qpid Messagebroker server.
    yum install -y qpid-cpp-server

    Now modify the qpid configuration file to disable authentication by changing the below line in /etc/qpidd.conf

    auth=no

    Now start and enable qpid service to start on server startup

    chkconfig qpidd on
    service qpidd start
  3. Now configure keystone to use MySQL database
    openstack-config --set /etc/keystone/keystone.conf \
       database connection mysql://keystone:YOUR_PASSWORD@controller/keystone
  4. Next create the keystone database user by running the below queries at your MySQL prompt as root.
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'YOUR_PASSWORD';
  5. Now create database tables
    su -s /bin/sh -c "keystone-manage db_sync" keystone

    Currently we don’t have any user accounts that can communicate with the OpenStack services and the Identity Service, so we will set up an authorization token to use as a shared secret between the Identity Service and the other OpenStack services, and store it in the configuration file.

    ADMIN_TOKEN=$(openssl rand -hex 10)
    echo $ADMIN_TOKEN
    openstack-config --set /etc/keystone/keystone.conf DEFAULT \
       admin_token $ADMIN_TOKEN
  6. Keystone uses PKI tokens by default. Now create the signing keys and certificates to restrict access to the generated data
    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
    chown -R keystone:keystone /etc/keystone/ssl
    chmod -R o-rwx /etc/keystone/ssl
  7. Start and enable the keystone identity service to begin at startup
    service openstack-keystone start
    chkconfig openstack-keystone on

    The Keystone Identity Service also stores expired tokens in the database. We will create the below crontab entry to purge the expired tokens

    (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
    echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
  8. Now we will create admin user for keystone and define roles for admin user
    export OS_SERVICE_TOKEN=$ADMIN_TOKEN
    export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
    keystone user-create --name=admin --pass=Your_Password --email=Your_Email
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-add --user=admin --role=_member_ --tenant=admin
    keystone user-create --name=pythian --pass=Your_Password --email=Your_Email
    keystone tenant-create --name=pythian --description="Pythian Tenant"
    keystone user-role-add --user=pythian --role=_member_ --tenant=pythian
    keystone tenant-create --name=service --description="Service Tenant"
  9. Now we create a service entry for the identity service
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://controller:5000/v2.0 \
    --internalurl=http://controller:5000/v2.0 \
    --adminurl=http://controller:35357/v2.0
  10. Verify Identity service installation
    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
  11. Request an authentication token by using the admin user and the password you chose for that user
    keystone --os-username=admin --os-password=Your_Password \
      --os-auth-url=http://controller:35357/v2.0 token-get
    keystone --os-username=admin --os-password=Your_Password \
      --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
      token-get
  12. We will save the required parameters in admin-openrc.sh as below
    export OS_USERNAME=admin
    export OS_PASSWORD=Your_Password
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
  13. Next, check that everything is working and that Keystone interacts with the OpenStack services. We will source the admin-openrc.sh file to load the Keystone parameters
    source /root/admin-openrc.sh
  14. List Keystone tokens using:
    keystone token-get
  15. List Keystone users using
    keystone user-list
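    With only the accounts created above, the output should look roughly like this (the IDs are illustrative and will differ):

    +----------------------------------+---------+---------+------------+
    |                id                |   name  | enabled |   email    |
    +----------------------------------+---------+---------+------------+
    | b1caa2a710b44dc5a0a8f8da4a4bb5a3 |  admin  |   True  | Your_Email |
    | 9d9a39fb280a4fd9b9f734f1d2f2e026 | pythian |   True  | Your_Email |
    +----------------------------------+---------+---------+------------+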

If all the above commands give you output, your Keystone Identity Service is all set up, and you can proceed to the next steps. In part four, I will discuss how to configure and set up the Image Service to store images.

Categories: DBA Blogs