
DBA Blogs

Digital Learning: LVC customer quotes

The Oracle Instructor - Mon, 2014-11-03 03:34

LVC

Here’s a collection of customer quotes as a follow-up to my last post about the importance of attitude towards Live Virtual Classes (LVCs). They are from courses that I have taught personally this year with an average delivery score of about 96%:

Oracle Database 12c: Managing Multitenant Architecture

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM

Oracle Database 12c: New Features for Administrators

Exadata Database Machine Administration Workshop

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated

“My first experience of an LVC. Pleased to say it was very positive. Introduction and start on Monday morning was smooth. I would recommend to my colleagues.”

Always important to make a good first impression!

“The whole LVC package just worked. From the comfort of my own environment with a great instructor makes for happy learning :)”

And that is exactly what we strive to deliver.

“Both, host and producer were very professional and guided the students through the course.”

An LVC producer takes care of all technical aspects apart from the course itself, such as access to the learning platform. The instructor appears as “host” on the learning platform.

“Instructor professionally answered students’ questions and kept up a positive mood in the community!”

LVCs can be funny too :-)

“I appreciate the way how the course was presented. Very well controlled time, organization of presentation, exercises. Interaction with us was great. Always ready to answer a question, give an examples to difficult topic, illustrating topics.”

So much for the allegedly missing interaction in LVCs.

“I work few years on RAC databases, my knowledge was not so clear regarding some topic on RAC and Grid after completing this training I’m sure that I will handle our RAC and Grid environment differently and for sure will have positive impact in our production environment. Great thank!”

You cannot top that with a classroom course either :-)

“LVC is offering great and flexible way to gain knowledge without travel or hotels etc.” “LVCs reduce travel costs and help the students to manage their time on their own, i.e. to join the classes from home and focus on the presented content.”

Trust me, I didn’t make up the last two although they may sound like your manager talking – or mine, for that matter ;-)


Tagged: Digital Learning, LVC
Categories: DBA Blogs

Does Increasing An Oracle Background Process OS Priority Improve Performance?

Does increasing an Oracle Database background process operating system priority improve performance? As you might expect, the answer is, "It depends."

In this posting I will explain the results of an experiment where I increase the Oracle Database 12c log writer background processes operating system priority.

In my experiment I created a clear CPU bottleneck and the top wait event was log file parallel write. I gathered some data. Next I increased all the log writer background process priorities. Again, I gathered some data and then I analyzed the two data sets. The results were disappointing, not surprising, but a fundamental rule in performance tuning was demonstrated and reinforced.

You can download the "analysis pack" which contains the raw experimental data, histogram, statistical R results and the R statical package commands HERE. You can download total time delta reporting script (ttpctx.sql) I show below, which is contained within my OraPub System Monitor (OSM) Toolkit HERE.

Before I could gather some data, I needed to create the appropriate system load, the data gather scripts and the data analysis scripts. Here's a quick overview of each.

Increasing The LGWR Processes Priority
If you are not familiar with changing Oracle Database background OS process priority, I blogged about how to do this HERE.

My experiment consisted of creating two situations and statistically comparing them to see if increasing the LGWR background process OS priority increased performance. The only difference in the "two situations" was the instance parameter, _high_priority_processes. For the "normal priority" part of the experiment, the default "LMS*|VKTM" was used. For the "high priority" part of the experiment the parameter was changed to "LMS*|VKTM|LG*". The "LG*" caused the increase in the Linux OS priority of all log writer processes from the default 19 to 41.

Ready for some version specifics? Oracle continues to make OS priority related instance parameter changes... even within 12c releases. Since this experiment was done with Oracle Database version 12.1.0.1.0 the parameter default was "LMS*|VKTM" not "LMS*" as with version 12.1.0.2.0. Also, in 12.1.0.2.0 VKTM is placed into a new parameter, _highest_priority_processes.
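
For reference, here is a minimal sketch of how such a change is typically applied and verified. This is an illustration under stated assumptions, not a step copied from the experiment itself: the parameter value must match your version's default as discussed above, and hidden parameters should only be touched on test systems.

SQL> -- hedged sketch: append LG* to the 12.1.0.1 default, then bounce the instance
SQL> alter system set "_high_priority_processes"='LMS*|VKTM|LG*' scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> -- then verify from an OS shell that the log writers now run at the elevated
SQL> -- priority, e.g.: ps -eo pid,class,rtprio,ni,comm | grep ora_lg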

Generating The DML With CPU Bottleneck Load
To generate the DML workload, I used my OP Load Generator. You can download it HERE. It's quick, easy and I'm familiar with it. I kept increasing the number of DML processes until the CPU utilization was pegged at 100%. While the database server was clearly out of CPU power, the log writer background processes were rarely seen as the top CPU consuming process. I used the "top" program to monitor the process activity. The top CPU consuming processes were almost always the Oracle server/foreground/shadow processes. As I'll explain in the Results section below, this has significant ramifications on the results.

Oracle Time Based Analysis Summary
The data collection script was similar to the one I used back in 2012 when collecting data regarding Oracle's commit write facility. Essentially, I collected multiple three minute samples of the delta elapsed time, user commits, total non-idle wait time and CPU consumption. In both the normal and high priority runs, around 99% of the database time was CPU consumption and 1% of the time Oracle non-idle wait time. Also the top wait event (~70%) was log file parallel write (display name: log file redo write).

If you want to master Oracle Time Based Analysis, check out my online seminar, Tuning Oracle Using An AWR Report. It will teach you how to use an AWR report to optimize Oracle performance so users will feel the difference.

The OraPub System Monitor Toolkit script ttpctx.sql report below was taken during the "normal priority" log writer load.

SQL> @ttpctx

Database: prod35 16-MAY-14 06:07pm
Report: ttpctx.sql OSM by OraPub, Inc. Page 1
Total Time Activity (39 sec interval)

                                                          Avg Time        Time     Wait
Time Component                           % TT    % WT Waited (ms)       (sec) Count(k)
------------------------------------- ------- ------- ----------- ----------- --------
CPU consumption: Oracle SP + BG procs   98.91    0.00       0.000     238.716        0
log file redo write                      0.73   67.56       8.082       1.770        0
control file parallel write              0.18   16.41      33.077       0.430        0
target log write size                    0.05    4.20       6.111       0.110        0
oracle thread bootstrap                  0.03    3.05      40.000       0.080        0
os thread creation                       0.02    1.53      20.000       0.040        0
commit: log file sync                    0.01    0.76      10.000       0.020        0
enq: CR - block range reuse ckpt         0.01    0.76      20.000       0.020        0
Disk file operations I/O                 0.00    0.00       0.000       0.000        0

The OSM report below was taken during the "high priority" log writer load.

SQL> @ttpctx

Database: prod35 16-MAY-14 09:25pm
Report: ttpctx.sql OSM by OraPub, Inc. Page 1
Total Time Activity (41 sec interval)

                                                          Avg Time        Time     Wait
Time Component                           % TT    % WT Waited (ms)       (sec) Count(k)
------------------------------------- ------- ------- ----------- ----------- --------
CPU consumption: Oracle SP + BG procs   98.92    0.00       0.000     238.733        0
log file redo write                      0.83   77.01       8.272       2.010        0
control file parallel write              0.08    7.28      14.615       0.190        0
target log write size                    0.05    4.98       5.909       0.130        0
oracle thread bootstrap                  0.03    3.07      40.000       0.080        0
os thread creation                       0.02    1.92      25.000       0.050        0
commit: log file sync                    0.01    0.77      10.000       0.020        0
enq: CR - block range reuse ckpt         0.01    0.77      20.000       0.020        0
enq: RO - fast object reuse              0.00    0.38      10.000       0.010        0


Data Collection
For the normal priority load, 27 three-minute samples were collected. For the high priority situation, 30 three-minute samples were collected. (I forget why only 27 samples were collected for the normal priority.) I collected the elapsed time, total non-idle wait time, total CPU consumption (v$sys_time_model: DB CPU + background cpu time) and total user commits.
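
To illustrate the kind of collection involved, below is a hedged sketch of a single snapshot query: run it at the start and end of each interval and subtract the two results to get the deltas. The view and statistic names are standard, but the actual collection scripts used in the experiment are in the analysis pack.

-- one cumulative snapshot; diff two snapshots to get per-interval deltas
select (select sum(value) from v$sys_time_model
         where stat_name in ('DB CPU','background cpu time')) cpu_microsec,
       (select sum(time_waited_micro) from v$system_event
         where wait_class <> 'Idle')                          nonidle_wait_microsec,
       (select value from v$sysstat
         where name = 'user commits')                         user_commits
from dual;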

In this experiment more user commits processed per second means better performance.


Experimental Results
I used the free statistics package "R" (www.r-project.org) to analyze the data. I demonstrate how to get, install and use "R" in my online video seminar, Using Skewed Data To Your Advantage.

With the normal log writer process priority, an average of 984.5 commits/sec and a median of 983.0 commits/sec occurred. With the LG* high process priority, an average of 993.6 commits/sec and a median of 991.0 commits/sec occurred. While the "high priority" situation was able to process more commits per second, is this statistically significant?

The red "smoothed" histogram is the normal priority situation and the blue smoothed histogram is when the log writers were set to the higher priority. The more separated the two histograms the more "different" the sample sets, the more likely there is a statistically significant difference and the more likely a user would feel the difference. Looking at the above histograms plot, there does not appear to be a real difference. But let's do a proper significance test!

Because both sample sets are normally distributed (details are in the Analysis Pack), I could use a simple t-test. R produced a p-value of 0.04451. To be statistically "different" I want the p-value to be less than 0.05 and it is. What does this mean?

While statistically and numerically the commit rates are different, I wouldn't expect any special performance tuning award! In fact, the hassles with cycling a production instance and setting underscore/hidden parameters would make it very unlikely I would increase the OS priority of the log writer background processes. I want to see a big performance difference.
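
As an aside, if you load the two sample sets into the database, Oracle itself can run an independent-samples t-test with its STATS_T_TEST_INDEP aggregate. A hedged sketch, assuming a hypothetical table commit_samples with a priority_run column holding 'NORMAL' or 'HIGH' and a commits_per_sec column; note this variant assumes equal variances (a pooled t-test), so its p-value may differ slightly from R's default Welch test.

-- commit_samples is a hypothetical table loaded from the raw experimental data
select stats_t_test_indep(priority_run, commits_per_sec, 'STATISTIC')     t_statistic,
       stats_t_test_indep(priority_run, commits_per_sec, 'TWO_SIDED_SIG') p_value
from   commit_samples;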

To Summarize... What I Learned... Again
This situation is a perfect example of focusing on the wrong thing! While there is a clear operating system CPU bottleneck and the top wait event is about redo, the log writers are not suffering from a want/need of CPU resources. I suspect the server processes want more CPU resources, but they are NOT the processes whose OS priority we increased.

If the log writers were suffering from a lack of CPU resources and fighting for CPU resources, I would expect to see them consuming CPU resources along with the Oracle server processes. And I would definitely expect to see them near the top of the "top" process monitor... especially when their priority has been increased!

Because of this "misguided" tuning effort, this experiment does not build a case for or against changing the log writer priority. What it reinforces is in our quest to optimize performance, make sure we focus on the right thing.

As a side note, this is a great statistical analysis example for two reasons. First, our sample sets look similar, but statistically they are not. Second, while they are statistically different, the performance impact will not be very different. And my guess is the users won't feel a thing... except frustration and anger.

To Super Summarize
When considering increasing a background process's operating system priority, make sure the process is in need of CPU and is not able to get it. In this situation, the DBA could have been misled by the CPU bottleneck. But upon closer inspection of the log writers from an operating system perspective, and knowing that the wait event "log file parallel write" is probably more about IO than CPU (the Oracle perspective), it would be unlikely that increasing the log writer processes' OS priority would help increase the commits per second.

Thanks for reading!

Craig.

Categories: DBA Blogs

StatsPack and AWR Reports -- Bits and Pieces -- 2

Hemant K Chitale - Sat, 2014-11-01 08:52
This is the second post in a series on reading StatsPack and AWR reports.
(The first is available here)


Comparing Reports :

Here are two 9.2 StatsPack extracts from one database:

Extract A  : 9.2 StatsPack
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 56,031.63 3,084.68
Logical reads: 68,286.24 3,759.32
Block changes: 314.88 17.33
Physical reads: 842.92 46.40
Physical writes: 134.76 7.42
User calls: 271.32 14.94
Parses: 146.46 8.06
Hard parses: 7.37 0.41
Sorts: 93.83 5.17
Logons: 0.33 0.02
Executes: 296.70 16.33
Transactions: 18.16

Extract B : 9.2 StatsPack
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 89,615.76 2,960.48
Logical reads: 210,302.81 6,947.42
Block changes: 541.83 17.90
Physical reads: 1,465.04 48.40
Physical writes: 161.68 5.34
User calls: 213.82 7.06
Parses: 125.28 4.14
Hard parses: 6.13 0.20
Sorts: 104.31 3.45
Logons: 0.35 0.01
Executes: 664.81 21.96
Transactions: 30.27

Extract B shows a higher volume of activity -- Transactions per second, Redo Size per second, Logical Reads per second.
Actually, both are from the same database, but for two different time-windows on different dates. The fact that the time-windows aren't the same makes the comparison difficult. Extract A is for an 11-hour window on day 1 and Extract B is for a 6-hour window on day 6, so the two windows aren't strictly comparable. You have to be very careful when comparing StatsPack or AWR reports, even from the same database, if they don't cover comparable time-windows with similar loads. The first report includes non-peak hours; the second is for peak hours only. The level of activity averaged over 11 hours that include non-peak hours isn't strictly comparable with a 6-hour peak window.
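
If you need to check which windows a StatsPack report actually covers, the snapshot times are recorded in the repository. A hedged sketch against the PERFSTAT schema (object names are per 9.2 StatsPack; verify against your version):

select snap_id, snap_time
from   perfstat.stats$snapshot
order  by snap_time;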

Extract C : 10.2 AWR
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 520,776.15 50,948.36
Logical reads: 353,525.71 34,585.98
Block changes: 1,854.93 181.47
Physical reads: 14,285.23 1,397.55
Physical writes: 295.84 28.94
User calls: 1,265.14 123.77
Parses: 822.64 80.48
Hard parses: 15.40 1.51
Sorts: 168.09 16.44
Logons: 0.16 0.02
Executes: 1,040.31 101.78
Transactions: 10.22

Extract D : 10.2 AWR
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 517,862.01 54,681.39
Logical reads: 288,341.09 30,446.12
Block changes: 1,879.27 198.43
Physical reads: 12,820.96 1,353.77
Physical writes: 323.90 34.20
User calls: 1,115.78 117.82
Parses: 719.39 75.96
Hard parses: 17.28 1.82
Sorts: 95.74 10.11
Logons: 0.15 0.02
Executes: 935.33 98.76
Transactions: 9.47
Extracts C and D are from the same database for the same time-window on 2 consecutive workdays.  They are comparable.  A difference of 10% for some statistic may not be significant.  Understand that every business, every activity, every system will have some "normal" fluctuations day-to-day.

.
.
.



Categories: DBA Blogs

The First PASS Summit Bloggers’ Meetup

Pythian Group - Fri, 2014-10-31 13:02

We are stoked to announce the first ever PASS Summit Bloggers’ Meetup!

What: PASS Summit Bloggers’ Meetup 2014
When: Thursday, November 6th, 5pm – 7pm
Where: Sports Restaurant & Bar, 140 4th Ave North, Suite 130, Seattle, WA 98109
How: Please comment with “COUNT ME IN” if coming — we need to know attendance numbers.

We’re excited to meet old friends, and make new ones in Seattle this year. Pythian will sponsor munchies and drinks. There will be a networking contest with some cool prizes, plus everyone will get their very own PASS Summit Bloggers Meetup shirt! Meetups are a great way to make new friends in the community, so come on out — all bloggers are welcome!

If you are planning to attend, please comment here with the phrase “COUNT ME IN”. This will help us ensure we have the attendance numbers right. Please provide your blog URL with your comment — it’s a Bloggers Meetup after all! Make sure you comment here if you are attending so that we have enough room, food, and drinks.

The location is perfect to get ready for the appreciation event — just minutes' walk from the EMP Museum! Have snacks and drinks before the big event and mingle with fellow bloggers. What can be better?

Of course, do not forget to blog and tweet about this year’s bloggers meetup using #Summit14 #sqlpass. See you there!

Categories: DBA Blogs

Speaking at the Spanish Virtual PASS Chapter

Pythian Group - Fri, 2014-10-31 10:09

Title : Recuperación de desastres y soluciones de alta disponibilidad con SQL Server (Disaster Recovery and High Availability Solutions with SQL Server)
Event link: http://globalspanish.sqlpass.org/Inicio.aspx?EventID=1846
Event description: “This presentation covers disaster recovery and high availability solutions with SQL Server, and offers creative scenarios for using those solutions for reporting, BI, and data warehousing.”

Please feel free to register!

Categories: DBA Blogs

Log Buffer #395, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-31 07:44

This Log Buffer Edition ventures out in the fields of Oracle, SQL Server and MySQL and picks some of the coolest blog posts for this week.

Oracle:

In Oracle, if we add a column to a table which is NOT NULL, we are allowed to do it directly, in a single statement, as long as we supply a DEFAULT value to populate any pre-existing rows.

There have been rumblings from the HPC community indicating a general suspicion of and disdain for Big Data technology which would lead one to believe that whatever Google, Facebook and Twitter do with their supercomputers is not important enough to warrant seriousness—that social supercomputing is simply not worthy.

Work-around Instance Migration Limits of BPM Suite 11g.

Oracle Event Processing 12c: java errors when deploying a new OEP project.

Creating a WebLogic 12c Data Source Connection to Pivotal GemFireXD 1.3.

SQL Server:

It sounds simple enough. Either your column will always have a value or it may not. Yet somehow such a seemingly simple decision can become a never-ending debate where database schema begins to resemble superstition and designing effective tables seems more contentious than you expected it to be.

There’s a joke doing the rounds at SQL conferences and seminars: three DBAs walk into a NoSQL bar and leave when they can’t find a table.

Stairway to AlwaysOn Level 3: Infrastructure 101.

How to compare two databases and email the results to every one who needs to know.

Free eBook: Troubleshooting SQL Server: A Guide for the Accidental DBA.

MySQL:

Set up an SSL-encrypted connection between Sphinx and MySQL.

Sometimes users ask for something that doesn’t really make sense. On the first glance. But then you start asking and realize that the user was right, you were wrong, and it is, actually, a perfectly logical and valid use case.

PECL/mysqlnd_ms needs updates for MySQL Group Replication.

Why should you migrate from MySQL to MariaDB?

The Perfect Server – CentOS 7 (Apache2, Dovecot, ISPConfig 3).

Categories: DBA Blogs

Index Advanced Compression vs. Bitmap Indexes (Candidate)

Richard Foote - Thu, 2014-10-30 23:59
A good question from Robert Thorneycroft I thought warranted its own post. He asked: “I have a question regarding bitmapped indexes verses index compression. In your previous blog titled ‘So What Is A Good Cardinality Estimate For A Bitmap Index Column ? (Song 2)’ you came to the conclusion that ‘500,000 distinct values in a 1 […]
Categories: DBA Blogs

Pythian at PASS 2014

Pythian Group - Thu, 2014-10-30 11:40

Join us in Seattle for SQL PASS Summit 2014—planned by and for the Microsoft SQL Server community—where some of our very own will be presenting. You’ll notice that PASS has a funny way of scheduling speaking sessions, so both of our experts are presenting at the same time.

Also be sure to visit us at booth #226 for a chance to win a Sonos Play:1. The Exhibit Hall will be open all day Wednesday and Thursday, and on Friday morning.

IMPORTANT UPDATE: We are proud to announce that we’re hosting the first ever PASS Summit Bloggers Meetup! Join us Thursday night to meet with old friends, and make new ones. RSVP here.

 

Configuring SharePoint 2013 as a Business Intelligence Platform by Edwin Sarmiento
Wednesday November 5 — 1:30-2:45 PM
Room 608

Edwin Sarmiento, a Microsoft MVP, Certified Master, and Principal Consultant in Pythian’s Advanced Technology Group, will be presenting a session called Configuring SharePoint 2013 as a Business Intelligence Platform.

With SharePoint as the business collaboration platform of choice for the enterprise, it has become an important piece in the overall Microsoft Business Intelligence (BI) stack. However, deploying and configuring it to support Microsoft BI solutions has become a cumbersome task for IT experts.

In this session, Edwin will demonstrate what it takes to successfully architect and design SharePoint 2013 as a BI platform. He will cover multiple configuration scenarios and the different technologies necessary to build the infrastructure. Attendees will walk away with the confidence to run Power Pivot, Power View, and Reporting Services in their SharePoint 2013 farms.

 

The Use Cases for In-Memory OLTP by Warner Chaves
Wednesday November 5 — 1:30-2:45 PM
Room 3AB

Warner Chaves, a Microsoft Certified Master and Principal Consultant in Pythian’s SQL Server practice, will be presenting a session called The Use Cases for In-Memory OLTP.

In this session, he will do a short introduction to the In-Memory OLTP feature before diving straight into the use cases where the new lockless/latchless concurrency control and native compilation really shine.

Demos will cover PAGELATCH contention (or lack thereof), use of non-persistent in-memory tables for ETL, and in-memory tables as “shock absorbers” for high throughput environments.

For each case, Warner will do a comparison of “classic” versus in-memory, what gains the audience can expect, and what patterns will yield the biggest benefits.

 

BONUS MATERIAL! Since its release back in April, SQL Server experts across the globe are becoming familiar with the top features in Microsoft SQL Server 2014—the In-Memory OLTP engine, the AlwaysOn enhancements, and more. Warner couldn’t help but notice that there were a few features that weren’t getting the same attention and filmed a video series sharing the most underrated features of SQL Server 2014. We’ll be publishing that series during PASS, so follow @Pythian on Twitter to receive our updates.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Microsoft SQL Server expertise.

Categories: DBA Blogs

Partner Webcast – Oracle Endeca Information Discovery: Unlocking Insights from any source

Businesses increasingly need to make quick decisions. Organizations need timely and actionable data, which will enable them to uncover opportunities faster and engage with customers better....

We share our skills to maximize your revenue!
Categories: DBA Blogs

2015 Hyundai Santa Fe Sport Release Date

Ameed Taylor - Thu, 2014-10-30 01:29
Vehicles like the 2015 Hyundai Santa Fe Sport are responsible for Hyundai's reputation as a brand that offers quality and affordability. Where the previous Santa Fe had an odd but distinctive look, the recently redesigned current version is about as mainstream as you can get. View the Santa Fe from more than a few angles and you may notice a resemblance to more upscale midsize crossover SUVs such as the Lexus RX and VW Touareg. That is not to say it looks generic - it doesn't. What the Santa Fe does offer is a familiar look and an interior that verges on expensive, particularly in Limited trim.

The 2015 Santa Fe can also be equipped with this segment's expected options, including a third-row seat and a navigation system. Buyers can also choose from models with one of two V6 engines, front- or all-wheel drive, and a manual or automatic transmission. Unfortunately, you can't exactly mix and match - base models have a smaller V6 with less power, and several of the Limited's comfort and convenience options are exclusive to that trim.

Overall, though, the 2015 Hyundai Santa Fe Sport is a fine choice for young families needing all-purpose transportation. We wouldn't recommend it if sporty driving dynamics are a priority. The smaller Mazda CX-7 and Mitsubishi Outlander, as well as the midsize Nissan Murano, would all be preferable in that case, and Toyota's Highlander is roomier and quicker. But if you want a no-nonsense crossover for the money, the Hyundai Santa Fe is reasonably compelling.

The 2015 Hyundai Santa Fe midsize crossover sport utility is available in three trim levels: base GLS, SE and Limited. The GLS starts off with 16-inch alloy wheels, full power accessories, keyless entry, cruise control and a six-speaker sound system with a CD/MP3 player, satellite radio, an auxiliary audio jack and a USB port. The midlevel SE trim adds a stronger V6 engine, 18-inch alloy wheels, an auto-dimming rearview mirror, automatic headlights, a trip computer and steering-wheel-mounted audio controls. The top Santa Fe Limited adds a sunroof, leather upholstery, heated front seats, a power driver seat, dual-zone automatic climate control and a premium Infinity sound system with a six-CD changer.

Some of the Limited's extra features are offered as options on the GLS and SE. Other options include a third-row seat with auxiliary rear climate controls and Bluetooth for wirelessly connecting your phone to the car. A towing preparation package is standard on SE and Limited models, and an optional navigation system and a rear-seat entertainment system are offered on the Santa Fe Limited only.

In GLS trim, the Santa Fe comes with a 2.7-liter V6 that produces 185 horsepower and 183 pound-feet of torque. The SE and Limited feature a larger 3.3-liter V6 good for 242 hp and 226 lb-ft of torque. A five-speed manual transmission is standard with the base engine, and a four-speed automatic is optional. The larger V6 comes standard with a five-speed automatic transmission and accelerates the 2015 Santa Fe from zero to 60 mph in 8.7 seconds.

All 2015 Santa Fe models are offered with either front-wheel-drive or all-wheel-drive powertrains. The electronically controlled AWD system automatically routes power to the wheels with the best traction. For better performance in slippery or off-road conditions, a driver-selectable AWD lock provides a fixed 50/50 torque split between the front and rear wheels.

Properly equipped, the Santa Fe can tow up to 3,500 pounds. EPA-estimated fuel economy is nearly identical for both engines: an AWD model with the 3.3-liter V6 is rated at 17 mpg city/24 mpg highway and 19 mpg combined, a tad above average for this segment.

The 2015 Hyundai Santa Fe Limited offers an impressive array of standard safety features, including antilock disc brakes, traction control, stability control, front-seat side airbags, full-length head curtain airbags and active front-seat head restraints.

In government crash tests, the Santa Fe received a full five stars for protection in frontal and side impacts. In Insurance Institute for Highway Safety testing, the Santa Fe earned the highest possible rating of "Good" in both frontal-offset and side-impact tests.

This second-generation Santa Fe has an attractive dashboard and quality materials throughout. In Limited trim, the convincing faux wood and aluminum accents give the crossover a certain luxurious feel. Blue instrument lighting and an optional 10-speaker Infinity sound system only add to the Santa Fe's plush interior ambience.

The driving position can be awkward for some, though, because the front seats are mounted overly high and the short bottom cushions provide minimal thigh support for taller adults.

With the optional third-row seat, the Santa Fe can accommodate up to seven passengers. Like most models in this segment, however, the third row is really only suitable for kids. The second row is notably above average in terms of comfort. The split rear seats can be folded flat in both rows, and the Santa Fe splits the difference between smaller and larger crossover SUVs with 78 cubic feet of maximum cargo room.

Smaller crossover SUVs like the Mazda CX-7 and Mitsubishi Outlander are sportier and more rewarding to drive hard, though the Santa Fe's handling is undeniably composed and can actually be enjoyable at times. The trade-off is that the ride can be very busy on the highway on models with the larger wheels. During normal driving the brake pedal feels about right, but it can get soft during hard braking.

The 2.7-liter V6 provides decent acceleration, but the extra kick and refined nature of the 3.3-liter V6, which is nearly as fuel efficient, is noticeable and gives the Santa Fe a more substantial feel. In testing, though, we have found the larger V6's five-speed automatic can occasionally be slow to downshift for quick passing or merging maneuvers.


Categories: DBA Blogs

12.1.0.2 Introduction to Zone Maps Part II (Changes)

Richard Foote - Thu, 2014-10-30 00:45
In Part I, I discussed how Zone Maps are new index-like structures, similar to Exadata Storage Indexes, that enable the “pruning” of disk blocks during accesses of the table by storing the min and max values of selected columns for each “zone” of a table, a zone being a range of contiguous (8M) blocks. I […]
Categories: DBA Blogs

Script to count and recompile invalid objects

Bobby Durrett's DBA Blog - Wed, 2014-10-29 16:48

This is pretty simple, but I thought I would share it since it is helpful to me.  I have been preparing for a large migration which involves table, index, type, function, package, and procedure changes.  When I run a big migration like this I check for invalid objects before and after the migration and attempt to recompile any that are invalid.  By checking before and after the migration I know which objects the migration invalidated.

Here’s the script:

select status,count(*)
from dba_objects
where owner='YOURSCHEMA'
group by status
order by status;

select 'alter '||object_type||' '||owner||'.'||object_name||
       ' compile;'
from dba_objects
where owner='YOURSCHEMA' and
status='INVALID' and
object_type <> 'PACKAGE BODY'
union
select 'alter package '||owner||'.'||object_name||' compile body;'
from dba_objects
where owner='YOURSCHEMA' and
status='INVALID' and
object_type = 'PACKAGE BODY';

Replace “YOURSCHEMA” with the schema that your objects are in.

Output is something like this:

STATUS    COUNT(*)
------- ----------
INVALID          7
VALID        53581

'ALTER'||OBJECT_TYPE||''||OWNER||'.'||OBJECT_NAME||'COMPILE;'
--------------------------------------------------------------
alter FUNCTION YOURSCHEMA.YOURFUNCTION compile;
alter package YOURSCHEMA.YOURPACKAGE compile body;

The counts give me a general idea of how many objects are invalid, and the alters give me SQL that I can paste into a script and run to attempt to compile the objects and make them valid.
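
One convenient way to run this, sketched under the assumption that the script above is saved as invalids.sql (a hypothetical file name): spool the generated ALTER statements to a file, then execute that file and re-run the counts to see what remains invalid.

set heading off feedback off pages 0 trimspool on
spool recompile_invalid.sql
@invalids.sql
spool off
set heading on feedback on
@recompile_invalid.sql

(The spool file will also capture the status counts from the first query; SQL*Plus will simply error on those lines, but you can comment out the count query first if you prefer a clean spool.)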

Hope this is helpful to someone else.  It’s helpful to me.

– Bobby



Categories: DBA Blogs

Pythian at Percona Live London 2014

Pythian Group - Wed, 2014-10-29 14:29

Percona Live London takes place next week from November 3-4 where Pythian is a platinum sponsor—visit us at our booth during the day on Tuesday, or at the reception in the evening. Not only are we attending, but we’re taking part in exciting speaking engagements, so be sure to check out our sessions and hands-on labs. Find those details down below.

 

MySQL Break/Fix Lab by Miklos Szel, Alkin Tezuysal, and Nikolaos Vyzas
Monday November 3 — 9:00AM-12:00PM
Cromwell 3 & 4

Miklos, Alkin, and Nikolaos will be presenting a hands-on lab demonstrating the evaluation of operational errors and issues in MySQL 5.6, and recovery from them. They will be covering instance crashes and hangs, troubleshooting and recovery, and significant performance issues. Find out more about the speakers below.

About Miklos: Miklos Szel is a Senior Engineer at Pythian, based in Budapest. With greater than 10 years’ experience in system and network administration, he has also worked for Walt Disney International as its main MySQL DBA. Miklos specializes in MySQL-based high availability solutions, performance tuning, and monitoring, and has significant experience working with large-scale websites.

About Alkin: Alkin Tezuysal has extensive experience in enterprise relational databases, working in various sectors for large corporations. With greater than 19 years’ of industry experience, he has been able to take large projects from the ground up to production. In recent years, he has been focusing on eCommerce, SaaS, and MySQL technologies.

About Nikolaos: Nik Vyzas is a Lead Database Consultant at Pythian, and an avid open source engineer. He began his career as a software developer in South Africa, and moved into technology consulting firms for various European and US-based companies. He specializes in MySQL, Galera, Redis, memcached, and MongoDB on many OS platforms.

 

Setting up Multi-Source Replication in MariaDB 10 by Derek Downey
Monday November 3 — 2:00-5:00PM
Cromwell 3 & 4

For a long time, replication in MySQL was limited to only a single master. When MariaDB 10.0 became generally available, the ability to allow multiple masters became a reality. This has opened the door to previously impossible architectures. In this hands-on tutorial, Derek will discuss some of the features in MariaDB 10.0, demonstrate establishing a four-node environment running on participants’ computers using Vagrant and VirtualBox, and also discuss some limitations of 10.0. Check out Derek’s blog post for more detailed info about his session.

About Derek: Derek began his career as a PHP application developer, working out of Knoxville, Tennessee. Now a Principal Consultant in Pythian’s MySQL practice, Derek is sought after for his deep knowledge of Galera and diagnosing replication issues.

 

Understanding Performance Through Measurement, Benchmarking, and Profiling by René Cannaò
Monday November 3 — 2:00-5:00PM
Orchard 2

It is essential to understand how your system performs under different workloads in order to measure the impact of changes and growth, and to understand how that impact will manifest. Measuring the performance of current workloads is not trivial, and the creation of a staging environment where different workloads can be tested has its own set of challenges. Performing capacity planning, exploring concerns about scalability and response time, and evaluating new hardware or software configurations are all operations requiring measurement and analysis in an environment appropriate to your production setup. To find bottlenecks, performance needs to be measured both at the OS layer and at the MySQL layer: an analysis of OS and MySQL benchmarking and monitoring/measuring tools will be presented. Various benchmark strategies will be demonstrated for real-life scenarios, as well as tips on how to avoid common mistakes.

About René: René has 10 years of working experience as a System, Network and Database Administrator, mainly on Linux/Unix platforms. In recent years, he has focused mainly on MySQL, previously working as a Senior MySQL Support Engineer at Sun/Oracle and now as a Senior Operational DBA at Pythian (formerly Blackbird, acquired by Pythian).

 

Low-Latency SQL on Hadoop — What’s Best for Your Cluster? by Danil Zburivsky
Tuesday November 4 — 11:20AM-12:10PM
Cromwell 3 & 4

Low-latency SQL is the Holy Grail of Hadoop platforms, enabling new use cases and better insights. A number of open-source projects have sprung up to provide fast SQL querying; but which one is best for your cluster? This session will present the results of Danil’s in-depth research and benchmarks of Facebook Presto, Cloudera Impala and Databricks Shark. Attendees will look at performance across multiple storage formats, query profiles and cluster configurations to find the best engine for a variety of use cases. This session will help you pick the right query engine for a new cluster or get the most out of your existing Hadoop deployment.

About Danil: Danil Zburivsky is a Big Data Consultant/Solutions Architect at Pythian. Danil has been working with databases and information systems since his early years in university, where he received a Master’s Degree in Applied Math. Danil has 7 years of experience architecting, building and supporting large mission-critical data platforms using various flavors of MySQL, Hadoop and MongoDB. He is also the author of the book Hadoop Cluster Deployment.

 

Scaling MySQL in Amazon Web Services by Mark Filipi and Laine Campbell
Tuesday November 4 — 5:30-6:20PM
Cromwell 3 & 4

Mark Filipi, MySQL Team Lead at Pythian, will explain the options for running MySQL at high volumes at Amazon Web Services, exploring options around database as a service, hosted instances/storage and all appropriate availability, performance and provisioning considerations. He will be using real-world examples from companies like Call of Duty, Obama for America, and many more.

Laine will demonstrate how to build highly available, manageable, and performant MySQL environments that scale in AWS—how to maintain them, grow them, and deal with failure.

About Mark: With years of experience as a MySQL DBA, Mark Filipi has direct experience administering everything from multinational corporations to tiny web start-ups. He leads a global team of talented DBAs to identify performance bottlenecks and provide consistent daily operations.

About Laine: Laine is currently the Co-Founder and Associate Vice President of Pythian’s open source database practice—the result of the acquisition of Blackbird.io by Pythian in June 2014. Blackbird.io itself was the product of a merger that involved PalominoDB, a company that Laine founded in January 2006. Prior to that, Laine spent her career working in various corporate environments, including working at Travelocity for nearly a decade building out their database team. Laine is passionate about supporting members of underserved populations to gain experience, skills, and jobs in technology.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s MySQL expertise.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 7

Pythian Group - Wed, 2014-10-29 08:09

Welcome to part 7, the final blog post in my series, Deploying a Private Cloud at Home, where I will be sharing the scripts to configure the controller and compute nodes. In my previous post, part six, I demonstrated how to configure the controller and compute nodes.

Kindly update the script with the password you want and then execute. I am assuming here that this is a fresh installation and no service is configured on the nodes.

The script below configures the controller node, and has two parts:

  1. Pre compute node configuration
  2. Post compute node configuration

Running “config-controller.sh -pre” performs the pre compute node configuration, preparing the controller node and the OpenStack services. “config-controller.sh -post” performs the post compute node configuration of the controller node, as those services are dependent on the compute node services.

config-controller.sh

#!/bin/bash
#Configure controller script v 4.4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Admin user password
ADMIN_PASS=YOUR_PASSWORD
#Demo user password
DEMO_PASS=YOUR_PASSWORD
#Keystone database password
KEYSTONE_DBPASS=YOUR_PASSWORD
#Admin user Email
ADMIN_EMAIL=YOUR_EMAIL
#Demo user Email
DEMO_EMAIL=YOUR_EMAIL
#Glance db user pass
GLANCE_DBPASS=YOUR_PASSWORD
#Glance user pass
GLANCE_PASS=YOUR_PASSWORD
#Glance user email
GLANCE_EMAIL=YOUR_EMAIL
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#Nova user Email
NOVA_EMAIL=YOUR_EMAIL
#Neutron db user pass
NEUTRON_DBPASS=YOUR_PASSWORD
#Neutron user pass
NEUTRON_PASS=YOUR_PASSWORD
#Neutron user email
NEUTRON_EMAIL=YOUR_EMAIL
#Metadata proxy pass
METADATA_SECRET=YOUR_PASSWORD
#IP to be declared for controller
MY_IP=192.168.1.140
#FQDN for controller hostname or IP
CONTROLLER=controller
#MYSQL root user pass
MYSQL_PASS=YOUR_PASSWORD
#Heat db user pass
HEAT_DBPASS=YOUR_PASSWORD
#Heat user pass
HEAT_PASS=YOUR_PASSWORD
#Heat user email
HEAT_EMAIL=YOUR_EMAIL
#IP range for VM Instances
RANGE=192.168.1.16/28
#Secure MySQL
MYSQL_ROOT_PASSWORD=YOUR_PASSWORD
#Current MySQL root password leave blank if you have not configured MySQL
CURNT_PASS=""



# Get versions:
SCRIPT_VER="v4.4"
if [ "$1" = "--version" -o "$1" = "-v" ]; then
	echo "`basename $0` script version $SCRIPT_VER"
  exit 0
elif [ "$1" = "" ] || [ "$1" = "--help" ]; then
  echo "Configures controller node with pre compute and post compute deployment settings"
  echo "Usage:"
  echo "       `basename $0` [--help | --version | -pre | -post]"
  exit 0

elif [ "$1" = "-pre" ]; then

echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "controller configuration started"
echo "============================================="

echo "Installing MySQL packages"
yum install -y mysql mysql-server MySQL-python
echo "Installing RDO OpenStack repo"
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
echo "Installing openstack keystone, qpid Identity Service, and required packages for controller"
yum install -y yum-plugin-priorities openstack-utils mysql mysql-server MySQL-python qpid-cpp-server openstack-keystone python-keystoneclient expect


echo "Modification of qpid config file"
perl -pi -e 's,auth=yes,auth=no,' /etc/qpidd.conf
chkconfig qpidd on
service qpidd start


echo "Configuring mysql database server"
cat > /etc/my.cnf <<EOF
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
bind-address=$MY_IP
default-storage-engine=innodb
innodb_file_per_table
collation-server=utf8_general_ci
init-connect='SET NAMES utf8'
character-set-server=utf8
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
EOF

service mysqld start
chkconfig mysqld on

echo "Set the MySQL root password"
mysqladmin -u root password "$MYSQL_PASS"

echo "Create the keystone database and user"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE keystone;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';"

echo "Configure the keystone admin token and database connection"
ADMIN_TOKEN=$(openssl rand -hex 10)
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:$KEYSTONE_DBPASS@$CONTROLLER/keystone
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
su -s /bin/sh -c "keystone-manage db_sync" keystone
service openstack-keystone start
chkconfig openstack-keystone on

echo "Purge expired tokens hourly"
(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone

echo "Define users, tenants, and roles"
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://$CONTROLLER:35357/v2.0

echo "keystone admin creation"
keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
keystone role-create --name=admin
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin


echo "keystone demo creation"
keystone user-create --name=demo --pass=$DEMO_PASS --email=$DEMO_EMAIL
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo
keystone tenant-create --name=service --description="Service Tenant"

echo "Create a service entry for the Identity Service"
keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://$CONTROLLER:5000/v2.0 \
--internalurl=http://$CONTROLLER:5000/v2.0 \
--adminurl=http://$CONTROLLER:35357/v2.0

echo "Verify Identity service installation"
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
echo "Request a authentication token by using the admin user and the password you chose for that user"
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-auth-url=http://$CONTROLLER:35357/v2.0 token-get
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-tenant-name=admin --os-auth-url=http://$CONTROLLER:35357/v2.0 \
  token-get

cat > /root/admin-openrc.sh <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

source /root/admin-openrc.sh
echo "keystone token-get"
keystone token-get
echo "keystone user-list"
keystone user-list
echo "keystone user-role-list --user admin --tenant admin"
keystone user-role-list --user admin --tenant admin

echo "Install the Image Service"
yum install -y openstack-glance python-glanceclient
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance

echo "configure glance database"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE glance;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';"

echo "Create the database tables for the Image Service"
su -s /bin/sh -c "glance-manage db_sync" glance

echo "creating glance user"
keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
keystone user-role-add --user=glance --tenant=service --role=admin


echo "glance configuration"
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone


echo "Register the Image Service with the Identity service"
keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ image / {print $2}') \
  --publicurl=http://$CONTROLLER:9292 \
  --internalurl=http://$CONTROLLER:9292 \
  --adminurl=http://$CONTROLLER:9292
  
echo "Start the glance-api and glance-registry services"
service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

echo "Testing image service"
echo "Download the cloud image"
wget -q http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O /root/cirros-0.3.2-x86_64-disk.img
echo "Upload the image to the Image Service"
source /root/admin-openrc.sh
glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
--container-format bare --is-public True \
--progress  < /root/cirros-0.3.2-x86_64-disk.img

echo "Install Compute controller services"
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
source /root/admin-openrc.sh

echo "Configure compute database"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova

echo "configuration keys to configure Compute to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

source /root/admin-openrc.sh

echo "Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options"
echo "to the management interface IP address of the $CONTROLLER node"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP

echo "Create a nova database user"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE nova;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';"

echo "Create the Compute service tables"
su -s /bin/sh -c "nova-manage db sync" nova

echo "Create a nova user that Compute uses to authenticate with the Identity Service"
keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
keystone user-role-add --user=nova --tenant=service --role=admin

echo "Configure Compute to use these credentials with the Identity Service running on the controller"
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Register Compute with the Identity Service"
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --internalurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --adminurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s
  
echo "Start Compute services and configure them to start when the system boots"
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on  

echo "To verify your configuration, list available images"
echo "nova image-list"
sleep 5
source /root/admin-openrc.sh
nova image-list

fi


if [ "$1" = "-post" ]; then
#set variables used in the configuration

source /root/admin-openrc.sh
############OpenStack Networking start here##############
echo "configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova 

echo "Restart the Compute services"
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

echo "Create the network"
source /root/admin-openrc.sh
nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 $RANGE

echo "Verify creation of the network"
nova net-list

############OpenStack Legacy ends##############
echo "Install the dashboard"
yum install -y mod_wsgi openstack-dashboard

echo "Configure openstack dashborad"
sed -i 's/horizon.example.com/\*/g' /etc/openstack-dashboard/local_settings
echo "Start the Apache web server and memcached"
service httpd start
chkconfig httpd on

fi

Below is the config-compute.sh script which configures compute node

config-compute.sh

#!/bin/bash
#configure compute script v4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#NEUTRON user pass
NEUTRON_PASS=YOUR_PASSWORD
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
FLAT_INTERFACE=eth0
PUB_INTERFACE=eth0
#FQDN for $CONTROLLER hostname or IP
CONTROLLER=controller
#IP of the compute node
MY_IP=192.168.1.142


echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "compute configuration started"
echo "============================================="

echo "Install the MySQL Python library"
yum install -y MySQL-python


echo "Install the Compute packages"
yum install -y openstack-nova-compute openstack-utils

echo "Edit the /etc/nova/nova.conf configuration file"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Configure the Compute service to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

echo "Configure Compute to provide remote console access to instances"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://$CONTROLLER:6080/vnc_auto.html

echo "Specify the host that runs the Image Service"
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host $CONTROLLER

echo "Start the Compute service and its dependencies. Configure them to start automatically when the system boots"
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on

echo "kernel networking functions"
perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' /etc/sysctl.conf
perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
sysctl -p

echo "Install legacy networking components"
yum install -y openstack-nova-network openstack-nova-api
sleep 5
echo "Configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br0
openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface $FLAT_INTERFACE
openstack-config --set /etc/nova/nova.conf DEFAULT public_interface $PUB_INTERFACE

echo "Start the services and configure them to start when the system boots"
service openstack-nova-network start
service openstack-nova-metadata-api start
chkconfig openstack-nova-network on
chkconfig openstack-nova-metadata-api on

echo "Now restart networking"
service network restart

echo "Compute node configuration competed"
echo "Now you can run config-congroller.sh -post on the controller node"
echo "To complete the OpenStack configuration"

Categories: DBA Blogs

Oracle Trivia Quiz

Iggy Fernandez - Wed, 2014-10-29 07:36
All the answers can be found in the November 2014 issue of the NoCOUG Journal. I am the editor of the NoCOUG Journal. What’s NoCOUG, you ask? Only the oldest and most active Oracle users group in the world. If you live in the San Francisco Bay Area and have never ever attended a NoCOUG […]
Categories: DBA Blogs

Results of the NoCOUG SQL Mini-Challenge

Iggy Fernandez - Tue, 2014-10-28 15:55
As published in the November 2014 issue of the NoCOUG Journal The inventor of the relational model, Dr. Edgar Codd, was of the opinion that “[r]equesting data by its properties is far more natural than devising a particular algorithm or sequence of operations for its retrieval. Thus, a calculus-oriented language provides a good target language […]
Categories: DBA Blogs

OGG-01742 when trying to start extract process

DBASolved - Tue, 2014-10-28 13:57

Every once in a while I do something in my test environments to test something else, then I go back to test core functions of the product; in this case I was testing a feature of Oracle GoldenGate 12c.  Earlier in the day, I had set the $ORACLE_HOME environment variable to reference the Oracle GoldenGate home.  Instead of closing my session and restarting, I just started GGSCI and the associated manager.  To my surprise, the extracts that I had configured wouldn’t start.  GGSCI was issuing an OGG-01742 error about the “child process is no longer alive” (Image 1).

Image 1:

As I was trying to figure out what was going on, I checked the usual files (ggserr.log and report files) for any associated errors.  I didn’t find anything that was out of place or led me to believe there was a problem with Oracle GoldenGate.  The next thing I needed to identify was whether the environment was configured correctly.  In doing this, I first checked the session environment variables (Image 2) related to the Oracle database.

Image 2:
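Since the screenshot is not reproduced here, below is a sketch of what that check might have looked like; the ORACLE_HOME value is reconstructed from the narrative (it was pointing at the GoldenGate home instead of the database home), and the SID shown is hypothetical:

$ env | grep -E 'ORACLE_HOME|ORACLE_SID'
ORACLE_HOME=/u01/app/oracle/product/12.1.2/ogg
ORACLE_SID=oragg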

As you can see, I had set the ORACLE_HOME equal to my OGG_HOME.  After seeing this, I remembered that I had set it this way for a specific test.  Since the ORACLE_HOME was set to OGG_HOME, the Oracle GoldenGate Extract (capture) process didn’t know where to get the needed libraries for the database.  

The Fix:

To fix this issue, I just needed to reset my environment to reference the ORACLE_HOME and ORACLE_SID correctly.  In a Linux environment, I like to use “. oraenv” to do this (Image 3).

Image 3:
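As a stand-in for the screenshot, here is a minimal sketch of that reset; the SID and paths here are hypothetical:

$ . oraenv
ORACLE_SID = [oragg] ? orcl
The Oracle base has been set to /u01/app/oracle
$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/dbhome_1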

Now that I have reset my Oracle environment to point to the database, I should be able to start my extracts and not get any error messages related to OGG-01742 (Image 4).

Note: It appears that if the manager process has AUTOSTART and/or AUTORESTART set, the manager process needs to be restarted before the OGG-01742 message will go away.
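If you hit that, a minimal GGSCI sequence might look like the following sketch (the trailing “!” suppresses the confirmation prompt, and the wildcard assumes you want every extract started):

GGSCI> STOP MANAGER!
GGSCI> START MANAGER
GGSCI> START EXTRACT *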

Image 4:

With my extracts started now, I’ve got all my Oracle GoldenGate processes back up and running (Image 5).

Image 5:

In a nutshell, if you get an OGG-01742 error while trying to start or restart an extract (capture) process, just reset your Oracle environment parameters!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Use OPatch to check Oracle GoldenGate version

DBASolved - Tue, 2014-10-28 11:08

Recently I was strolling the OTN message boards and came across a question about identifying the version of Oracle GoldenGate using OPatch.  This was the second time I came across this question; with that, I decided to take a look and see if Oracle GoldenGate information could be retrieved using opatch.

Initially I thought that identifying the Oracle GoldenGate version could only be done by logging into GGSCI and reviewing the header information.  To do this, just set up the Oracle environment using “. oraenv”.

Note: “. oraenv” will use the /etc/oratab file to set the ORACLE_HOME and ORACLE_SID parameters and ensure that Oracle GoldenGate has access to the library files needed.

Once the environment is set, GGSCI can be used to start the interface.

[oracle@db12cgg ogg]$ . oraenv
ORACLE_SID = [oragg] ?
The Oracle base has been set to /u01/app/oracle
[oracle@db12cgg ogg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (db12cgg.acme.com) 1>

Notice in the output above that the version of Oracle GoldenGate being run is 12.1.2.1.0 for Linux x64.

How can this be done through OPatch?  The same information can be gathered using the opatch utility.  Ideally, you will want to use opatch from the $GG_HOME/OPatch directory.

Note: $ORACLE_HOME needs to be set to $OGG_HOME before the correct opatch inventory will be listed.  If $ORACLE_HOME is set for the database, opatch will return information about the database, not Oracle GoldenGate.
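As a minimal sketch, pointing the environment at the GoldenGate home first (the path is taken from the transcript below):

$ export ORACLE_HOME=/u01/app/oracle/product/12.1.2/ogg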

After making sure that $ORACLE_HOME points to the correct $GG_HOME, the inventory for Oracle GoldenGate can be retrieved using “./opatch lsinventory”.

[oracle@db12cgg ogg]$ pwd
/u01/app/oracle/product/12.1.2/ogg
[oracle@db12cgg ogg]$ cd OPatch
[oracle@db12cgg OPatch]$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home      : /u01/app/oracle/product/12.1.2/ogg
Central Inventory : /u01/app/oraInventory
  from          : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version      : 11.2.0.3.0
Log file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/opatch2014-10-28_11-18-49AM.log
Lsinventory Output file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/lsinv/lsinventory2014-10-28_11-18-49AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle GoldenGate Core                                              12.1.2.1.0

There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------------

OPatch succeeded.

As you can tell, I was able to find the same information using OPatch without having to go to the GGSCI utility.

Note: I have not had a chance to check this against Oracle GoldenGate 11g and earlier. This may be something specific to Oracle GoldenGate 12c. I will verify at a later time.

Enjoy!

about.me: http://about.me/dbasolved

Filed under: Golden Gate
Categories: DBA Blogs

How To Change The Priority Of Oracle Background Processes

How To Change The Priority Of Oracle Background Processes
Before you get in a huff, it can be done! You can change an Oracle Database background process priority through an instance parameter! I'm not saying it's a good idea, but it can be done.
In this post I explore how to make the change, just how far you can take it, and when you may want to consider changing an Oracle background process priority.
To get your adrenaline going, check out the instance parameter _highest_priority_processes on one of your production Oracle systems with a version of 11 or greater. Here is an example using my OSM tool, ipx.sql, on my Oracle Database version 12.1.0.2.0 system.
SQL> @ipx _highest_priority_processes
Database: prod40 27-OCT-14 02:22pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_highest_priority_processes = VKTM Highest Priority TRUE
Process Name Mask
Then at the Linux prompt, I did:
$ ps -eo pid,class,pri,nice,time,args | grep prod40
2879 TS 19 0 00:00:00 ora_pmon_prod40
2881 TS 19 0 00:00:01 ora_psp0_prod40
2883 RR 41 - 00:02:31 ora_vktm_prod40
2889 TS 19 0 00:00:01 ora_mman_prod40
2903 TS 19 0 00:00:00 ora_lgwr_prod40
2905 TS 19 0 00:00:01 ora_ckpt_prod40
2907 TS 19 0 00:00:00 ora_lg00_prod40
2911 TS 19 0 00:00:00 ora_lg01_prod40
...
Notice the "pri" for priority of the ora_vktm_prod40 process? It is set to 41 while all the rest of the Oracle background processes are set to the default of 19. Very cool, eh?

Surprised By What I Found
Surprised? Yes, surprised because changing Oracle process priority is a Pandora's box. Just imagine an Oracle server (i.e., foreground) process having its priority lowered just a little and then attempting to acquire a latch or a mutex. If it doesn't get the latch quickly, it might never get it!

From a user experience perspective, sometimes performance is really quick and other times the application just hangs.

This actually happened to a customer of mine years ago when the OS started reducing a process's priority after it consumed a certain amount of CPU. I learned that when it comes to Oracle processes, they are programmed to expect an even process priority playing field. If you try to "game" the situation, do so at your own risk... not Oracle's.

Then why did Oracle Corporation allow background process priority to be changed? And why did Oracle Corporation actually change a background process's priority?!

Doing A Little Exploration
It turns out there are a number of "priority"-related underscore instance parameters! On my 11.2.0.1.0 system there are 6 "priority" parameters. On my 12.1.0.1.0 system there are 8 "priority" parameters. On my 12.1.0.2.0 system there are 13 "priority" parameters! So clearly Oracle is making changes! In all cases, the parameter I'm focusing on, "_high_priority_processes", exists.
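If you want to count them on your own system, hidden parameters can be listed from x$ksppi; a quick sketch, assuming a SYS connection:

SQL> select ksppinm from x$ksppi where ksppinm like '%priority%';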

In this posting, I'm going to focus on my Oracle Database 12c version 12.1.0.2.0 system. While you may see something different in your environment, the theme will be the same.

While I'll be blogging about all four of the below parameters, in this posting my focus will be on the _high_priority_processes parameter. Below are the defaults on my system:
_high_priority_processes       LMS*
_highest_priority_processes    VKTM
_os_sched_high_priority        1
_os_sched_highest_priority     1

Messing With The LGWR Background Processes
I'm not testing this on a RAC system, so I don't have an LMS background process. When I saw the "LMS*" I immediately thought, "regular expression." Hmmm... I wonder if I can change the LGWR background process priority. So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:36pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LGWR High Priority FALSE
Process Name Mask
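For reference, ipx.sql only displays the parameter; a minimal sketch of one way to make the change itself, assuming an spfile (hidden parameters must be double-quoted, and the instance recycled afterwards):

SQL> alter system set "_high_priority_processes"='LMS*|LGWR' scope=spfile;
SQL> shutdown immediate
SQL> startup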

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5521 RR 41 - 00:00:00 ora_vktm_prod40
5539 TS 19 0 00:00:00 ora_dbw0_prod40
5541 RR 41 - 00:00:00 ora_lgwr_prod40
5545 TS 19 0 00:00:00 ora_ckpt_prod40
5547 TS 19 0 00:00:00 ora_lg00_prod40
5551 TS 19 0 00:00:00 ora_lg01_prod40
...

How Far Can I Take This?
At this point in my journey, my mind was a blaze! The log file sync wait event can be really difficult to deal with and especially so when there is a CPU bottleneck. Hmmm... Perhaps I can increase the priority of all the log writer background processes?

So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:44pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LG* High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5974 TS 19 0 00:00:00 ora_psp0_prod40
5976 RR 41 - 00:00:00 ora_vktm_prod40
5994 TS 19 0 00:00:00 ora_dbw0_prod40
5996 RR 41 - 00:00:00 ora_lgwr_prod40
6000 TS 19 0 00:00:00 ora_ckpt_prod40
6002 RR 41 - 00:00:00 ora_lg00_prod40
6008 RR 41 - 00:00:00 ora_lg01_prod40
6014 TS 19 0 00:00:00 ora_lreg_prod40
...

So now all the log writer background processes have a high priority. My hope would be that if there is an OS CPU bottleneck and the log writer background processes wanted more CPU, I now have the power to give that to them! Another tool in my performance tuning arsenal!

Security Hole?
At this point, my exuberance began to turn into paranoia. I thought, "Perhaps I can increase the priority of an Oracle server process or perhaps any process." If so, that would be a major Oracle Database security hole.

With fingers trembling, I changed the instance parameters to match an Oracle server process and recycled the instance. Below shows the instance parameter change:

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:52pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|oracleprod40 Process Name Mask

Below is an operating system perspective using the ps command:

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6360 TS 19 0 00:00:00 ora_psp0_prod40
6362 RR 41 - 00:00:00 ora_vktm_prod40
6366 TS 19 0 00:00:00 ora_gen0_prod40
6382 RR 41 - 00:00:00 ora_lgwr_prod40
6386 TS 19 0 00:00:00 ora_ckpt_prod40
6388 RR 41 - 00:00:00 ora_lg00_prod40
6394 RR 41 - 00:00:00 ora_lg01_prod40
6398 TS 19 0 00:00:00 ora_reco_prod40
...
6644 TS 19 0 00:00:00 oracleprod40...
...

OK, that didn't work, so how about this?

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:55pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|*oracle* Process Name Mask

Let's see what happened at the OS.

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6701 RR 41 - 00:00:00 ora_vktm_prod40
6705 RR 41 - 00:00:00 ora_gen0_prod40
6709 RR 41 - 00:00:00 ora_mman_prod40
6717 RR 41 - 00:00:00 ora_diag_prod40
6721 RR 41 - 00:00:00 ora_dbrm_prod40
6725 RR 41 - 00:00:00 ora_vkrm_prod40
6729 RR 41 - 00:00:00 ora_dia0_prod40
6733 RR 41 - 00:00:00 ora_dbw0_prod40
...
6927 RR 41 - 00:00:00 ora_p00m_prod40
6931 RR 41 - 00:00:00 ora_p00n_prod40
7122 TS 19 0 00:00:00 oracleprod40 ...
7124 RR 41 - 00:00:00 ora_qm02_prod40
7128 RR 41 - 00:00:00 ora_qm03_prod40

Oh Oh... That's not good! Now EVERY Oracle background process has a higher priority and my Oracle server process does not.

So my "*" wildcard caused all the Oracle processes to be included. If all the processes a high prioirty, then the log writer processes have no advantage over the others. And to make matters even worse, my goal of increasing the server process priority did not occur.

However, this is actually very good news because it appears this is not an Oracle Database security hole! To me, it looks like the priority parameter is applied during instance startup to just the background processes. Since my server process was started after the instance was started and was certainly not included in the list of background processes, its priority was not affected. Good news for security, not such good news for a performance-optimizing fanatic such as myself.

Should I Ever Increase A Background Process Priority?
Now that we know how to increase an Oracle Database background process priority, when would we ever want to do this? The short answer is probably never. But the long answer is the classic, "it depends."

Let me give you an example. Suppose there is an OS CPU bottleneck and the log writer background processes are consuming lots of CPU while handling all the associated memory management when a server process issues a commit. In this situation, making it easier for the log writer processes to get CPU cycles may improve performance. But don't even think about doing this unless there is a CPU bottleneck, and even then, be very, very careful.
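As a quick sanity check before even considering the change, both conditions can be eyeballed from the OS; a sketch:

$ vmstat 5 3    # a run queue (r) consistently above the CPU count suggests a CPU bottleneck
$ ps -eo pid,pri,time,args | grep ora_lg    # log writer CPU time and current priority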

In my next blog posting, I'll detail an experiment where I changed the log writer background processes' priority.

Thanks for reading!

Craig.



Categories: DBA Blogs

MariaDB 10.0 Multi-source Replication at Percona Live UK 2014

Pythian Group - Mon, 2014-10-27 12:12

Percona Live UK is upon us and I have the privilege of presenting a tutorial on setting up multi-source replication in MariaDB 10.0 on Nov 3, 2014.

If you’re joining me at PLUK14, we will go over setting up two different topologies that incorporate the features in MariaDB 10.0. The first is a mirrored topology:

Replication Topologies – Mirrored

This basically makes use of an existing DR environment by setting it up to be able to write to either master. Please be advised that this is normally not recommended, due to the complexity of making your application resolve the conflicts and data sync issues that might arise from writing to multiple masters.

The second topology is a basic fan-in topology:

Replication Topologies – Fan-in

This use case is more common, especially for unrelated datasets that can be gathered into a single machine for reporting purposes or as part of a backup strategy. It was also previously available in MySQL only through external tools such as Tungsten Replicator.
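To give a taste of what the tutorial covers, fan-in in MariaDB 10.0 boils down to one named replication connection per upstream master; a minimal sketch run on the fan-in slave, with hypothetical hosts and credentials:

CHANGE MASTER 'master1' TO MASTER_HOST='db1.example.com',
  MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_USE_GTID=slave_pos;
CHANGE MASTER 'master2' TO MASTER_HOST='db2.example.com',
  MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_USE_GTID=slave_pos;
START ALL SLAVES;
SHOW ALL SLAVES STATUS\G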

As promised in the description of the tutorial, I am providing a Vagrantfile for it, which can be downloaded or cloned from my PLUK14 repository.

The Vagrant environment requires at least Vagrant 1.5 to make use of Vagrant Cloud.
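Bringing it up should be as simple as the following once Vagrant 1.5+ is installed (the clone URL is behind the repository link above, and the directory name here is a guess):

$ git clone <PLUK14 repository URL>
$ cd pluk14
$ vagrant up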

I hope to see you next week!

Categories: DBA Blogs