
DBA Blogs

Speakers: Put your Twitter Handle on the Windows taskbar!

The Oracle Instructor - Sat, 2016-04-30 03:56

If you speak often at conferences, sharing your screen to demo things, this could be helpful:

Twitter Handle on the Windows taskbar

Throughout your presentation, the audience will be able to see your Twitter Handle, reminding them to include it with tweets about the event. I used to include it in the slides, but this is better, because it also works with live demonstrations where no slides are being shown, which is incidentally my favorite way to do presentations :-)

Now how can you do it? Quite easy, you open the Windows Control Panel and click on Region and Language. Then click on Additional settings:

Region and Language 1

Then you insert your Twitter Handle (or any other text you would like to see on the taskbar) as the AM and PM symbol. Make sure to select a Time format with trailing tt:

Region and Language 2

That’s it. If you want the font size as large as on the first picture above, that can be done here:

twitterhandel_taskbar4

I did that with Windows 7 Professional 64 bit. Hope you find it useful :-)


Tagged: speaker tip
Categories: DBA Blogs

Leaving Behind the Limits of Binary Thinking for Full Inclusiveness

Pythian Group - Fri, 2016-04-29 14:39

diversityaward

When Pythian became the first tech company in Canada to release gender-based metrics last November, we wanted to make a bold statement with the launch of the Pythia Program. And apparently it worked. We’ve already increased the number of female applicants by more than 10% over just one quarter. Our internal Pythia Index has also risen three percentage points, from 56% to 59%. And just this week, Pythian’s CEO Paul Vallée received the WCT Diversity Champion award in recognition of his leadership and efforts to promote diversity in the workplace, and a more inclusive tech industry that promotes men and women from all backgrounds.

Despite a clear case for gender parity, and research confirming the financial return for companies, full inclusion is still ‘controversial’ to implement. A lot of this has to do with the unconscious associations we still have with male and female roles which are placed in opposition. This kind of binary thinking is rampant, especially in our social constructions of what constitutes masculinity and femininity.

When the Pythia Program was in its early stages, we actually noticed a lot of binary, either/or thinking was shaping our assumptions. Off/On. 0/1. We can do this OR that. We can empower women technical professionals OR talk to employees about unconscious bias. We can take a stand on gender diversity OR maintain good relationships with male colleagues. Wait a minute…why can’t we do both?

If we had continued to believe our choices were that limited, it would have seriously eroded any impetus to act on our values of gender equity and inclusiveness. It was time to reframe our thinking, and that’s when we stopped compromising. A bolder stance emerged when we did away with limited, binary thinking that was trapping us in false dichotomies.

Let’s look at this from a data perspective, because that’s what we love and do best.

Current computer chips store information in electrical circuits as binary bits, each in a state of 0 or 1, so there’s a finite amount of data that can be processed. Quantum computers, however, use qubits, which can be in the state of 0, 1, or both at the same time–giving quantum computers mind-blowing processing power.

So if we apply this idea of ‘binary’ vs. ‘quantum’ to a human context, could we potentially become quantum thinkers? Quantum thinking would be holistic, and would enable the mind to function at a greater level of complexity. It’s an unlimited approach that ‘either/or’ binary thinking simply does not permit. Wouldn’t it be more exciting to break away from these limitations and move to a higher, more innovative level? Things look different when this binary thinking is disrupted. Start by replacing either/or with ‘and’.

We can help achieve gender parity AND we can achieve diversity in other important areas. Pythian can be inclusive, people-focused AND financially strong. Men can be powerful leaders AND feminist.

There is one big exception, one area where it’s either/or: whether you support the status quo of tech’s current ‘bro culture’, or inclusive leadership that embraces the value of multiple perspectives. Those two states cannot co-exist.

As he accepted his award for Diversity Champion at the WCT Gala on April 27, Pythian CEO Paul Vallée made his position clear “To the women who are working hard in high tech, and who are marginalized by bro culture — which is a real problem, we are in the midst of a culture war — I salute you and keep fighting the good fight because we will prevail. To the male leaders that have taken sides in this battle, the Pythia Index will help you keep score, whether you’re on my team [fighting to end bro culture] or the opposite team.”

As Einstein said, “you can’t solve problems with the same thinking used to create them.” And lack of gender diversity in the tech industry is a problem Pythian wants to help solve.

Categories: DBA Blogs

Partition Storage -- 6 : Revisiting Partition HWM

Hemant K Chitale - Fri, 2016-04-29 09:42
After the curious finding in my previous blog post, where a Partition's HighWaterMark was noticeably higher than that for a non-Partitioned Table but then shrank on a MOVE operation, I am retrying the same rows with a different pattern of INSERT statements.

However, I am still sticking to a single session doing the INSERTs (as I don't want ASSM spreading the incoming rows to different non-contiguous blocks).

This is in 12.1.0.2:
SQL> connect hemant/hemant
Connected.
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, a Row-By-Row Insert still resulted in the HWM being 3,022 blocks, shrinking to 2,484 blocks after a MOVE.



Let's try the same data-set in 11.2.0.4
SQL> connect hemant/hemant
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL>
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

Table created.

SQL> show parameter deferr

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, 11.2.0.4 and 12.1.0.2 display the same behaviour for the Partition HWM: a HWM of 3,022 blocks shrinking to 2,484 blocks after the MOVE.

The next test would be with a larger AVG_ROW_LEN.
.
.
.


Categories: DBA Blogs

Turkish Oracle User Group Conference in Istanbul 2016 #TROUGDays

The Oracle Instructor - Fri, 2016-04-29 01:31

Straight after the Oracle University Expert Summit in Berlin – which was a big success, by the way – the circus moved on to another amazing place: Istanbul!

Istanbul view

The Turkish Oracle User Group (TROUG) held its annual conference in the rooms of the Istanbul Technical University, with local and international speakers and quite an attractive agenda.

Do you recognize anyone here?:-)

#TROUGDays speakers

I delivered my presentation “Best of RMAN” again like at the DOAG annual conference last year:

Uwe Hesse speaking in Istanbul

Many thanks to the organizers for making this event possible and for inviting us speakers to dinner

Istanbul speakers dinner

The conference was well received, and in my view it should be possible to attract even more attendees in the coming years by continuing to invite high-profile international speakers.

audience

My special thanks to Joze, Yves and Osama for giving me your good company during the conference – even if that company was sometimes very tight during the car rides

Categories: DBA Blogs

A Practitioner’s Assessment: Digital Transformation

Pythian Group - Thu, 2016-04-28 12:49

 

Rohinee Mohindroo is a guest blogger on Pythian Business Insights.

 

trans·for·ma·tion/ noun: a thorough or dramatic change in form or appearance

The digital transformation rage continues into 2016 with GE, AT&T, GM, Domino’s, Flex, and Starbucks, to name a few. So what’s the big deal?

Technical advances continue to progress at a rapid rate. Digital transformation simply refers to the rate at which the technological trends are embraced by an individual, organization or team.

Organizational culture and vocabulary are leading indicators of the digital transformation maturity level.

blogimagerohinee

Level 1: Business vs. Tech (us vs. them). Each party is fairly ignorant of the value and challenges of the other. Each blames the other for failures and takes credit for successes. Technology is viewed as a competency with a mandate to enable the business.

Level 2: Business and Tech (us and them). Each party is aware of the capability and challenges of the other. Credit for success is shared, failure is not discussed publicly or transparently. Almost everyone  is perceived to be technically literate with a desire to deliver business differentiation.

Level 3: Business is Tech (us). Notable awareness of the business model and technology capabilities and opportunities throughout the organization. Success is expected and failure is an opportunity. The organization is relentlessly focused on learning from customers and partners with a shared goal to continually re-define the business.

Which level best describes you or your organization? Please share what inhibits your organization from moving to the next level.

 

Categories: DBA Blogs

Log Buffer #471: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-04-28 09:14

This Log Buffer Edition covers Oracle, SQL Server and MySQL blog posts of the week.

Oracle:

Improving PL/SQL performance in APEX

A utility to extract and present PeopleSoft Configuration and Performance Data

No, Oracle security vulnerabilities didn’t just get a whole lot worse this quarter.  Instead, Oracle updated the scoring metric used in the Critical Patch Updates (CPU) from CVSS v2 to CVSS v3.0 for the April 2016 CPU.  The Common Vulnerability Score System (CVSS) is a generally accepted method for scoring and rating security vulnerabilities.  CVSS is used by Oracle, Microsoft, Cisco, and other major software vendors.

Oracle Cloud – DBaaS instance down for no apparent reason

Using guaranteed restore points to navigate through time

SQL Server:

ANSI SQL with Analytic Functions on Snowflake DB

Exporting Azure Data Factory (ADF) into TFS Source Control

Getting started with Azure SQL Data Warehouse

Performance Surprises and Assumptions : DATEADD()

With the new security policy feature in SQL Server 2016 you can restrict write operations at the row level by defining a block predicate.

MySQL:

How to rename MySQL DB name by moving tables

MySQL 5.7 Introduces a JSON Data Type

Ubuntu 16.04 first stable distro with MySQL 5.7

MariaDB AWS Key Management Service (KMS) Encryption Plugin

MySQL Document Store versus Bug hunter

Categories: DBA Blogs

How to recover space from already deleted files

Pythian Group - Wed, 2016-04-27 13:15

Wait, what? Deleted files are gone, right? Well, not so if they’re currently in use, with an open file handle by an application. In the Windows world, you just can’t touch it, but under Linux (if you’ve got sufficient permissions), you can!

Often in the Systems Administration and Site Reliability Engineering world, we encounter a disk space issue being reported, and there’s very little we can do to recover the space. Everything is critically important! We then check for deleted files and find massive amounts of space consumed when someone has previously deleted Catalina, Tomcat, or Weblogic log files while Java had them in use, and we can’t restart the processes to release the handles due to the critical nature of the service. Conundrum!

Here at Pythian, we Love Your Data, so I thought I’d share some of the ways we deal with situations like this.

How to recover

First, we grab a list of PIDs with files still open, but deleted. Then iterate over the open file handles, and null them.

# collect the PIDs that still hold deleted files open
PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
# list the deleted files each of those processes still has open
# ("ll" is a common alias for "ls -l"; use "ls -l" if the alias is not defined)
for PID in $PIDS; do ll /proc/$PID/fd | grep deleted; done

With great care, this could even be scripted into an automatic nulling of all deleted files, as sketched below.
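
For reference, here is a minimal sketch of what such a script might look like; this is an illustration rather than a hardened tool, and it truncates every deleted-but-still-open file it finds, so review the lsof output and narrow the filter before running anything like it:

# CAUTION: this truncates EVERY deleted-but-still-open file it finds
for PID in $(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | sort -u); do
  for FDLINK in /proc/$PID/fd/*; do
    # only touch descriptors whose target has been unlinked
    if ls -l "$FDLINK" 2>/dev/null | grep -q '(deleted)'; then
      : > "$FDLINK"   # same effect as cat /dev/null > /proc/PID/fd/N
    fi
  done
done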

Worked example

1. Locating deleted files:

[root@importantserver1 usr]# lsof | head -n 1 ; lsof | grep -i deleted
 COMMAND   PID   USER   FD  TYPE DEVICE SIZE/OFF NODE   NAME
 vmtoolsd  2573  root   7u  REG  253,0  9857     65005  /tmp/vmware-root/appLoader-2573.log (deleted)
 zabbix_ag 3091  zabbix 3wW REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3093  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3094  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3095  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3096  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3097  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 java      23938 tomcat 1w  REG  253,0  0        32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 2w  REG  253,0  45322216 32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 9w  REG  253,0  174      32133  /opt/log/tomcat/catalina.2015-01-17.log (deleted)
 java      23938 tomcat 10w REG  253,0  57408    32154  /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 java      23938 tomcat 11w REG  253,0  0        32156  /opt/log/tomcat/manager.2014-12-09.log (deleted)
 java      23938 tomcat 12w REG  253,0  0        32157  /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 java      23938 tomcat 65w REG  253,0  363069   638386 /opt/log/archive/athena.log.20160105-09 (deleted)

2. Grab the PIDs:

[root@importantserver1 usr]# lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq
 2573
 3091
 3093
 3094
 3095
 3096
 3097
 23938

Show the deleted files that each process still has open (and is consuming space):

[root@importantserver1 usr]# export PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
[root@importantserver1 usr]# for PID in $PIDS; do ll /proc/$PID/fd | grep deleted; done
 lrwx------ 1 root root 64 Mar 21 21:15 7 -> /tmp/vmware-root/appLoader-2573.log (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 1 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 10 -> /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 11 -> /opt/log/tomcat/manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 12 -> /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 2 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 65 -> /opt/log/archive/athena.log.20160105-09 (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 9 -> /opt/log/tomcat/catalina.2015-01-17.log (deleted)

Null the specific files (here, we target the catalina.out file):

[root@importantserver1 usr]# cat /dev/null > /proc/23938/fd/2

Alternative ending

Instead of deleting the contents to recover the space, you might be in the situation where you need to recover the contents of the deleted file. If the application still has the file descriptor open on it, you can then recover the entire file to another one (dd if=/proc/23938/fd/2 of=/tmp/my_new_file.log) – assuming you have the space to do it!

Conclusion

While it’s best not to get in the situation in the first place, you’ll sometimes find yourself cleaning up after someone else’s good intentions. Now, instead of trying to find a window of “least disruption” to the service, you can recover the situation nicely. Or, if the alternative solution is what you’re after, you’ve recovered a file that you thought was long since gone.

Categories: DBA Blogs

Deploy Docker containers using AWS Opsworks

Pythian Group - Wed, 2016-04-27 12:51
Introduction

This post is about how to deploy Docker containers on AWS using Opsworks and Docker Compose.
For AWS and Docker, no introduction is required. So, let’s quickly introduce Opsworks and Docker Compose.

Opsworks

Opsworks is a great tool provided by AWS, which runs Chef recipes on your Instances. If the instance is an AWS instance, you don’t pay anything for using Opsworks, but you can also manage instances outside of AWS for a flat cost, just by installing the Agent and registering the instance on Opsworks.

Opsworks Instances type

We have three different types of instances on Opsworks:

1. 24x7x365
Run with no stop

2. Time based
Run during a predefined time window, such as work hours.

3. Load based
Scale up and down according to preconfigured metrics.

You can find more details here.

Custom JSON

Opsworks provides Chef Databags (variables to be used in your recipes) via Custom JSON, and that’s the key to this solution. We will manage everything just by changing a JSON file, which can easily become part of your development pipeline.

Life cycle

Opsworks has five life cycle events:
1. Setup
2. Configure
3. Deploy
4. Undeploy
5. Shutdown
We will use setup, deploy, and shutdown. You can find more details about Opsworks life cycle here.

Docker Compose

Docker Compose was originally developed under the Fig project. Nowadays, Fig is deprecated, and docker-compose is a built-in component of Docker.
Using docker-compose, you can manage all containers and their attributes (links, shared volumes, etc.) on a Docker host. Docker-compose can only manage containers on the local host where it is deployed; it cannot orchestrate Docker containers between hosts.
All configuration is specified in a YML file.

Chef recipes

Using Opsworks, you will manage all hosts with just one small Chef cookbook. All the magic is in translating the Custom JSON from Opsworks into the YML file used by docker-compose.
The cookbook will install all components (Docker, pip, and docker-compose), translate the Custom JSON to a YML file, and send commands to docker-compose.

Hands ON

Let’s stop talking and see things happen.

We can split it into five steps:

  1. Resources creation
    1. Opsworks Stack
        1. Log into your AWS account
        2. Go to Services -> Management Tools -> Opsworks
          Accessing Opsworks menu
        3. Click on Add stack (if you already have stacks on Opsworks) or Add your first stack (if it’s the first time you are creating stacks on opsworks)
        4. Select type Chef 12 stack
          Note: The Chef cookbook used in this example only supports Chef12
        5. Fill out stack information
          aws_opsworks_docker_image02
          Note:
          – You can use any name as stack name
          – Make sure VPC selected are properly configured
          – This solution supports Amazon Linux and Ubuntu
          – Repository URL https://bitbucket.org/tnache/opsworks-recipes.git
        6. Click on advanced if you want to change something. Changing “Use OpsWorks security groups” to No can be a good idea when you need to communicate with instances which are running outside of Opsworks
        7. Click on “Add stack”
    2. Opsworks layer
        1. Click on “Add a layer”
        2. Set Name, Short name and Security groups. I will use webserver

      Note:
      Use a simple name because we will use this name in next steps
      The Name web is reserved for AWS internal use

        1. Click on “Add layer”

      aws_opsworks_docker_image03

    3. Opsworks Instance
        1. Click on “Instances” on the left panel
        2. Click on “Add an instance”
        3. Select the size (instance type)
        4. Select the subnet
        5. Click on “Add instance”

      aws_opsworks_docker_image05

  2. Resources configuration
    1. Opsworks stack
        1. Click on “Stack” on the left panel
        2. Click on “Stack Settings”
        3. Click on “Edit”
        4. Find the Custom JSON field and paste the content of the file below

      custom_json_1

      1. Click on “Save”
    2. Opsworks layer
        1. Click on “Layers” on the left panel
        2. Click on “Recipes”
        3. Type docker-compose and press enter in the Setup field
        4. Type docker-compose::deploy and press enter in the Deploy field
        5. Type docker-compose::stop and press enter in the Shutdown field
        6. Click on “Save”

      aws_opsworks_docker_image04

  3. Start
    1. Start instance
        1. Click on start

      aws_opsworks_docker_image06

  4. Tests
    Note: Wait until the instance reaches the online state

      1. Open your browser and you should be able to see It works!
      2. Checking running containers

    aws_opsworks_docker_image07

  5. Management
      1. Change the Custom JSON to the file below (See resources configuration=>Opsworks stack)

    custom_json_2

      1. Click on “Deployments” on the left panel
      2. Click on “Run Command”
      3. Select “Execute Recipes” as “Command”
      4. Hit “docker-compose::deploy” as “Recipes to execute”
      5. Click on “Execute Recipes”

    Note: Wait until the deployment finishes

      1. Checking running containers

    aws_opsworks_docker_image08

Categories: DBA Blogs

Percona Live Data Performance Conference 2016 Retrospective

Pythian Group - Tue, 2016-04-26 07:39

 

Last week the annual Percona Live Data Performance Conference was held in Santa Clara, California. This conference is a great time to catch up with the industry, and be exposed to new tools and methods for managing MySQL and MongoDB.

Sessions

The highlights from this year’s sessions and tutorials centered around a few technologies:

  • The typical sessions for Galera Cluster and Performance Schema are always getting better, along with visualization techniques.
  • Oracle MySQL’s new Document Store blurs the line between RDBMS and NoSQL.
  • Facebook’s RocksDB is getting smaller and faster.
  • ProxySQL, the new proxy kid on the block, promises to address MySQL scalability issues.
  • If security is a concern, which it should be, Hashicorp’s Vault project would be something to look into for managing MySQL secrets or encrypting data in transit.
  • MongoDB was a hot topic as well, with a number of sessions addressing management of environments and design patterns.

I expect to see an influx of articles regarding ProxySQL and MySQL’s Document Store in the next few months.

Networking

The evenings were also great events for networking and socializing, giving attendees the chance to rub shoulders with some of the most successful ‘WebScale’ companies to hear stories from the trenches. Events included the Monday Community Networking Reception and Wednesday’s Game Night.

Thank you to all those who attended the Annual Community Dinner at Pedro’s organized by Pythian on Tuesday night! We had a blast and we hope you did as well.

Community Dinner At Pedro's

Thank you!

Pythian sponsored and provided a great range of sessions this year, and we want to thank all those who stopped by our booth or attended our sessions.

I’d like to give a huge shout-out to Percona for continuing to organize a high-quality MySQL user conference focused on solving some of the toughest technical issues that can be thrown at us, and an equal shout-out to the other sponsors and speakers that play a huge part in making this conference happen.

I am looking forward to what PerconaLive Europe will bring this fall, not to mention what we can expect next year when Percona Live Santa Clara rolls around again.

Categories: DBA Blogs

Improve Parsing and Query Performance – Fix Oracle’s Fixed Object Statistics

Pythian Group - Mon, 2016-04-25 19:50

What do I mean by ‘fix’ the fixed object statistics?  Simply gather statistics to help the optimizer.

What are ‘fixed objects’?  Fixed objects are the x$ tables and associated indexes that data dictionary views are based on.  In this case we are interested in the objects that make up the v$sqlstats and v$sql_bind_capture views.

If you’ve never before collected statistics on Oracle fixed objects, you may be wondering why you should bother, as everything appears to be fine in your databases.

After seeing an example you may want to schedule a time to collect these statistics.

Searching for SQL

Quite recently I was looking for recently executed SQL, based on the most recently captured bind variables.

select  sql_id, sql_fulltext
from v$sqlstats
where sql_id in (
   select  distinct sql_id
   from (
      select sql_id, last_captured
      from (
         select sql_id, last_captured
         from V$SQL_BIND_CAPTURE
         order by last_captured desc nulls last
      )
      where rownum <= 20
   )
)

I ran the query and was distracted for a few moments.  When I next looked at the terminal session where this SQL was executing, no rows had yet been returned.

Thinking that maybe ‘SET PAUSE ON’ had been run, I pressed ENTER.  Nothing.

From another session I checked for waits in v$session_wait.  Nothing there either.  If the session is not returning rows, and not registering an event in v$session_wait, then it must be on CPU.

This didn’t seem an ideal situation, and so I stopped the query with CTRL-C.

The next step was to run the query on a smaller and not very busy 11.2.0.2 database.  This time I saw that rows were being returned, but very slowly.

So now it was time to trace the execution and find out what was going on.

alter session set tracefile_identifier='JKSTILL';

set linesize 200 trimspool on

alter session set events '10046 trace name context forever, level 12';

select  sql_id, sql_fulltext
from v$sqlstats
where sql_id in (
   select  distinct sql_id
   from (
      select sql_id, last_captured
      from (
         select sql_id, last_captured
         from V$SQL_BIND_CAPTURE
         order by last_captured desc nulls last
      )
      where rownum <= 20
   )
)
/

alter session set events '10046 trace name context off';

exit

Coming back to this several minutes later, the resulting trace file was processed with the Method R Profiler to find out just where the time was going.

 

image

 

The ‘SQL*Net message from client’ event can be ignored; most of that time was accumulated waiting for me to come back and exit sqlplus.  While the script example shows that the 10046 trace was turned off and the session exited, I had forgotten to include those two lines for this first run.

No matter, as the interesting bit is the next line, ‘CPU: FETCH dbcalls’.  More than 6 minutes was spent fetching a few rows, so clearly something was not quite right. The SQL plan in the profile showed what the problem was, as the execution plan was far less than optimal. The following is the execution plan from AWR data:

 

  1  select *
  2  from TABLE(
  3     dbms_xplan.display_awr(sql_id => :sqlidvar, plan_hash_value => 898242479, format => 'ALL ALLSTATS LAST')
  4*    )
sys@oravm1 SQL- /

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------
SQL_ID 4h7qfxa9t1ukz
--------------------
select  sql_id, sql_fulltext from v$sqlstats where sql_id in (  select
distinct sql_id         from (          select sql_id, last_captured            from (
   select sql_id, last_captured from V$SQL_BIND_CAPTURE order by
last_captured desc nulls last           )               where rownum <= 20      ) )

Plan hash value: 898242479

-------------------------------------------------------------------------------------------
| Id  | Operation                 | Name         | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |              |        |       |     1 (100)|          |
|   1 |  FILTER                   |              |        |       |            |          |
|   2 |   FIXED TABLE FULL        | X$KKSSQLSTAT |      1 |  2023 |     0   (0)|          |
|   3 |   VIEW                    |              |      1 |     8 |     1 (100)| 00:00:01 |
|   4 |    COUNT STOPKEY          |              |        |       |            |          |
|   5 |     VIEW                  |              |      1 |     8 |     1 (100)| 00:00:01 |
|   6 |      SORT ORDER BY STOPKEY|              |      1 |    43 |     1 (100)| 00:00:01 |
|   7 |       FIXED TABLE FULL    | X$KQLFBC     |      1 |    43 |     0   (0)|          |
-------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$88122447
   2 - SEL$88122447 / X$KKSSQLSTAT@SEL$4
   3 - SEL$6        / from$_subquery$_002@SEL$5
   4 - SEL$6
   5 - SEL$FEF91251 / from$_subquery$_003@SEL$6
   6 - SEL$FEF91251
   7 - SEL$FEF91251 / X$KQLFBC@SEL$10

Note
-----
   - Warning: basic plan statistics not available. These are only collected when:
       * hint 'gather_plan_statistics' is used for the statement or
       * parameter 'statistics_level' is set to 'ALL', at session or system level


39 rows selected.

 

While useful, this plan is not giving much information about why this took so long.  If pressed I would just whip up a Bash and Awk one-liner  to parse the trace files and find out where this time was going.  In this case though I could just consult the Method R profile again.

 

image

 

Yikes!  There were 106.3E6 rows returned from X$KQLFBC.

Collecting the Fixed Object Statistics

Rather than spend time analyzing this further, it seemed that here was a clear case for collecting statistics on fixed objects in the database.  The following SQL was run:

 

exec dbms_stats.gather_fixed_objects_stats

 

The next step was to rerun the query.  This time it ran so quickly I wondered if it had even worked.  As before, tracing had been enabled, and a profile generated from the trace. There was now quite an improvement seen in the execution plan:

 

image

 

The 0.0047 seconds required to return 442 rows from X$KQLFBC was quite a reduction from the previously seen time of nearly 396 seconds.

Why This Is Important

This issue came to light due to a custom query I was running. The optimizer will probably never run that same query, but it was clear that the fixed object statistics needed to be updated.

Now imagine your customers using your application; they may be waiting on the database for what seems like an eternity after pressing ENTER on a web form.  And what are they waiting on?  They may be waiting on the optimizer to evaluate a SQL statement and determine the best plan to use.  The reason for the waiting in this case would simply be that the DBA has not taken steps to ensure the optimizer has the correct information to effectively query the database’s own metadata.   Until the optimizer has the correct statistics, performance of query optimization will be sub-optimal.  In a busy system this may result in mutex waits suddenly showing as a top event in AWR reports.  Troubleshooting these waits can be difficult as there are many possible causes of them.

Do your customers, your database and yourself a favor – include updates of fixed tables statistics in your regular database maintenance schedule, and avoid a possible source of performance problems.
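
As a starting point, one way to check whether (and when) fixed object statistics have ever been gathered is to query DBA_TAB_STATISTICS, which reports fixed objects with an OBJECT_TYPE of 'FIXED TABLE', and then gather them if needed:

-- have fixed object statistics ever been gathered, and when?
select count(*)             total_fixed_tables,
       count(last_analyzed) with_statistics,
       max(last_analyzed)   most_recently_analyzed
  from dba_tab_statistics
 where object_type = 'FIXED TABLE';

-- gather them, ideally while the system carries a representative workload
exec dbms_stats.gather_fixed_objects_stats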

Categories: DBA Blogs

When the default value is not the same as the default

Pythian Group - Mon, 2016-04-25 10:39

I was working on a minor problem recently where controlfile autobackups were written to the wrong location during rman backups. Taking controlfile autobackups is generally a good idea, even if you configure controlfile backups yourself. Autobackups also include an spfile backup which, though not critical for restore, is still convenient to have. And autobackups are taken not only after backups, but more importantly every time you change the physical structure of your database, like adding or removing datafiles and tablespaces, which would make a restore with an older controlfile a lot harder.

What happened in this case was that the CONTROLFILE AUTOBACKUP FORMAT parameter was changed from the default ‘%F’ to the value ‘%F’. Yes, the values are the same. But setting a value and not leaving it at the default changed the behaviour of those autobackups. Where by default ‘%F’ means writing to the flash recovery area, explicitly setting the format parameter to ‘%F’ will save the autobackup to the folder $ORACLE_HOME/dbs/.

See for yourself. This shows an autobackup while the parameter is set to the default and as expected, the autobackup is written to the flash recovery area. So that is the correct location but the filename is a bit off. It should be c-DBID-YYYYMMDD-SERIAL.

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

RMAN> backup spfile;

Starting backup at 18-APR-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-APR-16
channel ORA_DISK_1: finished piece 1 at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_04_18/o1_mf_nnsnf_TAG20160418T172428_ckb62f38_.bkp tag=TAG20160418T172428 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-APR-16

Starting Control File and SPFILE Autobackup at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/autobackup/2016_04_18/o1_mf_s_909509070_ckb62gko_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 18-APR-16

Now we set the format string to ‘%F’ explicitly and observe that the autobackup is not written to the FRA but to $ORACLE_HOME/dbs. At least it has the filename we were expecting.

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
new RMAN configuration parameters are successfully stored

RMAN> backup spfile;

Starting backup at 18-APR-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-APR-16
channel ORA_DISK_1: finished piece 1 at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_04_18/o1_mf_nnsnf_TAG20160418T172447_ckb62z7f_.bkpx tag=TAG20160418T172447 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-APR-16

Starting Control File and SPFILE Autobackup at 18-APR-16
piece handle=/u01/app/oracle/product/12.1.0.2/db_1/dbs/c-863887021-20160418-04 comment=NONE
Finished Control File and SPFILE Autobackup at 18-APR-16

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

This is like Schrödinger’s parameter, where you can either get the correct location or the correct name, but not both. To be fair, not assigning the right name to the autobackup in the FRA does not matter much because the files will be found during a restore anyway.

At this point it is good to remember how to use CLEAR to reset a parameter to its default instead of just setting the default value.

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;

old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
RMAN configuration parameters are successfully reset to default value

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

I have tested this in versions 10g, 11g and 12.1.0.2 with the same result. The behaviour is also not unknown. In fact, bug 4248670 was logged against this in 2005 but has not been resolved so far. My Oracle Support does mention the above workaround of clearing the parameter in note 1305517.1 though.

Categories: DBA Blogs

MySQL Query Best Practices

Pythian Group - Mon, 2016-04-25 10:30

You can get many returns from a Google search for “MySQL Query Best Practices” or “MySQL Query Optimization.” The drawback is that too many rules can provide confusing or even conflicting advice. After doing some research and tests, I outlined the essential and important ones below:

1) Use proper data types

1.1) Use the smallest data types if possible

MySQL tries to load as much data as possible into memory (innodb-buffer-pool, key-buffer), so a smaller data type means more rows of data in memory, thus improving performance. Smaller data sizes also reduce disk I/O.

1.2) Use Fixed-length Data Types if Possible

MySQL can calculate quickly the position of a fixed-length column in a specific row of a table.

With variable-length data types, the row size is not fixed, so every time it needs to do a seek, MySQL might consult the primary key index. However, variable-length data types can save data size and the disk space required.

In practice, if the column data size varies a lot, use a variable-length data type (e.g., varchar); if the data length is short or barely changes, use a fixed-length data type.
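
For example (the table and column names here are purely illustrative), a two-letter country code is a natural fit for fixed-length CHAR, while a free-form comment whose length varies widely fits VARCHAR better:

CREATE TABLE customer (
  id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  country_code CHAR(2)      NOT NULL,              -- short, length never changes
  comment      VARCHAR(500) NOT NULL DEFAULT ''    -- length varies a lot
) ENGINE=InnoDB;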

1.3) Use not null unless there is reason not to

It is harder for MySQL to optimize queries that refer to nullable columns, because they make indexes, index statistics, and value comparisons more complicated. A nullable column uses more storage space and requires special processing inside MySQL.

When a nullable column is indexed, it requires an extra byte per entry and can even cause a fixed-size index (e.g., an index on a single integer column) to be converted to a variable-sized one in MyISAM.

2)Use indexes smartly

2.1) Make primary keys short and on meaningful fields

A shorter primary key will benefit your queries, because the smaller your primary key, the smaller the index, and the fewer pages in the cache. In addition, a numeric type is preferred because numeric types are stored in a much more compact format than character formats, which also keeps the primary key short.

Another reason to keep the primary key short is that we usually use the primary key to join with other tables.

It is a good idea to use a primary key on a meaningful field, because MySQL (with InnoDB) uses a clustered index for the primary key. We usually just need the info from the primary key, especially when joining with other tables: the join can be resolved from the index alone, without reading the data file on disk, which benefits performance. When you use a meaningful field as the primary key, make sure the uniqueness of the field won’t change, because changing the primary key later would affect all the tables that use it as a foreign key.

2.2) Index on the search fields only when needed

Usually we add indexes on the fields that frequently show up in a where clause — that is the purpose of indexing. But while an index will benefit reads, it can make writes slower (inserting/updating), so index only when you need it and index smartly.

2.3) Index and use the same data types for join fields

MySQL can do joins on different data types, but the performance is poor as it has to convert from one type to the other for each row. Use the same data type for join fields when possible.
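
As an illustrative sketch, reusing the hypothetical customer table from the example above: declare the join columns with the same type and index the referencing side:

CREATE TABLE orders (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id INT UNSIGNED NOT NULL,          -- same type as customer.id
  order_date  DATE NOT NULL,
  KEY idx_orders_customer (customer_id)
) ENGINE=InnoDB;

-- join columns of identical type, indexed on both sides
SELECT o.id, c.country_code
FROM   orders o
JOIN   customer c ON c.id = o.customer_id;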

2.4) Use a composite index if your query has more than one field in the where clause

When the query needs to search on multiple columns of a table, it might be a good idea to create a compound index for those columns. This is because with composite index on multiple columns, the search will be able to narrow down the result set by the first column, then the second, and so on.

Please note that the order of the columns in the composite index affects the performance, so put the columns in the order of the efficiency of narrowing down the search.
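
Continuing the hypothetical orders table, if queries usually filter on customer_id (the more selective condition) and then on order_date, customer_id goes first:

-- note: this makes the single-column idx_orders_customer index largely redundant (see 2.7)
ALTER TABLE orders ADD INDEX idx_cust_date (customer_id, order_date);

SELECT id
FROM   orders
WHERE  customer_id = 42
AND    order_date >= '2016-01-01';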

2.5) Covering index for most commonly used fields in results

In some cases, we can put all the required fields into an index (i.e., a covering index), with only some of the fields used for searching and the others present just to supply data. This way, MySQL only needs to access the index and there is no need to visit the table at all.
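
For instance, the hypothetical idx_cust_date index above already covers a query that reads only customer_id and order_date (USE INDEX here just pins the choice for the illustration):

SELECT order_date
FROM   orders USE INDEX (idx_cust_date)
WHERE  customer_id = 42;
-- EXPLAIN shows "Using index": the result comes from the index alone,
-- with no lookup of the table rows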

2.6) Partial index for long strings or TEXT, BLOB data types by index on prefix

There is a size limitation for index keys (by default, 1000 bytes for MyISAM, 767 bytes for InnoDB). If the prefix of the string already covers most of the unique values, it is good to index just that prefix.
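
A sketch with a made-up pages table holding long URLs: index only the first 20 characters, after checking that such a prefix is still selective enough:

-- hypothetical table with a long string column
CREATE TABLE pages (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  url VARCHAR(2000) NOT NULL,
  INDEX idx_url_prefix (url(20))    -- index only the first 20 characters
) ENGINE=InnoDB;

-- compare prefix selectivity with full-column selectivity before
-- settling on a prefix length
SELECT COUNT(DISTINCT LEFT(url, 20)) / COUNT(*) AS prefix_selectivity,
       COUNT(DISTINCT url) / COUNT(*)           AS full_selectivity
FROM   pages;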

2.7) Avoid over-indexing

Don’t index low-cardinality columns; MySQL will choose a full table scan instead of using the index if it estimates it would have to read more than about 30% of the rows through the index.

If a field is already the first field of a composite index, you may not need an extra index on that single field. If it exists in a composite index but not as the leftmost field, you will usually need a separate index for that field only if queries filter on it alone.

Bear in mind that indexing will benefit in reading data but there can be a cost for writing (inserting/updating), so index only when you need it and index smartly.

3) Others
3.1) Avoid SELECT *
There are many reasons to avoid select * from… queries. First, it can waste time to read all the fields if you don’t need all the columns. Second, even if you do need all columns, it is better to list all the field names, to make the query more readable. Finally, if you alter the table by adding/removing some fields, and your application uses select * queries, you can get unexpected results.

3.2) Prepared Statements
Prepared statements separate the values you bind from the SQL text, which is great for protecting your application against SQL injection attacks.

When the same query is being used multiple times in your application, you can assign different values to the same prepared statement, yet MySQL will only have to parse it once.
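
A minimal sketch using MySQL's server-side PREPARE syntax (in application code you would normally use your driver's placeholder API, which prepares statements for you); the statement and values are made up:

PREPARE find_orders FROM
  'SELECT id, order_date FROM orders WHERE customer_id = ?';

SET @cust = 42;
EXECUTE find_orders USING @cust;    -- parsed once, executed with a bound value

SET @cust = 43;
EXECUTE find_orders USING @cust;    -- reused with a different value

DEALLOCATE PREPARE find_orders;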

3.3) If you want to check the existence of data, use EXISTS instead of SELECT COUNT

To check whether data exists in a table, select exists (select * …) from the table will perform better than select count(*), since the first method returns a result as soon as it finds one matching row, while the second has to count all matching rows in the table/index.
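
A sketch of the difference, again against the hypothetical orders table:

-- stops as soon as one matching row is found
SELECT EXISTS (SELECT 1 FROM orders WHERE customer_id = 42) AS has_orders;

-- counts every matching row or index entry, even though we only
-- wanted to know whether any exist
SELECT COUNT(*) FROM orders WHERE customer_id = 42;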

3.4) Use select limit [number]

Select… limit [number] returns only the required number of rows. Including the limit keyword in your SQL queries can bring noticeable performance improvements.

3.5) Be careful with persistent connections

Persistent connections can reduce the overhead of re-creating connections to MySQL. When a persistent connection is created, it will stay open even after the script finishes running. The drawback is that it might run out of connections if there are too many connections remaining open but in sleep status.

3.6) Review your data and queries regularly

MySQL will choose the query plan based on the statistics of the data in the tables. When the data size changes, the query plan might change, and so it is important to check your queries regularly and to make optimizations accordingly. Check regularly by:

3.6.1) EXPLAIN your queries

3.6.2) Get suggestions with PROCEDURE ANALYSE()

3.6.3) Review slow queries
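
For example, for 3.6.1 and 3.6.3, against the hypothetical orders table used earlier:

-- 3.6.1: check the plan the optimizer currently chooses
EXPLAIN SELECT id FROM orders
WHERE  customer_id = 42 AND order_date >= '2016-01-01';

-- 3.6.3: confirm that slow queries are being captured at all
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';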

Categories: DBA Blogs

Partition Storage -- 5 : Partitioned Table versus Non-Partitioned Table ? (in 12.1)

Hemant K Chitale - Mon, 2016-04-25 09:13
Reviewing my second blog post in this series, I found it strange that Partition P_100 (populated by Serial Inserts of 1 row, 100,000 rows, 500,000 rows and 500,000 rows) had such a large High Water Mark.

For 1.1million rows of an Average Row Length of 11, the High Water Mark was 3,022 blocks.

In the fourth blog post, a simple ALTER TABLE MOVE PARTITION had brought the High Water Mark to 2,482 blocks !

This needs further investigation.

Let's compare a single Partition of a Partitioned Table with a Non-Partitioned Table for exactly the same data and same pattern of INSERT statements.

Starting with a new Partitioned Table.

SQL> l
1 create table new_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7* partition p_max values less than (maxvalue))
SQL> /

Table created.

SQL>
SQL> insert into new_part_tbl values (51,'Fifty One');

1 row created.

SQL>
SQL> insert into new_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into new_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 3022

SQL>
SQL> REM Let's MOVE the Partition
SQL> alter table new_part_tbl move partition P_100;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2484

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NEW_PART_TBL'
4 and segment_type = 'TABLE PARTITION'
5 and partition_name = 'P_100'
6 and owner = 'HEMANT'
7* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


As expected (see the first blog post), the Extents are still 8MB each.  But the High Water Mark has "magically" shrunk from 3,022 blocks to 2,484 blocks.

Let's create a Non-Partitioned Table with the same columns and rows.

SQL> create table non_part_tbl (id_column number(6), data_column varchar2(100));

Table created.

SQL> insert into non_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into non_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into non_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NON_PART_TBL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> REM Let's MOVE the Table
SQL> alter table non_part_tbl move;

Table altered.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NON_PART_TBL'
4 and segment_type = 'TABLE'
5 and owner = 'HEMANT'
6* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 8
1 8
2 8
3 8
4 8
5 8
6 8
7 8
8 8
9 8
10 8
11 8
12 8
13 8
14 8
15 8
16 128
17 128
18 128
19 128
20 128
21 128
22 128
23 128
24 128
25 128
26 128
27 128
28 128
29 128
30 128
31 128
32 128
33 128
34 128

35 rows selected.

SQL>


The Non-Partitioned Table had a High Water Mark of 2,512 blocks.  This did not change with a MOVE.  The allocation of Extents is also as expected with AutoAllocate.

Why, then, does the Partition behave differently?  It started with a High Water Mark of 3,022 blocks which shrank to 2,484 blocks after a MOVE.

Is the Average Row Length or the actual data a factor ?  (Note : I am *not* using Table Compression).

To be explored further with a larger row size ...........

Possibly, to be explored with a different pattern of INSERT statements  ......

Possibly to be compared in 11.2 as well. ......
.
.
.

Categories: DBA Blogs

Partition Storage -- 4 : Resizing Partitions

Hemant K Chitale - Sun, 2016-04-24 09:38
Building on Posts 2 (Adding Rows) and 3 (Adding Partitions) where we saw Table Partitions using 8MB Extents ..... is there a way to "resize" Partitions to smaller Extents (and, maybe, lesser space consumed) without using Compression ?

Let's explore.

Beginning with Partitions P_100 and P_200 ....

SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 24576 3
MY_PART_TBL P_200 32768 4
MY_PART_TBL_NDX P_100 28672 43
MY_PART_TBL_NDX P_200 33792 48

SQL>
SQL> alter table my_part_tbl move partition p_100 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_100 storage (initial 64K next 64K)
2 /

Index altered.

SQL> alter table my_part_tbl move partition p_200 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_200 storage (initial 64K next 64K)
2 /

Index altered.

SQL>
SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34

SQL>
SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 3022 1100001
P_200 3668 1100001

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 2482 1100001
P_200 2639 1100001

SQL>
SQL>
SQL> l
1 select partition_name, blocks, count(*)
2 from dba_extents
3 where owner = 'HEMANT'
4 and segment_name = 'MY_PART_TBL'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_100','P_200')
7 group by partition_name, blocks
8* order by 1,2
SQL> /

PARTITION_NA BLOCKS COUNT(*)
------------ ---------- ----------
P_100 8 16
P_100 128 19
P_200 8 16
P_200 128 20

SQL>


Partition P_100 has shrunk from 3 extents of 8MB adding up to 24,576KB to 35 extents adding up to 20,480KB. The High Water Mark has shrunk from 3,022 blocks to 2,482 blocks (remember: P_100 was populated with a Serial Insert).  Partition P_200, which had been populated with a Parallel (DoP=4) insert, has also shrunk from 32,768KB to 21,504KB and its High Water Mark from 3,668 blocks to 2,639 blocks.  The Extents are a combination of 64KB (the first 16, adding up to 1MB) and 1MB sizes.
Even the Index Partitions seem to have shrunk.

So, a MOVE/REBUILD (the REBUILD of the Index Partitions was required because I did a Partition MOVE without UPDATE INDEXES) could be used to shrink the Partitions, with newer, smaller Extents allocated.

But what about the case of SPLIT PARTITION, where Partitions SPLIT from an 8MB Partition resulted in two 8MB Partitions, even for empty Partitions?

Here's a workaround.  Before SPLITting the P_MAX Partition, I resize it.

SQL> alter table my_part_tbl move partition p_max storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);

Index altered.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (1001)
4 into (partition p_1000, partition p_max)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_1000
3 at (901)
4 into (partition p_900, partition p_1000)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_900
3 at (801)
4 into (partition p_800, partition p_900)
5 /

Table altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL>
SQL> /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL P_300 8192 1
MY_PART_TBL P_400 8192 1
MY_PART_TBL P_600 8192 1
MY_PART_TBL P_680 8192 1
MY_PART_TBL P_700 8192 1
MY_PART_TBL P_800 64 1
MY_PART_TBL P_900 64 1
MY_PART_TBL P_1000 64 1
MY_PART_TBL P_MAX 64 1
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34
MY_PART_TBL_NDX P_300 64 1
MY_PART_TBL_NDX P_400 64 1
MY_PART_TBL_NDX P_600 64 1
MY_PART_TBL_NDX P_680 64 1
MY_PART_TBL_NDX P_700 64 1
MY_PART_TBL_NDX P_800 64 1
MY_PART_TBL_NDX P_900 64 1
MY_PART_TBL_NDX P_1000 64 1
MY_PART_TBL_NDX P_MAX 64 1

22 rows selected.

SQL>


(Note : I have manually relocated Partition P_1000 in the listing).
Partitions P_600, P_680 and P_700 had been created by SPLIT PARTITION commands in the previous post, beginning with the segment-created P_MAX Partition.  However, after rebuilding P_MAX with 64KB Extents, the subsequently SPLIT Partitions (P_800 to P_1000) are also 64KB.

Note : I am not advising that all Partitions have to be 64KB.  (Observe how AutoAllocate allocated 1MB Extents to P_100 and P_200 after the first 1MB of space usage, which used 16 64KB Extents.)
.
.
.


Categories: DBA Blogs

Partition Storage -- 3 : Adding new Range Partitions with SPLIT

Hemant K Chitale - Sat, 2016-04-23 09:04
Building on the Partitioned Table in the previous two blog posts...

We know that the Table is a Range Partitioned Table.  With a MAXVALUE Partition, the only way to add new Partitions is to use the SPLIT PARTITION command.

First, let's review the Table, Partitions and Segments.

SQL> select table_name, num_rows
2 from user_tables
3 where table_name = 'MY_PART_TBL'
4 /

TABLE_NAME NUM_ROWS
---------------- ----------
MY_PART_TBL 2200004

SQL> select partition_name, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA NUM_ROWS BLOCKS
------------ ---------- ----------
P_100 1100001 3022
P_200 1100001 3668
P_300 1 1006
P_400 1 1006
P_MAX 0 0

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the table has 5 partitions, P_100 to P_MAX, but only 4 have segments, created once one or more rows were populated.  P_MAX has no segment created for either the Table Partition or the Index Partition.

What happens if we SPLIT P_MAX (an empty, segmentless Partition) to create a new Partition ?

SQL> alter table my_part_tbl
2 split partition p_max
3 at (501)
4 into (partition p_500, partition p_max)
5 /

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, high_value, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE NUM_ROWS BLOCKS
------------ ---------------- ---------- ----------
P_100 101 1100001 3022
P_200 201 1100001 3668
P_300 301 1 1006
P_400 401 1 1006
P_500 501 0 0
P_MAX MAXVALUE 0 0

6 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the process of creating Partition P_500 did not create a segment for it, because P_MAX, which it was SPLIT from, was segmentless.  What happens if I split a Partition with 1 or more rows?

SQL> insert into my_part_tbl
2 select 550, 'Five Hundred Fifty'
3 from dual
4 /

1 row created.

SQL> commit;
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

10 rows selected.

SQL>
SQL> alter table my_part_tbl
2 split partition p_max
3 at (601)
4 into (partition p_600, partition p_max)
5 /

Table altered.

SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

12 rows selected.

SQL>


So, the row for ID_COLUMN=550 created the segment for Partition P_MAX. Subsequently, SPLITting this Partition into P_600 and P_MAX resulted in two Partitions of 8MB each.
The row for ID_COLUMN=550 would be in the P_600 Partition, and the P_MAX Partition would now be the empty one.  Yet, even P_MAX now takes an 8MB Extent, unlike earlier.

Let's try a SPLIT such that, say, P_700 is created empty but P_MAX inherits the row.

SQL> insert into my_part_tbl
2 select 900, 'Nine Hundred'
3 from dual
4 /

1 row created.

SQL> commit;

Commit complete.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (701)
4 into (partition p_700, partition p_max)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

14 rows selected.

SQL> select count(*) from my_part_tbl partition (P_700);

COUNT(*)
----------
0

SQL>


Again, both Partitions (P_700 and P_MAX) have a segment of 8MB.
This means that, once a Segment for a Partition is created, any SPLIT of that Partition results in two Segments inheriting the same 8MB Extent Size, even if one of the two is empty.

SQL> alter table my_part_tbl
2 split partition p_700
3 at (681)
4 into (partition p_680, partition p_700)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


That is confirmation that SPLITting a Partition that has a segment (even if it is empty) results in two segmented Partitions, even if both are empty.

Going back to Partition P_500 (which is present but segmentless), what happens if we split it?

SQL> alter table my_part_tbl
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE
------------ ----------------
P_100 101
P_200 201
P_300 301
P_400 401
P_450 451
P_500 501
P_600 601
P_680 681
P_700 701
P_MAX MAXVALUE

10 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


Splitting the segmentless Partition P_500 into P_450 and P_500 did *not* result in new Segments.

This has implications for your SPLIT Partition strategy.  If you need to do a recursive split to create, say, 90 1-day Partitions and you start with a Partition that has a segment (even if empty), you get 90 new segments as well.  Thus, the table would suddenly "grow" by 720MB on the day you create these 90 Partitions, without a single row having been inserted.  You may get some questions from IT Operations / Support about the sudden "growth" in 1 day.
On the other hand, starting with a segmentless Partition, you get 90 new segmentless Partitions.  Their segments will be created only when they are populated.
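
A quick way to see which Partitions are still segmentless is the SEGMENT_CREATED column of USER_TAB_PARTITIONS (available in 11.2 and later, alongside Deferred Segment Creation).  A minimal sketch against the same MY_PART_TBL table :

-- list each Partition and whether its segment has been created yet
select partition_name, segment_created
  from user_tab_partitions
 where table_name = 'MY_PART_TBL'
 order by partition_position
/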
.
.

.
Categories: DBA Blogs

Proud to Work at Pythian, One of Canada’s Top 25 ICT Professional Services Companies

Pythian Group - Fri, 2016-04-22 12:18

It’s only four months into 2016, and there’s a lot to be excited about. In addition to moving Pythian’s global headquarters in Ottawa, Canada to the hip and happening neighbourhood of Westboro, we’ve been receiving accolades for being one of Canada’s top ICT professional services companies, and a great place to work. Following are three reasons to be proud to work at Pythian.

In April, Pythian was recognized as one of the Top 25 Canadian ICT Professional Services Companies on the prestigious Branham300 list. We also appeared on the Top 250 Canadian ICT Companies list for the second year in a row.

The Branham300 is the definitive listing of Canada’s top public and private ICT companies, as ranked by revenues. Not too many people can say that they work at a company that is one of the Top 25 ICT Professional Services Companies in Canada.

In February, our CEO Paul Vallée was named “Diversity Champion of the Year” by Women in Communications and Technology (WCT). In 2015 Pythian launched the Pythia Project, a corporate initiative designed to increase the percentage of talented women who work and thrive at Pythian, especially in tech roles. A new metric called the “Pythia Index” was also introduced. It measures the proportion of people in a business, or in a team, who are women leaders or report to a woman leader. Pythian was also the first Canadian tech company to release its gender stats, and invite other Canadian tech companies to join in the battle against “bro culture”. Stay tuned for more news on the Pythia program in the coming months.

And last, but not least, in March, Pythian was selected as one of Canada's Top Small & Medium Employers for 2016. This award recognizes small and medium employers with exceptional workplaces and forward-thinking human resource policies. Everyone who works at Pythian is aware of the amazing benefits, but there is a hard-working team that really goes the extra mile to make the company a great place to work. Thank you.

Clearly 2016 is off to a fantastic start! I’m looking forward to more good news to share.

Categories: DBA Blogs

How to set up Flashback for MongoDB Testing

Pythian Group - Fri, 2016-04-22 11:52

 

After you've upgraded your database to a new version, it's common for performance to degrade in some cases. To prevent this from happening, we can capture the production database operations and replay them in a testing environment that has the new version installed.

Flashback is a MongoDB benchmark framework that allows developers to gauge database performance by benchmarking queries. Flashback records the real traffic to the database and replays operations with different strategies. The framework consists of a set of scripts that fall into 2 categories:

  1. Records the operations (ops) that occur during a stretch of time
  2. Replays the recorded ops
Installation

The framework was tested on Ubuntu 10.04.4 LTS

Prerequisites

- go 1.4
- git 2.3.7
- python 2.6.5
- pymongo 2.7.1
- libpcap0.8 and libpcap0.8-dev

 

  1. Download Parse/Flashback source code

# go get github.com/ParsePlatform/flashback/cmd/flashback

  2. Manually modify the following file to work around a mongodb-tools compatibility issue

In pass_util.go file:

func GetPass() string {
-    return string(gopass.GetPasswd())
+    if data, errData := gopass.GetPasswd(); errData != nil {
+        return ""
+    } else {
+        return string(data)
+    }
}

 

  3. Compile the Go part of the tool

# go build -i ./src/github.com/ParsePlatform/flashback/cmd/flashback

 

Configuration

Suppose you have two shards, Shard a and Shard b, each with 3 nodes. In shard a, the primary is a1; in shard b, the primary is b2.

1. Copy the sample config file for editing

# cp ./src/github.com/ParsePlatform/flashback/record/config.py.example  config.py

2. Change the config for testing

DB_CONFIG = {
    # Indicates which database(s) to record.
    "target_databases": ["test"],

    # Indicates which collections to record. If the user wants to capture all the
    # collections' activities, leave this field as `None` (but we'll always
    # skip collection `system.profile`, even if it has been explicitly
    # specified).
    "target_collections": ["testrecord"],

    "oplog_servers": [
        { "mongodb_uri": "mongodb://mongodb.a2.com:27018" },
        { "mongodb_uri": "mongodb://mongodb.b1.com:27018" }
    ],

    # In most cases you will record from the profile DB on the primary.
    # If you are also sending queries to secondaries, you may want to specify
    # a list of secondary servers in addition to the primary.
    "profiler_servers": [
        { "mongodb_uri": "mongodb://mongodb.a1.com:27018" },
        { "mongodb_uri": "mongodb://mongodb.b2.com:27018" }
    ],

    "oplog_output_file": "./testrecord_oplog_output",
    "output_file": "./testrecord_output",

    # If overwrite_output_file is True, the same output file will be
    # overwritten in between consecutive calls of the recorder. If
    # it's False, the recorder will append a unique number to the end of the
    # output_file if the original one already exists.
    "overwrite_output_file": True,

    # The length of the recording, in seconds.
    "duration_secs": 3600
}

APP_CONFIG = {
    "logging_level": logging.DEBUG
}

 

duration_secs indicates the length of the recording. For a production capture, it should be set to at least 10-12 hours.

Make sure the user running the scripts has write permission to the output directory.

Recording
  1. Set the profiling level to 2 on all primary servers

db.setProfilingLevel(2)

2. Start operations recording

./src/github.com/ParsePlatform/flashback/record/record.py

3. The script starts multiple threads to pull the profiling results and oplog entries for the collections and databases that we are interested in. Each thread works independently. After fetching the entries, it merges the results from all sources into one output file, giving a full picture of all operations.

4. You can run record.py from any server, as long as that server has Flashback installed and can connect to all mongod servers.

5. As a side note, running mongod in replica set mode is necessary (even when there is only one node) in order to generate and access the oplogs.

 

Replay
  1. Run flashback. Style can be "real" or "stress".

        Real: replays ops in accordance with their original timestamps, which allows us to imitate regular traffic.

        Stress: preloads the ops into memory and replays them as fast as possible. This potentially limits the number of ops played back per session to the available memory on the replay host.

For sharded collections, point the tool to a mongos. You could also point to a single shard primary for non-sharded collections.

./flashback -ops_filename="./testrecord_output" -style="real" -url="localhost:27018" -workers=10

Observations
  • Several pymongo (Python's MongoDB driver) arguments in the code are deprecated, causing installation and running errors.
  • Need to define a faster restore method (i.e. LVM snapshots) to roll back the test environment after each replay.
  • Need to capture execution times for each query included in the test set, to be able to detect execution plan changes.
  • In a sharded cluster, record can be executed from a single server with access to all primaries and/or secondaries.
  • Pulling oplogs from secondaries is recommended if we are looking to reduce load on the primaries.
  • The available memory dramatically affects the operations merge process after recording.
  • The available memory also affects replay times (see Tests summary).
Tests summary

 

Record test scenario 1

 

Record server: mongos server (8G RAM)

Time: about 2 hours to finish the recording

Details: Ran record while inserting and updating 1000 documents

 

Record test scenario 2

 

Record server: shard a primary node a1 (80G RAM)

Time: about 2 minutes to finish the recording

Details: Ran record while inserting and updating 1000 documents

Record test scenario 3

 

Record server: shard a primary node a1 (80G RAM)

Time: it took about 20 minutes to finish the recording

Details: Ran record while inserting and updating 100,000 documents

Replay test scenario 1

Replay server: mongos server (8G RAM)

Time: it took about 1 hour to finish the replay

Details: replayed 1000 operations in “real” style

 

Replay test scenario 2

Replay server: shard a primary node a1 (80G RAM)

Time: about 5 minutes to finish the replay

Details: replayed 1000 operations in “real” style

Replay test scenario 3

Replay server: mongos server (8G RAM)

Time: failed due to insufficient memory

Details: replayed 1000 operations in “stress” style

 

Replay test scenario 4

Replay server: shard a primary node a1 (80G RAM)

Time: about 1 minute to finish the replay

Details: replayed 1000 operations in “stress” style

 

Replay test scenario 5

Replay server: shard a primary node a1 (80G RAM)

Time: about 20 minutes to finish the replay

Details: replayed 50,000 operations in “stress” style

Categories: DBA Blogs

Introducing Oracle WebLogic Server 12.2.1 Multitenancy: A Q&A Game

Following our Partner Webcast «Oracle WebLogic Server 12.2.1 Multitenancy and Continuous Availability», delivered earlier this month on the 21st of April 2016, where we focused on the two new main...

We share our skills to maximize your revenue!
Categories: DBA Blogs


Data Encryption at Rest in Oracle MySQL 5.7

Pythian Group - Wed, 2016-04-20 12:28

 

I’ve previously evaluated MariaDB’s 10.1 implementation of data encryption at rest (https://www.pythian.com/blog/data-encryption-rest), and recently did the same for Oracle’s implementation (https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html) in their MySQL 5.7.

 

First, here’s a walkthrough of enabling encryption for MySQL 5.7:

1. Install keyring plugin.

1a. Add the following to the [mysqld] section of /etc/my.cnf:

...
early-plugin-load=keyring_file.so

1b. Restart the server:

...
service mysqld restart

1c. Verify:

...
mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'keyring%';
+--------------+---------------+
| PLUGIN_NAME  | PLUGIN_STATUS |
+--------------+---------------+
| keyring_file | ACTIVE        |
+--------------+---------------+

2. Ensure innodb_file_per_table is on.

2a. Check.

...
mysql> show global variables like 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+

2b. If OFF, add the following to the [mysqld] section of /etc/my.cnf, restart, and alter each existing table to move it to its own tablespace:

innodb_file_per_table=ON

Get the list of available InnoDB tables:

mysql> select table_schema, table_name, engine from information_schema.tables where engine='innodb' and table_schema not in ('information_schema');

Run ALTER … ENGINE=INNODB on each of the above InnoDB tables:

mysql> ALTER TABLE [TABLE_SCHEMA].[TABLE_NAME] ENGINE=INNODB;

 

Next, I walked through some testing.

1. Create some data.

...
[root@localhost ~]# mysqlslap --concurrency=50 --number-int-cols=2 --number-char-cols=3 --auto-generate-sql --auto-generate-sql-write-number=10000 --no-drop

2. Observe that the mysqlslap.t1 table is not automatically encrypted. Unlike MariaDB's implementation, there is no option to encrypt tables by default (although a table can be created encrypted from the start; see the sketch after step 2b).

2a. Via the mysql client:

...
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION="Y"%';
Empty set (0.05 sec)

2b. Via the command line:

(Install xxd if required.)

...
[root@localhost ~]# yum install vim-common
...
[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
...
0010dc0: 5967 4b30 7530 7942 4266 664e 6666 3143  YgK0u0yBBffNff1C
0010dd0: 5175 6470 3332 536e 7647 5761 3654 6365  Qudp32SnvGWa6Tce
0010de0: 3977 6576 7053 3730 3765 4665 4838 7162  9wevpS707eFeH8qb
0010df0: 3253 5078 4d6c 6439 3137 6a7a 634a 5465  2SPxMld917jzcJTe
...
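
While there is no server-wide default in 5.7, a table can also be declared encrypted at creation time rather than being altered afterwards. A minimal sketch (t2 is a hypothetical table name, not part of the mysqlslap data generated above):

-- hypothetical table, created encrypted from the start (requires the keyring plugin to be loaded)
CREATE TABLE mysqlslap.t2 (
  intcol1  INT(32),
  charcol1 VARCHAR(128)
) ENGINE=InnoDB ENCRYPTION='Y';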

3. Insert some identifiable data into the table:

...
mysql> insert into mysqlslap.t1 values (1,2,"private","sensitive","data");
Query OK, 1 row affected (0.01 sec)

mysql> select * from mysqlslap.t1 where charcol2="sensitive";
+---------+---------+----------+-----------+----------+
| intcol1 | intcol2 | charcol1 | charcol2  | charcol3 |
+---------+---------+----------+-----------+----------+
|       1 |       2 | private  | sensitive | data     |
+---------+---------+----------+-----------+----------+
1 row in set (0.02 sec)

4. Observe this data via the command line:

...
[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
...
04fa290: 0002 7072 6976 6174 6573 656e 7369 7469  ..privatesensiti
...

5. Encrypt the mysqlslap.t1 table:

...
mysql> alter table mysqlslap.t1 encryption='Y';
Query OK, 10300 rows affected (0.31 sec)
Records: 10300  Duplicates: 0  Warnings: 0

6. Observe the mysqlslap.t1 table is now encrypted:

6a. Via the mysql client:

...
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION="Y"%';
+--------------+------------+----------------+
| TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |
+--------------+------------+----------------+
| mysqlslap    | t1         | ENCRYPTION="Y" |
+--------------+------------+----------------+

6b. Via the command line:

...
[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep "private"
[root@localhost ~]#

6c. Observe a snippet of the file:

...
[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
...
0004160: 56e4 2930 bbea 167f 7c82 93b4 2fcf 8cc1  V.)0....|.../...
0004170: f443 9d6f 2e1e 9ac2 170a 3b7c 8f38 60bf  .C.o......;|.8`.
0004180: 3c75 2a42 0cc9 a79b 4309 cd83 da74 1b06  <u*B....C....t..
0004190: 3a32 e104 43c5 8dfd f913 0f69 bda6 5e76  :2..C......i..^v
...

7. Observe redo log is not encrypted:

...
[root@localhost ~]# xxd /var/lib/mysql/ib_logfile0 | less
...
23c6930: 0000 0144 0110 8000 0001 8000 0002 7072  ...D..........pr
23c6940: 6976 6174 6573 656e 7369 7469 7665 6461  ivatesensitiveda
23c6950: 7461 3723 0000 132e 2f6d 7973 716c 736c  ta7#..../mysqlsl
...

This is expected because the documentation (https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html) reports encryption of files outside the tablespace is not supported: “Tablespace encryption only applies to data in the tablespace. Data is not encrypted in the redo log, undo log, or binary log.”

Conclusions

I found in my testing of MariaDB's implementation of data encryption at rest that there were still places on the file system where a bad actor could view sensitive data. I've found the same in this test of Oracle's implementation. Both leave data exposed in the log files surrounding the tablespace files.

Bonus

As a bonus to this walkthrough, during this testing, the table definition caught my eye:

...
mysql> show create table mysqlslap.t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `intcol1` int(32) DEFAULT NULL,
  `intcol2` int(32) DEFAULT NULL,
  `charcol1` varchar(128) DEFAULT NULL,
  `charcol2` varchar(128) DEFAULT NULL,
  `charcol3` varchar(128) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ENCRYPTION='Y'
1 row in set (0.00 sec)

As discussed in https://jira.mariadb.org/browse/MDEV-9571, the MariaDB implementation does not include the “encrypted=yes” information in the table definition when tables are implicitly encrypted.

I was curious what would happen if I did a mysqldump of this encrypted table and attempted to restore it to a non-encrypted server. DBAs expect mysqldump to create a portable file that recreates the table definition and data on a different version of MySQL. During upgrades, for example, you might expect to use this for rollback.

Here is my test. I first did the dump and looked inside the file.

...
[root@localhost ~]# mysqldump mysqlslap t1 > mysqlslap_t1_dump
[root@localhost ~]# less mysqlslap_t1_dump
...
CREATE TABLE `t1` (
  `intcol1` int(32) DEFAULT NULL,
  `intcol2` int(32) DEFAULT NULL,
  `charcol1` varchar(128) DEFAULT NULL,
  `charcol2` varchar(128) DEFAULT NULL,
  `charcol3` varchar(128) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ENCRYPTION='Y';

INSERT INTO `t1` VALUES (
...
,(1,2,'private','sensitive','data');

As expected, that definition makes the dump less portable. The restore from the dump does not complete and throws an error (this is not remedied by using --force):

On a slightly older 5.7 version:

...
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.8-rc  |
+-----------+

[root@centosmysql57 ~]# mysql mysqlslap < mysqlslap_t1_dump
ERROR 1064 (42000) at line 25: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ENCRYPTION='Y'' at line 7

On a different fork:

...
MariaDB [(none)]> select version();
+-----------------+
| version()       |
+-----------------+
| 10.1.12-MariaDB |
+-----------------+
1 row in set (0.00 sec)

[root@maria101 ~]# mysql mysqlslap < mysqlslap_t1_dump
ERROR 1911 (HY000) at line 25: Unknown option 'ENCRYPTION'

This doesn't have anything to do with the encrypted state of the data in the table, just the table definition. I do like the encryption showing up in the table definition, for better visibility of encryption. Maybe the fix is to have mysqldump strip this clause when writing to the dump file; one possible workaround is sketched below.
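
One possible workaround, when a dump needs to be restored onto a server or fork that does not understand the clause, is to drop the table-level encryption on the source before dumping and re-enable it afterwards. A minimal sketch; note that the table is rewritten unencrypted on disk in between, which may not be acceptable for sensitive data:

-- on the source, before running mysqldump (the table is decrypted on disk)
ALTER TABLE mysqlslap.t1 ENCRYPTION='N';
-- ... take the mysqldump here ...
-- re-encrypt once the dump is complete
ALTER TABLE mysqlslap.t1 ENCRYPTION='Y';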

Categories: DBA Blogs