
Yann Neuhaus

All blog entries from http://www.dbi-services.com/

SQL Server 2014: FCIs, availability groups, and TCP port conflict issues

Mon, 2015-01-12 23:26

After giving my session about SQL Server AlwaysOn and availability groups at the last French event “Les journées SQL Server 2014”, I had several questions concerning the port conflict issues, particularly the differences that exist between FCIs and availability groups (AAGs) on this subject.

In fact, in both cases we may have port conflicts depending on which components are installed on each cluster node. Fundamentally, FCIs and AAGs are both cluster-based features, but each of them uses the WSFC differently: SQL Server FCIs are "cluster-aware" services while AAGs use standalone instances by default (using clustered instances with AAGs is possible, but this scenario is relatively uncommon and it does not change the story in any way).

First of all, my thinking is based on the following question: why does having an availability group listener on the same TCP port as a SQL Server instance (but in a different process) cause a conflict, whereas two SQL Server FCIs listening on the same port work fine?

Let's begin with the SQL Server FCIs. When you install two SQL Server FCIs on the same WSFC, you can configure the same listen port for both instances and it works perfectly, right? Why? The main reason is that each SQL Server FCI has its own dedicated virtual IP address and, as you know, a process can open a socket to a particular IP address on a specific port. However, two or more processes that attempt to open a socket on the same port and on the same IP address will end up in a conflict. For instance, in my case I have two SQL Server FCIs - SQLCLUST-01\SQL01 and SQLCLUST-02\SQL02 - that both listen on the same TCP port: 1490. Here is the output of the netstat -ano command:

 

blog_26_-_netstat_ano_-_1

 

Notice that each SQL Server process listens on its own IP address and only on that one. We can confirm this by taking a look at each SQL Server error log.

 

blog_26_-_sqlclust01_sql01_error_log_-_2

 

...

 

blog_26_-_sqlclust02_sql02_error_log_-_3

 

Now let's continue with the availability groups. The story is not the same because in most scenarios we use standalone instances, and by default they listen on all available IP addresses. In my case, this time I have two standalone instances - MSSQLSERVER (default) and APP - that listen on TCP ports 1433 and 1438 respectively. By looking at the netstat -ano output, we can notice that each process listens on all available IP addresses (LocalAddress = 0.0.0.0).
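As a cross-check from inside SQL Server, the same information can be read from the listener state DMV. This is a minimal sketch of my own (not from the original demo), assuming SQL Server 2012 or later:

-- List every IP address / TCP port pair the instance is actually bound to
SELECT ip_address, port, type_desc, state_desc
FROM sys.dm_tcp_listener_states
WHERE type_desc = 'TSQL';

Run against each instance, it should return 0.0.0.0 for a standalone instance and the dedicated virtual IP address for an FCI.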

 

blog_26_-_netstat_ano_-_4

 

We can also verify the SQL Server error log of each standalone instance (default and APP)

 

blog_26_-_sql141_error_log_-_5

 

...

 

blog_26_-_sql141_app_error_log_-_6

 

At this point, I am sure you are beginning to understand the issue you may face with availability groups and listeners. Let's try to create a listener for an availability group hosted by the default instances (MSSQLSERVER). My default instances on each cluster node listen on port 1433, whereas the APP instances listen on port 1438, as shown in the picture above. If I attempt to create my listener LST-DUMMY on port 1433, it will succeed because my availability group and my default instance run in the same process.
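For reference, creating such a listener from T-SQL would look roughly like this. The availability group name [dummy], the listener name and the IP address come from this post; the subnet mask is an assumption of mine:

ALTER AVAILABILITY GROUP [dummy]
ADD LISTENER N'LST-DUMMY' (WITH IP ((N'192.168.0.35', N'255.255.255.0')), PORT = 1433);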

 

blog_26_-_netstat_ano_-_7

 

Notice that the listener LST-DUMMY listens on the same port as the default instance and both are in the same process (PID = 1416). Of course, if I try to change the TCP port number of my listener to 1438, SQL Server will raise the well-known error message with id 19486.

 

USE [master]
GO
ALTER AVAILABILITY GROUP [dummy] MODIFY LISTENER N'LST-DUMMY' (PORT = 1438);
GO

 

Msg 19486, Level 16, State 1, Line 3
The configuration changes to the availability group listener were completed, but the TCP provider of the instance of SQL Server failed to listen on the specified port [LST-DUMMY:1438]. This TCP port is already in use. Reconfigure the availability group listener, specifying an available TCP port. For information about altering an availability group listener, see the "ALTER AVAILABILITY GROUP (Transact-SQL)" topic in SQL Server Books Online.

 

The reason is now obvious: the SQL Server instance APP listens on TCP port 1438 for all available IP addresses (including the IP address of the listener LST-DUMMY).

 

blog_26_-_netstat_ano_-_8

 

You don't trust me? Well, I can prove it by connecting directly to the SQL Server named instance APP using the IP address of the listener LST-DUMMY (192.168.0.35) and the TCP port of the named instance (1438):

 

blog_26_-_sqlcmd_-_9

 

To summarize:

  • Having several SQL Server FCIs that listen on the same port is not a problem because each of them can open a socket on its own distinct IP address. However, you can face port conflicts if you also have a standalone instance installed on one of the cluster nodes.
  • Having an availability group with a listener that listens on the same TCP port as the standalone instance in the same process will not result in a TCP port conflict.
  • Having an availability group with a listener that listens on the same TCP port as a standalone instance in a different process will result in a TCP port conflict. In this case, each SQL Server process will attempt to open a socket on the same TCP port and on the same IP address.

Hope it helps!

SOUG-Romand: performance day on May 21

Mon, 2015-01-12 10:19

Good news for French speakers: the SOUG-R is more and more active.

On May 21, 2015, a performance day will be held in Lausanne.

Flashback logging overhead: 'db file sequential read' on UNDO tablespace

Fri, 2015-01-09 13:09

In my previous post, I reproduced an issue where a bulk insert into a GTT was generating too much undo. But the real case I encountered was worse: the insert was reading a lot of undo, and a lot of that undo was read from disk.

Insert into GTT: bulk with APPEND_VALUES

Fri, 2015-01-09 13:04

This post is about a case I had where a GTT was generating too much undo. The effects were amplified by the fact that the database was in flashback logging mode (see Jonathan Lewis's answer to my question on the OTN forum about that, but that's for the next post).

Don't forget to configure Power Management settings on Hyper-V

Tue, 2015-01-06 02:51

Recently I had the opportunity to audit a SQL Server database hosted on a Hyper-V 2012 cluster. I noticed that the guest operating system had its Power Plan set to High performance. This is a great thing, but when I asked the system administrator whether the same option was turned on on the Hyper-V host operating system, this was unfortunately not the case.

As a reminder, the power policy setting has no effect on the guest operating system in virtual environments, and we always have to verify that this option is configured correctly at the right level.

I performed a quick demonstration for my customer by using the SuperPI benchmark tool, which is pretty simple: it calculates pi to a specific number of digits using one thread, and for my purpose that is sufficient.

Let's start with the situation where Power Saver is enabled on the Hyper-V side and High performance is turned on on the guest side, and run the SuperPI tool with 512K digits to compute:

 

blog_25_-_superpi_calculation_-_power_saving

 

Here is the time taken by the guest to calculate pi:

 

blog_25_-_superpi_calculation_-_power_saving_-_result

 

 

Now let's change the story by reversing the power settings value: High performance on the Hyper-V side and Power Saver on the guest side. Then we can do the same benchmark test:

 

blog_25_-_superpi_calculation_-_high_perf_-_result

 

5.688 seconds for this test against 13.375 seconds for the first test - a 57% improvement... not so bad :-) But let's look at a more realistic situation. Indeed, in most configurations the power management setting is left at Balanced by default, and my customer asked me if there is a noticeable difference if we keep the default configuration. In order to justify my recommendation, we performed the same test, but this time I changed the number of digits to compute in order to simulate a more realistic OLTP transaction (short, and requiring all CPU resources for a short time). The table below lists and compares both results:

 

Settings                       Duration (s)
Hyper-V: Balanced (default)           0.219
Hyper-V: High performance             0.141

 

In the context of my customer, that is a CPU time improvement of roughly 36% (from 0.219 s down to 0.141 s)! So after that, my customer was convinced to change this setting, and I hope it is the same for you! Of course, with long-running queries that consume a lot of CPU resources for a long time, the difference may be less noticeable because the processor wake-up time is very small compared to the total worker time they consume.

Keep in mind that changing the power management state from within the guest has no effect in a virtualized environment. You must take care of this setting directly on the hypervisor.

Happy virtualization !!

Managed Backup with SQL Server 2014

Mon, 2015-01-05 21:37

In a previous blog post called Backup a SQL Server database from On-Premise to Azure, I presented the different tools to backup your on-premise databases on Azure Storage. SQL Server Managed Backup to Windows Azure was one of these tools.

In my opinion, Managed Backup is a great tool. That is why I decided to dedicate an entire blog to this feature.

 

Understanding Managed Backup

Managed Backup is a new feature introduced in SQL Server 2014 that works with Windows Azure. This feature allows you to manage and automate SQL Server backups (from your on-premise or Azure SQL Server instance), and it is configurable by ... script only (T-SQL or PowerShell)!

Microsoft recommends using Managed Backup for Windows Azure virtual machines.

Managed Backup only works with user databases in the Full or Bulk-logged recovery model, and can only perform full and log backups.
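A quick way to see which of your user databases qualify is to check their recovery model. This is a small sketch of my own, not part of the feature itself:

-- Recovery model of the user databases (system databases are skipped)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE database_id > 4;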

SQL backups support point-in-time restore and are kept according to a retention period. This setting indicates the desired lifespan of a backup stored in Azure Storage. Once the period is reached, the backup is deleted.

SQL Backups are scheduled following the transaction workload of the database.

A full database backup is scheduled when:

  • The Managed backup feature is enabled for the first time
  • The log growth is 1 GB or larger
  • The last full database backup is older than 1 week
  • The log chain is broken

A transaction log backup is scheduled when:

  • No log backup history is available
  • The transaction log space used is 5 MB or larger
  • The last log backup is older than 2 hours
  • A full database backup has been performed

 

Configuring Managed Backup

First, you need to activate the SQL Server Agent service in order to use the feature.

In this example, I have 3 user databases as follows:

 

Database Name        Recovery Model   Data Files Location
AdventureWorks2012   Simple           On-premise
AdventureWorks2014   Full             On-premise
Hybriddb             Bulk-logged      Azure Storage

 

Managed Backup can be enabled at the instance level or database level.

If you decide to activate the feature at the instance level, the configuration will be set for all user databases of your instance (even for databases added after the configuration).

On the other hand, you can activate the feature for specific user databases. If the feature is also configured at the instance level, it will be overridden by the configuration at the database level.

To configure the feature, you must provide a set of parameters:

  • The URL of the Azure Storage
  • The retention period in days
  • The credential name
  • The encryption algorithm

If the encryption algorithm is not set to ‘NO_ENCRYPTION’, you also need to provide these parameters:

  • The encryptor type
  • The encryptor name

Moreover, when you configure Managed Backup, you also need to specify whether you want to enable it.

 

You can perform a database backup with COPY_ONLY. To do this, you need to use the 'smart_admin.sp_backup_on_demand' stored procedure, specifying the database name.

However, this stored procedure will use the configuration of the Managed Backup at the database level. That means you must configure and enable the Managed Backup for your database.
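An ad hoc call would look roughly like this; it is a sketch assuming the database already has Managed Backup configured and enabled as described above, and the @type value is an assumption of mine:

USE msdb;
GO
EXEC smart_admin.sp_backup_on_demand
    @database_name = 'hybriddb',
    @type = 'Database';   -- 'Database' for a full backup, 'Log' for a log backup
GO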

 

We need to create a credential in order to be able to connect to Azure Storage:

 

CREATE CREDENTIAL dbiservices
WITH IDENTITY = 'dbiservices',
SECRET = 'password';

 

Let’s configure our Managed Backup at the instance level:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @enable_backup = 0,
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 15,
    @credential_name = 'dbiservices',
    @encryption_algorithm = 'NO_ENCRYPTION';

If I want to display the instance configuration:

 

USE msdb;
GO
SELECT * FROM smart_admin.fn_backup_instance_config();

 

Here is the result:

 

display-configuration.png

 

We will override the Managed Backup configuration for ‘hybriddb’ database:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @database_name = 'hybriddb',
    @enable_backup = 0,
    @credential_name = 'dbiservices',
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 25,
    @encryption_algorithm = 'NO_ENCRYPTION';

 

If I want to display the database configuration of all databases of the instance:

 

USE msdb;
GO
SELECT db_name, is_managed_backup_enabled, retention_days, storage_url, encryption_algorithm
FROM smart_admin.fn_backup_db_config(NULL);

 

Here is the result:

 

diplay-databases-configuration.png

 

Notice that the 'AdventureWorks2012' database has 'is_managed_backup_enabled' set to 'NULL'. Indeed, this database is not eligible for Managed Backup because its recovery model is set to Simple.

 

Now, I activate the Managed Backup at the instance level:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @enable_backup = 1;
GO

 

Now, I activate the Managed Backup for ‘hybriddb’ database:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @database_name = 'hybriddb',
    @enable_backup = 1;
GO

 

If I explore Azure Storage, I can find my backups:

 

backups.png
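Besides browsing the container, the backups known to Managed Backup can also be listed from T-SQL. This is a small sketch, assuming the smart_admin.fn_available_backups function that ships with the feature:

USE msdb;
GO
-- List the backups Managed Backup has taken for a given database
SELECT * FROM smart_admin.fn_available_backups('hybriddb');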

 

 

Conclusion

As I said in the introduction, Managed Backup is a great feature. Easily and quickly, you can configure and enable backups for your user databases.

However, it has some serious limitations... We can expect Managed Backup to be extended to system databases. Moreover, we can also expect Managed Backup to allow backups from user databases in Simple Recovery Model.

Furthermore, this feature is only available with Azure Storage. Indeed, I would like to be able to choose my storage destination; I do not understand why we cannot back up to local disks, for example.

Oracle multitenant dictionary: upgrade

Mon, 2015-01-05 16:10

This is the second part of the previous post about metadata links. I've shown how a sharing=metadata function becomes a sharing=none function when it is changed in the PDB - i.e. when it no longer has the same DDL and thus has a different signature.

Here is another experiment doing the opposite: change the function in the root and see what happens in the PDB. Again, I'm playing with internals in order to understand the 'upgrade by unplug-plug' feature available in 12c multitenant (and single-tenant).

Oracle multitenant dictionary: metadata links

Mon, 2015-01-05 16:02

As in previous posts (1 and 2) I'm still playing with multitenant dictionary. You may wonder why I'm spending time to do unsupported things that we will never be able to use in real life.

Of course, playing with internals is fun ;)

But it's not only that. We will implement CDB databases (multi-tenant, and single-tenant, as I think non-CDB will be deprecated in future versions). We will operate pluggable databases (clone, upgrade, downgrade, ...), encounter issues, etc. Before doing that on critical environments, I need to be confident about the multitenant implementation. It's new, it's undocumented, and - at first sight - it looks like a 'hack' of the way the dictionary has been working for years.

Happy New Year 27, 104, 2015 and 2558

Wed, 2014-12-31 06:25

calendar          today  tomorrow  message
----------------- ------ --------- --------------
Arabic Hijrah     1436   1436
English Hijrah    1436   1436
Gregorian         2014   2015      Happy New Year
Japanese Imperial 0026   0027      Happy New Year
Persian           1393   1393
ROC Official      0103   0104      Happy New Year
Thai Buddha       2557   2558      Happy New Year

Oracle multitenant dictionary: object links

Mon, 2014-12-29 15:28

I've described Oracle 12c metadata and object links internals in a previous post. But before that, the first time I investigated it, I made a wrong assumption because I was looking at AUDIT_ACTIONS, which is not correctly implemented. That investigation came from a question on dba-village, and recently Ivica Arsov (@IvicaArsov) made an interesting comment about the AUDIT_ACTIONS object link table, so I'll explain here what is special about it.

AUDIT_ACTIONS

Here is how AUDIT_ACTIONS is defined:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('DBA_AUDIT_TRAIL','AUDIT_ACTIONS') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
AUDIT_ACTIONS        SYNONYM         METADATA LINK
AUDIT_ACTIONS        TABLE           OBJECT LINK
DBA_AUDIT_TRAIL      SYNONYM         METADATA LINK
DBA_AUDIT_TRAIL      VIEW            METADATA LINK

It's a sharing=object table, so you would expect the data to be common to all containers. We will also query a view that reads that table: DBA_AUDIT_TRAIL.

Then let's query the table from CDB$ROOT and from a PDB and check from ROWID if we read the same rows:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG7AABAAACo5AAD          3 SELECT           1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG5AABAAAA3pAAD          3 SELECT           8

The rows are not coming from the same file, but from the local SYSTEM tablespace of each container. This is a proof that this OBJECT LINK table is not common at all.

DBA_AUDIT_TRAIL

Now I want to check what happens when we query through the view. I don't have the ROWID so let's update the table in the PDB so that we can distinguish rows coming from CDB$ROOT and from PDB1:

SQL> update AUDIT_ACTIONS set name='select' where action=3;

1 row updated.

SQL> select rowid,action,name from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME
------------------ ---------- -------
AAABG5AABAAAA3pAAD          3 select

SQL> select distinct dbid,action,action_name from DBA_AUDIT_TRAIL;

      DBID     ACTION ACTION_NAME
---------- ---------- ----------------------------
 314687597          3 select

OK. I've changed one ACTION_NAME to lowercase - only in PDB1. And when I query through the view, I see the local row. This definitely proves that the implementation of AUDIT_ACTIONS does not achieve the goal of the multitenant dictionary: store common Oracle objects only in CDB$ROOT to avoid duplication and facilitate upgrades. Note that it is not a big problem anyway, as it is just a 200-row table.

DBA_CPOOL_INFO

In order to show the normal behaviour of object links I'll do the same with DBA_CPOOL_INFO which is a view over SYS.CPOOL$. I've described this behaviour previously by creating my own objects but here I'll show how it is used to store the DRCP information which is at CDB level. Here are the involved table and views:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('CPOOL$','INT$DBA_CPOOL_INFO','DBA_CPOOL_INFO') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
CPOOL$               TABLE           OBJECT LINK
DBA_CPOOL_INFO       SYNONYM         METADATA LINK
DBA_CPOOL_INFO       VIEW            METADATA LINK
INT$DBA_CPOOL_INFO   VIEW            OBJECT LINK

CPOOL$ is defined with sharing=object. An internal view, INT$DBA_CPOOL_INFO, is defined on it with sharing=object as well. And finally, that view is exposed through DBA_CPOOL_INFO.

As before, I check the ROWID of CPOOL$ row from CDB$ROOT and PDB1:
SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz5AABAAADb5AAA          4          1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          4          8

So this is the same as we have seen before: an OBJECT LINK has its data in each PDB.

But what is different here is the view sharing, which is sharing=object. Let's query that view after changing the value in PDB1:

SQL> update SYS.CPOOL$ set minsize=0;
1 row updated.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          0          8

SQL> select minsize from INT$DBA_CPOOL_INFO;

   MINSIZE
----------
         4

SQL> select minsize from DBA_CPOOL_INFO;

   MINSIZE
----------
         4

Now we have a view which will always show the CDB$ROOT rows, even when we are in a PDB container. We still have rows in the PDB containers, but they will not be used. Once again, this defeats the goal of deduplication, but this is a very small table.

AWR tables

The main advantage of multitenant dictionary architecture is with the big tables storing data which is common in the whole CDB, such as the AWR data:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select con_id,count(*) from containers(WRH$_SQLTEXT) group by con_id;

    CON_ID   COUNT(*)
---------- ----------
         1       5549

SQL> alter session set container=PDB1;

Session altered.

SQL> select count(*) from WRH$_SQLTEXT;

  COUNT(*)
----------
         0

This information - stored only in CDB$ROOT - is shared with all PDBs through the OBJECT LINK view.

Creating a schema synonym in Oracle - an unsupported feature

Sun, 2014-12-28 14:59

Ivica Arsov (@IvicaArsov) has made an interesting comment about the AUDIT_ACTIONS object link table. I'll blog about it soon, but in the meantime, while checking its definition in cataudit.sql, I came upon the following:

/* SCHEMA SYNONYMS will be added in 12g */
-- insert into audit_actions values (222, 'CREATE SCHEMA SYNONYM');
-- insert into audit_actions values (224, 'DROP SCHEMA SYNONYM');

which caught my attention.

 

MySQL versions performance comparison

Fri, 2014-12-26 03:18

This blog aims to make a performance comparison between the different MySQL versions/editions and also to compare the different MySQL forks such as Percona Server and MariaDB. Indeed, a number of improvements have been made to the InnoDB storage engine in the latest MySQL versions. You can find below some of the performance improvements applied to InnoDB in recent years (non-exhaustive list):


MySQL 5.0

1. New compact storage format which can save up to 20% of the disk space required in previous MySQL/InnoDB versions.
2. Faster recovery from a failed or aborted ALTER TABLE.
3. Faster implementation of TRUNCATE TABLE.


MySQL 5.5

1. MySQL Enterprise Thread Pool: as of MySQL 5.5.16, MySQL Enterprise Edition distributions include a thread pool plugin that provides an alternative thread-handling model designed to reduce overhead and improve performance.
2. Changes to the InnoDB I/O subsystem enable more effective use of available I/O capacity. The changes also provide more control over configuration of the I/O subsystem.


MySQL 5.6

1. Improvements to the algorithms for adaptive flushing make I/O operations more efficient and consistent under a variety of workloads. The new algorithm and default configuration values are expected to improve performance and concurrency for most users. Advanced users can fine-tune their I/O responsiveness through several configuration options.
2. InnoDB has several internal performance enhancements, including reducing contention by splitting the kernel mutex, moving flushing operations from the main thread to a separate thread, enabling multiple purge threads, and reducing contention for the buffer pool on large-memory systems.
3. You can now set the InnoDB page size for uncompressed tables to 8KB or 4KB, as an alternative to the default 16KB. This setting is controlled by the innodb_page_size configuration option. You specify the size when creating the MySQL instance. All InnoDB tablespaces within an instance share the same page size. Smaller page sizes can help to avoid redundant or inefficient I/O for certain combinations of workload and storage devices, particularly SSD devices with small block sizes.


MySQL 5.7

1. In MySQL 5.7.2, InnoDB buffer pool dump and load operations are enhanced. A new system variable, innodb_buffer_pool_dump_pct, allows you to specify the percentage of most recently used pages in each buffer pool to read out and dump. When there is other I/O activity being performed by InnoDB background tasks, InnoDB attempts to limit the number of buffer pool load operations per second using the innodb_io_capacity setting.

2. As of MySQL 5.7.4, InnoDB supports multiple page cleaner threads for flushing dirty pages from buffer pool instances. A new system variable, innodb_page_cleaners, is used to specify the number of page cleaner threads. The default value of 1 maintains the pre-MySQL 5.7.4 configuration in which there is a single page cleaner thread. This enhancement builds on work completed in MySQL 5.6, which introduced a single page cleaner thread to offload buffer pool flushing work from the InnoDB master thread.

 

You can find an exhaustive performance improvement list on:

http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

  Test limitations

This test won't take into consideration all the new optimizations provided through new variables and functionalities. The aim is simply to demonstrate the performance improvement with a non-optimized but consistent configuration. In this context, a limited set of variables available in all MySQL versions (since version 5.0) has been set up.

This test is obviously not representative of your own environment (hardware, queries, database schema, storage engine, data types, etc.). Therefore, you probably won't observe the same performance behavior.

 

MySQL performance test Hardware configuration

This test has been done with sysbench 0.5. It has been run on a laptop equipped with an Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz and 16 GB of RAM. The data is stored on a Samsung SSD 840 PRO Series.

 

First step: Installation

The first step consists in installing several different MySQL versions. Thanks to mysqld_multi, I've been able to run the following versions in parallel:

 

Editor/Provider   MySQL Server   Version   Port    Edition
Oracle            mysqld1        5.0.15    33001   Community Edition
Oracle            mysqld2        5.1.73    33002   Community Edition
Oracle            mysqld3        5.5.39    33003   Community Edition
Oracle            mysqld4        5.6.20    33004   Community Edition
Oracle            mysqld5        5.7.4     33005   Community Edition
Oracle            mysqld6        5.6.21    33006   Enterprise Edition
Percona           mysqld7        5.6.20    33007   N/A
MariaDB           mysqld8        10.0.15   33008   N/A

 

These servers have been set up with the same settings. However, depending on the MySQL version, the default MySQL settings are different. For instance, on MySQL 5.0.15 the default value for the global variable innodb_buffer_pool_size is 8388608, whereas on MySQL 5.1.73 the default value is 134217728. The default version-specific settings have not been changed.

The only variables which have been set up are the following:

  • max_connections = 8000
  • table_open_cache=8000
  • open_files_limit = 8192


max_connections: the maximum permitted number of simultaneous client connections.
table_open_cache (or table_cache): the number of open tables for all threads.
open_files_limit: the number of files that the operating system permits mysqld to open. The value of this variable at runtime is the real value permitted by the system and might be different from the value you specify at server startup.
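As a quick sanity check (my own sketch, not part of the original benchmark), the values actually in effect on each instance can be confirmed from the MySQL client:

-- Values currently in effect on the instance you are connected to
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';   -- named table_cache on 5.0/5.1
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';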

 

The OFA (Optimal Flexible Architecture) directory structure has been used to install the MySQL Servers.

 

You can find below an example of this structure:

port           = 33001
mysqladmin     = /u00/app/mysql/product/mysql-5.0.15/bin/mysqladmin
mysqld         = /u00/app/mysql/product/mysql-5.0.15/bin/mysqld
socket         = /u00/app/mysql/admin/mysqld1/socket/mysqld1.sock
pid-file       = /u00/app/mysql/admin/mysqld1/socket/mysqld1.pid
log-error      = /u00/app/mysql/admin/mysqld1/log/mysqld1.err
datadir        = /u01/mysqldata/mysqld1
basedir        = /u00/app/mysql/product/mysql-5.0.15


Second step: Test preparation

Once all the MySQL servers are installed and running, the second step is to prepare the table containing the records against which the queries will be performed. In this test, I decided to create only one table. This one is automatically named sbtest1 by sysbench. Notice that it is possible to create several tables by using the "oltp-table-count" parameter.

The number of rows in this table is specified by the parameter “oltp-table-size”. This test table will contain 20'000'000 rows. The test mode is OLTP. According to sysbench documentation, this test mode was written to benchmark a real database performance.

At the prepare stage the following table is created:


mysql> desc sbtest1;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| k     | int(10) unsigned | NO   | MUL | 0       |                |
| c     | char(120)        | NO   |     |         |                |
| pad   | char(60)         | NO   |     |         |                |
+-------+------------------+------+-----+---------+----------------+


Each record contains random strings in the fields c and pad and random integers between 1 and oltp-table-size in the field k as presented in the following picture:

 sbtest.png

 

Sysbench prepare script:

sysbench \
--db-driver=mysql \
--mysql-table-engine=innodb \
--oltp-table-size=20000000 \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--mysql-port=33001 \
--mysql-db=sysbench \
--mysql-user=sbtest \
--mysql-password=sbtest \
--test=/home/mysql/sysbench/oltp.lua \
prepare

In order to be sure to have the same set of data on each server a MySQL dump has been done on the server after the first load. This dump has been imported on each server.
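One way to verify that the imports produced identical data (my own sketch, not part of the original procedure) is to compare the table checksum on every instance:

-- Identical checksums on every server indicate the benchmark starts from the same data set
CHECKSUM TABLE sysbench.sbtest1;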

 

Third step: Running the test

The test has been run with different numbers of threads in order to understand how the different versions, editions and forks of MySQL scale depending on the number of threads. The max-requests parameter limits the total number of requests. The OLTP test mode (oltp.lua) has been written to benchmark the performance of database servers by providing a realistic OLTP scenario.

 

sysbench \
--db-driver=mysql \
--num-threads=1 \
--mysql-user=sbtest \
--mysql-password=sbtest \
--mysql-db=sysbench \
--max-requests=10000 \
--oltp-test-mode=complex \
--test=/home/mysql/sysbench/oltp.lua \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--oltp-table-name=sbtest1 \
run

 

In order to ensure correct results, avoiding any side effects due to external process and ensuring consistent results over time, the benchmark has been run twice.

 

Fourth step: Collecting results

All the results have been collected in an excel sheet and the following graph directly comes from these results:

 MySQLPerformanceComparison_20141226-105949_1.png

  Fifth step: results analysis

1. InnoDB has been improved over time with regard to scalability, and the test results tend to prove that. The performance with 64 threads is radically different depending on the MySQL version:

MySQL 5.0.15 – 1237 tps
MySQL 5.1.73 – 1818 tps
MySQL 5.5.39 -  2978 tps
MySQL 5.6.20 – 2801 tps
MySQL 5.6.21 – 2830 tps
MySQL 5.7.4 – 2725 tps
Percona 5.6.21 – 2853 tps
Mariadb 10.0.15 – 2941 tps

 

2. For applications using only one thread, the performance between MySQL versions (with default settings) is more or less equivalent (+/-10%):

MySQL 5.0.15 – 163 tps
MySQL 5.1.73 – 158 tps
MySQL 5.5.39 -  150 tps
MySQL 5.6.20 – 145 tps
MySQL 5.6.21 – 149 tps
MySQL 5.7.4 – 145 tps
Percona 5.6.21 – 145 tps
Mariadb 10.0.15 – 143 tps

 

3. For a large number of threads, it is definitely worth using the thread pool plugin from Percona. During these tests, an improvement factor of about 30x has been observed. Unfortunately, I didn't see any performance improvement with MySQL 5.6.21 with the thread_pool plugin and the thread_pool_size parameter set to 36 (the best performance with sysbench according to http://dev.mysql.com/doc/refman/5.6/en/thread-pool-tuning.html). Regarding Percona, I set the parameter thread_pool_high_prio_mode to transactions. You can find below the results with 4096 threads:

MySQL 5.0.15 – error
MySQL 5.1.73 – 3.97 tps
MySQL 5.5.39 -  9.05 tps
MySQL 5.6.20 – 9.29 tps
MySQL 5.6.21 – 9.07 tps
MySQL 5.6.21 pool of thread plugin – 8.75
MySQL 5.7.4 – 5.64 tps
Percona 5.6.21 – 9.83 tps
Percona 5.6.21 pool of thread plugin – 295.4 tps
Mariadb 10.0.15 – 8.04 tps

It is interesting to notice that performance degradation can occur with the thread pool plugin activated, for both MySQL and Percona. This degradation has been observed for a number of threads between 16 and 128 for Percona, and between 32 and 512 for MySQL.

 

Conclusion

These results tend to prove that the latest MySQL releases perform better than older ones, especially with several threads (64 threads in this case). The only exception is MySQL 5.7.4, which is a development release.


Applications using only one thread won't see a huge performance improvement with the latest MySQL versions. However, enhancements provided in recent versions, such as online DDL, faster deadlock detection, the dynamic innodb_buffer_pool_size parameter, etc., will for sure save you lots of time.


MySQL forks such as Percona and MariaDB perform like MySQL Server. In addition, I didn't observe any performance difference between MySQL Enterprise Edition and MySQL Community Edition. It is interesting to notice that the thread pool plugin provided by Percona brings a huge performance improvement with a large number of threads compared to the standard behavior.


Regarding MySQL Enterprise Edition, I haven't been able to see any performance improvement with the MySQL Thread Pool plugin activated, even with a large number of threads. This is perhaps due to a misconfiguration on my side... however, I presented these results to an Oracle MySQL specialist at the Oracle booth at UKOUG and he wasn't able to find any error in my configuration.

dbi services wishes you a Merry Christmas with this SQL Server script

Thu, 2014-12-18 07:57

The end of the year approaches and soon it will be time to celebrate Christmas with your family and friends. At dbi services, we wish you a Merry Christmas via SQL Server with the following script, to execute in SQL Server Management Studio for example (supported only by SQL Server 2012 and SQL Server 2014). You will find a surprise in the result pane :-)

 

SET NOCOUNT ON;   IF EXISTS(SELECT * FROM tempdb.sys.objects WHERE name LIKE N'#SURPRISE%')        DROP TABLE #SURPRISE; GO   CREATE TABLE #SURPRISE ( col1 CHAR(290) ); GO   ------------------------------   INSERT #SURPRISE VALUES (REPLICATE(CHAR(111),16)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1), 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(77) + CHAR(101) + CHAR(114) + CHAR(114) + CHAR(121)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(246)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(149)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(67) + CHAR(104) + CHAR(114) + CHAR(105) + CHAR(115) + CHAR(116) + CHAR(109) + CHAR(97) + CHAR(115) ); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, 
LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, + CHAR(92)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(182)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(2) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(91)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(93)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + REPLICATE(CHAR(42), 19)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(3) + CHAR(100) + CHAR(98) + CHAR(105)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(115) + CHAR(101) + CHAR(114) + CHAR(118) + CHAR(105) + CHAR(99) + CHAR(101) + CHAR(115)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(5) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, REPLICATE(CHAR(111), 17));   ------------------------------   DECLARE @SQL VARCHAR(100);   SELECT        @SQL = 'DBCC TRACEON(3604); DBCC PAGE (''tempdb'', 1, ' + CAST(allocated_page_page_id AS VARCHAR(20)) + ', 1)' FROM sys.dm_db_database_page_allocations(2, OBJECT_ID('tempdb..#SURPRISE'), 0, NULL, DEFAULT) WHERE is_iam_page = 0 AND is_allocated = 1;   EXEC(@SQL);

 

Merry Christmas to all of you!

How to store SQL Server data files in Azure Storage

Wed, 2014-12-17 22:02

A new functionality introduced with SQL Server 2014 enables you to store the data files of a SQL Server database in Windows Azure Storage. In this post, I will show how it works.

 

General.png

 

Accessing Azure Storage

The Azure Storage account is named “dbiservices”, and is composed of the “datafiles” container. This container does not contain blobs yet.

If you are not familiar with the Windows Azure Storage terms, you should read this Introduction to Microsoft Azure Storage.

In my example, I want to share access to my "datafiles" container without having to expose my account key. I therefore need to generate a Shared Access Signature.

 

Shared Access Signature Generation

For this part, I will use a Third Party tool called Azure Storage Explorer.

As soon as you have installed and launched this software, you must register your Storage Account:

 

Register-Azure-Storage-Account.png

 

You must enter your Storage account name and your Storage account key. The key has been voluntarily erased in this example.

As a best practice, I advise you to enter your Secondary Access Key and not your Primary Access Key. Indeed, the Secondary Access Key is commonly used as a temporary key and can be regenerated if necessary.

To generate my Shared Access Signature, I have to edit the Security of my container:

 

Edit-Security.png

 

I select the permissions and the duration linked to my Shared Access Signature:

 

Generate-Signature.png

 

I generate a Shared Access Signature valid for one week with all permissions. The generated Shared Access Signature is a URI related to the container. I voluntarily deleted part of the URI.

You need to copy the URI from “sv=” to the end.

 

Credential Creation

I need to create a new Credential in SQL Server Management Studio:

 

New-Credential.png

Create-Credential.png

 

The name of my Credential is the URI location of my container, and the Password is the Shared Access Signature previously created.

 

Creating a database with Data Files in Azure Storage

I will perform two examples: first I will create a new database directly in Azure Storage, then I will migrate an existing database to Azure Storage.

 

Hybrid database creation

I execute a script to create a database with its data files in Azure Storage:
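The script shown in the screenshot below follows the documented pattern for this feature. A sketch of it would look roughly as follows; the database name and logical file names are assumptions of mine, and the credential is the one created above (named after the container URI):

CREATE DATABASE hybriddb
ON (NAME = hybriddb_data,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_data.mdf')
LOG ON (NAME = hybriddb_log,
    FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/hybriddb_log.ldf');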

 

Create-Database-hybrid.png

 

If we refresh the “datafiles” container in Azure Storage explorer, we can see the datafiles previously created:

 

Explore-Datafiles.png


On-premise database creation

Now, I execute a script to create an on-premise database:

 

Create-database-onpremise.png

 

Then, I take the database offline:

 

Take-Offline-Database.png

 

I upload the data files in Azure Storage using Azure Storage Explorer tool:

 

Upload-Datafiles-to-Azure.png

 

Then, we need to alter the onpremisedb database to reference the data files moved to Azure Storage:
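The ALTER statements shown in the screenshot below would look roughly like this; the logical file names are assumptions of mine:

ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb_data,
             FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb_data.mdf');
ALTER DATABASE onpremisedb
MODIFY FILE (NAME = onpremisedb_log,
             FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/onpremisedb_log.ldf');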

 

Alter-Database.png

 

And now, I bring the database online:

 

Bring-Online-Database.png

 

But the following error occurred:

 

Bring-Online-Database-Error.png

 

To understand the origin of the problem, let’s see the datafiles in the Azure Storage Explorer:

 

Blob-Type.png

 

Conclusion

This new feature offers some advantages such as high availability or easy migration.

But on the other hand, you cannot simply move an existing database this way, which is a serious drawback: the data files uploaded with Azure Storage Explorer are stored as block blobs, whereas this feature requires page blobs, which is why bringing the database back online failed.

Furthermore, I do not believe that this feature will be used much with on-premise SQL Server databases, due to the latency. But I think it can be useful for a virtual machine running in Azure.

Journées SQL Server 2014: session videos available

Tue, 2014-12-16 02:18
The session videos from the Journées SQL Server 2014 are finally online:

> Infrastructure et AlwaysOn
> Industrialisation des audits (with Sarah Bessard)

If you have any questions, do not hesitate to contact me by email or through the comments on this blog. Enjoy watching!

Oracle lateral inline view, cursor expression and 12c implicit statement result

Mon, 2014-12-15 09:50

I'll present here 3 ways to run a query for each result of another query. Let's take an example: get all execution plans (select from dbms_xplan.display_cursor) for each of my queries (identified from v$sql). The 90's way was to run a first query that generates the second queries into a spool file, and then execute that file. Here are easier ways, some of them coming from the 12c new features: lateral join and implicit statement results.

Oracle 12c: Can we disable logging for DML?

Mon, 2014-12-15 09:38

If we don't mind losing our changes, can we disable logging for DML? This is a question I've heard a lot. OK, you don't need to recover your changes, but Oracle may want to recover the consistency of its datafiles anyway. And that's why datafile blocks changed through the buffer cache always generate redo.

But yes, in 12c you can do DML and generate only minimal redo. All DML: even updates and deletes. And this post is not about the underscore parameters that allow corruption.

SQL Server 2014: release of the book "Développer et administrer pour la performance"

Sun, 2014-12-14 23:50

A quick post as the end of the year fast approaches, to announce the upcoming release (end of December 2014) of the book SQL Server 2014: Développer et administrer pour la performance, written in French, to which I had the immense pleasure of contributing alongside Frédéric Brouard (alias SQLPro - MVP SQL Server), Nicolas Souquet (alias Elsuket - MVP SQL Server) and Christian Soutou.

This book is intended for developers as well as administrators, whether beginners or experienced, who care about performance, and it covers a complete set of domains (administration, development, security and even high availability).

Happy reading!

 

 

 

UKOUG 2014 - Middleware Day 3

Wed, 2014-12-10 15:05

Today, no more sessions on middleware, but a lot of ADF, APEX and development sessions that look really interesting! Unfortunately, I can't attend each one, but here are some I selected!

The picture that paints a thousand words: Data Visualization (ADF) Duncan Mills (Oracle)

In this session, Duncan talked about my favorite topic: data visualization. I am quite used to ADF, as I developed an interface for my Capacity Planning project. It was sometimes hard to get something clean and well built; it could be a little bit confusing...

But that time is over! With the new Alta interface!

Instead of building another stack on top of hundreds of stacks, Oracle preferred to re-imagine and rebuild the whole interface. What a good idea. Thus Alta was born: clearer, more precise and more up to date.

It has been rewritten in a mobile-first way, as is the trend. But a new design is not sufficient: it also includes more interactions and animations. The UI is designed to be more responsive, clean and simplified. One goal is to build the UI around progressive disclosure: when you click something in a chart, the rest disappears and you go into a more detailed view of what you clicked. This way the UI leads the user through the flow of the information. The user has to "search" for the info through a path which builds the context of this information.

There are now a lot of slide effects and animations to reinforce this flow. For example, there is a new chart built as a multi-layered circle: when you click on a part of the circle, it opens this part and then builds another whole circle to create a new level of information.

You have over 50 graph and chart types, all with the same kind of interactions to give the user coherence between charts, such as zooming, zooming out, selection, a redesigned time axis, improved data labeling and so on.

One of the new charts is called NBox. It's a chart used for categorizing things or people. You have boxes, and you put people in them according to a filter such as the company sector. You can drag and drop easily; it's really handy for classifying things.

ADF is getting really beautiful thanks to this new UI design and these interactions; allowing a more mobile-app look and feel is a good thing, as more and more processes in the industry can be monitored directly through smartphones.

SQL injection in APEX - More Attacks (& Defences) Tim Austwick and Nathan Catlow (Recx)

This session was more about security and SQL injection than APEX itself, but they did some demos showing how APEX could ease SQL injection attacks.

Tim started by introducing his company, Recx, as they built a tool called ApexSec which can point out SQL injection spots in an APEX application; they said it can find about 90% of breaches. Thanks to their tool, they heavily improved the security of Oracle Application Express, a tool from Oracle that helps build applications quite fast.

For Tim and Nathan, a SQL injection can be spotted when you don't execute the SQL directly but first generate the SQL and then send it to the Oracle Database. In fact, there are several kinds of breaches, such as dynamic SQL or query substitutions; these are the most commonly found. It happens most of the time when SQL writing and execution are separated.

With SQL injection you can gather more data by adding columns or table joins; this is a "select" injection. You can also corrupt data; this is called an "update" injection. Then you have more serious issues when you can insert code between BEGIN and END in a PL/SQL function, as you can call other functions or procedures, such as Oracle procedures, which can be harmful for your application or database.

For example Tim did a live demo on how he could insert a call to the LDAP package and then connect it to his own LDAP on his machine. The result? He managed to get credentials directly in his LDAP console! Could be a serious issue isn’t it?

Vulnerabilities can appear if you append a variable when building your query, i.e. when you put it at the end, and also when you create a function returning a SQL command. You had better use bind variables instead of substitutions.

Tim also presented another demo where he managed to push a hidden button that submitted a text area through a short JavaScript script. Thanks to a breach in the SQL behind the text area, he could get the database version. With a simple search he found a known issue in this version. Then he wrote a PL/SQL script and injected it into the text area. He managed to grant the java permission to the schema, then he wrote a file directly on the file system with a script inside. He could execute the file as sysdba as it was on the local system. The script gave the schema DBA privileges, and then he could get the passwords.

It was an amazing session.

Their advices to avoid SQL injection are the following:

- Avoid substitutions directly in SQL and prefer bind variables (see the sketch after this list)
- Ensure that APEX objects are protected
- Use APEX 4.2.1 or above
- Avoid dynamic SQL
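To illustrate the bind-variable advice, here is a hypothetical PL/SQL sketch of my own (not from the session), assuming a DEPARTMENTS table; the same lookup is written twice:

-- Vulnerable: the user-supplied value is concatenated into the statement text,
-- so input like "1 OR 1=1" or a subquery can rewrite the query.
CREATE OR REPLACE PROCEDURE get_dept_unsafe(p_dept_id IN VARCHAR2) AS
  l_name VARCHAR2(100);
BEGIN
  EXECUTE IMMEDIATE
    'SELECT department_name FROM departments WHERE department_id = ' || p_dept_id
    INTO l_name;
END;
/

-- Safer: the value travels as a bind variable and can never change the SQL text.
CREATE OR REPLACE PROCEDURE get_dept_safe(p_dept_id IN VARCHAR2) AS
  l_name VARCHAR2(100);
BEGIN
  EXECUTE IMMEDIATE
    'SELECT department_name FROM departments WHERE department_id = :1'
    INTO l_name
    USING p_dept_id;
END;
/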

Conclusion

This was the last day of UKOUG 2014, and we can clearly see that the middleware trend is the Cloud, mobile applications and, of course, security. I can't wait to see more sessions at the next UKOUG!

Taskset: a useful tool to set CPU affinity

Wed, 2014-12-10 05:14

Today at #ukoug_tech14 I had the chance to attend Christo Kutrovsky's (@kutrovsky, Pythian) session about "Measuring Performance in Oracle Solaris & Oracle Linux". This session aimed to present how to measure system utilization in the Linux and Oracle Solaris operating systems and how to use this information for tuning and capacity planning. During this session we had a very good introduction to a bunch of performance monitoring tools that can be grouped into four categories (non-exhaustive list of tools):


1. CPU:

  • top
  • vmstat
  • time
  • mpstat

2. Memory - RAM

2.1 Global

  • /proc/meminfo
  • vmstat
  • ipcs

2.2 Per process

  • pmap -x

3. Disk

  • vmstat
  • iostat
  • iotop
  • dtrace

4. Network

  • ifconfig
  • netstat
  • nicstat
  • iftop
  • iperf
  • tcpdump

Regarding this list, I would like to point out a specific command which can be useful in the context of a performance test. This command is taskset. According to the taskset manual page: taskset is used to set or retrieve the CPU affinity of a running process given its PID, or to launch a new COMMAND with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs.

Let's make a short test with sysbench and MySQL in order to see how taskset works. This test consists of running sysbench with 4096 threads, the first time without taskset and a second time with taskset, setting CPU affinity to CPU number 3.

 

1. Sysbench test without CPU affinity

1.1 Let's start the benchmark with sysbench:

mysql@ThinkPad-T540p:/home/mysql/ [mysqld1] ./innodb/runallinnodb5.ksh

 

1.2 In order to prove that sysbench is running 4096 threads, we can execute "top + C":

22110 mysql     20   0 4747720 334964   1548 S   5,3  2,1   0:06.55 sysbench --db-driver mysql --test oltp --num-threads 4096 --mysql-user sbtest --mysql-password sbtest --mysql-db sysbench .....

 

1.3 Let's now have a look on CPU usage with command "top + 1":

top - 11:47:38 up 42 min,  4 users,  load average: 5,97, 15,54, 16,48
Tasks: 238 total,   2 running, 236 sleeping,   0 stopped,   0 zombie
%Cpu0  : 25,5 us,  1,7 sy,  0,0 ni, 69,9 id,  3,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu1  : 15,1 us,  0,7 sy,  0,0 ni, 78,9 id,  5,4 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu2  : 21,3 us,  1,3 sy,  0,0 ni, 77,4 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu3  : 15,8 us,  1,0 sy,  0,0 ni, 83,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu4  : 18,6 us,  1,3 sy,  0,0 ni, 79,8 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu5  : 15,4 us,  1,0 sy,  0,0 ni, 83,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu6  : 27,8 us,  1,0 sy,  0,0 ni, 71,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu7  : 14,8 us,  0,0 sy,  0,0 ni, 85,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
KiB Mem:  16309072 total,  4207804 used, 12101268 free,   206348 buffers
KiB Swap: 16651260 total,        0 used, 16651260 free.  1628872 cached Mem

 

1.4 As we could expect all CPUs are in use because no CPU affinity has been set:

steulet@ThinkPad-T540p:~$ sudo taskset -pc 22110

pid 22110's current affinity list: 0-7

 

2. Sysbench test with CPU affinity

2.1 Let's start by setting CPU Affinity of mysql process:

 

steulet@ThinkPad-T540p:~$ sudo taskset -pc 3 22110

pid 22110's current affinity list: 0-7
pid 22110's new affinity list: 3

 

2.2 We can now restart the benchmark:

 

mysql@ThinkPad-T540p:/home/mysql/ [mysqld1] ./innodb/runallinnodb5.ksh

 

2.3 Let's now have a look on CPU usage using command "top + 1":

Tasks: 240 total,   2 running, 238 sleeping,   0 stopped,   0 zombie
%Cpu0  :  1,7 us,  0,3 sy,  0,0 ni, 97,7 id,  0,0 wa,  0,0 hi,  0,3 si,  0,0 st
%Cpu1  :  0,3 us,  0,0 sy,  0,0 ni, 99,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu2  :  0,3 us,  0,0 sy,  0,0 ni, 99,7 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu3  : 98,0 us,  1,7 sy,  0,0 ni,  0,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu4  :  2,0 us,  0,3 sy,  0,0 ni, 97,3 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu5  :  0,7 us,  0,0 sy,  0,0 ni, 99,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu6  :  0,3 us,  0,7 sy,  0,0 ni, 97,7 id,  1,3 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu7  :  0,7 us,  0,0 sy,  0,0 ni, 95,7 id,  3,7 wa,  0,0 hi,  0,0 si,  0,0 st

 

Processor 3 owns all the MySQL load, as expected. Thanks to taskset you can also get a better understanding of the load on hyper-threaded CPUs. In order to better understand the problems related to monitoring hyper-threaded CPUs and CPU affinity, I suggest you read the following blog: http://www.pythian.com/blog/virtual-cpus-with-amazon-web-services/