Yann Neuhaus


Step-by-Step Guide to Enabling Copilot in SSMS

Mon, 2025-06-23 06:45

With the latest version of SSMS, SSMS 21, there is now integration with Copilot.

In this blog post, we’ll go through the setup step by step, with screenshots, to get everything ready to use Copilot in SSMS.
There is already some official documentation on this, but I believe many people will find it clearer to see everything illustrated.

Copilot is not free: you need an Azure subscription, and you are billed according to the number and complexity of prompts.

I will use an API key, but you can also use Microsoft Entra to authenticate to Azure OpenAI for Copilot in SSMS, which is probably safer.

Azure OpenAI Service

To start with, we’ll use the Azure OpenAI service.

Click “Create Azure OpenAI”

Create a Resource group if needed and choose a name.

Leave the “All networks” option selected on the Network pane.

AI Model deployment

Now, in the newly created resource, click on “Go to Azure AI Foundry Portal”.

Then, on the left panel choose “Deployments” and click “Deploy model” and select “Deploy base model”.

Select “gpt-4o” and Confirm. It’s the only supported model for now.

Then click on “Customize”.

Here you can modify the name and select a Deployment type.
Here, I recommend reading the documentation to choose the option that best fits your needs.
Doc: Deployment types for Azure AI Foundry Models

SSMS

Now go to SSMS and click “Copilot”.

The Azure OpenAI Endpoint and the API Key can be found on the Azure OpenAI Service page, by clicking on “Click here to view endpoints”.
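If you prefer the command line, the same values can also be retrieved with the Azure CLI. This is only a sketch: the account name and resource group below are placeholders, not necessarily the ones used in this post.

az cognitiveservices account show --name <openai-account-name> --resource-group <resource-group> --query properties.endpoint --output tsv

az cognitiveservices account keys list --name <openai-account-name> --resource-group <resource-group> --query key1 --output tsv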

Finally, the Azure OpenAI deployment is the name we defined earlier: “ai-snrel-ssms-deployment” in my case.

Note that the configuration can be edited under “Tools”, “Options”, “Copilot”.

We are done. I can now start chatting with the AI.

Hopefully, this blog made it easier for you to set up and start using Copilot in SSMS.

Regarding privacy, it is documented here that prompts will not be used by OpenAI to improve or train models, so it should be quite safe.


SQL Server 2025 – Retirement of SQL Server Reporting Services (SSRS)

Fri, 2025-06-20 04:37

Microsoft has announced the end of the SQL Server Reporting Services (SSRS) service. In other words, Microsoft will no longer provide new versions of SSRS. Currently, the latest available version is the 2022 version (https://www.microsoft.com/fr-FR/download/details.aspx?id=104502).

However, it is important to note that this service will be supported until January 11, 2033. Moreover, it will still be possible to use SSRS and host databases from this service on a recent version of SQL Server.

The replacement for SQL Server Reporting Services (SSRS) is Power BI Report Server (PBIRS). This is the service to adopt for future installations. In fact, we have observed that many clients have been using Power BI Report Server for several years.

Reference: https://www.microsoft.com/en-us/sql-server/blog/2025/06/19/enhancing-reporting-and-analytics-with-sql-server-2025-tools-and-services/


I use SQL Server Reporting Services and would like to migrate to Power BI Report Server. What are my options?

Several solutions are possible (the following list is neither exhaustive nor in any particular order):

  • Migrate the databases that support the SSRS service to PBIRS
    • Current environment:
      • Server 1 (S1):
        • Hosted services: SSRS and SQL Server Database Engine. The SSRS service and my databases are hosted on my S1 server.
    • New environment:
      • Server 2 (S2):
        • Hosted services: PBIRS and SQL Server Database Engine. The PBIRS service and the database engine are hosted on my S2 server.
        • We will migrate the databases from S1 to S2.
        • The PBIRS service will connect to the migrated databases.

Reference: https://learn.microsoft.com/en-us/power-bi/report-server/migrate-report-server


  • Use the RDL Migration Tool (RdlMigration) to migrate reports from SSRS to PBIRS

Reference: https://github.com/microsoft/RdlMigration


  • Use the Reporting Services PowerShell API, which allows for fine-grained control over the migration of reports (a short sketch follows below)

Reference: https://github.com/microsoft/ReportingServicesTools
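As an illustration only (the cmdlets come from the ReportingServicesTools module, but the server URLs, folders and filters below are assumptions), such a migration could be sketched like this:

# Sketch: export catalog items from SSRS and re-publish them to PBIRS.
# The server URLs and the export folder are placeholders.
Install-Module ReportingServicesTools -Scope CurrentUser

$ssrsUri  = 'http://ssrs-server/ReportServer'
$pbirsUri = 'http://pbirs-server/ReportServer'
$export   = 'C:\Temp\ssrs-export'

# Download the reports from the SSRS instance
Get-RsFolderContent -ReportServerUri $ssrsUri -RsFolder '/' -Recurse |
    Where-Object { $_.TypeName -eq 'Report' } |
    ForEach-Object { Out-RsCatalogItem -ReportServerUri $ssrsUri -RsItem $_.Path -Destination $export }

# Re-publish the exported .rdl files to the PBIRS instance
Get-ChildItem $export -Filter *.rdl |
    ForEach-Object { Write-RsCatalogItem -ReportServerUri $pbirsUri -Path $_.FullName -RsFolder '/' }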


I am currently using SSRS 2022 and would like to migrate to Power BI Report Server. What are my options?

According to the documentation (as of now), it is currently not possible to migrate from SSRS 2022 to Power BI Report Server.

Reference: https://learn.microsoft.com/en-us/power-bi/report-server/migrate-report-server


Thank you, Amine Haloui.


Experiencing SQLBits 2025 in London

Wed, 2025-06-18 13:24

This week, I’m in London attending the SQLBits 2025 conference.
Since joining dbi services as a consultant, this is my second time taking part in the event.
The last time was in 2022, but I attended remotely due to the uncertainties surrounding international travel at the time, as COVID-19 was still a concern.
You can actually find the blog post I wrote back then: SQLBits 2022 – Levelling up my Performance Tuning skills.

This year, I’m here in person and fully intend to make the most of the experience.

For context, SQLBits 2025 is one of the largest Microsoft data platform events in Europe. Taking place in London from June 18 to 21, it features dozens of sessions covering SQL Server, Azure, Power BI, AI, and data engineering. The event brings together experts, MVPs, and Microsoft product team members for intense learning and networking. Known for its high-quality content and friendly atmosphere, SQLBits is a must-attend event for anyone working in the Microsoft data ecosystem.

As you can see on my badge, I’ve chosen to attend a wide range of sessions during the first two days, and on Friday I’ll be taking part in a full-day training on “Real-World Data Engineering: Practical Skills for Microsoft Fabric” led by Prathy Kamasani.

Looking at the agenda, you’ll see that the trending topics stand out. Personally, I’m very interested in sessions on SQL Server 2025, which has just been released in public preview, or sessions on the SQL Server engine in general, but of course these aren’t the majority of sessions.
The trends are now:

  • AI and AI Agents, with multiple sessions exploring how AI is being integrated into the Microsoft data ecosystem including intelligent assistants, automation, and real-world use cases.
  • Microsoft Fabric is front and center, everywhere.
  • Analytics and Power BI continue to be key pillars, with advanced sessions on DAX, large-scale reporting, data modeling, and real-time analytics.
  • Data Transformation and DataOps are well represented, especially through practical sessions on orchestration, data pipelines, automation, and governance.

Today I attended the following sessions:

  • How Microsoft’s new, open-source Data API builder fits into Solution Architecture
  • Unlocking the Power of Open Mirroring in Microsoft Fabric
  • AI Agents in Action: Enhancing Applications with Azure AI Services
  • Azure SQL Managed Instance Demo Party
  • Empower Your Data with Real-Time Intelligence leveraging Data Activator within Fabric
  • From B-Trees to V-Order. Told differently than usual
  • Resolving Deadlocks in SQL Server: Practical Demo

It’s a huge amount of information, with subjects to explore and ideas for solving real-life problems for our customers.

That was only the first day, but I’m already extremely satisfied with the conference: from the quality of the sessions and the content to the expertise of the speakers. I’m looking forward to making the most of the upcoming sessions, and I highly recommend that any Microsoft Data professional consider attending SQLBits. If not this year, then why not next year?


Parallel execution of Ansible roles

Tue, 2025-06-10 13:56
Introduction

You can run a playbook for specific host(s), a group of hosts, or “all” (all hosts of the inventory).

Ansible will then run the tasks in parallel on the specified hosts. To avoid an overload, the parallelism – called “forks” – is limited to 5 by default.

A task with a loop (e.g. with_items:) will be executed serially by default. To run it in parallel, you can use the “async” mode.

But unfortunately, this async mode will not work when including roles or other playbooks in the loop. In this blog post we will see a workaround to run roles in parallel (on the same host).

Parallelization over the Ansible hosts

In this example, we have 3 hosts (dbhost1, dbhost2, dbhost3) in the dbservers group
(use ansible-inventory --graph to see all your groups) and we run the following sleep1.yml playbook

- name: PLAY1
  hosts: [dbservers]
  gather_facts: no
  tasks:
    - ansible.builtin.wait_for: timeout=10

The tasks of the playbook will run in parallel on all hosts of the dbservers group, but no more at the same time than specified with the “forks” parameter (set in ansible.cfg, via the ANSIBLE_FORKS environment variable, or with the --forks command-line parameter).
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html

$ time ansible-playbook sleep1.yml --forks 2
...
ok: [dbhost1]  #appears after 10sec
ok: [dbhost2]  #appears after 10sec
ok: [dbhost3]  #appears after 20sec
...
real    0m22.384s

With forks=2 the results of dbhost1 and dbhost2 will both be returned after 10 seconds (sleep 10 in parallel). dbhost3 has to wait until one of the running tasks is completed, so the playbook will complete after approx. 20 seconds. If forks is 1, it takes 30s; if forks is 3, it takes 10s (plus overhead).

Parallelization of loops

By default, a loop is not run in parallel:

- name: PLAY2A
  hosts: localhost
  tasks:
    - set_fact:
        sleepsec: [ 1, 2, 3, 4, 5, 6, 7 ]

    - name: nonparallel loop
      ansible.builtin.wait_for: "timeout={{item}} "
      with_items: "{{sleepsec}}"
      register: loop_result

This sequential run will take at least 28 seconds.

To run the same loop in parallel, use “async”
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_async.html

- name: PLAY2B
  hosts: localhost
  gather_facts: no
  tasks:
    - name: parallel loop
      ansible.builtin.wait_for: "timeout={{item}}"
      with_items: "{{sleepsec}}"
      register: loop_result
      async: 600  # Maximum runtime in seconds. Adjust as needed.
      poll: 0     # Fire and continue (never poll here)

    # in the meantime, you can run other tasks

    - name: Wait for parallel loop to finish (poll)
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: loop_check
      until: loop_check.finished
      delay: 1      # Check every second
      retries: 600  # Retry up to 600 times.
                    # delay*retries should match the "async:" value above
      with_items: "{{ loop_result.results }}"

In the first task we start all sleeps in parallel. It will time out after 600 seconds. We do not wait for the result (poll: 0). A later task polls the background jobs until all parallel loops are finished. This execution only takes a little more than 7 seconds (the longest sleep plus some overhead). Between the loop and the poll you can add other tasks to use the waiting time for something more productive. Or, if you know your loop takes at least 1 minute, you can add an ansible.builtin.wait_for: "timeout=60" before the polling task to reduce its overhead, as shown below.
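A sketch of that variant (the 60-second value is just an assumption about the minimum loop runtime), placed between the “parallel loop” task and the polling task:

    # only useful if the loop is known to run for at least ~60 seconds
    - name: optional initial wait to reduce polling overhead
      ansible.builtin.wait_for: timeout=60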

For example, we have an existing role to create and configure a new user account with many, sometimes long-running steps, e.g. add to LDAP, create an NFS share, create a certificate, send a welcome mail, etc.; most of these tasks are not bound to a specific host and will run on “localhost”, calling a REST API.

The following code example is a dummy role for copy/paste to see how it works with parallel execution.

# roles/create_user/tasks/main.yml    
    - debug: var=user
    - ansible.builtin.wait_for: timeout=10

Now we have to create many user accounts and would like to do that in parallel. We use the code above and adapt it:

- name: PLAY3A
  hosts: localhost
  gather_facts: no
  tasks:
    - set_fact:
        users: [ 'Dave', 'Eva', 'Hans' ]

    - name: parallel user creation
      ansible.builtin.include_role: name=create_user
      with_items: "{{users}}"
      loop_control:
        loop_var: user
      register: loop_result
      async: 600
      poll: 0

But unfortunately, Ansible will not accept include_role:
ERROR! 'poll' is not a valid attribute for a IncludeRole

The only solution is to rewrite the role and to run every task with the async mode.

But is there no better solution to re-use existing roles? Let’s see…

Parallel execution of roles in a loop

As we already know

  • Ansible can run playbooks/tasks in parallel over different hosts (hosts parameter of the play).
  • Ansible can run tasks with a loop in parallel with the async option, but
  • Ansible can NOT run tasks with a loop in parallel for include_role or include_tasks

So, the trick will be to run the roles on “different” hosts. There is a special behavior of localhost: the well-known localhost IP is 127.0.0.1, but 127.0.0.2 to 127.255.255.254 also refer to localhost (check it with ‘ping’). For our create-user script, we will run it on “different” localhosts in parallel. For that, we create a host group at runtime with localhost addresses. The number of these localhost IPs is equal to the number of users to create.

users[0] is Dave. It will be created on 127.0.0.1
users[1] is Eva. It will be created on 127.0.0.2
users[2] is Hans. It will be created on 127.0.0.3

- name: create dynamic localhosts group
  hosts: localhost
  gather_facts: no
  vars:
    users: [ 'Dave', 'Eva', 'Hans' ]
  tasks:
    # Create a group of localhost IP's; 
    # Ansible will treat it as "different" hosts.
    # To know which localhost IP should create which user:
    # The last 2 numbers of the IP matches the element of the {{users}} list:
    # 127.0.1.12 -> (1*256 + 12)-1 = 267 -> users[267]
    # -1: first Array-Element is 0, but localhost-IP starts at 127.0.0.1
    - name: create parallel execution localhosts group
      add_host:
        name: "127.0.{{item|int // 256}}.{{ item|int % 256 }}"
        group: localhosts
      with_sequence:  start=1  end="{{users|length}}" 

- name: create useraccounts
  hosts: [localhosts]  # [ 127.0.0.1, 127.0.0.2, ... ]
  connection: local
  gather_facts: no
  vars:
    users: [ 'Dave', 'Eva', 'Hans' ]
  # this play runs in parallel over the [localhosts] 
  tasks:
    - set_fact:
        ip_nr: "{{ inventory_hostname.split('.') }}"

    - name: parallel user creation
      ansible.builtin.include_role:
        name: create_user
      vars:
        user: "{{ users[ (ip_nr[2]|int*256 + ip_nr[3]|int-1) ] }}"

In this example: With forks=3 it runs in 11 seconds. With forks=1 (no parallelism) it takes 32 seconds.

The degree of parallelism (forks) depends on your use case and your infrastructure. If you have to restore files, probably the network bandwidth, disk I/O or the number of tape slots is limited. Choose a value of forks that does not overload your infrastructure.

If some tasks or the whole role have to run on another host than localhost (e.g. to create a local user account on a server), you can use delegate_to: "{{remote_host}}", for example:
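A minimal sketch (the remote_host variable is an assumption and must resolve to an inventory host):

# roles/create_user/tasks/main.yml (excerpt)
    # run this particular step on the target server instead of localhost
    - name: create the OS user account on the target server
      ansible.builtin.user:
        name: "{{ user }}"
        state: present
      delegate_to: "{{ remote_host }}"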

This principle can ideally be used for plays that are not bound to a specific host, usually for tasks that run from localhost and call a REST API without logging in to a server with ssh.

Summary

Ansible is optimized to run playbooks on different hosts in parallel. The degree of parallelism can be limited by the “forks” parameter (default 5).

Ansible can run loops in parallel with the async mode. Unfortunately that does not work if we include a role or tasks.

The workaround to run roles in parallel on the same host is to assign every loop item to a different host, and then to run the role on these different hosts. For the different hosts we can use the localhost IPs between 127.0.0.1 and 127.255.255.254 to build a dynamic host group; the number of addresses corresponds to the number of loop items.


M-Files Backup considerations

Tue, 2025-06-10 11:24
Once upon a time in IT project land

What an achievement: your team has been working on a huge project for a good while and it is finally completed. All related documentation is properly registered and managed in your knowledge workers’ enterprise content management system, M-Files.

Time to get some rest and move on to other tasks. Time passes, until dark moments show up and force you to go back and review your previous project documents to answer colleagues’ questions. Bad luck: it looks like several pieces of content have disappeared. Even worse, you discover that others are corrupted. Leaving aside the root cause of all this, your first objective now is to retrieve your precious documents. And guess what, backups are your best friends.

M-Files deployment context

Before considering technical details, let’s assume your M-Files vault(s) are deployed on-premises. Note that if you are working with M-Files Cloud vault(s), then M-Files Corporation maintains and backs up your vaults for you. M-Files Cloud offers a default standard backup service as part of every M-Files subscription. Moreover, it is possible to extend your backup capabilities with different retention and recovery point settings.

That said, keep in mind that we may have different situations:

  • M-Files Server + Microsoft SQL Database + object files distributed either on a single server or across multiple servers
  • M-Files Server + Firebird SQL Database + object files only on a single server

One must admit Firebird SQL Database is a nice choice when you start working with M-Files, in order to become confident with the tool quickly and easily. But keep in mind that as soon as you are managing millions of objects, and even more so in a production context, it is highly recommended to review your local server infrastructure and consider working with / migrating to a Microsoft SQL database.

M-Files server-specific data

Whatever M-Files configuration type is in place, you must always take care of your master database. This is where server-specific data resides, and it is always stored in an embedded Firebird SQL database. The information stored includes:

  • Information on transactions since the last backup
  • Login accounts
    • M-Files login passwords
    • Personal information
    • License types
    • Server-level roles
  • Scheduled jobs
  • Notification settings
  • Licenses
  • M-Files Web settings

As you can imagine, this information is important and must be handled with as much care as your document vault backups. Hence, I would recommend taking regular master database backups according to your needs and system criticality.

M-Files vault-specific data

Depending on the database software used to store this information, either an embedded Firebird or a Microsoft SQL Server database, M-Files gives you the possibility to do full and differential backups. A differential backup contains the changes since the last full backup, which includes all object files and vault metadata such as:

  • Objects with their version history, files, and metadata
  • Value lists, property definitions and classes
  • Metadata structure
  • Users, user groups and named access control lists
  • User permissions and vault roles
  • Workflows
  • Connections to other systems
  • Event log
  • Vault applications and scripts

Note that it is possible to store object files inside the Microsoft SQL Server database, whereas this is not an option with Firebird (they are always stored in the file system). Nevertheless, this may lead to performance issues in large environments and should be considered with care, if not avoided.

If you are using the embedded Firebird database, simply schedule all vault backup jobs in M-Files Admin.

With the Microsoft SQL Server database engine, you must back up the Microsoft SQL database and the files in the file system separately. It is important to always back up the Microsoft SQL database first and then the file system data, to avoid any references to non-existing object files.

Other data to consider

Some secondary data is not saved by the backup procedures above. Instead, it is re-created after a restore operation. Think about index files, PDF renditions and thumbnails stored on the M-Files server hard drive, and pay attention to the rebuilding duration. In a large vault, rebuilding the search indexes can take a lot of time. Fortunately, it is feasible to back them up and restore them as well. Depending on the search engine used, the procedure can be slightly different, but it is feasible.

Last, do not forget any system modifications such as the Windows registry, notification message templates or files in the M-Files Server installation folder, since these are not included in the master or vault backups.

Advices

Things are always easier said than done, but it can get even worse if you do not pay attention to a few M-Files backup best practices. Hardware failures, accidental deletions, cyberattacks, logical and/or human errors happen. To be prepared to face such issues:

  • Plan, do, check and act on your backup plan, considering your systems’ criticality and the business and end-user SLAs and constraints.
  • Do not hesitate to lean on your IT administrator support teams and M-Files architect to validate the choices made and review potential architecture weaknesses.
  • Avoid taking file system snapshots on active M-Files servers where data is stored (inconsistencies and damaged files may occur).
  • Apply the 3-2-1 backup strategy (store 3 copies of your data, on at least 2 types of storage media, with 1 in a secure off-site location).

On top of this, OR/DR test exercises will reveal most of your backup plan gaps and, in passing, validate the integrity of your backups. Thus you will be prepared to solve backup issues.

If you need further assistance, do not hesitate to contact dbi services for any M-Files support and guidance around this topic and others.

Enjoy your knowledge work automation tasks with M-Files.


Azure Bootcamp Switzerland 2025

Tue, 2025-06-10 02:48

Last week I attended the Azure Bootcamp Switzerland 2025 with my colleague Adrien from the Cloud team. It was the first time for us, but it was already the 6th edition of the event. This is a community, non-profit event on an obvious topic: Azure.

It’s quite an intensive day: 2 keynotes and 15 presentations split into 3 different streams. We got our tickets some time ago, and I was quite surprised by the number of people interested in Azure topics. Indeed, just before the keynote, they announced the event was sold out. I already knew the location, since it’s the same place as the Data Community Conference I participated in last year.

Since we were two people at the event, we were able to split across the different streams. I followed sessions on very different topics: building an AI application for the Swiss government, building a data platform, FinOps, platform engineering with Terraform, and DNS for hybrid environments.

All presentations were interesting, but I’ll focus on 2 of them: cost observability with the new FinOps standard FOCUS, and DNS in hybrid environments, because I think these 2 topics are really important in current deployments. The first one is quite obvious: you should not go to the public Cloud without monitoring your costs. As for the second one, in my opinion many companies are deploying to the Cloud but still have on-premises workloads, and it’s important to know how to live in this hybrid world.

Level up your Cost Observability with the new FinOps standard FOCUS and Microsoft Fabric

We had already discussed this FinOps topic within the Cloud team at dbi services. But we may have overlooked the FOCUS specification by trying to create our own mapping of the different Cloud cost usage exports. The public Cloud providers do not strictly follow the specification, and there are some tips in the slides. Using Fabric is then a good example of how the cost data can be processed.

DNS in hybrid cloud environments – can it be that hard?

On the DNS side, the main takeaway of this presentation is to correctly plan your DNS needs. In most cases, you will not use fixed IPs when you work with the public Cloud. Usually, IPs are provided automatically when network interfaces are created. DNS is therefore a critical component to ensure good communication between all your instances/services. DNS should be part of your network design from the beginning, to ease its management and avoid DNS private zones scattered in different places.


M-Files Online June 2025 (25.6) Sneak Preview

Sun, 2025-06-08 04:07

The June 2025 release of the M-Files Client is just around the corner, and as an M-Files partner, we have the opportunity to access the Early Access Release. As it rained heavily over the Whitsun holiday, I decided to spend some time installing the new version and exploring its features and improvements.
A quick look at the release notes revealed that this update would introduce the much-anticipated graphical workflow feature. This allows end users to easily observe the workflow status. But that’s not all, as you can read below.

New Features and Enhancement Highlights

Visual Workflows

During the M-Files Global Conference 2025, the launch of visual workflows was announced to great enthusiasm. I actually heard about the implementation of this feature several months ago and have been waiting for it ever since. I am therefore very happy that it is now available, as it will provide real added value for M-Files end users. Graphical views are easier for people to understand when they want to quickly grasp the status of a task or, in this case, the workflow of a document.

The screenshot below shows the new graphical view. As you can see, the current workflow status and the next step are displayed at the bottom. Furthermore, if you are the person with the next action in the workflow, there are buttons above the workflow status to easily perform the action.
For me, the action buttons in the new client are key, as the new client does not have the left-hand actions bar that the old client has. In my opinion, this implementation enhances the end-user experience and encourages end-users to migrate to the new, modern client.

Global Search

With global search enabled in the M-Files client, you can search all the vaults you have access to. Furthermore, the home screen now displays all the vaults that you have access to. You can access them with just one click.
You can also perform global searches directly using the search box on the home screen. This enhancement is especially useful for users who have access to multiple M-Files vaults, like me. It makes my work faster and smoother and is a great feature.

Co-authoring improvements

Create new Office documents and co-author them in Microsoft 365 Office desktop applications. This new release makes co-authoring the default behaviour when opening a document. See the latest live document changes in M-Files. Any user with the necessary access rights can co-author a document and save new versions of the document to M-Files.

M-Files URL improvements

The implementation of this feature is a clear statement of the willingness to integrate the known functions of the M-Files Classic Client into the new, modern M-Files Client. I had a look into the client development roadmap, and it confirms M-Files’ commitment to enhancing the new and much faster client and making it available to end users. The next logical step was to enable and improve URL handling. Please find below a brief summary of the URL improvement news.

  • Open view and edit specified document in Web via links
  • Support for search links
  • Property-based search in Web and Desktop via Links
  • External ID support for objects in grouping levels
Conclusion

I am pleased to inform you that a new client will be available in June for all M-Files end-users. In my professional opinion, the most significant enhancement is the implementation of the graphical workflow, and of course the deeper integration within the Microsoft ecosystem, with the enablement of co-authoring with M-Files.

I hope you are as excited as I am about this new M-Files Client. Should you wish to see it in action, please do not hesitate to contact us or me. We will be happy to arrange a demonstration to illustrate the potential of M-Files to enhance your business.


Azure Bootcamp Switzerland 2025, an Azure day in Bern

Fri, 2025-06-06 03:42

On June 5th I was able to participate in the Azure Bootcamp Switzerland in Bern. The event gave me the opportunity to attend multiple presentations on various Azure topics.

From AI to serverless integration to the cloud and networking, the conference’s topics were varied and always interesting.

We had, for example, two keynotes during the day: one about transforming DevOps with a multi-tool platform, and the other one taking us on an AI journey from the Nvidia perspective.

I also attended some security and networking sessions. For example, I attended a session on how to master network security in Azure, where we got some details on how to correctly manage Network Security Groups or Azure Firewalls.

This day was very interesting, and the diversity of topics really made me like it even more. I learned a lot during the presentations. Hopefully it will take place again next year so I can come again.


Oracle DMK release 2.2.0 new features / Windows support.

Fri, 2025-06-06 03:05

DMK is a Database Management Kit that can be used in various database environments.

It simplifies command-line tasks for database administrators on Unix and Windows systems. It supports multiple databases including Oracle, MongoDB, MariaDB, and PostgreSQL. It also includes additional modules for automating RMAN backups, database duplication, and database creation.
dbi-services recently released version 2.2.0, introducing significant improvements for Oracle databases with RAC/ASM and enhanced support for Windows.

You can check here more details:

https://www.dbi-services.com/products/dmk-management-kit/

Below is an example of how it looks in the command-line output:

Windows support:

Now managing your Oracle databases on a Windows server has never been easier.

Just click the DMK.cmd shortcut on your desktop to open a command line with the DMK API.

Using commands like istat/CDBNAME/lspdb, we can easily navigate between our databases.

With the simple alias ‘u’ we can see all databases on the Windows server.

You can easily list or connect to your multitenant PDBs just by using an alias like lspdb. Use PDBNAME to directly log in to your PDB:

In combination with DMK_backup, DMK_dbcreate and DMK_dbduplicate, it is quite a powerful pack to manage your databases on the Windows platform.

The syntax and output are now fully compatible with the Linux version.

This version also does not need an extra Perl installation, as it uses the Perl binaries from the Oracle Home on Unix and Windows.

sqlplus/rman/asmcmd work with command history: simply use sqh and the arrow keys ‘up’ and ‘down’ to navigate over your previous statements:

Useful aliases:

  • taa – tail on the alert log
  • cdh – move to the current ORACLE_HOME
  • cdd – move to the DMK home
  • sqh – sqlplus with history
  • asmh – asmcmd with history
  • rmanh – rman with history
  • cddi – move to the current DB DIAG_DEST
  • vit/vil/vio – fast access to edit your current tnsnames, listener and oratab configuration without searching for them all over the filesystem

The full list of aliases can be found in the DMK documentation under this link.

Below is an example of tail on the alert log:

Now, when your environment changes:

  • new ORACLE_HOME
  • changed ORACLE_HOME
  • new/removed ORACLE_SID
  • new/removed PDBs

DMK will automatically adjust the information and aliases.

Now, when you type ‘u’ or a SID, all your instances are refreshed from their actual state, also when the up/down status of a database has changed:

These changes also apply to the Linux version.

Oracle RAC / Clusterware :

For Oracle with a RAC/Clusterware configuration, there are also some improvements.

Use cstat to see crs resources in colored output.

You can also check your ASM instance:

What is also interesting: for the srvctl command we implemented command completion with tab.

So you don’t need to think anymore about the proper srvctl command arguments (and as you know, there are a lot of options). Now, just choose from the list using [tab].

For example, stopping and starting a database.

DMK not only helps with command-line work, but it also helps to keep the Oracle Optimal Flexible Architecture (OFA) standards for your naming convention/folder structure and to follow Oracle best practices for database configuration. By installing the additional modules for backup/duplication/dbcreate, you get instant standardization and improved quality for your environments.

You can read more in DMK documentation under this link.


Guidance for a SQL Server backup strategy in AWS EC2

Tue, 2025-06-03 04:45
INTRODUCTION

S3 storage is the most resilient and available service in AWS: it is a trusted service for storing backups over the long term.

From a Windows Server machine, in AWS EC2 or elsewhere, here is the easiest and most reliable way to back up SQL Server to S3:

1- Concept

For data that must be available over a long period of time (finance, insurance, medical), less costly archiving can be envisaged as follows:

  • By optimizing the storage class used
  • By reducing the volume of backups with a retention strategy
a- Backup Retention

For a customer project, we chose to implement the following strategy:

  • All backups with 30 days retention (with standard storage class)
  • A Full backup a month with 12 months retention (with glacier storage class)
  • A Full backup a year with 15 years retention (with deep-archive storage class)

To do this, we decided to create, in a Bucket, the following 3 Prefixes:

  • Daily: contains all backups (.bak, .trn…) in last 30 days
  • Monthly: contains the first FULL backup of the month in the last 12 months
  • Yearly: contains the first FULL backup of the year in the last 15 years
b- S3 Lifecycle Policy

To implement the retention, we set up a lifecycle rule on each of the 3 prefixes.
These rules include the following parameters (illustrated by the sketch after this list):

  • A Transition Action: the object is converted into the default storage class for the prefix, based on the backup retention decided above. Of course, it’s more efficient to create them in the right storage class first
  • A Retention Action: the object is deleted after a defined period
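For illustration only, a lifecycle rule for the Monthly prefix (transition to Glacier, expiration after roughly 12 months) could look like the JSON below, applied with aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration file://monthly-rule.json. The rule ID, days and prefix are assumptions to adapt to your own retention.

{
  "Rules": [
    {
      "ID": "monthly-full-backups",
      "Filter": { "Prefix": "Monthly/" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 0, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}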
c- Archiving backups

While lifecycle management exists natively in S3 with Lifecycle Policies, there is no native mechanism that moves S3 objects between prefixes.
Generating an extra backup once a month and/or a year from SQL Server to S3 is not efficient, since the backups are already present in the Daily prefix.
Instead, you need to set up a custom script which archives the backups from Daily to Monthly and Yearly.

This script can be scheduled, for example, by AWS Lambda or another scheduler tool.
In our context, we chose SQL Server Agent to schedule the archiving after the backup step in a job, for the following reasons (see the sketch after this list):

  • Execute archiving as soon as a new backup (if eligible) was generated
  • Access it easily and modify it, if necessary, from the database server. Of course, the script is server- and instance-agnostic… but you may want to temporarily modify it on a server as a workaround
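For illustration only (the job name, step name and script path below are assumptions), adding such an archiving step to an existing Agent job could look like this:

USE msdb;
GO
-- Hypothetical job and script names: adapt to your own backup job
EXEC dbo.sp_add_jobstep
    @job_name  = N'DBA - Daily Backup',
    @step_name = N'Archive first FULL backup to Monthly/Yearly prefixes',
    @subsystem = N'PowerShell',
    @command   = N'& "D:\scripts\Archive-S3Backups.ps1"',
    @on_success_action = 1;  -- quit the job reporting success
GO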
2- Implementation

a- Accessing S3

First, an IAM role must be attached to the EC2 instance to authorize access to the S3 bucket where backups are stored. Keep in mind this access is available at the machine level, which is why I recommend restricting it to read/write only.
This role is accessible, on the OS, through a metadata token as follows (IMDSv2 requirement):

[string]$token = Invoke-RestMethod -Headers @{"X-aws-ec2-metadata-token-ttl-seconds" = "21600"} -Method PUT -Uri http://169.254.169.254/latest/api/token
# This sends a PUT request to get a token valid for 6 hours

Once the token is retrieved, you can securely access the role assigned to the EC2 instance:

$roleName = Invoke-RestMethod -Uri "http://169.254.169.254/latest/meta-data/iam/security-credentials/" -Headers @{"X-aws-ec2-metadata-token"=$token}

From there, you can get temporary AWS security credentials for this role:

$credentials = Invoke-RestMethod -Uri "http://169.254.169.254/latest/meta-data/iam/security-credentials/$roleName" -Headers @{"X-aws-ec2-metadata-token"=$token}

Then, you can set the credentials using the AWS Tools for PowerShell module:

Set-AWSCredential -AccessKey $credentials.AccessKeyId -SecretKey $credentials.SecretAccessKey -SessionToken $credentials.Token
# Credentials set for the current PowerShell session
b- Retrieving backups to archive

Using the dbatools PowerShell module, we retrieve the first backup based on a reference date.
Here is an example when a Storage Gateway with File Gateway is used as the destination:

# $RefDate = New-Object DateTime( $Today.Year, $Today.Month, 1 ); # First backup of the month
# $RefDate = New-Object DateTime( $Today.Year, 1, 1 ); # First backup of the year

$BackupHistory = Get-DbaDBBackupHistory -SqlInstance $ConnectionString -Since $RefDate -Type Full -DeviceType Disk | Group-Object Database;

foreach ($Database in $BackupHistory){
      $Backup = ($Database.Group | Sort-Object End)[0];
      $Files = $Backup.Path;
      foreach ($File in $Files){
            # Process your copy
      }
}
c- Archiving backups

To process the copy, we must construct the source key and the destination key in S3 based on the information retrieved from msdb.
So, it mainly depends on the prefix structure of your bucket, which must remain the same over time.
Here is an example when the structure is RetentionPeriod\InstanceFolder\DatabaseName\FileName:

# Process copy for each file
$Leaf = Split-Path -Path $File -Leaf;

# Construct Source Key in S3
$SourceKey = "$($RefSource)/$($InstanceFolder)/$($Backup.Database)/$($Leaf)";

# Construct Destination Key in S3
$DestinationKey = "$($RefDestination)/$($InstanceFolder)/$($Backup.Database)/$($Leaf)";

# Copy the backup to the new Prefix
Copy-S3Object -BucketName $bucketName -Key $SourceKey -DestinationKey $DestinationKey -StorageClass $StorageClass -Region $Region;
CONCLUSION

In this blog, we demystified all the concepts and steps required to implement a backup strategy in AWS EC2 for SQL Server.
For other RDBMS running in EC2, the approach and code used for archiving would be very similar.

If you restrict access to the Storage Gateway (File Gateway) share to READ/WRITE permissions and you enable COMPRESSION in SQL Server, then I recommend reading this blog I wrote:
Random permission denied when backing up SQL Server databases to AWS Storage Gateway


Random permission denied when backing up SQL Server databases to AWS Storage Gateway

Tue, 2025-05-27 11:32

Before addressing the issue caused by backup size reduction when compression is enabled, let me provide some context.

When SQL Server is hosted on AWS EC2 instances, you’ll need to manage the backup strategy appropriate to your business.

In order to persist and archive your SQL Server backups securely and permanently, AWS S3 is a prime target: it’s the service with the best availability and durability in AWS.

However, if you have a version prior to SQL Server 2022, you can’t natively back up to S3.

In this context, you need to use an intermediary service: a Storage Gateway with File Gateway.

The Storage Gateway is a storage service that enables you to replicate (and extend) your data into S3. It can be deployed either on-premises or in the Cloud, making it a hybrid cloud solution.
To use it with SQL Server, you need the File Gateway type, which exposes an SMB file share, in the context of Windows Server.

Here is a simple architecture example with a SQL Server hosted on EC2:

In the S3 bucket, a classic structure is to create several prefixes (kind of folders) as follows:

  • A prefix for daily backups with a short-term retention policy
  • A prefix for monthly backups with a middle-term retention policy
  • A prefix for yearly backups with a long-term retention policy

Of course, storage class may differ between these different prefixes to optimize cost.
In addition, for security and cost reasons, you’ll want to use an S3-level retention policy to remove backups over time (called S3 Lifecycle rules).

For added security, and to prevent backups being modified or deleted from the SMB fileshare, access can be restricted to read and write permissions only: this is what we’ve done in our context.
Moreover, to save space as much as possible, we enabled compression at SQL Server level.

It is the combination of these specifications which can, randomly, introduce the following error: Unable to open the physical file. Operating system error 5: “5(Access is denied)”.

To understand this error, let’s see how backup compression works with regard to space allocation for the backup file.
SQL Server first estimates the size of the compressed file (with a preallocation algorithm), but the final size may vary (it can end up higher or lower) depending on how compressible the data is.

The above error occurs when the preallocated backup file is larger than needed and the system tries to shrink the file to release the unused space. This action requires Modify permissions, so an Access Denied error is triggered.

In our context, Modify permissions were not an option for security reasons.
As a workaround, we enabled Trace Flag 3042, which bypasses the preallocation algorithm. As a result, the file grows as the backup progresses, requiring only Write permissions.
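A minimal sketch of this setup (database name and share path are placeholders); the trace flag can also be set permanently as a -T3042 startup parameter:

-- Enable trace flag 3042 globally: skip the backup file preallocation
DBCC TRACEON (3042, -1);

-- Compressed backup to the File Gateway SMB share (write-only access is enough)
BACKUP DATABASE [MyDatabase]
TO DISK = N'\\filegateway\sqlbackups\Daily\MyDatabase.bak'
WITH COMPRESSION, CHECKSUM, INIT;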

Keep in mind that a backup file growing over time is less performant than the preallocation algorithm: backup duration may increase for large databases, but this was acceptable in our context.


Optimize materialization of Exadata PDB sparse clones

Mon, 2025-05-26 15:25

Well, this blog is unfortunately reserved to DBA folk running Oracle engineered systems #Exadata, #ExaCC, #ExaCS and Oracle Multitenant database (aka. CDB) architecture.

One of our #ExaCC customers has a strongly DevOps-focused business and heavily relies on Exadata sparse clones to quickly provision “lightweight” database clones for their development teams.

As part of the move of a mission-critical application to #ExaCC, we enhanced our sparse PDB materialization scripts.

What’s wrong with the Materialization?

At first look, nothing! It’s a straightforward command, as easy as creating a PDB using “CREATE PLUGGABLE DATABASE .. FROM …”. Except that it takes time…

Below are some “fun facts”:

PDB size: 8 TB (about 2500 datafiles)
Provisioning of a sparse clone runtime (locally – same CDB // DOP = 8): 15 mins
Provisioning of a full clone runtime (locally – same CDB // DOP = 8): > 3.5 hours
Materialization of sparse clone runtime (DB_CREATE_FILE_DEST = <sparse diskgroup>): > 20 hours
Materialization of sparse clone runtime (DB_CREATE_FILE_DEST = <data diskgroup>): > 6 hours

Indeed, we have set a “fixed” degree of parallelism to “constrain” the resources (CPU, IO) used by all kinds of cloning activities.

Nevertheless, the “ALTER PLUGGABLE DATABASE MATERIALIZE” is at least 5 times slower than the creation of a full PDB clone when it copies all database blocks onto the SPARSE diskgroup.

Why does it take so much time?

A simple check in the database alert*log provides some insights:

2025-05-25T09:02:46.980450+02:00

Moving datafile +SPRC4/AVQD001T_SITE1/35F11D075E0FB991E0631A1FA10A6291/DATAFILE/data_000014750.2483.1202028085 (2967) to +DATAC4

Indeed, we get flooded by “ALTER DATABASE MOVE DATAFILE” commands. A quick session tracing activity confirmed that Oracle performs the online datafile move operations sequentially. Last but not least, these operations all run on one database server (VM cluster node).

*** 2025-05-22T21:22:52.961911+02:00 (CDB$ROOT(1))

Moving datafile +SPRC4/AVQD001T_SITE1/360EA26B7CE6F135E0631A1FA10A00C8/DATAFILE/data_000011575.2664.1202154873 (6656) to +DATAC4

kcffo_mv_prepare: the secondary file +DATAC4/AVQD001T_SITE1/360EA26B7CE6F135E0631A1FA10A00C8/DATAFILE/data_000011575.2015.1202162415 is created with size 216

kcffo_mv_domove: Blocks copied for file +DATAC4/AVQD001T_SITE1/360EA26B7CE6F135E0631A1FA10A00C8/DATAFILE/data_000011575.2015.1202162415 size 216

Move operation committed for file +DATAC4/AVQD001T_SITE1/360EA26B7CE6F135E0631A1FA10A00C8/DATAFILE/data_000011575.2015.1202162415

Are we able to monitor the “materialize” process?

Yes, of course. As usual for operations which last for more than 6 seconds, we can query GV$SESSION_LONGOPS:

> SELECT sid, serial#, opname, sofar, totalwork,
                 ROUND(sofar/totalwork*100,2) PCT_COMPLETE
     FROM   V$SESSION_LONGOPS
   WHERE TOTALWORK != 0
        AND    SOFAR != TOTALWORK
 ORDER BY 1

      SID   SERIAL# OPNAME                                               SOFAR     TOTALWORK   PCT_COMPLETE
------- ---------- ---------------------------- ----------------- ----------------- -----------------
   2228      30474  Online data file move                  4796186624       8615100416                    55.67

1 row selected.
Can we parallelize the “ALTER PLUGGABLE DATABASE MATERIALIZE” process?

Actually, IMHO, Oracle should provide this through some kind of “magic” parameter. Unfortunately, research on https://oradiff.oracle.com as of Oracle 19.27 was unfruitful.

Luckily, we did all our scripting in #python using the #cx_oracle driver, and could quickly develop the necessary function to iterate over all pluggable database datafiles using parallel task execution with the concurrent.futures module.
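As a rough sketch of this approach (connection details, the service name and the fixed degree of parallelism are assumptions, and DB_CREATE_FILE_DEST is expected to be set at PDB level beforehand):

# Sketch: move all PDB datafiles in parallel instead of relying on the
# sequential "ALTER PLUGGABLE DATABASE ... MATERIALIZE" behaviour.
import cx_Oracle
from concurrent.futures import ThreadPoolExecutor

DSN = "exacc-scan:1521/pdb_clone_service"   # placeholder service of the sparse clone
USER, PASSWORD = "clone_admin", "********"  # placeholder credentials

def move_datafile(file_name: str) -> str:
    # one dedicated session per datafile, so the moves really run in parallel
    with cx_Oracle.connect(USER, PASSWORD, DSN) as connection:
        connection.cursor().execute(f"ALTER DATABASE MOVE DATAFILE '{file_name}'")
    return file_name

def materialize_datafiles(parallel: int = 8) -> None:
    with cx_Oracle.connect(USER, PASSWORD, DSN) as connection:
        cursor = connection.cursor()
        cursor.execute("SELECT file_name FROM dba_data_files")
        files = [row[0] for row in cursor]
    # move all datafiles with a fixed degree of parallelism
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        for moved in pool.map(move_datafile, files):
            print(f"moved {moved}")

if __name__ == "__main__":
    materialize_datafiles()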

The optimized process, using 8 database sessions all moving datafiles in parallel, was worth it: the runtime for materializing the sparse clone is now equal to (or even less than) the time required to create a full PDB clone.

Wrap-up

For those who are already familiar with Exadata sparse clones (and the Exadata sparse ASM diskgroup), you might have spotted that we moved the datafiles into the DATA diskgroup. This can be achieved by setting the DB_CREATE_FILE_DEST parameter at pluggable database level before starting to materialize the datafiles. So, materializing the sparse PDB clone into the DATA diskgroup is only about 2 times slower than creating a full PDB clone, instead of the factor of 5+ observed when copying all blocks into the SPARSE diskgroup.


Oracle Exadata Exascale and Oracle 23ai will definitely bring more flexibility to Exadata PDB sparse clones; at least the known limitations of Oracle 19c will be a thing of the past, as communicated by Oracle.
However, the materialization itself remains uncovered so far.

At the time of writing this post the Oracle “Cloud” support team has been informed.

Last but not least, from time to time we need to clean up some leftovers from the ASM sparse diskgroup (subject of another Oracle Service Request).

2025-05-25T09:02:35.623108+02:00
WARNING: Cannot delete old file +SPRC4/AVQD001T_SITE1/35F11D075E0FB991E0631A1FA10A6291/DATAFILE/data_000014770.1000.1202028085 left after datafile move
2025-05-25T09:02:35.623248+02:00
Errors in file /u02/app/oracle/diag/rdbms/avqd001t_site1/AVQD001T1/trace/AVQD001T1_ora_203496.trc:

ORA-15028: Oracle Automatic Storage Management (Oracle ASM) file '+SPRC4/AVQD001T_SITE1/35F11D075E0FB991E0631A1FA10A6291/DATAFILE/data_000014770.1000.1202028085' not dropped; currently being accessed

Keep you posted!


SQL Server 2025 – AG Commit Time

Mon, 2025-05-26 08:08

SQL Server 2025 preview has been publicly available for a week now: you can find all the details in this blog “SQL Server 2025 Public Preview and SSMS 21 now available“.
In this blog, I want to highlight one engine High Availability (HA) feature: Availability Group Commit Time.

For a little background, Always On Availability Group (AG) is a feature released in SQL Server 2012.
This HA technology has benefited from numerous improvements over time, and most notably in SQL Server 2016 (automatic seeding, read-only replicas, dtc support, database health detection, distributed AG…).
To address a performance issue, SQL Server 2016 introduced the AG commit time. For nodes in synchronous mode, this concept reduces latency by specifying the time after which a transaction must be sent to the replicas.
If this value is not respected, it helps identify possible bottlenecks between the primary and secondary replicas: it improves monitoring and troubleshooting.

In some specific scenarios, this default value may not match business requirements, when the 10 ms delay (the default value) is too high. As a result, SQL Server 2025 makes this value configurable through a server configuration (availability group commit time).
Let’s look at how to change the configuration, and more interestingly, how to measure the impact of the new value.

After downloading the latest version of SQL Server Management Studio (SSMS) – release 21 – let’s connect to the Primary node. Here is my AG configuration (2 replicas + a fileshare witness):

As announced slightly before, this value is configurable at instance level:

SELECT name, value, value_in_use, is_advanced, is_dynamic
FROM sys.configurations WHERE name = 'availability group commit time (ms)'

0 means default value is used: 10ms. To change this default value – let’s say 1 ms for the example – it can be done through the stored procedure sp_configure:

EXEC sp_configure 'show advanced options', '1';
RECONFIGURE;

EXEC sp_configure 'availability group commit time (ms)', '1'
RECONFIGURE;

EXEC sp_configure 'show advanced options', '0';
RECONFIGURE;

SELECT name, value, value_in_use, is_advanced, is_dynamic FROM sys.configurations
WHERE name = 'availability group commit time (ms)'

It is possible to use the DMV sys.dm_hadr_database_replica_states to get an idea of the AG commit time between Primary and Secondary:

SELECT 
    database_id,
    is_primary_replica,
    synchronization_state_desc,
    last_commit_time,
    last_hardened_time,
    last_received_time,
    last_sent_time
FROM sys.dm_hadr_database_replica_states;
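As a rough approximation (a sketch, not taken from the original post), those timestamps can be turned into millisecond deltas:

SELECT
    database_id,
    is_primary_replica,
    DATEDIFF(MILLISECOND, last_sent_time, last_received_time)   AS send_to_receive_ms,
    DATEDIFF(MILLISECOND, last_sent_time, last_hardened_time)   AS send_to_harden_ms,
    DATEDIFF(MILLISECOND, last_commit_time, last_hardened_time) AS commit_to_harden_ms
FROM sys.dm_hadr_database_replica_states;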

It is also possible to calculate the time between when the log block is sent (hadr_log_block_send_complete) and when the hadr_receive_harden_lsn_message event is received on the Primary.
To do so, let’s set up an Extended Events (XE) session:


CREATE EVENT SESSION [AG_Commit_Latency_Tracking] ON SERVER
ADD EVENT sqlserver.hadr_log_block_send_complete(
    ACTION(sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_receive_harden_lsn_message(
    ACTION(sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_log_block_group_commit(
    ACTION(sqlserver.sql_text)
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Program Files\Microsoft SQL Server\MSSQL17.MSSQLSERVER\MSSQL\Log\AG_Commit_Latency.xel', max_file_size = 50, max_rollover_files = 5

)
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [AG_Commit_Latency_Tracking] ON SERVER STATE = START;
SELECT 
    event_data.value('(event/@name)[1]', 'varchar(100)') AS event_name,
    event_data.value('(event/@timestamp)[1]', 'datetime2') AS [timestamp],
    event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM 
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\Program Files\Microsoft SQL Server\MSSQL17.MSSQLSERVER\MSSQL\Log\AG_Commit_Latency*.xel', NULL, NULL, NULL)
) AS events
ORDER BY [timestamp] DESC;

As both events are gathered on the Primary, the received message does not exactly match the time the secondary processed it, but it gives a good idea of whether the configured duration is respected.

Keep in mind the default value (10 ms) is suitable for most workloads, as it is a good balance between performance and efficiency.
On the other hand, a lower value may just unbalance the load distribution on the primary, forcing it to synchronize more often than required; and it will benefit less from log block compression on the network.


SQL Server 2025 – ZSTD – A new compression algorithm for backups

Thu, 2025-05-22 18:43
Introduction

SQL Server 2025 introduces a new algorithm for backup compression: ZSTD. As a result, SQL Server 2025 now offers three solutions for backup compression:

  • MS_XPRESS
  • QAT
  • ZSTD

In this blog, we will compare MS_XPRESS and ZSTD.

Environment

To perform these tests, the following virtual machine was used:

  • OS: Windows Server 2022 Datacenter
  • SQL Server: 2025 Standard Developer
  • CPU: 8 cores
  • VM memory: 12 GB
  • (SQL) Max server memory: 4 GB

Additionally, I used the StackOverflow database to run the backup tests (reference: https://www.brentozar.com/archive/2015/10/how-to-download-the-stack-overflow-database-via-bittorrent/).

ZSTD usage

There are several ways to use the new ZSTD compression algorithm. Here are two methods:

  • Add the following terms to the SQL backup commands: WITH COMPRESSION (ALGORITHM = ZSTD)
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION (ALGORITHM = ZSTD), STATS = 5
  • Change the compression algorithm at the instance level:
EXECUTE sp_configure 'backup compression algorithm', 3; 
RECONFIGURE;
The initial data

The StackOverflow database used has a size of approximately 165 GB. To perform an initial test using the MS_XPRESS algorithm, the commands below were executed:

SET STATISTICS TIME ON
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION, STATS = 5;

Here is the result:

BACKUP DATABASE successfully processed 20 932 274 pages in 290.145 seconds (563.626 MB/sec).
SQL Server Execution Times: CPU time = 11 482 ms,  elapsed time = 290 207 ms.

For the second test, we are using the ZSTD algorithm with the commands below:

SET STATISTICS TIME ON
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION (ALGORITHM = ZSTD), STATS = 5

Here is the result:

BACKUP DATABASE successfully processed 20 932 274 pages in 171.338 seconds (954.449 MB/sec).
CPU time = 10 750 ms,  elapsed time = 171 397 ms.

It should be noted that my storage system cannot sustain its maximum throughput for an extended period. In fact, when transferring large files (e.g., 100 GB), the throughput drops after about 15 seconds (for example, from 1.2 GB/s to 500 MB/s).

According to the initial data, the CPU time between MS_XPRESS and ZSTD is generally the same. However, since ZSTD allows backups to be performed more quickly (based on the tests), the overall CPU time is lower with the ZSTD algorithm. Indeed, because the backup duration is reduced, the time the CPU spends executing instructions (related to backups) is also lower.

Comparison table for elapsed time with percentage gain:

Test Number | Compression Type | Duration in Seconds
1           | MS_XPRESS        | 290
2           | ZSTD             | 171
Performance gain: ZSTD is approximately 41% faster
Comparison of captured data

During the tests, performance counters were set up to gain a more accurate view of the behavior of the two algorithms during a backup. For this, we used the following counters:

  • Backup throughput/sec (KB)
  • Disk Read KB/sec (in my case, Disk Read KB/sec is equal to the values of the Backup Throughput/sec (KB) counter). In fact, the “Backup throughput/sec (KB)” counter reflects the reading of data pages during the backup.
  • Disk Write KB/sec
  • Processor Time (%)

We observe that the throughput is higher with the ZSTD algorithm. The drop that appears is explained by the fact that ZSTD enabled the backup to be completed more quickly. As a result, the backup operation took less time, and the amount of data collected is lower compared to the other solution. Additionally, it should be noted that the database is hosted on volume (S) while the backups are stored on another volume (T).

We also observe that the write throughput is higher when using the ZSTD algorithm.

Over the same observation period, the CPU load is roughly the same for both algorithms; however, since ZSTD completes the backup more quickly (in our case), the total CPU time consumed by the backup is lower.

We also observe that the backup ratio (on this database) is higher with the ZSTD algorithm. This indicates that the size occupied by the compressed backup is smaller with ZSTD.
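
These ratios can be pulled from the backup history in msdb with a query along the following lines (a sketch only; backup_ratio is computed here as backup_size divided by compressed_backup_size, and the compression_algorithm column has been part of msdb.dbo.backupset since SQL Server 2022):

-- Compare compressed size and compression ratio of the last backups of the database
SELECT  database_name,
        type AS backup_type,
        compressed_backup_size,
        backup_size / NULLIF(compressed_backup_size, 0) AS backup_ratio,
        compression_algorithm
FROM    msdb.dbo.backupset
WHERE   database_name = 'StackOverflow'
ORDER BY backup_finish_date DESC;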

backup_ratio         | database_name | backup_type | compressed_backup_size (bytes) | compression_algorithm
3.410259900691847063 | StackOverflow | Full        | 50 283 256 836                 | MS_XPRESS
3.443440933211591093 | StackOverflow | Full        | 49 798 726 852                 | ZSTD

Conclusion

Based on the tests performed, we observe that the ZSTD algorithm allows:

  • Faster backup creation
  • Reduced CPU load because backups are produced more quickly
  • Reduced backup size

However, it should be noted that further testing is needed to confirm the points above.

Thank you, Amine Haloui.

L’article SQL Server 2025 – ZSTD – A new compression algorithm for backups est apparu en premier sur dbi Blog.

Using dlt to get data from Db2 to PostgreSQL

Wed, 2025-05-21 06:34

For a recent project at one of our customers we needed to get data from a Db2 database into PostgreSQL. The first solution we thought of was the foreign data wrapper for Db2. This is usually easy to set up and configure, and all you need are the client libraries (for Db2 in this case). But it turned out that db2_fdw is so old that it cannot be used against a recent version of PostgreSQL (we tested 15, 16 and 17). We even fixed some of the code, but it became clear very quickly that this is not the solution to go with. There is also db2topg, but this is not as advanced as its brother ora2pg and we did not even consider trying it. Another tool you can use for such tasks is dlt (data load tool), and it turned out this is surprisingly easy to install, configure and use. You are not limited to Db2 as a source; many more options are available.

As the customer is using Red Hat 8 for the PostgreSQL nodes, we start with a fresh Red Hat 8 installation as well:

postgres@rhel8:/u02/pgdata/17/ [PG1] cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.10 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.10"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.10 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://issues.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.10
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.10"

PostgreSQL 17 is already up and running:

postgres@rhel8:/u02/pgdata/17/ [PG1] psql -c "select version()"
                                                          version
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 17.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-26), 64-bit
(1 row)

To avoid messing with the Python installation from the operating system, we'll use a Python virtual environment for dlt, and we install libjpeg-turbo-devel and git as those are required later on:

postgres@rhel8:/home/postgres/ [PG1] sudo dnf install libjpeg-turbo-devel git
postgres@rhel8:/u02/pgdata/17/ [PG1] sudo dnf install python3-virtualenv -y
postgres@rhel8:/u02/pgdata/17/ [PG1] python3.12 -m venv .local
postgres@rhel8:/home/postgres/ [PG1] .local/bin/pip3 install --upgrade pip
postgres@rhel8:/u02/pgdata/17/ [PG1] . .local/bin/activate

Once we have the Python virtual environment ready and activated, the installation of dlt is just a matter of asking pip to install it for us (for this you need access to the internet, of course):

postgres@rhel8:/u02/pgdata/17/ [PG1] .local/bin/pip3 install -U "dlt[postgres]"
postgres@rhel8:/home/postgres/ [PG1] which dlt
~/.local/bin/dlt

Having that installed we can initialize a new pipeline based on the sql_database template and we want “postgres” as the destination:

postgres@rhel8:/home/postgres/ [PG1] mkdir db2_postgresql && cd $_
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] dlt init sql_database postgres
Creating a new pipeline with the dlt core source sql_database (Source that loads tables form any SQLAlchemy supported database, supports batching requests and incremental loads.)
NOTE: Beginning with dlt 1.0.0, the source sql_database will no longer be copied from the verified sources repo but imported from dlt.sources. You can provide the --eject flag to revert to the old behavior.
Do you want to proceed? [Y/n]: y

Your new pipeline sql_database is ready to be customized!
* Review and change how dlt loads your data in sql_database_pipeline.py
* Add credentials for postgres and other secrets to ./.dlt/secrets.toml
* requirements.txt was created. Install it with:
pip3 install -r requirements.txt
* Read https://dlthub.com/docs/walkthroughs/create-a-pipeline for more information

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] ls -l
total 20
-rw-r--r--. 1 postgres postgres    34 May 21 09:07 requirements.txt
-rw-r--r--. 1 postgres postgres 12834 May 21 09:07 sql_database_pipeline.py

As mentioned in the output above, additional dependencies need to be installed:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] cat requirements.txt 
dlt[postgres,sql-database]>=1.11.0
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] pip install -r requirements.txt
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] find .
.
./.dlt
./.dlt/config.toml
./.dlt/secrets.toml
./.gitignore
./sql_database_pipeline.py
./requirements.txt

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] pip install ibm-db-sa

Now is the time to configure the credentials and connection parameters for the source and destination databases, and this is done in the “secrets.toml” file:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] cat .dlt/secrets.toml 
[sources.sql_database.credentials]
drivername = "db2+ibm_db"
database = "db1" 
password = "manager" 
username = "db2inst1" 
schema = "omrun"
host = "172.22.11.93"
port = 25010 

[destination.postgres.credentials]
database = "postgres" 
password = "postgres"
username = "postgres"
host = "192.168.122.60"
port = 5432
connect_timeout = 15

When we initialized the pipeline a template called “sql_database_pipeline.py” was created, and this is what we need to adjust now. There are several samples in that template, we’ve used the load_select_tables_from_database skeleton:

# flake8: noqa
import humanize
from typing import Any
import os

import dlt
from dlt.common import pendulum
from dlt.sources.credentials import ConnectionStringCredentials

from dlt.sources.sql_database import sql_database, sql_table, Table

from sqlalchemy.sql.sqltypes import TypeEngine
import sqlalchemy as sa


def load_select_tables_from_database() -> None:
    """Use the sql_database source to reflect an entire database schema and load select tables from it.

    In our case the source is the Db2 database configured in .dlt/secrets.toml.
    """
    # Create a pipeline
    pipeline = dlt.pipeline(pipeline_name="omrun", destination='postgres', dataset_name="omrun")

    # These are the tables we want to load
    source_1 = sql_database(schema="omrun").with_resources("loadcheck_a", "loadcheck_b")

    # Run the pipeline. The merge write disposition merges existing rows in the destination by primary key
    info = pipeline.run(source_1, write_disposition="replace")
    print(info)

if __name__ == "__main__":
    # Load selected tables with different settings
    load_select_tables_from_database()

That’s all the code which is required for this simple use case. We’ve specified the database schema (omrun) and the two tables we want to load the data from (“loadcheck_a”, “loadcheck_b”). In addition we want the data to be replaced on the target (there is also merge and append).

This is what it looks like in Db2 for the first table:

Ready to run the pipeline:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] python sql_database_pipeline.py 
Pipeline omrun load step completed in 0.73 seconds
1 load package(s) were loaded to destination postgres and into dataset omrun
The postgres destination used postgresql://postgres:***@192.168.122.60:5432/postgres location to store data
Load package 1747817065.3199458 is LOADED and contains no failed jobs

Everything seems to be OK, let’s check in PostgreSQL. The schema “omrun” was created automatically:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "\dn"
          List of schemas
     Name      |       Owner       
---------------+-------------------
 omrun         | postgres
 omrun_staging | postgres
 public        | pg_database_owner
(3 rows)

Looking at the tables in that schema, both tables are there and contain the data:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "set search_path='omrun'" -c "\d"
SET
                List of relations
 Schema |        Name         | Type  |  Owner   
--------+---------------------+-------+----------
 omrun  | _dlt_loads          | table | postgres
 omrun  | _dlt_pipeline_state | table | postgres
 omrun  | _dlt_version        | table | postgres
 omrun  | loadcheck_a         | table | postgres
 omrun  | loadcheck_b         | table | postgres
(5 rows)

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "select count(*) from omrun.loadcheck_a"
 count  
--------
 102401
(1 row)

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "select * from omrun.loadcheck_a limit 5"
 spalte0 | spalte1 |    _dlt_load_id    |    _dlt_id     
---------+---------+--------------------+----------------
       1 | test1   | 1747817065.3199458 | tmQTbuEnpjoJ8Q
       2 | test2   | 1747817065.3199458 | Y5D4aEbyZmaDVw
       3 | test3   | 1747817065.3199458 | RxcyPugGndIRQA
       4 | test4   | 1747817065.3199458 | YHcJLkKML48/8g
       5 | test5   | 1747817065.3199458 | ywNZhazXRAlFnQ
(5 rows)

Two additional columns have been added to the tables. “_dlt_load_id” and “_dlt_id” are not there in Db2, but get added automatically by dlt for internal purposes. The same is true for the “omrun_staging” schema.
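
The _dlt_load_id stored in each row can be joined back to the _dlt_loads bookkeeping table to see which load package produced the data and when (a quick sketch; the column names of _dlt_loads reflect recent dlt versions and should be treated as an assumption):

-- Which load package produced the rows currently in loadcheck_a?
SELECT l.load_id, l.status, l.inserted_at, count(*) AS row_count
FROM   omrun.loadcheck_a a
JOIN   omrun._dlt_loads l ON l.load_id = a._dlt_load_id
GROUP  BY l.load_id, l.status, l.inserted_at;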

Inspecting the pipeline can be done with the “info” command:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] dlt pipeline omrun info 
Found pipeline omrun in /home/postgres/.dlt/pipelines
Synchronized state:
_state_version: 2
_state_engine_version: 4
dataset_name: omrun
schema_names: ['sql_database']
pipeline_name: omrun
default_schema_name: sql_database
destination_type: dlt.destinations.postgres
destination_name: None
_version_hash: e/mg52/UONZ79Z5wrl8THEl8LeuKw+xQlA8FqYvgdaU=

sources:
Add -v option to see sources state. Note that it could be large.

Local state:
first_run: False
initial_cwd: /home/postgres/db2_postgresql
_last_extracted_at: 2025-05-21 07:52:50.530143+00:00
_last_extracted_hash: e/mg52/UONZ79Z5wrl8THEl8LeuKw+xQlA8FqYvgdaU=

Resources in schema: sql_database
loadcheck_a with 1 table(s) and 0 resource state slot(s)
loadcheck_b with 1 table(s) and 0 resource state slot(s)

Working dir content:
Has 6 completed load packages with following load ids:
1747813450.4990926
1747813500.9859562
1747813559.5663254
1747813855.3201842
1747813968.0540593
1747817065.3199458

Pipeline has last run trace. Use 'dlt pipeline omrun trace' to inspect 

If you install the “streamlit” package, you can even bring up a website and inspect your data using the browser:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] pip install streamlit
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] dlt pipeline omrun show
Found pipeline omrun in /home/postgres/.dlt/pipelines

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.


  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://192.168.122.60:8501
  External URL: http://146.4.101.46:8501

Really nice.

This was a very simple example; there is much more you can do with dlt. Check the documentation for further details.

L’article Using dlt to get data from Db2 to PostgreSQL est apparu en premier sur dbi Blog.

SQL Server 2025 – Standard Developer edition

Tue, 2025-05-20 07:02
Introduction

The arrival of SQL Server 2025 introduces the Standard Developer edition, allowing companies to deploy across all development, quality and testing environments using an edition equivalent to the Standard Edition without having to pay the associated licensing fees.

Here are the different editions available in SQL Server 2025:

  • Express
  • Web
  • Standard
  • Enterprise
  • Standard Developer
  • Enterprise Developer
What problem does this solve?

Some companies deploy the Developer edition of SQL Server in environments that are not production in order to reduce licensing costs. The Developer edition, however, is functionally equivalent to the Enterprise Edition.

This can result in the following scenario:

  • Test environment: Developer edition
  • Development environment: Developer edition
  • Production environment: Standard edition

The Developer and Standard editions differ significantly, and some features available in the Developer edition are not available in the Standard edition. For example, index rebuilds can be done online with the Enterprise or Developer editions, but this is not possible with the Standard edition.

As a result, behavior and performance can vary greatly between environments when different editions are used.

Here is an example where the editions are not aligned:

With SQL Server 2025, it’s now possible to use the same edition across all environments without having to license instances used for test and development environments:

How is the installation performed?

Using the graphical interface, edition selection is done simply:

However, if the installation is performed using an .ini file, you must use the PID parameter with the following value:

PID="33333-00000-00000-00000-00000"

This allows us to install the Standard Developer edition of SQL Server 2025:
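
Once the setup completes, the installed edition can be double-checked from T-SQL (the exact string returned for the Standard Developer edition is an assumption on my part):

SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;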

Thank you, Amine Haloui.

L’article SQL Server 2025 – Standard Developer edition est apparu en premier sur dbi Blog.

SQLDay 2025 – Wrocław – Sessions

Mon, 2025-05-19 13:59

After a packed workshop day, the SQLDay conference officially kicked off on Tuesday with a series of sessions covering cloud, DevOps, Microsoft Fabric, AI, and more. Here is a short overview of the sessions I attended on the first day of the main conference.

Morning Kick-Off: Sponsors and Opening

The day started with a short introduction and a presentation of the sponsors. A good opportunity to acknowledge the partners who made this event possible.

Session 1: Composable AI and Its Impact on Enterprise Architecture

This session (by Felix Mutzl) provided a strategic view of how AI is becoming a core part of enterprise architecture.

Session 2: Migrate Your On-Premises SQL Server Databases to Microsoft Azure

A session (by Edwin M Sarmiento) that addressed one of the most common challenges for many DBAs and IT departments: how to migrate your SQL Server workloads to Azure. The speaker shared a well-structured approach, highlighting the key elements to consider before launching a migration project:

  • Team involvement: Ensure all stakeholders are aligned.
  • Planning: Migration isn’t just about moving data, dependencies must be mapped.
  • Cost: Evaluate Azure pricing models and estimate consumption.
  • Testing: Validate each stage in a non-production environment.
  • Monitoring: Post-migration monitoring is essential for stability.

Session 3: Fabric Monitoring Made Simple: Built-In Tools and Custom Solutions

This session was presented by Just Blindbaek, who talked about how Microsoft Fabric is gaining traction quickly, and with it comes the need for robust monitoring. The session explored native tools like the Monitoring Hub, the Admin Monitoring workspace, and Workspace Monitoring. In addition, the speaker introduced FUAM (Fabric Unified Admin Monitoring), an open-source solution supported by Microsoft that complements the built-in options.

Session 4: Database DevOps…CJ/CD: Continuous Journey or Continuous Disaster?

A hands-on session (by Tonie Huizer) about introducing DevOps practices in a legacy team that originally used SVN and had no automation. The speaker shared lessons learned from introducing:

  • Sprint-based development cycles
  • Git branching strategies
  • Build and release pipelines
  • Manual vs Pull Request releases
  • Versioned databases and IDPs

It was a realistic look at the challenges and practical steps involved when modernizing a database development process.

Session 5: (Developer) Productivity, Data Intelligence, and Building an AI Application

This session (from Felix Mutzl) shifted the focus from general AI to productivity-enhancing solutions. Built on Databricks, the use case demonstrated how to combine AI models with structured data to deliver real-time insights to knowledge workers. The practical Databricks examples were especially helpful to visualize the architecture behind these kinds of applications.

Session 6: Azure SQL Managed Instance Demo Party

The final session of the day was given by Dani Ljepava and Sasa Popovic and was more interactive and focused on showcasing the latest Azure SQL Managed Instance features. Demos covered:

  • Performance and scaling improvements
  • Compatibility for hybrid scenarios
  • Built-in support for high availability and disaster recovery

The session served as a great update on where Azure SQL MI is heading and what tools are now available for operational DBAs and cloud architects.

Thank you, Amine Haloui.

L’article SQLDay 2025 – Wrocław – Sessions est apparu en premier sur dbi Blog.

SQLDay 2025 – Wrocław – Workshops

Mon, 2025-05-19 13:58

I had the chance to attend SQLDay 2025 in Wrocław, one of the largest Microsoft Data Platform conferences in Central Europe. The event gathers a wide range of professionals, from database administrators to data engineers and Power BI developers. The first day was fully dedicated to pre-conference workshops. The general sessions are scheduled for the following two days.

In this first post, I’ll focus on Monday’s workshops.

Day 1 – Workshop Sessions

The workshop day at SQLDay is always a strong start. It gives attendees the opportunity to focus on a specific topic for a full day. This year, several tracks were available in parallel, covering various aspects of the Microsoft data stack: from Power BI and SQL Server to Azure and Microsoft Fabric.

Here are the sessions that were available:

Advanced DAX

This session was clearly targeted at experienced Power BI users. Alberto Ferrari delivered an in-depth look into evaluation context, expanded tables, and advanced usage of CALCULATE. One focus area was the correct use of ALLEXCEPT and how it interacts with complex relationships.

Execution Plans in Depth

For SQL Server professionals interested in performance tuning, this workshop provided a detailed walkthrough of execution plans. Hugo Kornelis covered a large number of operators, explained how they work internally, and showed how to analyze problematic queries. The content was dense but well-structured.

Becoming an Azure SQL DBA

This workshop was led by members of the Azure SQL product team. It focused on the evolution of the DBA role in cloud environments. The agenda included topics such as high availability in Azure SQL, backup and restore, cost optimization, and integration with Microsoft Fabric. It was designed to understand the shared responsibility model and how traditional DBA tasks are shifting in cloud scenarios.

Enterprise Databots

This workshop explored how to build intelligent DataBots using Azure and Databricks. The session combined theoretical content with practical labs. The goal was to implement chatbots capable of interacting with SQL data and leveraging AI models. Participants had the opportunity to create bots from scratch.

Analytics Engineering with dbt

This session was focused on dbt (data build tool) and its role in ELT pipelines. It was well-suited for data analysts and engineers looking to standardize and scale their workflows.

Build a Real-time Intelligence Solution in One Day

This workshop showed how to implement real-time analytics solutions using Microsoft Fabric. It covered Real-Time Hub, Eventstream, Data Activator, and Copilot.

From Power BI Developer to Fabric Engineer

This workshop addressed Power BI developers looking to go beyond the limitations of Power Query and Premium refresh schedules. The session focused on transforming reports into scalable Fabric-based solutions using Lakehouse, Notebooks, Dataflows, and semantic models. A good starting point for anyone looking to shift from report building to full data engineering within the Microsoft ecosystem.

Thank you, Amine Haloui.

L’article SQLDay 2025 – Wrocław – Workshops est apparu en premier sur dbi Blog.

SQL Server 2025 Public Preview and SSMS 21 now available

Mon, 2025-05-19 12:15

This is a short blog post to share that the SQL Server 2025 public preview is now available for download. At the same time, SSMS 21 has also been released and is now generally available.

The LinkedIn post by Bob Ward announcing the news can be found here: Announcing SQL Server 2025 Public Preview

In his post you’ll find a summary of the key changes coming with this new release.

Also note that the recommended version of SSMS for SQL Server 2025 is SSMS 21, which was just announced.

Here is the blog post by Erin Stellato: SQL Server Management Studio (SSMS) 21 is now generally available (GA)

There are many changes between SSMS 20 and 21, notably the fact that it’s now based on Visual Studio 2022, includes built-in Copilot integration, and finally introduces a Dark Theme.

I strongly recommend installing it, starting to use it, and providing feedback if you encounter any bugs or areas for improvement. You can do so here.

Now that SQL Server 2025 is available for testing, other blog posts written by my colleagues or myself will likely follow to showcase some of the new features.
I’m particularly thinking of the following:

L’article SQL Server 2025 Public Preview and SSMS 21 now available est apparu en premier sur dbi Blog.

APEX Connect 2025 (Day 3)

Thu, 2025-05-15 15:49

After the “Welcome 3rd Day APEX Connect, DOAG e.V.” and the very entertaining keynote “Trouble in the Old Republic” by Samuel Nitsche, I decided to attend presentations on the following topics:
– 23ai – Building an AI Vector Search API using APEX, ORDS, REST and PL/SQL
– APEX in Style – Ein Überblick über die verschiedenen UI-Customizingmöglichkeiten
– SQL und PL/SQL: Tipps & Tricks für APEX Entwickler
– Oracle APEX & Entra ID: Effiziente Benutzerverwaltung mit Workflows und SSO
Besides the presentations, I also had the privilege of having 1:1 sessions with Carsten Czarski, Florian Grasshoff and Mark Swetz from the APEX development team.

23ai – Building an AI Vector Search API using APEX, ORDS, REST and PL/SQL

Vectors are lists of numbers, and their dimension is given by the number of values in the vector definition. Creating a vector from any other kind of data is called vectorizing or embedding.
Oracle 23ai comes with a new VECTOR datatype and an associated PL/SQL package, DBMS_VECTOR. Pre-trained models can be imported based on the ONNX standard.
APEX can be used to call external AI models as web services. Any needed transformation can be done thanks to the DBMS_VECTOR package.
One of the main advantages of vector search is that it is language independent.
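
As a rough illustration (the table, column names and vector dimension below are hypothetical, not taken from the session), a similarity search in 23ai boils down to ordering by VECTOR_DISTANCE:

-- Hypothetical table with an embedding column
CREATE TABLE documents (
    id        NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    content   VARCHAR2(4000),
    embedding VECTOR(384, FLOAT32)
);

-- Return the 5 documents closest to a query vector (e.g. bound from the APEX/ORDS layer)
SELECT id, content
FROM   documents
ORDER  BY VECTOR_DISTANCE(embedding, :query_vector, COSINE)
FETCH  FIRST 5 ROWS ONLY;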

APEX in Style – Ein Überblick über die verschiedenen UI-Customizingmöglichkeiten

New template components allow setting attributes in templates for declarative usage in APEX.
This can be combined with dedicated CSS to be used in the template component. Those components can be used in any kind of page (e.g. Interactive Reports, Card Reports, …).
When changing templates, it is recommended to work on a copy in order to be able to roll back to the original one if needed.
Besides templates, the theme can be modified globally with theme styles via the Theme Roller. Theme changes can even be made available to end users in APEX so they can personalize the look & feel of the application.

SQL und PL/SQL: Tipps & Tricks für APEX Entwickler

SQL queries are at the heart of APEX. Looking into the debugger, the details of the SQL produced by APEX can be seen. Any filtering or other change wraps the original query and generates a new one, which can be seen as “onion SQL” with the following layers adding up:
– Component SQL (written by the developer)
– LOVs
– Computed columns
– Filter
– Sorts
– Aggregation
– Pagination
This means the query run by APEX can be very different than the one entered by the developer.
As a consequence, sorting with ORDER BY should never be part of the component SQL. Use the declarative column sorting parameter instead.
APEX allows using pseudo-hints in the declarative optimizer hint field in order to manage the pagination type.
PL/SQL tips:
– functions in SELECT are run on every row selected (expensive)
– functions in WHERE are run for all rows of the selected table (even more expensive)
– use bind variables so that substitution is happening in the database
– strictly define constants
– name loops
– map parameters
– always raise in the “when others” clause of exception handling (see the sketch after this list)
– use conditional compilation
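
As an illustration of the exception-handling point, here is a minimal sketch (not code from the session) of a WHEN OTHERS handler that logs and then re-raises instead of swallowing the error:

CREATE OR REPLACE PROCEDURE demo_proc IS
BEGIN
  NULL; -- business logic goes here
EXCEPTION
  WHEN OTHERS THEN
    -- log the error (assumes an APEX session context), then re-raise it
    apex_debug.error('demo_proc failed: %s', sqlerrm);
    RAISE;
END demo_proc;
/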

Oracle APEX & Entra ID: Effiziente Benutzerverwaltung mit Workflows und SSO

User management calls for an IAM system for easier and centralized administration. One combination for APEX is with Microsoft Entra ID.
Possible usage:
– on / offboarding
– details and contact management
APEX manages access to Entra ID through web services, which makes it easy to cover the previous use cases. The web services are set up declaratively to address the Microsoft Graph interface and to manage authorizations, mapped to the application and delegated over groups in Entra ID.
Access is secured with OAuth authentication.
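
As a rough sketch of what such a call can look like from PL/SQL (the credential static id is hypothetical and the use of APEX_WEB_SERVICE is my assumption, not a detail from the session):

DECLARE
  l_response CLOB;
BEGIN
  -- Call Microsoft Graph to list users; authentication is handled by an
  -- APEX Web Credential (hypothetical static id 'ENTRA_GRAPH') configured for OAuth2.
  l_response := apex_web_service.make_rest_request(
                    p_url                  => 'https://graph.microsoft.com/v1.0/users',
                    p_http_method          => 'GET',
                    p_credential_static_id => 'ENTRA_GRAPH');
  -- l_response now contains the JSON returned by Graph
END;
/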

NEWS!

One last piece of news: the support period for the last 3 APEX versions (23.2, 24.1 and 24.2) might be extended to 2 years instead of 18 months.

You can find a summary of Day 2 here.

That was the final day of the APEX Connect 2025 conference. Great organization, great presentations and great people.
I hope to see everyone again at APEX Connect 2026.
How about you? Are you planning to join?

L’article APEX Connect 2025 (Day 3) est apparu en premier sur dbi Blog.
