Feed aggregator

World Password Day and Oracle Security

Pete Finnigan - Fri, 2026-05-15 16:28
I am slightly late with this one as the event itself was on 7 May 2026. World Password Day 2026 is a day that tries to highlight that passwords are weak. An article I saw online said....[Read More]

Posted by Pete On 11/05/26 At 12:37 PM

Categories: Security Blogs

Securing Data in Oracle without Cost Options

Pete Finnigan - Fri, 2026-05-15 16:28
I did a presentation at the UKOUG conference at the Eastside Rooms in Birmingham at the end of 2025. The focus of this talk was to highlight the problem of securing data held in an Oracle database without using....[Read More]

Posted by Pete On 05/05/26 At 11:25 AM

Categories: Security Blogs

Unable to Perform ONLINE DDLs on tables when Supplemental Logging is enabled

Tom Kyte - Fri, 2026-05-15 16:28
Dear Tom,

In our ERP we actively use both EBR and Supplemental Logging: EBR for upgrades with near zero downtime, and Supplemental Logging mainly for CDC, LogMiner and GoldenGate. However, we encounter errors when ALTER TABLE statements are executed in ONLINE mode on normal tables while Supplemental Logging is enabled. The error we are getting is: ORA-14416: Online DDL's cannot be used with certain types of tables.

Quick test steps:

-- enable minimal supplemental logging (from the CDB)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- create the first table and its constraints
CREATE TABLE ORDER_TABLE (
    ID VARCHAR2(50),
    DESCRIPTION VARCHAR2(100),
    ORDER_DATE DATE,
    CUSTOMER_ID VARCHAR2(50),
    CF_ID VARCHAR2(50)
);
ALTER TABLE ORDER_TABLE ADD CONSTRAINT ORDER_PK PRIMARY KEY (ID);
ALTER TABLE ORDER_TABLE ADD CONSTRAINT ORDER_CFK UNIQUE (CF_ID) USING INDEX;

-- create the second table and its constraints
CREATE TABLE ORDER_CF_TABLE (
    CF_ID VARCHAR2(50),
    AUTH_ID VARCHAR2(100),
    AUTH_DATE DATE
);
ALTER TABLE ORDER_CF_TABLE ADD CONSTRAINT ORDER_CF_PK PRIMARY KEY (CF_ID);
ALTER TABLE ORDER_CF_TABLE ADD CONSTRAINT ORDER_CF_RK FOREIGN KEY (CF_ID) REFERENCES ORDER_TABLE (CF_ID) ON DELETE CASCADE;
ALTER TABLE ORDER_CF_TABLE ADD CONSTRAINT ORDER_CF_TABLE_CF_ID_NN CHECK ("CF_ID" IS NOT NULL);

-- now try to execute the following
ALTER TABLE ORDER_CF_TABLE DROP CONSTRAINT ORDER_CF_RK KEEP INDEX ONLINE;

Error ORA-14416 is raised. Since both ONLINE mode for table DDLs and Supplemental Logging are key functionalities of the Oracle database, we believe it should be possible to use them at the same time. Could you please explain this behavior and any possible ways to run ONLINE DDLs on tables for upgrades while supplemental logging is enabled?

Thanks & Kind Regards, Navinth
Categories: DBA Blogs

Authid current user functionality

Tom Kyte - Fri, 2026-05-15 16:28
Hi Connor,

Let me describe the situation. Our client is running a warehouse management system. There are two schemas, wh1 and wh2. All the packages and procedures are created in schema wh1 with AUTHID CURRENT_USER.

Today I faced an issue with the SKU master. Both schemas have the SKU master table, which should ideally be identical. When a particular procedure of a package was called from schema wh2 and looked for an SKU that was present in schema wh2 but missing from wh1, it flagged an error that the SKU is missing. When I created the SKU in schema wh1 it processed successfully. This is really puzzling. To the best of my knowledge, when the procedure is called from schema wh2 it should access the wh2 tables by default when we are not prefixing the table name with a schema name. Am I missing something? Please share your view.

Let me try with sample code. Table name: sku. Schemas: wh1 and wh2. The table is created in both schemas. Let's say SKU 'SAMPLE1' exists in schema wh2, but does not exist in schema wh1.

create or replace package wh1.sync_sku authid current_user is
  procedure upsert_sku(p_sku varchar2);
end;
/

create or replace package body wh1.sync_sku is
  procedure upsert_sku(p_sku varchar2) is
    v_found char(1) := 'N';
  begin
    select 'Y' into v_found from sku where sku = p_sku;
  exception
    when no_data_found then
      raise_application_error(-20001, 'SKU does not exist');
    when others then
      raise;
  end;
end;
/

When the procedure upsert_sku is executed from schema wh2 with parameter 'SAMPLE1', it shows the error 'SKU does not exist' although the SKU exists in schema wh2. As soon as we insert the SKU in schema wh1 the procedure executes successfully. Schema wh2 has all the rights required to execute the procedure of schema wh1.
Categories: DBA Blogs

segregation of duties template for Oracle Database

Tom Kyte - Fri, 2026-05-15 16:28
Oracle has published the following document for MySQL: https://blogs.oracle.com/mysql/why-your-application-should-not-use-one-mysql-user-for-everything. I have not found a similar document for Oracle Database. Has Oracle documented something similar for Oracle Database? Thanks.
Categories: DBA Blogs

Oracle errors 1408 and 6502

Tom Kyte - Fri, 2026-05-15 16:28
How can I find which field raises error 6502 or 1408?
Categories: DBA Blogs

Include a heading while downloading an interactive report to PDF

Tom Kyte - Fri, 2026-05-15 16:28
Hi, I have created an interactive report and I want to include a heading, e.g. "Amountwise advances as on". How can I do this? I also want to include the heading when downloading the report as PDF. Please help.
Categories: DBA Blogs

Index logging

Tom Kyte - Fri, 2026-05-15 16:28
What is the difference between LOGGING and NOLOGGING when creating an index?
Categories: DBA Blogs

Add a new column to a table; the column will be of type NUMBER(19,0) and nullable (NULL by default)

Tom Kyte - Fri, 2026-05-15 16:28
My question is: if they add the column, will the table be blocked during the column add, given that the edition is Standard and not Enterprise?
Categories: DBA Blogs

Customer case study – automating SQL Server TLS Encryption with Ansible and Certificates (Architecture)

Yann Neuhaus - Fri, 2026-05-15 14:35

When working with SQL Server environments, securing client connections can become an important requirement, especially when TLS encryption must be implemented using certificates. In this context, a customer asked us to develop an Ansible playbook and role to automate the configuration of TLS for SQL Server. The certificates are generated from the customer PKI and provided as PEM files containing the server certificate, the private key, and the certificate chain.

However, some extractions and conversions are required before these certificates can be used on Windows and configured for SQL Server.

Here, the idea is to propose a solution (the architecture) that prepares the certificate, imports it on the SQL Server host, and configures SQL Server to use it.

We will also see how to separate the preparation and activation steps in order to reduce the impact on the SQL Server service.

In this blog post, we will describe the global approach and the Ansible logic used to implement certificate-based TLS encryption for SQL Server.

Implementation logic

Before configuring TLS encryption on SQL Server, the first point was to understand the certificate format provided by the customer PKI.

In our case, the generated file is a <machine>.pem file. This file contains the server certificate used for TLS, the private key and the certificate chain with the intermediate and root certificates.

As this format cannot be directly used as-is on the Windows side for SQL Server, some extraction and conversion steps are required.

The general idea is to use the Ansible control node as a working area.

The PEM file is first copied into a temporary folder where the different parts of the certificate are extracted:

  • the leaf certificate
  • the intermediate certificate
  • the root certificate
  • the private key

These elements are then used to build a PFX file which can be imported on the Windows SQL Server host.

The PFX is installed in the LocalMachine\My certificate store while the intermediate and root certificates are imported into the appropriate Windows certificate stores.

The implementation has been designed around three different execution modes: stage, activate, and full.

The stage mode is used to prepare the certificate without any impact on the SQL Server service. It copies the PEM file, performs the extractions, builds the PFX file, copies it to the managed Windows node and imports the certificates into the Windows certificate stores. No registry change is performed, and the SQL Server service is not restarted. This mode is useful when we want to prepare the server in advance before switching SQL Server to the new certificate.

The activate mode assumes that the certificate is already present on the Windows server. Its role is to configure SQL Server to use the installed certificate and depending on the selected option, restart the SQL Server service or leave the change pending until the next planned reboot.

This can be useful when the certificate activation must be aligned with an existing maintenance window, for example during monthly OS patching.

The full mode executes the complete configuration from end to end. It performs the extraction and conversion steps, imports the certificates, grants the required permissions, configures SQL Server to use the expected certificate, and restarts the SQL Server service only if required. To avoid unnecessary impact, the role relies on the certificate thumbprint. If the expected certificate is already configured, no change is applied and the SQL Server service is not restarted. This behavior is important for idempotency.

For example, if the full mode is executed after an activate mode, nothing should be changed if the certificate is already the correct one. The same logic applies if the playbook is executed by mistake while the certificate has not been renewed.

Another point to manage is the restart of the SQL Server service. SQL Server loads the certificate configuration when the service starts. Therefore, when a new certificate is configured, the change is only effective after a restart of the SQL Server service.

For this reason the role should provide an option to control whether the restart is performed immediately or postponed to the next planned reboot.

We also have to consider DNS aliases. The standard use case is to generate a certificate containing at least the short name and the FQDN of the SQL Server host in the subjectAltName. If DNS aliases are used by client applications, they can also be added to the certificate SAN.

For example:

[alt_names]
DNS.1 = A-WS2022-2.lab.local
DNS.2 = A-WS2022-2

Finally, the customer confirmed that the private key included in the PEM file is not encrypted.

This simplifies the conversion process to PFX, but it also means that the PEM file must be handled carefully during the Ansible execution, especially in temporary folders and during file transfers. With this approach, the role provides a controlled way to prepare, activate, or fully configure TLS encryption for SQL Server while keeping the impact on the SQL Server service under control.
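
As a complementary manual check, not part of the Ansible role itself, a couple of T-SQL queries can confirm that the expected certificate was loaded after the restart and that client sessions are actually encrypted. This is only a sketch: the exact wording of the error log message can vary between SQL Server versions.

-- Search the current error log for the certificate loading message
-- ("The certificate ... was successfully loaded for encryption")
EXEC xp_readerrorlog 0, 1, N'encryption';

-- Check whether the current session is encrypted
SELECT session_id, encrypt_option, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;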

Logical workflow

The complete workflow can be represented as follows:

Architecture summary

The certificate manipulation is performed on the Ansible control node.

The Windows certificate import and SQL Server configuration are performed on the managed Windows SQL Server host.

This separation is useful because the PEM processing and PFX generation are handled with Linux tools such as OpenSSL while the certificate installation, private key permissions, registry configuration and SQL Server restart are handled through Windows modules and PowerShell. The design also supports a controlled deployment approach.

The certificate can first be staged without service impact then activated later during a maintenance window.

The full mode can be used when the complete implementation must be executed in a single run. The use of the certificate thumbprint is important for idempotency. It allows the role to detect whether SQL Server is already configured with the expected certificate and avoids unnecessary service restarts when no change is required.

Remarks

For certain reasons we do not disclose the code of the created role.

Thank you. Amine Haloui

The article Customer case study – automating SQL Server TLS Encryption with Ansible and Certificates (Architecture) appeared first on dbi Blog.

How Row Goal shapes your SQL Server query strategy by hunting for pierogis

Yann Neuhaus - Fri, 2026-05-15 06:34
The Wroclaw Connection

SQLDay 2026 took place this week, from May 11th to 13th, in Wroclaw. Among the featured speakers was Erik Darling, who delivered both a main session and a full-day workshop dedicated to SQL Server performance. During his presentations, he emphasized a concept that is not always widely understood, known as the Row Goal.

The purpose of this article is to recap Erik’s key observations and to introduce this topic, which can serve as a powerful lever for query optimization.

A quick culinary detour and why pierogis matter

In order to understand the explanations below, one key concept must be understood: the Pierogi.

Pierogi are filled dumplings made from unleavened dough, popular in Polish cuisine and enjoyed worldwide, with various savory and sweet fillings[1], [2].

To be honest, this has nothing to do with our technical topic, but this dish discovered during this trip is so good that I simply had to include it in this blog.

Filling the aisles and designing our database

In this article, we will use a custom-made database simulating a Polish supermarket selling pierogis. Unfortunately, there aren’t many left, and the product distribution is not uniform. In fact, pierogis account for much less than 1% of the supermarket’s total stock.
Here is the script to create the DB, along with its article reference table and inventory:

USE master;
GO

IF EXISTS (SELECT * FROM sys.databases WHERE name = 'PierogiMart')
    DROP DATABASE PierogiMart;
GO

CREATE DATABASE PierogiMart;
GO

USE PierogiMart;
GO

CREATE TABLE Articles (
    ArticleID INT IDENTITY(1,1) PRIMARY KEY,
    ArticleName VARCHAR(50) NOT NULL,
    Price DECIMAL(10, 2) NOT NULL
);

CREATE TABLE Inventory (
    ReferenceID INT IDENTITY(1,1) PRIMARY KEY,
    ArticleID INT NOT NULL,
    ValidityDate DATETIME NOT NULL,
    Quantity INT NOT NULL,
    CONSTRAINT FK_Article FOREIGN KEY (ArticleID) REFERENCES Articles(ArticleID)
);
GO

INSERT INTO Articles (ArticleName, Price)
VALUES 
('Pierogi', 12.50),
('Pasta', 8.00),
('Sandwich', 6.50),
('Quiche', 9.00);
GO

INSERT INTO Inventory (ArticleID, ValidityDate, Quantity)
SELECT TOP 100000 
    2, 
    DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '2025-01-01'), 
    ABS(CHECKSUM(NEWID())) % 100
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

INSERT INTO Inventory (ArticleID, ValidityDate, Quantity)
SELECT TOP 10000 
    3, 
    DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '2025-01-01'), 
    ABS(CHECKSUM(NEWID())) % 100
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

INSERT INTO Inventory (ArticleID, ValidityDate, Quantity)
SELECT TOP 50000 
    4, 
    DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '2025-01-01'), 
    ABS(CHECKSUM(NEWID())) % 100
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

INSERT INTO Inventory (ArticleID, ValidityDate, Quantity)
SELECT TOP 10 
    1, 
    '2026-12-31', 
    5
FROM sys.all_columns;
GO

We are also including a few indexes to simulate a real-world use case and to support our queries, ensuring we get realistic execution plans:

CREATE NONCLUSTERED INDEX IDX_INV_QUANT ON [dbo].[Inventory] ([Quantity]) include (ArticleID)

CREATE NONCLUSTERED INDEX IDX_INV_VALIDITY on [dbo].[Inventory] ([ValidityDate]) include (ArticleID)

CREATE NONCLUSTERED INDEX IDX_INV_ART on [dbo].[Inventory] (ArticleID)

What exactly is a Row Goal?

Normally, the SQL Server optimizer seeks to minimize the total cost of processing all data for a query. However, if it knows that you only need a specific number of rows (for example, via a TOP, FAST(N), or EXISTS clause), it changes its strategy.

The Row Goal is this specific row target that pushes the optimizer to favor a plan capable of delivering the first few rows as quickly as possible, even if that same plan would be catastrophic for processing the entire table.

TOP(N): Hunting for the best Pierogi

To illustrate the definition above, let’s search for the pierogis with the furthest expiration dates.
Note that the IDX_INV_VALIDITY index supports this query:

SELECT 
    A.ArticleName, 
    A.Price, 
    I.ValidityDate
FROM Articles A
INNER JOIN Inventory I ON A.ArticleID = I.ArticleID
WHERE A.ArticleName = 'Pierogi'
order by I.ValidityDate desc;

SELECT top 10
    A.ArticleName, 
    A.Price, 
    I.ValidityDate
FROM Articles A
INNER JOIN Inventory I ON A.ArticleID = I.ArticleID
WHERE A.ArticleName = 'Pierogi'
order by I.ValidityDate desc

The difference between these two queries is that one requests only the first 10 rows, while the other requests all matching rows. However, this simple distinction is not merely applied when displaying the results; this condition is pushed deeper into the execution plan to influence the choice of operators (Nested Loop, Hash Join, Merge Join) further down the tree.

For the first query, here is the resulting plan:

As we can see, the optimizer chose a Hash Join given the volume of data to be joined. A Hash Match implies that all the data must be read in order to produce the desired result.

For the second query, here is the execution plan:

We can see that this time, the optimizer chose a Nested Loop, which takes each row from the reference table (Inventory) and joins them with the Articles table. This operation can be very time-consuming if a large number of rows must be processed. However, this is where EstimateRowsWithoutRowGoal comes into play. The value of this property is 40’002.5; this means that in a case where a subset of rows was not specifically required, the optimizer would have estimated the number of rows returned by this operator at that value. We can see, however, that the estimation actually used is 10 rows for one execution, a value clearly derived from the TOP(10).

In summary, adding the TOP(10) allowed the optimizer to use a less expensive join for a small amount of data, even though the TOP operator is located at the very end of the execution plan (since a plan is read from right to left).

EXISTS: The search for the first match

As explained previously, the EXISTS clause has a cardinality of 1 because the very first row meeting the internal condition is enough to validate the case. This triggers a Row Goal, as the optimizer must estimate how many rows it will need to read to satisfy (or not) this condition.

Note: In cases where the condition is never met, the optimizer’s plan can become highly inefficient; for full details, see Erik Darling’s blog [here].

We will now observe this behavior with the following query, varying the internal condition of the EXISTS clause by testing one highly selective (discriminant) case and another much less so.

SELECT 
    A.ArticleName, 
    A.Price
FROM Articles A
WHERE not EXISTS (
    SELECT 1/0
    FROM Inventory I 
    WHERE I.ArticleID = A.ArticleID 
    AND I.Quantity > 10 -- vs 98
);

As you may have noticed, I am looking here for products that maintain a certain quantity for every possible consumption date. My goal, of course, is to avoid depleting the stocks of these excellent Polish pierogis so that everyone can enjoy them!

The case where we want to ensure that all existing quantities for an item are greater than 10 is very difficult to satisfy; based on the statistics available to the optimizer, all items have 10 or more units in stock, except for the pierogis!
Since this condition is so widespread, the optimizer knows it will have to scan a large number of rows to find a single case where the condition is not met. This is why it opts for a Scan. This behavior is evidenced by the estimated number of rows to be read (160’010, which represents the entire table).

On the other hand, for a very restrictive condition (quantity > 98), the optimizer recognizes that this condition is highly selective. This is why it favors a Nested Loop, estimating that only 1’608 rows will be necessary to prove the non-existence of the condition.

In summary, EXISTS forces the optimizer to estimate the number of rows required to find a single occurrence that proves whether a condition is met or not, thereby triggering a local optimization of the execution plan.

OPTION(FAST N): Manually steering the engine

The OPTION(FAST N) hint allows you to manually introduce the Row Goal concept into a query. This hint does not limit the total number of results returned; instead, it optimizes the execution plan to retrieve the first N rows as quickly as possible (potentially at the expense of performance for the remaining rows).

In our example below, we have two identical queries retrieving items with a quantity greater than 10. However, the second one uses an execution plan optimized to return the first row as fast as possible (just to make sure no one steals the last available pierogi from the top of the pile!).

select * from Inventory i
where i.Quantity > 10 
order by i.ArticleID

select * from Inventory i
where i.Quantity > 10 
order by i.ArticleID option(fast 1)

Once again, the plans diverge. To retrieve a single row, the IDX_INV_ART index (which already contains sorted ArticleIDs) is used. It performs a Seek on the smallest ArticleID to check if it satisfies the condition of having a quantity greater than 10.

However, by enabling SET STATISTICS TIME ON, we can see that the second execution plan is slower than the first when returning all requested rows (250ms vs. 204ms). While the gap is not massive due to the small table size, the difference is nonetheless observable.

Wrapping up and how to survive the Row Goal gamble

To conclude, the Row Goal is a double-edged sword; brilliant when you only need a quick glimpse of your data, but it can become a real performance trap if the optimizer’s “bet” fails.

Fortunately, if you find that SQL Server is making bad decisions by being too optimistic, you can take back control. By using the hint OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL')), you force the optimizer to stop daydreaming and focus on the actual cost of the query. It’s the ultimate tool to ensure your execution plan doesn’t end up as messy as a dropped plate of pierogis!
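
As a quick illustration, here is what the hint looks like applied to the TOP(10) pierogi query from earlier (a sketch reusing the PierogiMart tables created above). With the row goal disabled, the optimizer estimates the join as if all matching rows were needed, so you would typically get the hash join plan back even though only 10 rows are returned:

SELECT TOP 10
    A.ArticleName, 
    A.Price, 
    I.ValidityDate
FROM Articles A
INNER JOIN Inventory I ON A.ArticleID = I.ArticleID
WHERE A.ArticleName = 'Pierogi'
ORDER BY I.ValidityDate DESC
OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'));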

The article How Row Goal shapes your SQL Server query strategy by hunting for pierogis appeared first on dbi Blog.

SQL Server Snapshot Backup and Restore with Proxmox ZFS – REST API with SQL Server 2025 (3/3)

Yann Neuhaus - Thu, 2026-05-14 16:39

The proposed architecture consists of adding a small internal REST API on the Proxmox server in order to expose a controlled ZFS snapshot operation. SQL Server 2025 can then call this API through sp_invoke_external_rest_endpoint, instead of running SSH commands directly or relying on an external tool.

The role of the API is deliberately limited: it receives a snapshot request, checks that the requested zvol is authorized, and then runs the zfs snapshot command on the Proxmox side. An allowlist is used to restrict the ZFS volumes that can be accessed. This prevents a REST call from being able to manipulate any dataset on the server.

With this approach, we can reproduce a behavior close to what an enterprise storage array provides, but using Proxmox and ZFS. It is important to note that Proxmox does not natively provide the same level of integration as Pure Storage for SQL Server snapshots. Pure Storage provides dedicated mechanisms and integrations. In our case, we need to build a specific orchestration layer. The REST API therefore acts as an adapter between SQL Server, which drives the snapshot backup workflow, and ZFS, which actually performs the storage-level snapshot.

Architecture

Here is a global overview of the architecture:

  • SQL Server freezes the database I/Os
  • SQL Server 2025 calls the internal REST API
  • The REST API validates the request and checks the zvol allowlist
  • The API triggers the ZFS snapshot on Proxmox
  • The API returns the snapshot information to SQL Server
  • SQL Server creates the metadata-only backup
  • The database I/Os are released

REST API implementation

On the Proxmox host, we install the required packages:

apt update
apt install -y python3-venv sudo openssl

We create a dedicated user:

useradd --system \
  --home /opt/sql-zfs-api \
  --shell /usr/sbin/nologin \
  sqlsnap

We create the following folders:

mkdir -p /opt/sql-zfs-api
mkdir -p /etc/sql-zfs-api

We declare the authorized zvol:

cat >/etc/sql-zfs-api/allowed-zvols <<'EOF'
sqlpool/pve/vm-302-disk-0
EOF

We create a root-only allowlist:

chown root:root /etc/sql-zfs-api/allowed-zvols
chmod 600 /etc/sql-zfs-api/allowed-zvols

Then we create the secured ZFS helper. This script is executed as root through sudo, but it rejects any dataset that is not defined in the allowlist.

cat >/usr/local/sbin/sql-zfs-helper <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

ALLOW_FILE="/etc/sql-zfs-api/allowed-zvols"
LOCK_FILE="/run/sql-zfs-helper.lock"

die() {
  echo "$*" >&2
  exit 1
}

exec 9>"$LOCK_FILE"
flock -n 9 || die "another snapshot operation is already running"

[[ -r "$ALLOW_FILE" ]] || die "allowlist not readable: $ALLOW_FILE"

mapfile -t ALLOWED_DATASETS < <(grep -Ev '^\s*(#|$)' "$ALLOW_FILE")

is_allowed() {
  local ds="$1"
  local allowed
  for allowed in "${ALLOWED_DATASETS[@]}"; do
    [[ "$ds" == "$allowed" ]] && return 0
  done
  return 1
}

valid_snapname() {
  [[ "$1" =~ ^[A-Za-z0-9_.:-]{1,120}$ ]]
}

ACTION="${1:-}"
shift || true

case "$ACTION" in
  snapshot)
    SNAPNAME="${1:-}"
    shift || true

    valid_snapname "$SNAPNAME" || die "invalid snapshot name: $SNAPNAME"
    [[ "$#" -ge 1 ]] || die "no zvol specified"
    [[ "$#" -le 8 ]] || die "too many zvols"

    SNAPSHOTS=()

    for DS in "$@"; do
      is_allowed "$DS" || die "dataset not allowed: $DS"
      /sbin/zfs list -H -t volume -o name "$DS" >/dev/null 2>&1 || die "zvol not found: $DS"

      FULLSNAP="${DS}@${SNAPNAME}"

      if /sbin/zfs list -H -t snapshot -o name "$FULLSNAP" >/dev/null 2>&1; then
        die "snapshot already exists: $FULLSNAP"
      fi

      SNAPSHOTS+=("$FULLSNAP")
    done

    /sbin/zfs snapshot "${SNAPSHOTS[@]}"
    /sbin/zfs hold sqlsnap "${SNAPSHOTS[@]}"

    printf '{"status":"ok","snapshots":['
    SEP=""
    for S in "${SNAPSHOTS[@]}"; do
      printf '%s"%s"' "$SEP" "$S"
      SEP=","
    done
    printf ']}\n'
    ;;

  list)
    /sbin/zfs list -H -t snapshot -o name -r sqlpool | grep '@sql_' || true
    ;;

  *)
    die "usage: sql-zfs-helper snapshot SNAPNAME ZVOL [ZVOL...]"
    ;;
esac
EOF

chown root:root /usr/local/sbin/sql-zfs-helper
chmod 750 /usr/local/sbin/sql-zfs-helper

We only allow the helper through sudo:

cat >/etc/sudoers.d/sql-zfs-helper <<'EOF'
sqlsnap ALL=(root) NOPASSWD: /usr/local/sbin/sql-zfs-helper *
EOF

chmod 440 /etc/sudoers.d/sql-zfs-helper
visudo -cf /etc/sudoers.d/sql-zfs-helper

We install the FastAPI API:

python3 -m venv /opt/sql-zfs-api/venv
/opt/sql-zfs-api/venv/bin/pip install fastapi "uvicorn[standard]"

We create the application file:

cat >/opt/sql-zfs-api/app.py <<'EOF'
import os
import re
import json
import socket
import secrets
import subprocess
from datetime import datetime, timezone
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

API_KEY = os.environ.get("SQL_ZFS_API_KEY", "")
ALLOW_FILE = "/etc/sql-zfs-api/allowed-zvols"
SNAP_RE = re.compile(r"^[A-Za-z0-9_.:-]{1,120}$")

app = FastAPI(title="SQL ZFS Snapshot API", version="1.0.0")


class SnapshotRequest(BaseModel):
    database: str = Field(..., min_length=1, max_length=128)
    vmid: int = 302
    snapname: str = Field(..., min_length=1, max_length=120)
    zvols: list[str] = Field(..., min_length=1, max_length=8)


def load_allowed_zvols() -> set[str]:
    with open(ALLOW_FILE, "r", encoding="utf-8") as f:
        return {
            line.strip()
            for line in f
            if line.strip() and not line.strip().startswith("#")
        }


def check_api_key(x_sqlsnap_key: str | None) -> None:
    if not API_KEY:
        raise HTTPException(status_code=500, detail="API key not configured")

    if not x_sqlsnap_key:
        raise HTTPException(status_code=401, detail="missing API key")

    if not secrets.compare_digest(x_sqlsnap_key, API_KEY):
        raise HTTPException(status_code=403, detail="invalid API key")


@app.get("/health")
def health():
    return {
        "status": "ok",
        "host": socket.gethostname(),
        "utc": datetime.now(timezone.utc).isoformat(),
    }


@app.post("/v1/sql-zfs/snapshot")
def create_snapshot(
    req: SnapshotRequest,
    x_sqlsnap_key: str | None = Header(default=None, alias="x-sqlsnap-key"),
):
    check_api_key(x_sqlsnap_key)

    if not SNAP_RE.fullmatch(req.snapname):
        raise HTTPException(status_code=400, detail="invalid snapname")

    allowed = load_allowed_zvols()

    for zvol in req.zvols:
        if zvol not in allowed:
            raise HTTPException(status_code=403, detail=f"zvol not allowed: {zvol}")

    cmd = [
        "sudo",
        "/usr/local/sbin/sql-zfs-helper",
        "snapshot",
        req.snapname,
        *req.zvols,
    ]

    try:
        completed = subprocess.run(
            cmd,
            text=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            timeout=30,
            check=False,
        )
    except subprocess.TimeoutExpired:
        raise HTTPException(status_code=504, detail="zfs snapshot timeout")

    if completed.returncode != 0:
        raise HTTPException(
            status_code=500,
            detail={
                "error": completed.stderr.strip(),
                "stdout": completed.stdout.strip(),
            },
        )

    snapshots = [f"{zvol}@{req.snapname}" for zvol in req.zvols]

    return {
        "status": "ok",
        "database": req.database,
        "vmid": req.vmid,
        "snapname": req.snapname,
        "snapshots": snapshots,
        "media_description": "zfs|" + socket.gethostname() + "|" + ";".join(snapshots),
    }
EOF

chown -R root:root /opt/sql-zfs-api
chmod 755 /opt/sql-zfs-api
chmod 644 /opt/sql-zfs-api/app.py

We generate the API key:

APIKEY="$(openssl rand -hex 32)"
echo "$APIKEY"

We create the environment file:

cat >/etc/sql-zfs-api/sql-zfs-api.env <<EOF
SQL_ZFS_API_KEY=$APIKEY
EOF

chown root:root /etc/sql-zfs-api/sql-zfs-api.env
chmod 600 /etc/sql-zfs-api/sql-zfs-api.env

We need to save the generated key.

Next, we enable HTTPS. SQL Server sp_invoke_external_rest_endpoint calls HTTPS endpoints, and the documentation specifies that only HTTPS endpoints with TLS are supported.

openssl req -x509 -newkey rsa:4096 -sha256 -days 360 -nodes \
  -keyout /etc/sql-zfs-api/tls.key \
  -out /etc/sql-zfs-api/tls.crt \
  -subj "/CN=promox1" \
  -addext "subjectAltName=DNS:promox1,IP:192.168.1.110"

chown root:sqlsnap /etc/sql-zfs-api/tls.key /etc/sql-zfs-api/tls.crt
chmod 640 /etc/sql-zfs-api/tls.key
chmod 644 /etc/sql-zfs-api/tls.crt

The /etc/sql-zfs-api/tls.crt certificate must be imported into the Windows trusted root certification authorities on the SQL Server side. Otherwise, the HTTPS call may fail.

We create the systemd service:

cat >/etc/systemd/system/sql-zfs-api.service <<'EOF'
[Unit]
Description=SQL Server to ZFS Snapshot API
After=network-online.target
Wants=network-online.target

[Service]
User=sqlsnap
Group=sqlsnap
WorkingDirectory=/opt/sql-zfs-api
EnvironmentFile=/etc/sql-zfs-api/sql-zfs-api.env
ExecStart=/opt/sql-zfs-api/venv/bin/uvicorn app:app --host 0.0.0.0 --port 8443 --ssl-keyfile /etc/sql-zfs-api/tls.key --ssl-certfile /etc/sql-zfs-api/tls.crt
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now sql-zfs-api
systemctl status sql-zfs-api

We check the status of our API:

It is possible to call the API in PowerShell using Invoke-RestMethod with PowerShell 7:

$headers = @{
"Content-Type"  = "application/json"
"x-sqlsnap-key" = "MyKey"
}

$body = @{
database = "StackOverflow"
vmid     = 302
snapname = "StackOverflow_test010"
zvols    = @("sqlpool/pve/vm-302-disk-0")
} | ConvertTo-Json -Depth 5

Invoke-RestMethod `
-Uri "https://192.168.1.110:8443/v1/sql-zfs/snapshot" `
-Method Post `
-Headers $headers `
-Body $body `
-ContentType "application/json" `
-SkipCertificateCheck

This gives:

Test from SQL Server

A certificate was generated on Proxmox and it needs to be imported on the SQL Server host. In my case, it was located here:

I then imported it on Windows Server:

For testing purposes, I created something simple. On the SQL Server side, we can create a database that will be used to store our future stored procedure. This procedure will allow us to interact with the API. In my case, I created a database called dbi_tools:

This database will contain a credential. In our case, the DATABASE SCOPED CREDENTIAL is used to securely store the authentication information required to call the REST API from SQL Server. This allows us, for example, to protect the API key:

USE [dbi_tools]
GO

IF NOT EXISTS (
    SELECT 1
    FROM sys.symmetric_keys
    WHERE name = '##MS_DatabaseMasterKey##'
)
BEGIN
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyStrongPassword_%99';
END
GO

CREATE DATABASE SCOPED CREDENTIAL [https://192.168.1.110:8443/v1/sql-zfs/snapshot]
WITH
    IDENTITY = 'HTTPEndpointHeaders',
    SECRET = '{"x-sqlsnap-key":"MyAPIKey"}';
GO
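
Before wrapping everything in a stored procedure, it can be useful to validate connectivity from SQL Server with a simple call against the unauthenticated /health endpoint exposed by the API. The sketch below reuses the URL shown above; the instance-level configuration option name is an assumption on my side and should be verified against the sp_invoke_external_rest_endpoint documentation for your SQL Server 2025 build.

-- Allow the instance to make outbound REST calls (assumed option name; verify for your build)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'external rest endpoint enabled', 1;
RECONFIGURE;
GO

-- Simple connectivity test against the /health endpoint of the API
DECLARE @Response nvarchar(max);

EXEC sys.sp_invoke_external_rest_endpoint
    @url      = N'https://192.168.1.110:8443/health',
    @method   = N'GET',
    @response = @Response OUTPUT;

SELECT @Response AS health_response;
GO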

We then create a stored procedure to encapsulate the code used to call the API:

USE dbi_tools;
GO

CREATE OR ALTER PROCEDURE dbo.usp_BackupDatabase_WithZfsSnapshot
    @DatabaseName sysname,
    @BackupDirectory nvarchar(4000) = N'D:\Backups\'
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Url nvarchar(4000) =
        N'https://192.168.1.110:8443/v1/sql-zfs/snapshot';

    DECLARE @Vmid int = 302;

    DECLARE @ZvolsJson nvarchar(max) =
        N'["sqlpool/pve/vm-302-disk-0"]';

    DECLARE @Stamp varchar(20) =
        REPLACE(REPLACE(CONVERT(varchar(19), SYSUTCDATETIME(), 126), '-', ''), ':', '') + 'Z';

    DECLARE @SafeDbName nvarchar(128) =
        REPLACE(REPLACE(REPLACE(@DatabaseName, N' ', N'_'), N'[', N''), N']', N'');

    DECLARE @SnapName nvarchar(128) =
        CONCAT(N'sql_', @SafeDbName, N'_', @Stamp);

    DECLARE @BackupFile nvarchar(4000) =
        CONCAT(@BackupDirectory, N'\', @SafeDbName, N'_', @Stamp, N'.bkm');

    DECLARE @Payload nvarchar(max) =
    (
        SELECT
            @DatabaseName AS [database],
            @Vmid AS [vmid],
            @SnapName AS [snapname],
            JSON_QUERY(@ZvolsJson) AS [zvols]
        FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
    );

    DECLARE @ReturnCode int;
    DECLARE @Response nvarchar(max);
    DECLARE @SnapshotList nvarchar(max);

    SELECT @SnapshotList =
        STRING_AGG(CONCAT([value], N'@', @SnapName), N';')
    FROM OPENJSON(@ZvolsJson);

    DECLARE @MediaDescription nvarchar(max) =
        CONCAT(N'zfs|promox1|', @SnapshotList);

    DECLARE @Sql nvarchar(max);

    BEGIN TRY
        SET @Sql =
            N'ALTER DATABASE ' + QUOTENAME(@DatabaseName) +
            N' SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;';

        EXEC sys.sp_executesql @Sql;

        EXEC @ReturnCode = sys.sp_invoke_external_rest_endpoint
            @url = @Url,
            @method = N'POST',
            @headers = N'{"Content-Type":"application/json","Accept":"application/json"}',
            @payload = @Payload,
            @credential = [https://192.168.1.110:8443/v1/sql-zfs/snapshot],
            @timeout = 30,
            @response = @Response OUTPUT;

        IF @ReturnCode <> 0
        BEGIN
            DECLARE @Err nvarchar(max) =
                CONCAT(N'ZFS snapshot API failed. ReturnCode=', @ReturnCode, N' Response=', @Response);
            THROW 51001, @Err, 1;
        END;

        SET @Sql =
            N'BACKUP DATABASE ' + QUOTENAME(@DatabaseName) + N'
              TO DISK = @BackupFile
              WITH METADATA_ONLY,
                   FORMAT,
                   MEDIANAME = @MediaName,
                   MEDIADESCRIPTION = @MediaDescription,
                   NAME = @BackupName;';

        EXEC sys.sp_executesql
            @Sql,
            N'@BackupFile nvarchar(4000),
              @MediaName nvarchar(128),
              @MediaDescription nvarchar(max),
              @BackupName nvarchar(128)',
            @BackupFile = @BackupFile,
            @MediaName = @SnapName,
            @MediaDescription = @MediaDescription,
            @BackupName = @SnapName;

        SELECT
            @DatabaseName AS database_name,
            @SnapName AS zfs_snapshot_name,
            @SnapshotList AS zfs_snapshots,
            @BackupFile AS metadata_backup_file,
            @MediaDescription AS media_description,
            @Response AS api_response;
    END TRY
    BEGIN CATCH
        IF DATABASEPROPERTYEX(@DatabaseName, 'IsDatabaseSuspendedForSnapshotBackup') = 1
        BEGIN
            SET @Sql =
                N'ALTER DATABASE ' + QUOTENAME(@DatabaseName) +
                N' SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF;';

            EXEC sys.sp_executesql @Sql;
        END;

        THROW;
    END CATCH
END;
GO

We then call the stored procedure:

EXEC dbi_tools.dbo.usp_BackupDatabase_WithZfsSnapshot
    @DatabaseName = N'StackOverflow',
    @BackupDirectory = N'D:\Backups\';

The backup was generated:
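
As an additional check, the snapshot backup is also recorded in msdb; the small sketch below lists the most recent snapshot backup sets and the devices they were written to.

SELECT TOP 10
    bs.database_name,
    bs.backup_finish_date,
    bs.is_snapshot,
    bs.name AS backup_name,
    bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.is_snapshot = 1
ORDER BY bs.backup_finish_date DESC;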

References

sp_invoke_external_rest_endpoint

Thank you. Amine Haloui

The article SQL Server Snapshot Backup and Restore with Proxmox ZFS – REST API with SQL Server 2025 (3/3) appeared first on dbi Blog.

SQL Server Snapshot Backup and Restore with Proxmox ZFS – PowerShell implementation (2/3)

Yann Neuhaus - Thu, 2026-05-14 16:35

In the previous section, we discussed the drawbacks of running the commands manually. Indeed, the manual process was taking too much time and could directly impact the database state while the freeze was occurring.

To address this issue, it is possible to automate the solution with PowerShell. The idea is to automate the different operations involved in the snapshot backup and restore process.

We will use two scripts:

  • One script to perform the backups and create the snapshots.
  • One script to perform the restores.

Backup process

Here is how the backup process works:

  • We connect to the corresponding SQL Server instance.
  • We change the state of the database using ALTER DATABASE … SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON. At this point, the I/Os are frozen.
  • We connect to the hypervisor through SSH.
  • We create the snapshot.
  • We back up the database using BACKUP DATABASE … WITH METADATA_ONLY.
  • We change the state of the database using ALTER DATABASE … SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF. At this point, the I/Os are unfrozen.

PowerShell implementation (backup)

Here is the code used to perform the backup:

param(
    [string]$SqlInstance = "VM-WS25-SQL2",
    [string]$Database    = "StackOverflow",
    [string]$BackupDir   = "D:\Backups",
    [string]$PveHost     = "192.168.1.110",
    [string]$PveUser     = "MyUser",
    [string[]]$Zvols     = @("sqlpool/pve/vm-302-disk-0")
)

$Timestamp = Get-Date -Format "yyyyMMddTHHmmss"
$SnapName  = "sql_${Database}_${Timestamp}"

$DbSafe = $Database.Replace("]", "]]")
$BackupFile = Join-Path $BackupDir "${Database}_${Timestamp}.bkm"

$ZfsSnapshots = $Zvols | ForEach-Object { "$_@$SnapName" }
$ZfsSnapshotArgs = $ZfsSnapshots -join " "

$MediaDescription = "zfs|$PveHost|$ZfsSnapshotArgs"

$BackupFileSql = $BackupFile.Replace("'", "''")
$MediaSql = $MediaDescription.Replace("'", "''")

$connString = "Server=$SqlInstance;Database=master;Integrated Security=True;TrustServerCertificate=True;Application Name=ZFS-TSQL-Snapshot;"
$conn = New-Object System.Data.SqlClient.SqlConnection $connString

function Invoke-SqlNonQuery {
    param([string]$Sql)

    $cmd = $conn.CreateCommand()
    $cmd.CommandTimeout = 0
    $cmd.CommandText = $Sql
    [void]$cmd.ExecuteNonQuery()
}

try {
    $conn.Open()

    Write-Host "Freezing SQL database writes..."
    Invoke-SqlNonQuery "ALTER DATABASE [$DbSafe] SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;"

    Write-Host "Taking ZFS snapshot on Proxmox..."
    ssh "$PveUser@$PveHost" "zfs snapshot $ZfsSnapshotArgs && zfs hold sqlsnap $ZfsSnapshotArgs"

    if ($LASTEXITCODE -ne 0) {
        throw "ZFS snapshot failed on $PveHost"
    }

    Write-Host "Writing SQL metadata backup..."

    Invoke-SqlNonQuery @"
BACKUP DATABASE [$DbSafe]
TO DISK = N'$BackupFileSql'
WITH METADATA_ONLY,
     MEDIADESCRIPTION = N'$MediaSql',
     NAME = N'$SnapName';
"@

    Write-Host "Snapshot backup completed:"
    Write-Host "  Snapshot: $ZfsSnapshotArgs"
    Write-Host "  Metadata: $BackupFile"
}
catch {
    Write-Warning $_

    try {
        Write-Warning "Attempting to unfreeze SQL database..."
        Invoke-SqlNonQuery "ALTER DATABASE [$DbSafe] SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF;"
    }
    catch {
        Write-Warning "Could not unfreeze cleanly. Check SQL Server error log."
    }

    throw
}
finally {
    $conn.Close()
}

Restore process

Here is how the restore process works:

  • We connect to the corresponding SQL Server instance.
  • We take the database offline.
  • The volume dedicated to the StackOverflow database is taken offline.
  • We connect to the hypervisor through SSH.
  • We roll back the corresponding snapshot.
  • We restore the database using the corresponding backup, which was created at the same time as the snapshot.

PowerShell implementation (restore)

Here is the code used to perform the restore:

param(
    [string]$SqlInstance = "VM-WS25-SQL2",
    [string]$Database    = "StackOverflow",
    [string]$BackupFile  = "D:\Backups\StackOverflow_20260514T122642.bkm",
    [string]$SnapName    = "sql_StackOverflow_20260514T122642",
    [string]$PveHost     = "192.168.1.110",
    [string]$PveUser     = "MyUser",
    [string[]]$Zvols     = @("sqlpool/pve/vm-302-disk-0"),
    [string[]]$DatabaseDriveLetters = @("T"),
    [switch]$NoRecovery
)

$ErrorActionPreference = "Stop"

function Assert-SafeName {
    param(
        [string]$Value,
        [string]$Name,
        [string]$Pattern
    )

    if ($Value -notmatch $Pattern) {
        throw "$Name contained not allowed characters : $Value"
    }
}

function Normalize-DriveLetter {
    param([string]$DriveLetter)

    $letter = $DriveLetter.Trim().TrimEnd(":").ToUpperInvariant()

    if ($letter -notmatch '^[A-Z]$') {
        throw "Drive letter invalid : $DriveLetter"
    }

    return $letter
}

function Get-DiskForDriveLetter {
    param([string]$DriveLetter)

    $letter = Normalize-DriveLetter $DriveLetter

    $partition = Get-Partition -DriveLetter $letter -ErrorAction Stop
    $disk = $partition | Get-Disk -ErrorAction Stop

    return [pscustomobject]@{
        DriveLetter = $letter
        DiskNumber  = [int]$disk.Number
        IsOffline   = [bool]$disk.IsOffline
        FriendlyName = $disk.FriendlyName
        Size        = $disk.Size
    }
}

function Invoke-SshChecked {
    param([string]$Command)

    Write-Host "SSH $PveUser@$PveHost :: $Command"

    & ssh "$PveUser@$PveHost" "$Command"

    if ($LASTEXITCODE -ne 0) {
        throw "SSH command failed with code $LASTEXITCODE : $Command"
    }
}

function New-SqlConnection {
    $connString = "Server=$SqlInstance;Database=master;Integrated Security=True;TrustServerCertificate=True;Application Name=ZFS-TSQL-Restore-NoVmRestart;"
    return New-Object System.Data.SqlClient.SqlConnection $connString
}

function Invoke-SqlNonQuery {
    param([string]$Sql)

    $conn = New-SqlConnection

    try {
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandTimeout = 0
        $cmd.CommandText = $Sql
        [void]$cmd.ExecuteNonQuery()
    }
    finally {
        $conn.Close()
    }
}

function Invoke-SqlScalar {
    param([string]$Sql)

    $conn = New-SqlConnection

    try {
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandTimeout = 0
        $cmd.CommandText = $Sql
        return $cmd.ExecuteScalar()
    }
    finally {
        $conn.Close()
    }
}

function Set-DatabaseDisksOffline {
    param([object[]]$DiskInfos)

    $offlinedByScript = @()

    foreach ($diskInfo in ($DiskInfos | Sort-Object DiskNumber -Unique)) {
        if ($diskInfo.IsOffline) {
            Write-Host "Disque $($diskInfo.DiskNumber) déjà offline. Lecteur $($diskInfo.DriveLetter):"
            continue
        }

        Write-Host "Taking the Windows disk offline $($diskInfo.DiskNumber), drive $($diskInfo.DriveLetter):"
        Set-Disk -Number $diskInfo.DiskNumber -IsOffline $true

        $offlinedByScript += $diskInfo
    }

    return $offlinedByScript
}

function Set-DatabaseDisksOnline {
    param([object[]]$DiskInfos)

    foreach ($diskInfo in ($DiskInfos | Sort-Object DiskNumber -Unique)) {
        Write-Host "Bringing the Windows disk back online. $($diskInfo.DiskNumber), drive $($diskInfo.DriveLetter):"
        Set-Disk -Number $diskInfo.DiskNumber -IsOffline $false
    }

    Write-Host "Update-HostStorageCache..."
    Update-HostStorageCache
}

Assert-SafeName -Value $SnapName -Name "SnapName" -Pattern '^[A-Za-z0-9_.:-]{1,160}$'

foreach ($zvol in $Zvols) {
    Assert-SafeName -Value $zvol -Name "Zvol" -Pattern '^[A-Za-z0-9_.:/-]{1,240}$'
}

$DbQuoted = "[" + $Database.Replace("]", "]]") + "]"
$DbLiteral = $Database.Replace("'", "''")
$BackupFileSql = $BackupFile.Replace("'", "''")

$ZfsSnapshots = $Zvols | ForEach-Object { "$_@$SnapName" }
$ZfsSnapshotArgs = ($ZfsSnapshots | ForEach-Object { "'$_'" }) -join " "

$RecoveryOption = if ($NoRecovery) { "NORECOVERY" } else { "RECOVERY" }

$DatabaseDiskInfos = @()
$DisksOfflinedByScript = @()

Write-Host ""
Write-Host "Restore SQL Server from a ZFS snapshot, without restarting the VM"
Write-Host "SQL Instance : $SqlInstance"
Write-Host "Database     : $Database"
Write-Host "BackupFile   : $BackupFile"
Write-Host "DB volumes   : $($DatabaseDriveLetters -join ', ')"
Write-Host "Snapshots    :"
$ZfsSnapshots | ForEach-Object { Write-Host "  $_" }
Write-Host ""

try {
    Write-Host "Checking ZFS snapshots..."
    Invoke-SshChecked "zfs list -H -t snapshot -o name $ZfsSnapshotArgs >/dev/null"

    Write-Host "Identifying Windows disks containing SQL Server files..."
    foreach ($driveLetter in $DatabaseDriveLetters) {
        $diskInfo = Get-DiskForDriveLetter $driveLetter
        $DatabaseDiskInfos += $diskInfo

        Write-Host "Drive $($diskInfo.DriveLetter): -> Windows disk $($diskInfo.DiskNumber) [$($diskInfo.FriendlyName)]"
    }

    $backupDrive = $null
    if ($BackupFile -match '^([A-Za-z]):\\') {
        $backupDrive = Normalize-DriveLetter $Matches[1]

        try {
            $backupDiskInfo = Get-DiskForDriveLetter $backupDrive
            $targetDiskNumbers = @($DatabaseDiskInfos | ForEach-Object { $_.DiskNumber } | Select-Object -Unique)

            if ($targetDiskNumbers -contains $backupDiskInfo.DiskNumber) {
                throw @"
The backup file $BackupFile is located on drive $backupDrive, which is on the same Windows disk as the SQL Server data volume.
Taking the data disk offline would make the .bkm file inaccessible, and a rollback could also make the .bkm file disappear.
Move the .bkm file to C:, a network share, or another disk that is not rolled back.
"@
            }
        }
        catch {
            throw
        }
    }

    Write-Host "Checking whether the SQL Server database exists..."
    $DbExists = Invoke-SqlScalar "SELECT CASE WHEN DB_ID(N'$DbLiteral') IS NULL THEN 0 ELSE 1 END;"

    if ($DbExists -eq 1) {
        Write-Host "Taking database $Database OFFLINE..."
        Invoke-SqlNonQuery @"
ALTER DATABASE $DbQuoted SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE $DbQuoted SET OFFLINE WITH ROLLBACK IMMEDIATE;
"@
    }
    else {
        Write-Host "Database $Database does not exist in SQL Server. Continuing with disk offline and ZFS rollback."
    }

    Write-Host "Taking Windows disks containing MDF/LDF files offline..."
    $DisksOfflinedByScript = Set-DatabaseDisksOffline -DiskInfos $DatabaseDiskInfos

    Write-Host "Rolling back ZFS snapshot..."
    $RollbackCommands = ($ZfsSnapshots | ForEach-Object { "zfs rollback -r '$_'" }) -join "; "
    Invoke-SshChecked "set -e; $RollbackCommands"

    Write-Host "Bringing Windows disks back online..."
    Set-DatabaseDisksOnline -DiskInfos $DisksOfflinedByScript
    $DisksOfflinedByScript = @()

    Write-Host "Short pause to let Windows and SQL Server detect the restored disk state..."
    Start-Sleep -Seconds 5

    Write-Host "Restoring SQL Server metadata-only backup..."

    $RestoreSql = @"
RESTORE DATABASE $DbQuoted
FROM DISK = N'$BackupFileSql'
WITH METADATA_ONLY,
     REPLACE,
     $RecoveryOption;
"@

    Invoke-SqlNonQuery $RestoreSql

    if (-not $NoRecovery) {
        Write-Host "Setting database back to MULTI_USER..."
        Invoke-SqlNonQuery @"
ALTER DATABASE $DbQuoted SET MULTI_USER;
"@
    }

    Write-Host ""
    Write-Host "Restore completed."
    Write-Host "Database : $Database"
    Write-Host "Snapshot : $SnapName"
    Write-Host "Backup   : $BackupFile"
}
catch {
    Write-Warning "Restore failed: $_"

    if ($DisksOfflinedByScript.Count -gt 0) {
        try {
            Write-Warning "Attempting to bring disks offlined by the script back online..."
            Set-DatabaseDisksOnline -DiskInfos $DisksOfflinedByScript
            $DisksOfflinedByScript = @()
        }
        catch {
            Write-Warning "Unable to automatically bring the disks back online. Check with Get-Disk."
        }
    }

    try {
        $DbExistsAfterError = Invoke-SqlScalar "SELECT CASE WHEN DB_ID(N'$DbLiteral') IS NULL THEN 0 ELSE 1 END;"

        if ($DbExistsAfterError -eq 1 -and -not $NoRecovery) {
            Write-Warning "Attempting to set the database back ONLINE/MULTI_USER..."
            Invoke-SqlNonQuery @"
ALTER DATABASE $DbQuoted SET ONLINE;
ALTER DATABASE $DbQuoted SET MULTI_USER;
"@
        }
    }
    catch {
        Write-Warning "Unable to automatically set the database back ONLINE/MULTI_USER."
    }

    throw
}

What does it look like?

We start the backup process:

We verify that the snapshot is present:

We verify that the backup is present:

We drop the StackOverflow database:

We start the restore process:

The database is available again. The restore took only a few seconds for a database of approximately 200 GB.

Major drawbacks

In my case, the solution is executed from the SQL Server host itself. Ideally, it should rather be hosted on another server or client machine. We could also imagine running these scripts from a scheduler such as Rundeck, for example.

During the database restore, the database is switched to SINGLE_USER mode. This could be an issue if the applications using the database reconnect very frequently. A better approach would probably be to explicitly terminate the active sessions using the KILL command.
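
As a sketch of that idea (assuming the StackOverflow database used throughout this series), the sessions connected to the database could be listed and terminated explicitly just before it is switched offline; the generated KILL statements would still need to be reviewed and executed, for example in a loop inside the script.

-- Generate KILL statements for all other sessions connected to the database
SELECT 'KILL ' + CAST(session_id AS varchar(10)) + ';' AS kill_command
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID(N'StackOverflow')
  AND session_id <> @@SPID;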

We have also not yet covered the use of a REST API.

Thank you. Amine Haloui

The article SQL Server Snapshot Backup and Restore with Proxmox ZFS – PowerShell implementation (2/3) appeared first on dbi Blog.

SQL Server Snapshot Backup and Restore with Proxmox ZFS (1/3)

Yann Neuhaus - Thu, 2026-05-14 16:26

We are currently working with clients on migrations to SQL Server 2022 and SQL Server 2025. During a discussion with one client, we reviewed some of the benefits introduced in the latest SQL Server 2022 and 2025 releases.

Among the available features, starting with SQL Server 2022, we have Transact-SQL snapshot backups: ALTER DATABASE ... SET SUSPEND_FOR_SNAPSHOT_BACKUP together with BACKUP ... WITH METADATA_ONLY.

Starting with SQL Server 2025, we can also call REST endpoints directly from the database engine with sp_invoke_external_rest_endpoint.

The customer’s environment consists of a very large number of instances, some of which host very large SQL Server databases. In this customer’s case, we are referring to a database of approximately 6–7 TB, configured for high availability using Always On Availability Groups. For this database, backups take around two hours, and restores take slightly longer.

In addition, the customer has a Pure Storage array.

We explained to the customer that it is possible to use certain SQL Server 2025 features together with their Pure Storage array to perform snapshots and restores very quickly.

In summary, the process consists of performing the following operations:

  • Change the database state to suspend writes.
  • Create the snapshot using the storage system.
  • Perform a backup using the BACKUP DATABASE MyDB WITH METADATA_ONLY command to indicate that a snapshot has been taken.

Reference: https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-transact-sql-snapshot-backup?view=sql-server-ver17

However, the customer raised several interesting questions, which, reading between the lines, can be summarized as follows:

  • Can this also be applied to PostgreSQL?
  • Are we dependent on Pure Storage to achieve this?

Several articles have been published about the implementation of this process between SQL Server and Pure Storage including the following one:

In my opinion, it is possible to reproduce this operating model with other systems. In my case, we will use Proxmox and ZFS.

Context and environment

A ZFS pool provides fast, storage-level, copy-on-write snapshots with minimal space overhead. This makes it well suited for SQL Server snapshot backups, where database writes are briefly suspended while the underlying virtual disk is captured. ZFS also allows precise rollback or cloning of a snapshot, which is useful for both restore testing and recovery scenarios.

On Proxmox, it integrates naturally with VM disks, making it a practical alternative to enterprise storage snapshot platforms.

The environment consists of a server and two disks: one disk used to store the VMs, and a 1 TB Samsung T7 disk that will be used to create our ZFS pool.

Proxmox Setup

We identify the path of the related volume (the Samsung T7):

for d in /dev/disk/by-id/*; do
  [ "$(readlink -f "$d")" = "/dev/sda" ] && echo "$d"
done

We create the pool. Everything stored on the disk will be erased:

DISK="/dev/disk/by-id/usb-Samsung_PSSD_T7_S6TWNJ0T300328F-0:0"

wipefs -a "$DISK"
sgdisk --zap-all "$DISK"
zpool create \
  -o ashift=12 \
  -o autotrim=on \
  -O compression=lz4 \
  -O atime=off \
  -O xattr=sa \
  -O acltype=posixacl \
  -m /mnt/sqlpool \
  sqlpool "$DISK"

Then we create a Proxmox dataset for the VM disks:

zfs create sqlpool/pve

We add it to Proxmox:

pvesm add zfspool sql-zfs \
  --pool sqlpool/pve \
  --content images,rootdir \
  --sparse 1

We check the pool:

zpool status sqlpool

zfs list

pvesm status
pool: sqlpool
state: ONLINE

config:
       NAME                                       STATE     READ WRITE CKSUM
       sqlpool                                    ONLINE       0     0     0
       usb-Samsung_PSSD_T7_S6TWNJ0T300328F-0:0    ONLINE       0     0     0

errors: No known data errors

NAME          USED  AVAIL  REFER  MOUNTPOINT
sqlpool       636K   899G    96K  /mnt/sqlpool
sqlpool/pve    96K   899G    96K  /mnt/sqlpool/pve

Name             Type     Status     Total (KiB)      Used (KiB) Available (KiB)        %
local             dir     active        98497780        42429080        51019152   43.08%
local-lvm     lvmthin     active      3746553856       285112748      3461441107    7.61%
sql-zfs       zfspool     active       942931428              96       942931332    0.00%

My VM ID is 302, and we add the virtual disk to the ZFS pool:

VMID=302
qm set "$VMID" --agent enabled=1
qm set "$VMID" --scsihw virtio-scsi-single
qm set "$VMID" --scsi1 sql-zfs:700,cache=none,discard=on,iothread=1,ssd=1

Be careful with the SCSI ID: you may overwrite a volume that is already in use.

What does it look like?

Once the pool is created, we have something like this:

On the virtual machine side, I have three disks:

  • 1 for my virtual machine (for Windows Server)
  • 1 for SQL Server
  • 1 linked to the ZFS pool to store the user database (the StackOverflow database)

SQL Server setup

The virtual machine used for the tests runs with:

  • Windows Server 2025 Standard Edition
  • SQL Server 2025 Enterprise Developer Edition

The mounted zvol is represented by the Databases (T:) volume. Most of the files related to the SQL Server installation are stored on the SQL (D:) volume while the StackOverflow database is located on the Databases (T:) volume.

Manual process flow (snapshot)

Here is how we will proceed to create a snapshot and then restore the database:

  • ALTER DATABASE [StackOverflow] SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;
  • Create the snapshot using the zfs snapshot command.
  • Run BACKUP DATABASE [StackOverflow] … WITH METADATA_ONLY.

To avoid confusion and to be able to link the snapshot to the backup, we will include the snapshot name in the MEDIADESCRIPTION clause.

Here are the corresponding commands to create the snapshot:

ALTER DATABASE [StackOverflow] SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;

We perform the snapshot:

zfs snapshot sqlpool/pve/vm-302-disk-0@StackOverflow_11052026_235500

In the same session as the ALTER DATABASE command, we perform a backup:

BACKUP DATABASE [StackOverflow]
TO DISK = N'D:\Backups\StackOverflow_11052026_235500.bkm'
WITH METADATA_ONLY, MEDIADESCRIPTION = N'zfs|proxmox1|sqlpool/pve/vm-302-disk-0@StackOverflow_11052026_235500';

The error log shows the following:

We verify that the snapshot has been successfully created:

And the SQL backup:

Manual process flow (restore)

We now need to be able to restore the database. Before doing so, we delete some tables so that we can later verify the database has been restored as expected. Most of the tables were dropped, leaving only three:

To perform the restore, we will follow these steps:

  • Take the database offline.
  • Roll back the snapshot using the zfs rollback command.
  • Restore the database using the SQL backup created earlier.

This is done using the following commands:

ALTER DATABASE [StackOverflow] SET OFFLINE WITH ROLLBACK IMMEDIATE;

Snapshot restore:

zfs rollback -r sqlpool/pve/vm-302-disk-0@StackOverflow_13052026_230000

Database restore:

RESTORE DATABASE [StackOverflow]
FROM DISK = N'D:\Backups\StackOverflow_13052026_230000.bkm'
WITH METADATA_ONLY, REPLACE, NORECOVERY;

RESTORE DATABASE [StackOverflow] WITH RECOVERY;

We were able to restore our database in less than one second, even though it is approximately 207 GB in size.

Major drawbacks

The process is manual: we need to switch back and forth between running commands in SQL Server and performing the snapshot/restore operations on the Proxmox side. This keeps the database suspended or offline longer than necessary, and during that period connected applications could generate errors or timeouts.

The solution to this problem would be to automate the process using PowerShell, for example.
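
As an illustration, here is a minimal sketch of such an automation, written in Python with pyodbc instead of PowerShell. The connection string, SSH target and paths are placeholders rather than values from this environment; the key point it demonstrates is that the ALTER DATABASE and the BACKUP … WITH METADATA_ONLY statements run on the same session, i.e. on a single connection.

import datetime
import subprocess

import pyodbc

SQL_CONN = ("DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
            "DATABASE=master;Trusted_Connection=yes;TrustServerCertificate=yes")
PROXMOX_HOST = "root@proxmox1"              # hypothetical SSH target
DATASET = "sqlpool/pve/vm-302-disk-0"       # zvol backing the Databases (T:) volume
BACKUP_DIR = r"D:\Backups"                  # path as seen by SQL Server


def snapshot_backup(database: str) -> None:
    stamp = datetime.datetime.now().strftime("%d%m%Y_%H%M%S")
    snapshot = f"{DATASET}@{database}_{stamp}"
    bkm_file = rf"{BACKUP_DIR}\{database}_{stamp}.bkm"

    # One connection = one session: the BACKUP must run in the session that
    # issued the ALTER DATABASE. autocommit is required because BACKUP cannot
    # run inside a user transaction.
    conn = pyodbc.connect(SQL_CONN, autocommit=True)
    cursor = conn.cursor()
    cursor.execute(f"ALTER DATABASE [{database}] SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;")
    try:
        # Take the storage-level snapshot while writes are suspended.
        subprocess.run(["ssh", PROXMOX_HOST, "zfs", "snapshot", snapshot], check=True)
        # Metadata-only backup; MEDIADESCRIPTION links the .bkm file to the snapshot.
        cursor.execute(
            f"BACKUP DATABASE [{database}] TO DISK = N'{bkm_file}' "
            f"WITH METADATA_ONLY, MEDIADESCRIPTION = N'zfs|proxmox1|{snapshot}';")
    except Exception:
        # On failure, resume writes so the database is not left suspended.
        cursor.execute(f"ALTER DATABASE [{database}] SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF;")
        raise
    finally:
        conn.close()


if __name__ == "__main__":
    snapshot_backup("StackOverflow")

A PowerShell equivalent would follow exactly the same pattern: keep one SQL Server session open, trigger the ZFS snapshot remotely, then run the metadata-only backup, resuming writes if anything fails.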

What was not covered in this section

While writing this blog post, I omitted two points:

  • When the database is dropped, it is necessary to take the volume dedicated to the StackOverflow database, Databases (T:), offline first. When you run a DROP DATABASE, SQL Server deletes the files from disk and the database no longer exists. If you then perform a zfs rollback while Windows still sees the disk as online, you are effectively changing the disk under Windows' feet: Windows may keep the previous NTFS state cached (an empty directory, MFT information, file handles, volume metadata, and so on). As a result, the ZFS rollback may complete successfully while Windows does not properly refresh its view of the disk.
  • We did not make any REST API calls. This functionality does not exist in my setup, but it could be implemented.

Thank you. Amine Haloui

The article SQL Server Snapshot Backup and Restore with Proxmox ZFS (1/3) first appeared on dbi Blog.

When a Python driver configuration issue may cause blocking in SQL Server

Yann Neuhaus - Thu, 2026-05-14 16:21

One of our clients encountered blocking during their daily data load. The process loads several million rows and then performs an ALTER TABLE … SWITCH operation into a partitioned table. This operation usually takes some time, but in this case it became blocked.

Context

Initially, I did not have access to much information. The only element I received from the client was an extract of the output from the sp_WhoIsActive procedure.

Initial analysis

Based on this extract, we were able to perform a first-level analysis:

A Python session executed a query against MyTable without applying a date filter. On a table containing approximately 244 million rows, this prevented proper partition elimination and forced SQL Server to read a much broader data set than necessary. Queries against partitioned tables only benefit from partition elimination when the predicate references the partitioning column; without such a predicate, SQL Server may have to search or scan all partitions.

The Python session eventually became sleeping but remained with open_tran_count = 1. This is a typical sign of an unclosed transaction on the client side: autocommit disabled, cursor not closed, result set not fully consumed, connection returned to the pool without a rollback…

Session 146 then attempted to perform the partition TRUNCATE/SWITCH operation. However, TRUNCATE TABLE requires a schema modification lock, Sch-M, and ALTER TABLE … SWITCH also requires a Sch-M lock on both the source and target tables.

This Sch-M lock could not be acquired while session 167 was still referencing the object. SQL Server documents Sch-M as the lock required to modify the schema and to ensure that no other session is referencing the object. Once the Sch-M request from session 146 was queued, new read queries were also blocked behind it. Even NOLOCK would not avoid this issue: queries still acquire Sch-S locks during compilation and execution, and Sch-S and Sch-M locks block each other.

Second analysis

After some time, we were able to access the client’s environment. Query Store was enabled on the affected database, and an Extended Events session was configured on the SQL Server instance to track blocking.

Querying the Extended Events session provided detailed information about the blocking events that occurred, and we were able to identify the specific blocking issue reported by the client.

By looking more closely at this blocking issue, we found the following:

EXEC [STAGING_DB].[ETL].[sp_ETL_Exec]
    @ETL_StepIKs_List = '["Exec-[TARGET_DB].dbo.[SP_Load_TargetTable]"]',
    @StartAsJob = 0

Which is blocked by:

WITH position AS
(
    SELECT ...
    FROM [SOURCE_DB].[SCHEMA_NAME].[LARGE_PARTITIONED_TABLE]
    ...
)

<blocking-process>
    spid="167"
    status="sleeping"
    trancount="1"
    clientapp="python[version]"
    hostname="client-host-..."
    loginname="user_account"
    inputbuf="... WITH position AS ..."
</blocking-process>

The blocking report nevertheless highlights an important point: session 167 was no longer actively executing the query when the report was captured:

  • status = sleeping
  • trancount = 1

Correlating this information with Query Store data allowed us to obtain additional details and better understand what was happening.

The blocking report also showed that session 146 was requesting a Sch-M lock, meaning a Schema Modification Lock. This is a strong lock required for operations such as TRUNCATE, ALTER TABLE, and partition SWITCH.

According to the data, session 146 waited for more than two hours, approximately 7,770,160 ms.

Specifically, by retrieving the corresponding query from Query Store, we found the following:

It was executed 30 times during the following time interval: 05-05-2026 from 2:00 PM to 3:00 PM. The average execution time was 49.1 seconds, with a maximum execution time of approximately 57 seconds. This represents a total of around 24 minutes of cumulative execution time over a one-hour period.

Based on this data, the issue was therefore not caused by the performance of the query itself, but rather by the state of session 167. Indeed, the session left a transaction open, with an open_tran_count of 1, thereby locking the corresponding objects and preventing other sessions from accessing them.

How is it related to Python driver configuration?

The observed blocking can likely be explained by a misconfiguration or misuse of the Python driver used to access SQL Server. The root session was a Python connection in a sleeping state, but with trancount = 1, which indicates that a transaction was still open even though the query was no longer actively running.

In this situation, SQL Server may continue to hold transaction-related locks even if the application appears to have completed its work.

If the Python driver was running with autocommit = 0, each SELECT statement could implicitly start a transaction that then had to be explicitly closed with a commit or rollback. If the cursor was not closed properly, the result set was not fully consumed, or a rollback was not issued before returning the connection to the pool, the session could remain open on the SQL Server side. This residual transaction likely prevented the related ETL process from acquiring the Sch-M lock required for the TRUNCATE or partition SWITCH operation.
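
As an illustration, here is a minimal sketch with pyodbc; the driver actually used by the client is unknown, and the connection string, table and column names below are placeholders. It contrasts the risky default (autocommit disabled, result set not fully consumed, no rollback) with a safer pattern that also includes a predicate on the partitioning column.

import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
            "DATABASE=SOURCE_DB;Trusted_Connection=yes;TrustServerCertificate=yes")

# Risky pattern: pyodbc defaults to autocommit=False, so the SELECT silently
# opens a transaction. If the connection goes back to a pool without commit()
# or rollback(), the session stays sleeping with trancount = 1 and can keep
# locks on the objects it referenced.
conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()
cursor.execute("SELECT * FROM SCHEMA_NAME.LARGE_PARTITIONED_TABLE;")  # no date filter
rows = cursor.fetchmany(1000)   # result set not fully consumed
# ... connection handed back to the pool here, transaction still open

# Safer pattern: autocommit enabled, a predicate on the (hypothetical)
# partitioning column load_date, the result set fully consumed, and the
# connection closed explicitly.
conn = pyodbc.connect(CONN_STR, autocommit=True)
try:
    cursor = conn.cursor()
    cursor.execute(
        "SELECT * FROM SCHEMA_NAME.LARGE_PARTITIONED_TABLE WHERE load_date >= ?;",
        "2026-05-05")
    for row in cursor:
        pass   # process each row
finally:
    conn.close()

With pyodbc, autocommit can be enabled either at connection time or later via conn.autocommit = True; this is the change referred to below as switching to autocommit = 1.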

As a result, the ETL session was not the initial root cause: it was waiting for a lock held by an idle Python connection.

Subsequent queries then accumulated behind the pending Sch-M lock request, creating the impression of a global outage.

Switching to autocommit = 1 significantly reduces this risk, because read operations are no longer tied to an open transaction by default. Finally, preventing parallel pipeline execution helps avoid amplifying the issue when a job is delayed.

Thank you. Amine Haloui

The article When a Python driver configuration issue may cause blocking in SQL Server first appeared on dbi Blog.

A Misleading SSAS Error in Power BI Report Server When Using DirectQuery Mode

Yann Neuhaus - Thu, 2026-05-14 16:17

Our client was experiencing issues after publishing a report that used Direct Query mode. Specifically, when the report was queried, the following error occurred:

Error: We couldn’t connect to the Analysis Services server. Make sure you’ve entered the connection string correctly.

However, this issue did not occur in Power BI Desktop.

In Power BI, several data loading modes are available. Import mode loads data into the Power BI model, which usually provides faster performance and richer modeling capabilities. DirectQuery mode does not store the data in the model; instead, each interaction sends queries to the source system in real time. Import is generally better for speed and flexibility, while DirectQuery is useful when data must stay in the source or remain near real-time. The trade-off is that DirectQuery depends more heavily on source performance, network latency, and source-system limitations.

Configuration

At first glance, one might think that the corresponding report is trying to connect to an SSAS service and that there is a connectivity issue between Power BI Report Server and a SQL Server Analysis Services instance.

However, after reviewing the data source, there was no connection to SSAS:

We did not have this type of configuration:

The questions that arise

Why are we getting an error message even though the report is not trying to connect to a SQL Server Analysis Services instance?

Why is our client seeing this error message and unable to query the report?

Troubleshooting

By reviewing the Power BI Report Server logs, it was possible to see this type of message:

Failed to get CSDL. —> MsolapWrapper.MsolapWrapperException: Failure encountered while getting schema.

CannotRetrieveModelException: An error occurred while loading the model… Verify that the connection information is correct and that you have permissions to access the data source.

It is also possible to retrieve some information from the ExecutionLog3 table:

Indeed, whenever a Power BI report is rendered or a scheduled refresh is executed, new entries are written to the execution log. These entries can be queried through the ExecutionLog3 view in the Report Server catalog database. The ConceptualSchema event corresponds to a user viewing the report.
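
As an illustration, a minimal pyodbc sketch for querying this view is shown below; it assumes the default ReportServer catalog name and the standard ExecutionLog3 columns, and the server name is a placeholder.

import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 18 for SQL Server};SERVER=pbirs-host;"
            "DATABASE=ReportServer;Trusted_Connection=yes;TrustServerCertificate=yes")

QUERY = """
SELECT TOP (50) TimeStart, UserName, ItemPath, ItemAction, Status
FROM dbo.ExecutionLog3
WHERE ItemAction IN ('ConceptualSchema', 'QueryData', 'Render')
ORDER BY TimeStart DESC;
"""

# List the most recent executions, including the ConceptualSchema events
# that correspond to users viewing Power BI reports.
conn = pyodbc.connect(CONN_STR, autocommit=True)
try:
    for row in conn.cursor().execute(QUERY):
        print(row.TimeStart, row.UserName, row.ItemAction, row.Status, row.ItemPath)
finally:
    conn.close()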

Querying the Event Viewer returned the following errors at the time we tried to render the report:

More details about the first errors

We have two error messages that seem to point in two different directions. In reality, the first error messages are not very useful: although they refer to Analysis Services, the report was not connecting to an external SSAS instance. Power BI Report Server uses an internal Analysis Services engine to load and query Power BI report models, so the error was raised by this internal PBIRS engine, not by a standalone SQL Server Analysis Services instance.

Power BI Report Server may report an Analysis Services-related error even when the report does not connect to an external SSAS instance. This is because PBIRS uses an internal Analysis Services engine to host and execute the Power BI semantic model behind the report. In DirectQuery mode, the data remains in SQL Server, but the report model, metadata, relationships, measures, and DAX queries are still processed through this internal engine.

When a user opens the report, PBIRS asks this local Analysis Services process to load the model and generate the queries sent to SQL Server.

Therefore, if the internal engine fails while loading the model, validating metadata, or connecting to the SQL Server data source, the error may mention Analysis Services. This does not mean that the report is connected to a standalone SSAS instance.

More details about the second errors

This was the second error that pointed us in the right direction to actually resolve the issue. After looking at it more closely, we started considering connection encryption and certificates. This problem is documented, and several solutions are available.

Indeed, the SQL Server instance queried to retrieve the data did not have a certificate issued by a trusted certificate authority; it was using a self-signed certificate.

This can lead to errors such as the ones mentioned above, or errors like the following:

Microsoft SQL: A connection was successfully established with the server, but then an error occurred during the login process. Provider: SSL Provider, error: 0 – The certificate chain was issued by an authority that is not trusted.

Solutions

We had at least three options to resolve this issue:

  • Change the connection mode to Import
  • Install a certificate issued by a trusted certificate authority; however, this would represent a major change
  • Create a new environment variable on the Power BI Report Server

The client chose the easiest solution to implement: creating the corresponding environment variable.

We then restarted the corresponding Power BI Report Server service and this resolved the issue.

References:

https://learn.microsoft.com/en-us/power-bi/report-server/scheduled-refresh-troubleshoot

https://learn.microsoft.com/en-us/power-query/connectors/sql-server#sql-server-certificate-isnt-trusted-on-the-client-power-bi-desktop-or-on-premises-data-gateway

Thank you. Amine Haloui

The article A Misleading SSAS Error in Power BI Report Server When Using DirectQuery Mode first appeared on dbi Blog.

SQLDay 2026 Workshops Overview

Yann Neuhaus - Thu, 2026-05-14 16:16

SQLDay 2026 offers a full-day workshop program on 11 May 2026, before the main conference scheduled for 12–13 May 2026 in Wrocław, with onsite and online participation options depending on the session. The workshops cover several areas of the modern data platform: advanced BI, AI and MLOps, SQL performance tuning, PostgreSQL adoption, and Microsoft Fabric automation.

DAX – Beyond the Basics

This workshop is designed for Power BI users who already know the basics of DAX but now need to solve more complex business problems. The focus is on moving from simple reports to reusable, efficient and business-oriented DAX patterns.

Participants will work on practical scenarios such as advanced slicer logic, hierarchical calculations, year-to-date reporting, visual calculations, cumulative totals, ranking, and relative-period analysis. The main objective is to extend the participant’s DAX toolbox and help them write expressions that are both more powerful and better performing.

AI in Databricks: Training, Deployment and Monitoring

This Polish onsite workshop covers the complete lifecycle of machine learning models in Databricks. The goal is to show how to move from data preparation to training, automation, deployment and monitoring in a production-oriented environment.

The workshop focuses on the practical implementation of MLOps using Databricks and MLflow. Topics include AI/ML architecture, data pipelines, feature engineering, model training, deep learning, CI/CD, orchestration, model versioning and monitoring. It is mainly targeted at engineers, data scientists and architects who are already working with machine learning or planning to start.

Building an Intelligent Agent in One Day with Copilot Studio

This Polish onsite workshop focuses on building conversational and autonomous agents with Microsoft Copilot Studio. The format is highly practical, with most of the time dedicated to hands-on exercises.

Participants will build agents that automate business processes, use multimodal data, generate data-driven answers and connect to enterprise data sources. The workshop also covers Dataverse grounding, flows, plugins, actions, autonomous triggers, Responsible AI, moderation and access control. It is a good fit for participants who want to understand how Copilot Studio can be used beyond simple chatbot scenarios.

Advanced T-SQL Triage: The Art of Fixing Terrible Code

This workshop is focused on real-world SQL Server troubleshooting and refactoring. The starting point is familiar to many DBAs and developers: complex stored procedures, poor query patterns, blocking data modifications, bad use of CTEs, problematic window functions, indexed views, dynamic SQL, user-defined functions and execution plans that are difficult to understand.

The objective is not only to identify what is slow, but also to understand why it is slow and how to rewrite it properly. This session is especially relevant for people who regularly inherit problematic T-SQL code and need a structured way to fix it without guessing.

Adding PostgreSQL to your SQL Server Skill Set

This workshop targets SQL Server professionals who need to add PostgreSQL to their technical scope. The context is clear: many organizations are adding PostgreSQL without immediately replacing SQL Server, which creates a need for people who understand both platforms.

The workshop compares the two database engines, explains the areas of overlap, and highlights the differences that can make PostgreSQL challenging for SQL Server users. It also covers tooling, documentation, cloud options and practical resources to support the learning path.

Automating Your Microsoft Fabric Data Platform: From Blueprint to Reality

This onsite hands-on lab focuses on automation in Microsoft Fabric. The goal is to help participants automate the full lifecycle of a Fabric data platform, from design and setup to deployment and documentation.

The workshop covers platform setup using code and configuration scripts, metadata-driven ingestion, semantic model foundations, CI/CD with GitHub and Azure DevOps, Fabric CLI, REST APIs and the fabric-cicd Python library. The expected outcome is a more robust, scalable and repeatable approach to building Fabric solutions, with less manual work and lower operational risk.

Conclusion

The SQLDay 2026 workshop program is clearly oriented toward practical implementation. Each session addresses a common challenge faced by data teams: improving analytical models, industrializing AI, fixing complex SQL code, extending SQL Server skills to PostgreSQL, or automating a modern Microsoft Fabric platform.

The common thread is operational efficiency. These workshops are not only about learning features; they are about applying them in real environments, with constraints such as performance, maintainability, automation, governance and production readiness.

Thank you. Amine Haloui.

The article SQLDay 2026 Workshops Overview first appeared on dbi Blog.
