Yann Neuhaus
FreeBSD basics – 8 – Running Linux Containers
When people talk about containers in FreeBSD, they usually talk about Jails. We’ve already seen how this works in general here and here. An additional option you have is to install Podman on FreeBSD. Podman is well known in the Linux world and can be used to build and run containers. With the release of FreeBSD 14.2, the FreeBSD project is now also publishing OCI-compatible container images of FreeBSD.
We again start with a fresh installation of FreeBSD 14.2; nothing was added or modified. Podman, like most additional software, can either be installed as a binary package or from the ports collection. If you want to install from the ports collection, you can find Podman here:
root@freebsd14-latest:~ $ find /usr/ports/ -name "*podman*"
/usr/ports/sysutils/podman
/usr/ports/sysutils/podman/files/podman_service.in
/usr/ports/sysutils/podman/files/podman.in
/usr/ports/sysutils/podman-suite
As it is faster to install from binary packages, we’ll do it like this for the scope of this post:
root@freebsd14-latest:~ $ pkg update
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+https://pkg.FreeBSD.org/FreeBSD:14:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
Installing pkg-1.21.3...
Extracting pkg-1.21.3: 100%
Updating FreeBSD repository catalogue...
Fetching meta.conf: 100% 178 B 0.2kB/s 00:01
Fetching data.pkg: 100% 7 MiB 7.5MB/s 00:01
Processing entries: 100%
FreeBSD repository update completed. 35568 packages processed.
All repositories are up to date.
root@freebsd14-latest:~ $ pkg upgrade
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Updating database digests format: 100%
Checking for upgrades (1 candidates): 100%
Processing candidates (1 candidates): 100%
Checking integrity... done (0 conflicting)
Your packages are up to date.
root@freebsd14-latest:~ $ pkg install podman
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 33 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
conmon: 2.1.12_3
containernetworking-plugins: 0.5_9
containers-common: 0.59.1
gettext-runtime: 0.22.5
...
--
The FreeBSD port of the Podman container engine is experimental and should be
used for evaluation and testing purposes only.
$ sudo podman run --rm docker.io/dougrabson/hello
Podman can restart containers after a host is rebooted. To enable this, use:
$ sudo sysrc podman_enable=YES
and start the container with a restart policy:
$ sudo podman run -d --restart=always myimage
It is possible to run many Linux container images using FreeBSD's Linux emulation:
$ sudo sysrc linux_enable=YES
$ sudo service linux start
$ sudo podman run --rm --os=linux alpine cat /etc/os-release | head -1
NAME="Alpine Linux"
The package gives you quite a bit of information at the end of the installation process. First of all, this is not yet considered production ready and should be used for testing only. Second, you will need to enable Podman and the Linux compatibility layer:
root@freebsd14-latest:~ $ sysrc podman_enable=YES
podman_enable: -> YES
root@freebsd14-latest:~ $ sysrc linux_enable=YES
linux_enable: NO -> YES
root@freebsd14-latest:~ $ service linux start
If you want the containers to restart automatically, you should also mount the fdescfs file system:
root@freebsd14-latest:~ $ mount -t fdescfs fdesc /dev/fd
root@freebsd14-latest:~ $ df -h | grep fd
fdescfs 1.0K 0B 1.0K 0% /compat/linux/dev/fd
fdescfs 1.0K 0B 1.0K 0% /dev/fd
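To make this mount persistent across reboots, an entry like the following can be added to /etc/fstab (this is the standard fdescfs fstab entry):

fdesc   /dev/fd   fdescfs   rw   0   0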
If you try to search for container images using Podman, you’ll notice that there is no result:
root@freebsd14-latest:~ $ podman search rockylinux
The reason is that no container registries are defined, but this can easily be fixed by adding a registry to “/usr/local/etc/containers/registries.conf”:
root@freebsd14-latest:~ $ tail -1 /usr/local/etc/containers/registries.conf
unqualified-search-registries = ['docker.io']
Once this is configured, you can search the registry for container images:
root@freebsd14-latest:~ $ podman search rockylinux
NAME DESCRIPTION
docker.io/rockylinux/rockylinux
docker.io/library/rockylinux The official build of Rocky Linux.
docker.io/rockylinux/rocky-toolbox Toolbox image for Rocky Linux - https://gith...
docker.io/rockylinux/rockylinux-shim RockyLinux shim-review images
docker.io/amd64/rockylinux The official build of Rocky Linux.
docker.io/litmusimage/rockylinux
docker.io/arm64v8/rockylinux The official build of Rocky Linux.
docker.io/ericpaulsen/rockylinux
docker.io/romcheck/rockylinux
docker.io/robertdebock/rockylinux Container to test Ansible roles in, includin...
docker.io/lightnear/rockylinux
docker.io/accupara/rockylinux
docker.io/detravi/rockylinux
docker.io/ppc64le/rockylinux The official build of Rocky Linux.
docker.io/aursu/rockylinux
docker.io/s390x/rockylinux The official build of Rocky Linux.
docker.io/uacontainers/rockylinux Up-to-date Rocky Linux Docker images with th...
docker.io/buluma/rockylinux
docker.io/gammabytehosting/rockylinux Rocky Linux minimal image.
docker.io/whatwewant/rockylinux
docker.io/kubestation/rockylinux
docker.io/4geniac/rockylinux
docker.io/jeksterslab/rockylinux
docker.io/ihanick/rockylinux
docker.io/douglarek/rockylinux
Running a container is exactly the same as you know it from Linux:
root@freebsd14-latest:~ $ podman run -it --rm --platform linux/amd64 rockylinux/rockylinux:9 /bin/bash
Resolving "rockylinux/rockylinux" using unqualified-search registries (/usr/local/etc/containers/registries.conf)
Trying to pull docker.io/rockylinux/rockylinux:9...
Getting image source signatures
Copying blob 3442e16c7069 done |
Copying config bb8a97547d done |
Writing manifest to image destination
WARN[0009] Failed to load cached network config: network podman not found in CNI cache, falling back to loading network podman from disk
WARN[0009] Failed to load cached network config: network podman not found in CNI cache, falling back to loading network podman from disk
Error: plugin type="bridge" failed (add): cni plugin bridge failed: The pf kernel module must be loaded to support ipMasq networks
… and this fails because we forgot to configure the container networking. There actually are instructions on how to do this in the output of the Podman package installation:
Message from containernetworking-plugins-0.5_9:
--
Container networking relies on NAT to allow container network packets
out to the host's network. This requires a PF firewall to perform the
translation. A simple example is included - to use it:
# cp /usr/local/etc/containers/pf.conf.sample /etc/pf.conf
...
Edit /etc/pf.conf and set v4egress_if, v6egress_if to your network interface(s)
...
# sysrc pf_enable=YES
# service pf start
The sample PF configuration includes support for port redirections. These are
implemented as redirect rules in anchors nested under cni-rdr.
Support for redirecting connections from the container host to services running
inside a container is included for FreeBSD 13.3 and later. To enable this, first
load the pf kernel module, by adding pf_load="YES" to /boot/loader.conf and
enable PF support for these redirections using sysctl:
# kldload pf
# sysctl net.pf.filter_local=1
# service pf restart
Once this is done, all is fine:
root@freebsd14-latest:~ $ podman run -it --rm --platform linux/amd64 rockylinux/rockylinux:9 /bin/bash
[root@71ef409c1d40 /]$ cat /etc/os-release
NAME="Rocky Linux"
VERSION="9.5 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.5 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
VENDOR_NAME="RESF"
VENDOR_URL="https://resf.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2032-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.5"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"
Quite impressive and easy to set up. You might wonder if running Linux containers on FreeBSD actually is a thing. Personally, I think this is really great, because it gives you another option in the FreeBSD world. Containers are everywhere today, and having more choice about where to run them is great.
L’article FreeBSD basics – 8 – Running Linux Containers est apparu en premier sur dbi Blog.
Oracle Database Replay for a RAC environment
The database Capture/Replay feature of the Real Application Testing (RAT) suite enables you to capture the workload from a production database and replay it on a test or development environment. This allows for realistic testing of database changes, such as upgrades, patches, or hardware migrations.
In this blog, we’ll explore how to accomplish this when both your production and test databases are configured in a two-node RAC environment.
Restrictions: You should be aware that some workload operations are not supported, such as flashback queries and distributed transactions.
In most cases, Database Replay refrains from capturing these operations, or does not replay them. However, you should refer to the Oracle documentation for the full list of non-supported operations and related considerations.
The Workload Capture
The capture procedure on a RAC database is the same as for a single-instance database; you just need to run it on one of the instances, without adding an instance filter.
The path you use to store the raw workload files must be defined as a database directory, and must be empty:
SQL> CREATE DIRECTORY capture_dir AS '/u90/db_capture/';
It is recommended, but NOT mandatory, to restart your database before the capture, so that all current processes are finished and the capture catches absolutely all changes from the timestamp of its start.
Let’s take for instance a 4-hour workload capture (14400 seconds):
BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (
    name     => 'capture_4h',
    dir      => 'CAPTURE_DIR',
    duration => 14400);
END;
/
This call will capture the workload on both RAC instances for 4 hours and then stop automatically. If you need to stop it earlier, you can do it manually using:
SQL> exec DBMS_WORKLOAD_CAPTURE.finish_capture;
The capture process does not significantly impact performance (it adds about 3-5% CPU overhead). However, it may require substantial storage space. A common reason to stop the capture prematurely is running out of space in the designated directory.
Oracle Support Note Doc ID 1920275.1 provides a method to estimate the space required. For a detailed explanation, please refer to the note:
The general formula for the space needed, as provided by this note, is:

2 * (Bytes received via SQL*Net from client, as seen in AWR or Statspack)
Alternatively, you can perform a short capture, such as for 10 minutes, and extrapolate the estimate based on the size captured during that period.
At the end of the capture, you should generate an HTML summary and export the related AWR data. For that, you’ll need the ID of your capture:
SQL> SELECT id, name FROM dba_workload_captures;

        ID NAME
---------- ------------------------------
        11 capture_4h
         2 test_capture_1

SQL>
So the capture ID in my case is 11.
– Generating the HTML report (it is generated in the directory of the capture):
SQL> DECLARE
       l_report CLOB;
     BEGIN
       l_report := DBMS_WORKLOAD_CAPTURE.report(capture_id => 11,
                                                format     => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
     END;
     /
– Exporting the AWR data (this generates a Data Pump dump file containing all AWR data for the period of the capture):
SQL> exec DBMS_WORKLOAD_CAPTURE.export_awr (capture_id => 11);
To prepare the replay you need two things:
– Copy all the files from the capture directory to the test system (as this is a two-node RAC, they should be placed on a shared location).
– Prepare a test database: the HTML report of the capture contains the SCN of its start, and it is best practice to duplicate your database up to that specific SCN in the test environment, before applying the changes you want to evaluate (for instance an upgrade, or parameter changes); a sketch of such a duplication follows below.
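A minimal backup-based RMAN sketch of such a duplication (testdb and the SCN value are hypothetical placeholders; your actual DUPLICATE command will need the usual channel, spfile and file-name settings of your environment):

RMAN> DUPLICATE TARGET DATABASE TO testdb
        UNTIL SCN 123456789
        NOFILENAMECHECK;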
Once your test database is ready, and the captured files are available on a shared location:
– We create the directory for the replay:
SQL> CREATE OR REPLACE DIRECTORY replay_dir AS '/u90/replay/';
The captured raw files must be processed by the database before the capture can be replayed. This step can take some time depending on the size of the workload raw files:
SQL> exec DBMS_WORKLOAD_REPLAY.process_capture('REPLAY_DIR');
Then we initialize the replay (just an initialization, the replay is not yet started):
SQL> exec DBMS_WORKLOAD_REPLAY.initialize_replay(replay_name => 'replay_capture_4h', replay_dir => 'REPLAY_DIR');
This step generates an entry in the DBA_WORKLOAD_REPLAYS view, from which we must retrieve the associated ID:
SQL> select id, status from dba_workload_replays;

        ID STATUS
---------- ----------------------------------------
         1 COMPLETED
        11 COMPLETED
        21 INITIALIZED
So, here, the ID of the replay is 21.
The next step is to remap the workload connections. To be able to spread the replay over both instances, we use a connect descriptor pointing to a SCAN host and a SCAN service:
SQL> set linesize 250
SQL> set pages 9999
SQL> spool remap_connections.sql
SQL> select 'exec dbms_workload_replay.remap_connection ( connection_id => '
            || conn_id ||
            ', replay_connection => ''(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=a_scan_host)(PORT=your_port))(CONNECT_DATA=(SERVICE_NAME=a_scan_service)))'');'
     from dba_workload_connection_map
     where replay_id = 21;
SQL> spool off
Then we clean up the spooled file and execute the script:

sqlplus / as sysdba

SQL> @remap_connections.sql
…………………
PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

SQL>
Next we prepare the replay:

SQL> exec DBMS_WORKLOAD_REPLAY.prepare_replay (synchronization => TRUE);

PL/SQL procedure successfully completed.

SQL>
synchronization => TRUE ensures that the replay sessions are synchronized, maintaining the order of and dependencies between transactions as they occurred during the workload capture.
Checking the status of the replay:

SQL> select id, status from dba_workload_replays;

        ID STATUS
---------- ----------------------------------------
         1 COMPLETED
        11 COMPLETED
        21 PREPARE
Next we need to calibrate the replay clients, using the wrc program in calibrate mode. wrc is the Workload Replay Client, the tool used to replay database workload captured by Oracle Database.
The purpose of the calibrate mode is to ensure that the workload replay environment has enough replay clients to handle the workload in the capture. After calibration, the wrc tool outputs the minimum number of replay clients required:
[oracle@LinuxHost ~]$ wrc mode=calibrate replaydir=/u90/replay

Workload Replay Client: Release 19.23.0.0.0 - Production on Wed Dec 11 10:58:06 2024

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Workload Report
-----------------------
Recommendation:
Consider using at least 52 clients divided among 14 CPU(s)
You will need at least 152 MB of memory per client process.
If your machine(s) cannot match that number, consider using more clients.

Workload Characteristics:
- max concurrency: 3984 sessions
- total number of sessions: 56136

Assumptions:
- 100 concurrent sessions per client process
- 4 client processes per CPU
- 256 KB of memory cache per concurrent session
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE

[oracle@LinuxHost ~]$
Based on this calibration, and after checking the CPU and memory available, I will use 56 clients (56 x 152 MB = 8.5 GB minimum memory needed for the clients: OK for me).
This means 28 clients per node, as I want to balance the overhead across the two nodes.
The command to start a single client is:

[oracle@LinuxHost]$ wrc system/password@test mode=replay replaydir=/u90/replay
But I prepared a script to start 28 clients at once in background mode, for instance on the first node:

[oracle@LinuxHost]$ cat replay_node1.sh
#!/bin/bash
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
export PATH=/home/oracle/.local/bin:/home/oracle/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/u01/app/oracle/product/19.0.0.0/dbhome_1/bin

# Configuration
USER="system"
PASSWORD="Oracle$"
DB_CONNECT="Instance1"
MODE="replay"
REPLAY_DIR="/u90/replay"
LOG_DIR="./wrc_logs"   # Directory to store log files
NUM_CLIENTS=28         # Number of clients to start

# Ensure the log directory exists
mkdir -p $LOG_DIR

# Start WRC clients in a loop
for ((i=1; i<=NUM_CLIENTS; i++))
do
  nohup wrc ${USER}/${PASSWORD}@${DB_CONNECT} mode=${MODE} replaydir=${REPLAY_DIR} > ${LOG_DIR}/wrc_client_$i.log 2>&1 &
  echo "Started WRC client $i, log: ${LOG_DIR}/wrc_client_$i.log"
done

echo "All $NUM_CLIENTS clients started. Logs are in $LOG_DIR."
[oracle@LinuxHost]$
Starting the clients on the first node:
[oracle@LinuxHost]$ ./replay_node1.sh
Started WRC client 1, log: ./wrc_logs/wrc_client_1.log
Started WRC client 2, log: ./wrc_logs/wrc_client_2.log
Started WRC client 3, log: ./wrc_logs/wrc_client_3.log
Started WRC client 4, log: ./wrc_logs/wrc_client_4.log
Started WRC client 5, log: ./wrc_logs/wrc_client_5.log
Started WRC client 6, log: ./wrc_logs/wrc_client_6.log
Started WRC client 7, log: ./wrc_logs/wrc_client_7.log
Started WRC client 8, log: ./wrc_logs/wrc_client_8.log
Started WRC client 9, log: ./wrc_logs/wrc_client_9.log
Started WRC client 10, log: ./wrc_logs/wrc_client_10.log
Started WRC client 11, log: ./wrc_logs/wrc_client_11.log
Started WRC client 12, log: ./wrc_logs/wrc_client_12.log
Started WRC client 13, log: ./wrc_logs/wrc_client_13.log
Started WRC client 14, log: ./wrc_logs/wrc_client_14.log
Started WRC client 15, log: ./wrc_logs/wrc_client_15.log
Started WRC client 16, log: ./wrc_logs/wrc_client_16.log
Started WRC client 17, log: ./wrc_logs/wrc_client_17.log
Started WRC client 18, log: ./wrc_logs/wrc_client_18.log
Started WRC client 19, log: ./wrc_logs/wrc_client_19.log
Started WRC client 20, log: ./wrc_logs/wrc_client_20.log
Started WRC client 21, log: ./wrc_logs/wrc_client_21.log
Started WRC client 22, log: ./wrc_logs/wrc_client_22.log
Started WRC client 23, log: ./wrc_logs/wrc_client_23.log
Started WRC client 24, log: ./wrc_logs/wrc_client_24.log
Started WRC client 25, log: ./wrc_logs/wrc_client_25.log
Started WRC client 26, log: ./wrc_logs/wrc_client_26.log
Started WRC client 27, log: ./wrc_logs/wrc_client_27.log
Started WRC client 28, log: ./wrc_logs/wrc_client_28.log
All 28 clients started. Logs are in ./wrc_logs.
[oracle@LinuxHost]$
We do the same thing on the second node; we just need to change the DB_CONNECT variable in the script.
Finally, we can start the replay:

BEGIN
  DBMS_WORKLOAD_REPLAY.start_replay;
END;
/
The replay is now started and can be monitored by:
– Querying the views dba_workload_replays and v$workload_replay_thread.
– Checking the clients’ log files.
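For instance, a minimal monitoring sketch (id and status were already used above; the start_time and end_time columns are assumed from the standard data dictionary):

SQL> select id, status, start_time, end_time from dba_workload_replays where id = 21;
SQL> select count(*) from v$workload_replay_thread;   -- number of active replay client connections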
Hope it helps !
L’article Oracle Database Replay for a RAC environment est apparu en premier sur dbi Blog.
Microsoft Foundry in Action: Building Smarter AI Solutions with Your Data
At the 2024 edition of Microsoft Ignite, two key themes dominated the stage: Artificial Intelligence (AI) and Security. These pillars reflect the current and future technological landscape. Staying true to its innovative legacy, Microsoft unveiled Microsoft Foundry, a unified platform designed to simplify the design, customization, and management of AI solutions.
In this article, I’ll share my hands-on experience testing this innovative platform through an intelligent application built with my own data.
2 Context
The integration of AI into our daily operations is no longer a possibility; it is a present reality. However, many misconceptions still surround AI, leading to hesitancy or misuse. Here’s a reality check:
Misconceptions about AI:
- AI will replace human jobs.
- AI freely uses your data to train models.
- AI is only for large corporations.
- AI is too complex for average users.
- AI is only useful in tech industries.
The Reality:
- AI handles repetitive tasks, freeing up humans for creative work.
- Provides a creative starting point (overcoming “blank page” anxiety).
- Reduces human error in routine tasks.
- Enables 24/7 operational capabilities.
- Facilitates predictive maintenance.
- Your data remains secure and isn’t used to train underlying foundation models without consent.
Recognizing the importance of ethics and safety in AI, Microsoft integrates Responsible AI, Microsoft Purview for data governance, and content safety protocols into its solutions:
- Responsible AI: Microsoft emphasizes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
- Microsoft Purview: A comprehensive tool to enforce governance policies that dictate what AI can return to end users.
- Content Safety: Prevents harmful outputs by monitoring and filtering responses.
For this demonstration, I built an interactive AI solution based on my own data, showcasing Microsoft Foundry’s capabilities.
3.2 Overview of Microsoft Foundry
Microsoft Foundry is a comprehensive platform for developing and deploying generative AI apps responsibly. As stated by Microsoft here, the platform is designed for developers to:
- Build generative AI applications on an enterprise-grade platform.
- Explore, build, test, and deploy using cutting-edge AI tools and ML models, grounded in responsible AI practices.
- Collaborate with a team for the full life-cycle of application development.
Microsoft Foundry is incredibly accessible:
- Free to explore
- No sign-in required for basic features
- Full functionality with an Azure account
- Access via https://int.ai.azure.com/
Its main entry point is shown in the picture below:
Clicking on Explore models opens the model catalog. At the time of writing, the platform boasted an impressive 1,812 models, a number that is constantly expanding. This vast library means there’s likely a model for almost any use case.
3.3 Prerequisites
Before diving into application development, the following resources were set up:
- Resource Group (rg-foundrytest): Contains all resources for efficient management and deletion to avoid costs.
- AI Search (aisearch-foundry): Enables intelligent search capabilities.
- Azure OpenAI (openai-foundry): Provides foundational AI services.
- Custom Data: For this demo, I uploaded the file Multiturn-ContosoBenefits.pdf.
A project is where you do most of your development work. Projects are easy-to-manage containers for your work, and the key to collaboration, organization, and connecting data and other services.
From the Foundry portal, click Create Project and proceed with default settings.
A hub is a collaborative environment where team members can share projects.
The project is created.
After setting up the project, the necessary resources were verified in the Azure portal.
3.4.2 Adding Own Data to the Project
The bring-your-own-data feature supports diverse sources, including on-premises and cloud data, as depicted in the following picture:
But I simplified the process by uploading the Multiturn-ContosoBenefits.pdf as follows:
From the newly created project homepage, click on New Data.
Then, upload the mentioned Multiturn-ContosoBenefits.pdf
The following screen confirms that the custom data is added to the project:
3.4.3 Model Selection and Integration
I selected the GPT-35-turbo-16k model and integrated it into the project through the following steps:
As already said, more than 1800 models are available
At this stage, the integrated model gpt-35-turbo-16k (version: 0613) is tied only to its pre-trained data. Thanks to that, the following query returns a valid result:
3.4.4 Connect the Model to the Custom Data
Now, we instruct the model to use our custom data.
The following picture confirms that our model is now linked to our custom data:
From now on, the model is configured to use our custom data exclusively.
Because of that, when querying content outside the data source’s scope, the system returned an error as shown below:
However, queries related to the data yield precise results. In this case, the system provides accurate and context-specific responses, demonstrating strict adherence to the defined dataset.
It’s remarkable how seamlessly it provides relevant information!
3.4.5 Integrate into Client Applications
Once the model is fine-tuned, its capabilities are easily integrated into client applications following the architecture below:
The model’s capabilities can be embedded into a client application using pre-generated code snippets, available in languages like C#, curl, Go, Java, JavaScript, JSON and Python.
The following picture shows how to get the generated code
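As an illustration, a minimal sketch of such a call against the Azure OpenAI chat completions endpoint, grounded on an Azure AI Search index (the deployment, index name and keys here are placeholders, and the exact payload depends on the API version you pick; this is not the literal snippet generated by the portal):

curl "https://openai-foundry.openai.azure.com/openai/deployments/gpt-35-turbo-16k/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-openai-key>" \
  -d '{
    "messages": [
      {"role": "user", "content": "What benefits does Contoso offer?"}
    ],
    "data_sources": [{
      "type": "azure_search",
      "parameters": {
        "endpoint": "https://aisearch-foundry.search.windows.net",
        "index_name": "<your-index>",
        "authentication": {"type": "api_key", "key": "<your-search-key>"}
      }
    }]
  }'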
The integration can also be easily done by following the wizard to deploy an App Service as shown below:
4 In Summary
Microsoft Foundry is a game-changer in AI application development. It democratizes AI by enabling users, regardless of technical expertise, to:
- Leverage an extensive catalog of pre-trained models.
- Tailor applications using custom datasets.
- Seamlessly integrate AI capabilities into enterprise solutions.
Stay tuned for more hands-on insights in upcoming posts!
L’article Microsoft Foundry in Action: Building Smarter AI Solutions with Your Data est apparu en premier sur dbi Blog.
FreeBSD basics – 7 – The Linux compatibility layer
Now that we know how to work with the ports collection, let’s have a look at another option you have in FreeBSD: sometimes it might be required to run unmodified Linux binaries on FreeBSD, and to achieve this, FreeBSD comes with a Linux compatibility layer. This is optional and not enabled by default, but pretty easy to set up and use. Before you do that, please check if the functionality you’re looking for is not already available as either a package or in the ports collection. If it is there, use it from there.
To enable the layer at boot time, use “sysrc” as we’ve done in the previous post and start the corresponding service:
root@freebsd14:~ $ sysrc linux_enable="YES"
linux_enable: NO -> YES
root@freebsd14:~ $ service linux start
root@freebsd14:~ $ service linux describe
Enable Linux ABI
Once the service is running, you’ll see additional mountpoints which Linux applications usually are expecting under “/compat/linux”:
root@freebsd14:~ $ df -h | grep compat
linprocfs 8.0K 0B 8.0K 0% /compat/linux/proc
linsysfs 8.0K 0B 8.0K 0% /compat/linux/sys
devfs 1.0K 0B 1.0K 0% /compat/linux/dev
fdescfs 1.0K 0B 1.0K 0% /compat/linux/dev/fd
tmpfs 2.2G 4.0K 2.2G 0% /compat/linux/dev/shm
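If you want these mounts in place independently of the service script, the FreeBSD handbook suggests /etc/fstab entries along these lines (a sketch; adjust the tmpfs size to your system):

# /etc/fstab
linprocfs   /compat/linux/proc      linprocfs   rw,late                     0   0
linsysfs    /compat/linux/sys       linsysfs    rw,late                     0   0
devfs       /compat/linux/dev       devfs       rw,late                     0   0
fdescfs     /compat/linux/dev/fd    fdescfs     rw,late,linrdlnk            0   0
tmpfs       /compat/linux/dev/shm   tmpfs       rw,late,size=1g,mode=1777   0   0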
This also loaded the required kernel modules:
root@freebsd14:~ $ kldstat | grep -i linux
10 1 0xffffffff82e2f000 30a80 linux.ko
11 4 0xffffffff82e60000 c2a8 linux_common.ko
12 1 0xffffffff82e6d000 2de10 linux64.ko
The next step is to install the Linux userland. Currently, two of those are available (Debian is available with the debootstrap package):
root@freebsd14:~ $ pkg search linux_base
linux_base-c7-7.9.2009_3 Base set of packages needed in Linux mode (Linux CentOS 7.9.2009)
linux_base-rl9-9.4 Base set of packages needed in Linux mode (Rocky Linux 9.4)
As CentOS 7 is end of life, we’ll obviously go for the Rocky Linux 9.4 userland (this is not the most recent minor release as of today, but it is close). You’ll also see those userlands in the ports tree in the “emulators” category:
root@freebsd14:~ $ ls -la /usr/ports/emulators/linux_base-*
/usr/ports/emulators/linux_base-c7:
total 124
drwxr-xr-x 2 root wheel 8 Dec 11 09:49 .
drwxr-xr-x 193 root wheel 194 Dec 11 09:49 ..
-rw-r--r-- 1 root wheel 6478 Dec 11 09:49 Makefile
-rw-r--r-- 1 root wheel 36074 Dec 11 09:49 distinfo
-rw-r--r-- 1 root wheel 468 Dec 11 09:49 pkg-descr
-rw-r--r-- 1 root wheel 95620 Dec 11 09:49 pkg-plist.aarch64
-rw-r--r-- 1 root wheel 108836 Dec 11 09:49 pkg-plist.amd64
-rw-r--r-- 1 root wheel 95512 Dec 11 09:49 pkg-plist.i386
/usr/ports/emulators/linux_base-rl9:
total 132
drwxr-xr-x 2 root wheel 8 Dec 11 09:49 .
drwxr-xr-x 193 root wheel 194 Dec 11 09:49 ..
-rw-r--r-- 1 root wheel 6943 Dec 11 09:49 Makefile
-rw-r--r-- 1 root wheel 1380 Dec 11 09:49 Makefile.version
-rw-r--r-- 1 root wheel 40711 Dec 11 09:49 distinfo
-rw-r--r-- 1 root wheel 466 Dec 11 09:49 pkg-descr
-rw-r--r-- 1 root wheel 91404 Dec 11 09:49 pkg-plist.aarch64
-rw-r--r-- 1 root wheel 104964 Dec 11 09:49 pkg-plist.amd64
As usual, when it comes to binary packages, pkg is used to bring this onto the system:
root@freebsd14:~ $ pkg install linux_base-rl9
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Updating database digests format: 100%
The following 1 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
linux_base-rl9: 9.4
Number of packages to be installed: 1
The process will require 297 MiB more space.
39 MiB to be downloaded.
Proceed with this action? [y/N]: y
[1/1] Fetching linux_base-rl9-9.4.pkg: 100% 39 MiB 20.5MB/s 00:02
Checking integrity... done (0 conflicting)
[1/1] Installing linux_base-rl9-9.4...
[1/1] Extracting linux_base-rl9-9.4: 100%
Looking again at the “/compat/linux” structure, this is now populated:
root@freebsd14:~ $ ls -la /compat/linux/
total 39
drwxr-xr-x 11 root wheel 16 Dec 11 17:04 .
drwxr-xr-x 3 root wheel 3 Dec 11 16:15 ..
drwxr-xr-x 2 root wheel 2 Dec 11 17:04 afs
lrwxr-xr-x 1 root wheel 7 May 16 2022 bin -> usr/bin
drwxr-xr-x 10 root wheel 512 Dec 11 17:04 dev
drwxr-xr-x 24 root wheel 60 Dec 11 17:04 etc
lrwxr-xr-x 1 root wheel 7 May 16 2022 lib -> usr/lib
lrwxr-xr-x 1 root wheel 9 May 16 2022 lib64 -> usr/lib64
drwxr-xr-x 2 root wheel 2 Dec 11 17:04 opt
dr-xr-xr-x 1 root wheel 0 Dec 11 17:06 proc
lrwxr-xr-x 1 root wheel 8 Oct 31 02:04 run -> /var/run
lrwxr-xr-x 1 root wheel 8 May 16 2022 sbin -> usr/sbin
drwxr-xr-x 2 root wheel 2 Dec 11 17:04 srv
dr-xr-xr-x 1 root wheel 0 Dec 11 17:06 sys
drwxr-xr-x 8 root wheel 9 Dec 11 17:04 usr
drwxr-xr-x 16 root wheel 17 Dec 11 17:04 var
If we chroot into this, we have a Rocky Linux environment to work with:
root@freebsd14:~ $ chroot /compat/linux/
sh-5.1$ cat /etc/os-release
NAME="Rocky Linux"
VERSION="9.4 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.4"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.4 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2032-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.4"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.4"
What we don’t have is rpm or dnf:
sh-5.1$ which rpm
which: no rpm in (/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin)
sh-5.1$ which dnf
which: no dnf in (/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin)
rpm can be made available by installing the rpm4 package:
root@freebsd14:~ $ pkg search rpm4
rpm4-4.18.2_1 Red Hat Package Manager
root@freebsd14:~ $ pkg install rpm4
Once we have that, we can download an RPM, e.g. from the Rocky Linux repositories:
root@freebsd14:/compat/linux $ fetch https://dl.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/Packages/u/unzip-6.0-57.el9.x86_64.rpm
unzip-6.0-57.el9.x86_64.rpm 179 kB 3851 kBps 00s
Confirm that unzip is not yet available:
root@freebsd14:/compat/linux $ find . -name "unzip"
Extract it into place, and verify it is usable:
root@freebsd14:/compat/linux $ rpm2cpio < unzip-6.0-57.el9.x86_64.rpm | cpio -id
787 blocks
root@freebsd14:/compat/linux $ find . -name "unzip"
./usr/bin/unzip
./usr/share/licenses/unzip
./usr/share/doc/unzip
root@freebsd14:/compat/linux $ ./usr/bin/unzip --help
UnZip 6.00 of 20 April 2009, by Info-ZIP. Maintained by C. Spieler. Send
bug reports using http://www.info-zip.org/zip-bug.html; see README for details.
Usage: unzip [-Z] [-opts[modifiers]] file[.zip] [list] [-x xlist] [-d exdir]
Default action is to extract files in list, except those in xlist, to exdir;
file[.zip] may be a wildcard. -Z => ZipInfo mode ("unzip -Z" for usage).
-p extract files to pipe, no messages -l list files (short format)
-f freshen existing files, create none -t test compressed archive data
-u update files, create if necessary -z display archive comment only
-v list verbosely/show version info -T timestamp archive to latest
-x exclude files that follow (in xlist) -d extract files into exdir
modifiers:
-n never overwrite existing files -q quiet mode (-qq => quieter)
-o overwrite files WITHOUT prompting -a auto-convert any text files
-j junk paths (do not make directories) -aa treat ALL files as text
-U use escapes for all non-ASCII Unicode -UU ignore any Unicode fields
-C match filenames case-insensitively -L make (some) names lowercase
-X restore UID/GID info -V retain VMS version numbers
-K keep setuid/setgid/tacky permissions -M pipe through "more" pager
-O CHARSET specify a character encoding for DOS, Windows and OS/2 archives
-I CHARSET specify a character encoding for UNIX and other archives
See "unzip -hh" or unzip.txt for more help. Examples:
unzip data1 -x joe => extract all files except joe from zipfile data1.zip
unzip -p foo | more => send contents of foo.zip via pipe into program more
unzip -fo foo ReadMe => quietly replace existing ReadMe if archive file newer
Nice. If you have the requirement to use Linux tools on FreeBSD, this can be very helpful.
L’article FreeBSD basics – 7 – The Linux compatibility layer est apparu en premier sur dbi Blog.
JSON in PostgreSQL : a query tuning case
Over the years, ORMs have industrialized query generation, allowing developers to focus on optimizing the few queries that truly need it. However, they can sometimes produce overly complex SQL with excessive joins and business logic, making tuning a nightmare. It is sometimes hard to know when you should split a query into multiple smaller parts. The complexity is amplified by the use of JSON, adding another layer for DBAs to manage; although this is not a new data type, best practices on how to use JSON in PostgreSQL are still something of a dark art. This blog post intends to shed some light on all of this with a practical example.
Improving ORM-Generated Queries
Example of an Inefficient ORM-Generated Query
Below is a truncated and anonymized version of a complex SQL query generated by an ORM, which I had to tune since the execution time (+90 min) wasn’t in line with the business need. Note that the original query was over 1500 lines:
WITH result_list AS (
SELECT o.id,
o.data AS order_data,
ARRAY_AGG(DISTINCT cr.id) AS customer_refs_id,
ARRAY_AGG(DISTINCT cr.data) AS customer_refs_data,
ARRAY_AGG(DISTINCT cp.policy_id) AS customer_policy_id,
ROW_NUMBER() OVER () AS rowid
FROM orders o
LEFT JOIN customer_refs cr ON cr.data->>'id' = o.data->'customer'->>'id'
LEFT JOIN customer_policies cp ON cp.policy_data->>'id' = o.data->'customer'->>'id'
AND cp.policy_data->>'region' = ANY (ARRAY['GLOBAL', 'REGION_A', 'REGION_B', 'REGION_C', 'REGION_D'])
AND (
(
cp.policy_data->>'status' IS NOT NULL
AND cp.policy_data->>'status' NOT IN ('inactive', 'blocked', 'suspended')
)
AND (
(
cp.policy_data->>'customer_type' IS NOT NULL
AND cp.policy_data->>'customer_type' = 'issuer'
)
OR (
cp.policy_data->>'customer_type' IS NOT NULL
AND cp.policy_data->>'customer_type' IN ('type1', 'type2', 'type3')
)
OR (
get_text_from_path(cp.policy_data->'shared_groups', '$[*]', 'NONE') IS NOT NULL
AND get_text_from_path(cp.policy_data->'shared_groups', '$[*]', 'NONE') && ARRAY['GROUP_1']
)
)
)
WHERE EXISTS (
SELECT 1
FROM account_policies ap
WHERE ap.policy_id = ARRAY_TO_STRING(ARRAY[
o.data->'account'->'id'->>'system_ref',
o.data->'account'->'id'->>'type_ref',
o.data->'account'->'id'->>'region',
o.data->'account'->'id'->>'id'], '-')
AND ap.policy_data->>'region' = ANY (ARRAY['GLOBAL', 'REGION_A', 'REGION_B', 'REGION_C', 'REGION_D'])
AND ap.policy_data->>'status' NOT IN ('inactive', 'blocked', 'suspended')
AND ap.policy_data->>'contract' != 'prospect'
AND (
ap.policy_data->>'account_type' IN ('typeX', 'typeY', 'typeZ')
OR get_text_from_path(ap.policy_data->'shared_groups', '$[*]', 'NONE') && ARRAY['GROUP_1']
)
)
AND (
(
o.data->>'order_type' = 'MONEY_ORDER'
AND (
o.data->'close_date'->>'value' > '2024-09-10'
OR o.data->'close_date'->>'value' IS NULL
OR o.data->'close_date'->>'value' = ''
)
)
OR (
o.data->>'order_type' != 'MONEY_ORDER'
AND COALESCE(
NULLIF(o.data->'valuation'->'quantity'->>'value', '')::DECIMAL,
0
) != 0
)
)
AND (
LOWER(o.data->>'display_name') LIKE '%current%'
OR LOWER(o.data->'product'->'item'->'item_name'->>'abbreviation') LIKE '%current%'
OR LOWER(o.data->'product'->'item'->'item_name'->>'full') LIKE '%current%'
OR LOWER(o.data->'product'->'item'->>'identifier') LIKE '%current%'
OR LOWER(o.data->'product'->'issuer'->>'display_name') LIKE '%current%'
OR get_text_for_search_from_path(o.data->'product'->'underlying', '$[*].item.item_name.abbreviation', 'LOWER') LIKE '%current%'
)
GROUP BY o.id, o.data
ORDER BY o.id
),
ordered_list AS (
SELECT *
FROM result_list
ORDER BY rowid
LIMIT 23
)
SELECT *
FROM ordered_list
ORDER BY rowid;
Key Characteristics of the Query:

- Multiple Joins: The query includes several LEFT JOIN operations with complex ON conditions involving JSON fields.
- Nested JSON Access: Frequent use of JSON operators (->, ->>) to access deeply nested fields.
- Custom Functions: Usage of functions like get_text_from_path and get_text_for_search_from_path, which may not be optimized for performance.
- Complex WHERE Conditions: The WHERE clause contains multiple nested conditions and subqueries, often involving JSON fields.

These characteristics cause several performance issues:

- Accessing deeply nested JSON fields using operators in large datasets can be slow; if not properly indexed, this causes full table scans and drastically increases execution time.
- The query performs numerous LOWER() and LIKE operations on JSON fields, which are computationally expensive and additionally prevent effective index usage.
- The indexes on JSON fields may not be utilized due to the way conditions are written.
- The use of LEFT JOIN with conditions that include functions and JSON field comparisons leads to inefficient execution plans.
- The EXISTS subquery further complicates the execution, potentially causing full table scans.
- ORMs often generate generic SQL that doesn’t consider database-specific optimizations.
- The abstraction can lead to redundant or unnecessary conditions and joins.
Limit (cost=292646835.98..292646835.99 rows=1 width=1813) (actual time=5504844.738..5504844.745 rows=23 loops=1)
Buffers: shared hit=3622772239 read=7862262 dirtied=3639
-> Sort (cost=292646835.98..292646835.99 rows=1 width=1813) (actual time=5502255.883..5502255.889 rows=23 loops=1)
Sort Key: (row_number() OVER (?))
Sort Method: quicksort Memory: 114kB
Buffers: shared hit=3622772239 read=7862262 dirtied=3639
-> WindowAgg (cost=292646835.93..292646835.97 rows=1 width=1813) (actual time=5502254.506..5502255.810 rows=43 loops=1)
Buffers: shared hit=3622772236 read=7862262 dirtied=3639
-> GroupAggregate (cost=292646835.93..292646835.96 rows=1 width=1805) (actual time=5502253.088..5502253.249 rows=43 loops=1)
Group Key: o.id
Buffers: shared hit=3622772236 read=7862262 dirtied=3639
-> Sort (cost=292646835.93..292646835.93 rows=1 width=1930) (actual time=5502244.952..5502244.964 rows=43 loops=1)
Sort Key: o.id, cr.id
Sort Method: quicksort Memory: 71kB
Buffers: shared hit=3622772236 read=7862262 dirtied=3639
-> Nested Loop (cost=0.98..292646835.92 rows=1 width=1930) (actual time=2015030.222..5502243.020 rows=43 loops=1)
Join Filter: (array_to_string(ARRAY[...]) = ap.policy_id)
Rows Removed by Join Filter: 246198062
Buffers: shared hit=3622772236 read=7862262 dirtied=3639
---
"Planning:"
" Buffers: shared hit=649 read=2"
"Planning Time: 133.626 ms"
"JIT:"
" Functions: 32"
" Options: Inlining true, Optimization true, Expressions true, Deforming true"
" Timing: Generation 226.277 ms, Inlining 631.748 ms, Optimization 801.960 ms, Emission 1156.811 ms, Total 2816.797 ms"
"Execution Time: 5505139.477 ms"
Observations from the Execution Plan:
Execution Time: ~90 minutes
Buffer Usage:
- Shared hits: ~3.6 billion
- Reads: ~7.8 million
- Dirtied: ~3,639
Key Issues:
- Functions in Joins and Conditions: Prevented effective index usage.
- Nested Loops: Inefficient row-by-row processing.
- Sort Operations: High-cost sorting due to lack of index usage.
- Full Table Scans: Extensive rows removed by join filters.
Suggested Optimized Query:
Here I modified some things to help the optimizer use the indexes:
- I added indexes and generated columns to avoid applying functions to some fields in the WHERE clause.
- I changed the LEFT JOIN to an INNER JOIN where possible, to reduce the data set early in the query execution.
- I rewrote some conditions to avoid the use of functions on the JSON fields.
ALTER TABLE orders ADD COLUMN customer_id TEXT GENERATED ALWAYS AS (data->'customer'->>'id') STORED;
ALTER TABLE orders ADD COLUMN order_type TEXT GENERATED ALWAYS AS (data->>'order_type') STORED;
ALTER TABLE orders ADD COLUMN display_name_lower TEXT GENERATED ALWAYS AS (LOWER(data->>'display_name')) STORED;
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
CREATE INDEX idx_orders_order_type ON orders (order_type);
CREATE INDEX idx_orders_display_name_lower ON orders (display_name_lower);
--- ... etc
WITH result_list AS (
SELECT o.id,
o.data AS order_data,
ARRAY_AGG(DISTINCT cr.id) AS customer_refs_id,
ARRAY_AGG(DISTINCT cr.data) AS customer_refs_data,
ARRAY_AGG(DISTINCT cp.policy_id) AS customer_policy_id,
ROW_NUMBER() OVER () AS rowid
FROM orders o
INNER JOIN customer_refs cr ON cr.data->>'id' = o.customer_id
INNER JOIN customer_policies cp ON cp.policy_data->>'id' = o.customer_id
AND cp.policy_data->>'region' = ANY (ARRAY['GLOBAL', 'REGION_A', 'REGION_B', 'REGION_C', 'REGION_D'])
AND cp.policy_data->>'status' NOT IN ('inactive', 'blocked', 'suspended')
AND (
cp.policy_data->>'customer_type' = 'issuer'
OR cp.policy_data->>'customer_type' IN ('type1', 'type2', 'type3')
OR get_text_from_path(cp.policy_data->'shared_groups', '$[*]', 'NONE') && ARRAY['GROUP_1']
)
WHERE EXISTS (
SELECT 1
FROM account_policies ap
WHERE ap.policy_id = ARRAY_TO_STRING(ARRAY[
o.data->'account'->'id'->>'system_ref',
o.data->'account'->'id'->>'type_ref',
o.data->'account'->'id'->>'region',
o.data->'account'->'id'->>'id'], '-')
AND ap.policy_data->>'region' = ANY (ARRAY['GLOBAL', 'REGION_A', 'REGION_B', 'REGION_C', 'REGION_D'])
AND ap.policy_data->>'status' NOT IN ('inactive', 'blocked', 'suspended')
AND ap.policy_data->>'contract' != 'prospect'
AND (
ap.policy_data->>'account_type' IN ('typeX', 'typeY', 'typeZ')
OR get_text_from_path(ap.policy_data->'shared_groups', '$[*]', 'NONE') && ARRAY['GROUP_1']
)
)
AND o.order_type = 'MONEY_ORDER'
AND (o.data->'close_date'->>'value' > '2024-09-10' OR o.data->'close_date'->>'value' IS NULL OR o.data->'close_date'->>'value' = '')
AND o.display_name_lower LIKE '%current%'
GROUP BY o.id, o.data
ORDER BY o.id
)
SELECT *
FROM result_list
ORDER BY rowid
LIMIT 23;
Some DEVs were also working on tuning this query and decided to implement a materialized view, which solved the issue differently but with the same intent.
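A minimal sketch of that idea (the view name and extracted fields here are illustrative, reusing the orders table from above; the actual view in the plan below is called positions_view):

CREATE MATERIALIZED VIEW orders_flat AS
SELECT o.id,
       o.data->>'order_type'          AS order_type,
       o.data->'customer'->>'id'      AS customer_id,
       LOWER(o.data->>'display_name') AS display_name_lower
FROM orders o;

CREATE UNIQUE INDEX ON orders_flat (id);
CREATE INDEX ON orders_flat (customer_id);

-- Refresh without blocking readers (requires the unique index above):
REFRESH MATERIALIZED VIEW CONCURRENTLY orders_flat;

With the materialized view in place, the new execution plan looked like this: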
Limit (cost=9013972.46..9013972.46 rows=1 width=1789) (actual time=175757.446..175757.449 rows=0 loops=1)
Buffers: shared hit=20716139 read=836484 dirtied=1676
-> Sort (cost=9013972.46..9013972.46 rows=1 width=1789) (actual time=174722.049..174722.052 rows=0 loops=1)
Sort Key: (row_number() OVER (?))
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=20716139 read=836484 dirtied=1676
-> WindowAgg (cost=9013972.40..9013972.45 rows=1 width=1789) (actual time=174722.010..174722.013 rows=0 loops=1)
Buffers: shared hit=20716136 read=836484 dirtied=1676
-> GroupAggregate (cost=9013972.40..9013972.44 rows=1 width=1781) (actual time=174722.008..174722.011 rows=0 loops=1)
Group Key: positions.id
Buffers: shared hit=20716136 read=836484 dirtied=1676
-> Sort (cost=9013972.40..9013972.41 rows=1 width=1906) (actual time=174722.007..174722.009 rows=0 loops=1)
Sort Key: positions.id, client_refs.id
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=20716136 read=836484 dirtied=1676
-> Nested Loop (cost=1.41..9013972.39 rows=1 width=1906) (actual time=174722.001..174722.004 rows=0 loops=1)
Buffers: shared hit=20716136 read=836484 dirtied=1676
-> Nested Loop Left Join (cost=0.99..9013961.75 rows=1 width=1933) (actual time=184.358..170945.266 rows=8554 loops=1)
Buffers: shared hit=20683249 read=835147 dirtied=1672
-> Nested Loop Left Join (cost=0.56..9013950.91 rows=1 width=1931) (actual time=138.461..146595.622 rows=8554 loops=1)
Join Filter: (client_refs.id = pv.client_id)
Rows Removed by Join Filter: 328,884,138
Buffers: shared hit=20655467 read=828553 dirtied=1153
-> Nested Loop (cost=0.56..9008615.75 rows=1 width=1736) (actual time=33.568..75336.287 rows=8554 loops=1)
Buffers: shared hit=27,870 read=824,707 dirtied=851
-> Seq Scan on positions_view pv (cost=0.00..9008612.97 rows=1 width=78) (actual time=16.384..12504.255 rows=8554 loops=1)
Filter: ...
Rows Removed by Filter: 32,144,940
Buffers: shared hit=7 read=809,465
-> Index Scan using positions_pkey on positions positions (cost=0.56..2.78 rows=1 width=1685) (actual time=7.327..7.327 rows=1 loops=8554)
Index Cond: (id = pv.id)
Buffers: shared hit=27,550 read=15,242 dirtied=602
-> Seq Scan on client_refs (cost=0.00..4513.96 rows=65,696 width=195) (actual time=0.004..5.734 rows=38,449 loops=8554)
Buffers: shared hit=20,627,597 read=3,846 dirtied=302
-> Index Scan using clients_policy_pkey on clients_policy (cost=0.43..10.84 rows=1 width=26) (actual time=2.841..2.841 rows=0 loops=8554)
Index Cond: (policy_id = pv.client_id)
Filter: ...
Rows Removed by Filter: 1
Buffers: shared hit=27,782 read=6,594 dirtied=519
-> Index Scan using idx_btree_portfolios_policy_id on portfolios_policy (cost=0.42..10.64 rows=1 width=28) (actual time=0.439..0.439 rows=0 loops=8554)
Index Cond: (policy_id = pv.portfolio_id)
Filter: ...
Rows Removed by Filter: 1
Buffers: shared hit=32,887 read=1,337 dirtied=4
---
"Planning:"
" Buffers: shared hit=954 read=78 dirtied=9"
"Planning Time: 130.627 ms"
"JIT:"
" Functions: 33"
" Options: Inlining true, Optimization true, Expressions true, Deforming true"
" Timing: Generation 5.483 ms, Inlining 81.013 ms, Optimization 530.635 ms, Emission 423.850 ms, Total 1040.982 ms"
"Execution Time: 175799.036 ms"
Execution Time: Under 3 minutes
Buffer Usage:
- Shared hits: ~20.7 million
- Reads: ~836,484
- Dirtied: ~1,676
Improvements:
- Efficient Joins: Usage of INNER JOIN limited the dataset early.
- Indexed Columns: Enabled rapid data retrieval without full scans.
- Simplified Conditions: Eliminating some functions in WHERE clauses allowed index usage.
- Reduced Buffer Operations: Significantly lower I/O overhead.
Comparison to the Previous Execution Plan:
- Despite processing a large number of rows, the optimized query executes much faster.
- The use of materialized views allows for more efficient joins and filtering, as data is stored in a format that is easier for the database to handle.
JSON is a good fit when:

- Schema Flexibility is Required: If your application deals with varying data structures or frequent schema changes, JSON allows you to store this data without altering the database schema.
- Storing Semi-Structured or Unstructured Data: JSON is ideal for data that doesn’t fit neatly into a relational schema, such as logs, configuration files, or user-generated content.
- Aggregating Diverse Data: When consolidating data from multiple sources with different schemas, JSON can accommodate the discrepancies without data loss.
- Embedding Related Data: Storing related data together as JSON can simplify data retrieval in certain scenarios, reducing the need for complex joins.

Prefer regular relational columns when:

- Data Integrity is Crucial: Relational columns allow for constraints, foreign keys, and data types that enforce data validity.
- Frequent Updates on Specific Fields: Updating a field within a JSON object requires rewriting the entire object, whereas individual columns can be updated more efficiently.
- Complex Queries and Reporting: SQL excels at querying structured data. If you need to perform complex queries, aggregations, or reports, structured columns are more efficient.
- Performance Optimization: Indexing and query planning are generally more effective with traditional columns.
PostgreSQL offers a rich set of functions and operators for querying JSON data:
- Access Operators:
  - -> : Returns a JSON object field by key.
  - ->> : Returns a JSON object field by key as text.
  - #> : Returns the JSON sub-object at the specified path.
  - #>> : Returns the JSON sub-object at the specified path as text.
postgres=# create database jsonlab;
CREATE DATABASE
postgres=# \c jsonlab
You are now connected to database "jsonlab" as user "postgres".
jsonlab=# CREATE TABLE users (
id serial PRIMARY KEY,
data jsonb
);
CREATE TABLE
jsonlab=# INSERT INTO users (data) VALUES
('{"name": "Alice", "age": 30, "address": {"city": "New York", "zip": "10001"}}'),
('{"name": "Bob", "age": 25, "address": {"city": "Los Angeles", "zip": "90001"}}'),
('{"name": "Charlie", "age": 35, "address": {"city": "Chicago", "zip": "60601"}}');
INSERT 0 3
jsonlab=# SELECT
data->'name' AS name_json, -- Returns name as JSON
data->>'age' AS age_text, -- Returns age as text
data#>'{address}' AS address_json, -- Returns the address object as JSON
data#>>'{address,city}' AS city_text -- Returns the city from the address as text
FROM users;
name_json | age_text | address_json | city_text
-----------+----------+-----------------------------------------+-------------
"Alice" | 30 | {"zip": "10001", "city": "New York"} | New York
"Bob" | 25 | {"zip": "90001", "city": "Los Angeles"} | Los Angeles
"Charlie" | 35 | {"zip": "60601", "city": "Chicago"} | Chicago
(3 rows)
Indexing for Performance
To speed up querying JSON data, we can create indexes on the fields being accessed frequently. PostgreSQL allows you to create different types of indexes for JSON and JSONB fields.
Here are some example indexes to support the above query:
- idx_users_data_gin: A general-purpose GIN index on the data column to speed up key-value searches.
- idx_users_data_name: A BTREE index on the name field to quickly access user names.
- idx_users_data_city: A BTREE index on the city field in the address to efficiently retrieve users by their city.
- idx_users_age_above_30: A partial index for when conditions are common and you want to reduce the index overhead on age (this is just for the example, it doesn’t make sense here).
These indexes will significantly improve the performance of JSON queries, especially when dealing with large datasets.
CREATE INDEX idx_users_data_gin ON users USING gin (data);
CREATE INDEX idx_users_data_name ON users ((data->>'name'));
CREATE INDEX idx_users_data_city ON users ((data#>>'{address,city}'));
CREATE INDEX idx_users_age_above_30 ON users ((data->>'age'))
WHERE (data->>'age')::int > 30;
Avoid Unnecessary Casting
The optimizer is still the same when you are using JSON operators, so hiding the intent of your code behind complexity has the same drawbacks as with regular columns. Casting JSON data unnecessarily can lead to performance degradation, since the optimizer might not take advantage of existing indexes. Using the appropriate operator to retrieve data in the desired format is important.
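A small sketch of what this means in practice, reusing the users table and the idx_users_data_name expression index from above:

-- This predicate casts the JSON value and does not match the expression
-- the index was built on (note the quotes in the compared literal):
SELECT * FROM users WHERE (data->'name')::text = '"Alice"';

-- This one uses exactly the expression of idx_users_data_name:
SELECT * FROM users WHERE data->>'name' = 'Alice';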
Leverage JSON Path Queries
Here are some JSON path query principles. PostgreSQL 12 introduced this feature, allowing for more advanced querying capabilities. It was a huge step forward for navigating complex JSON structures. While XML was generally used with highly structured data in the past, this feature made the case for JSONB once more.
In this example, I want to display all orders that contain at least one item priced above 100:
SELECT data
FROM orders
WHERE data @? '$.items[*] ? (@.price > 100)';
- data: This is the JSONB column storing the order details.
- @? : This operator is used to evaluate a JSON path predicate and returns true or false based on whether the condition is met.
- '$.items[*] ? (@.price > 100)':
  - $ : Represents the root of the JSON document.
  - items[*] : Selects all elements in the items array.
  - ? : The filter expression that applies a condition.
  - @.price > 100 : The condition being checked, in this case whether the price of any item is greater than 100.
The equivalent of that query against standard relational tables would look like this:
SELECT DISTINCT o.order_id
FROM orders o
JOIN items i ON o.order_id = i.order_id
WHERE i.price > 100;
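To see the JSON path variant work end to end, here is a minimal sketch in the jsonlab database from above (the table and rows are made up for the demonstration):

jsonlab=# CREATE TABLE orders (order_id serial PRIMARY KEY, data jsonb);
jsonlab=# INSERT INTO orders (data) VALUES
  ('{"items": [{"name": "keyboard", "price": 120}]}'),
  ('{"items": [{"name": "mouse", "price": 25}]}');
jsonlab=# SELECT data FROM orders WHERE data @? '$.items[*] ? (@.price > 100)';
                       data
--------------------------------------------------
 {"items": [{"name": "keyboard", "price": 120}]}
(1 row)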
By using materialized views or views to pre-extract JSON fields into regular columns, we can dramatically improve query performance. This approach reduces the computational burden during query execution and allows the database optimizer to effectively use indexes. Oddly enough, old tricks still work :).
JSON support in PostgreSQL 17 allows developers and DBAs to design flexible, efficient, and scalable applications. By understanding when and how to use JSON effectively, you can leverage PostgreSQL’s full potential, combining the reliability of a relational database with the flexibility of a dynamic schema. In a world where RAG search through vector or graph databases is the new trend, knowing how JSONB and PostgreSQL work together can bring you closer to the finish line.
L’article JSON in PostgreSQL : a query tuning case est apparu en premier sur dbi Blog.
FreeBSD basics – 6 – The ports collection
If you followed the previous introduction posts about FreeBSD (here, here, here, here and here) you should already know how to install binary packages. On FreeBSD this is done using pkg, and if you are familiar with apt, dnf, or zypper (or any other package manager on Linux) this should not be a big deal. There is, however, another official way to get additional software onto FreeBSD, and these are the so-called “ports”. In contrast to the binary packages, ports are source code that comes with all the patches and Makefiles required to compile and install a specific piece of software on FreeBSD. Think of it as an automated way of compiling software for FreeBSD.
Depending on how you installed FreeBSD the ports collection might already be there:
root@freebsd14:~ $ ls -l /usr/ports/ | wc -l
81
If it’s not there, you have a couple of options to get it installed. Either you get it with git (the following will get the HEAD branch, check the link for a description of how to check out a quarterly branch):
root@freebsd14:~ $ pkg install git
root@freebsd14:~ $ git clone --depth 1 https://git.FreeBSD.org/ports.git /usr/ports
… or you get it with portsnap:
root@freebsd14:~ $ mkdir -p /var/db/portsnap
root@freebsd14:~ $ portsnap fetch
root@freebsd14:~ $ portsnap extract
… or you download the tarball from the website. No matter how you do it, once the ports are available you’ll see something like this:
root@freebsd14:/usr/ports $ pwd
/usr/ports
root@freebsd14:/usr/ports $ ls
.arcconfig Keywords accessibility comms finance japanese net-im science x11-clocks
.git MOVED arabic converters french java net-mgmt security x11-drivers
.gitignore Makefile archivers databases ftp korean net-p2p shells x11-fm
.hooks Mk astro deskutils games lang news sysutils x11-fonts
.mailmap README audio devel german mail polish textproc x11-servers
CHANGES Templates benchmarks dns graphics math ports-mgmt ukrainian x11-themes
CONTRIBUTING.md Tools biology editors hebrew misc portuguese vietnamese x11-toolkits
COPYRIGHT UIDs cad emulators hungarian multimedia print www x11-wm
GIDs UPDATING chinese filesystems irc net russian x11
Each of those directories is a category, and the categories contain the ports grouped into that category:
root@freebsd14:~ $ ls /usr/ports/databases/
Makefile p5-DBIx-Log4perl py-aiosqlite
R-cran-DBI p5-DBIx-MySQLSequence py-alembic
R-cran-RMySQL p5-DBIx-NoSQL py-apache-arrow
R-cran-RPostgreSQL p5-DBIx-Password py-apsw
...
p5-DBIx-Inspector py-aiopg xtrabackup84
p5-DBIx-Introspector py-aioredis zodb
p5-DBIx-Lite py-aiosql
There are thousands of ports, and looking for something in the directory tree might become time consuming. Searching for something can be done with e.g. “whereis”:
root@freebsd14:~ $ whereis puredb
puredb: /usr/ports/databases/puredb
… or you search directly in the ports tree, but before you can do this you need to build the index file:
root@freebsd14:/usr/ports $ make search postgres
The search target requires INDEX-14. Please run make index or make fetchindex.
make: don't know how to make postgres. Stop
make: stopped in /usr/ports
root@freebsd14:/usr/ports $ make index
Generating INDEX-14 - please wait..
Done.
root@freebsd14:/usr/ports $
This created the INDEX file which is used as the source for searches in the tree:
root@freebsd14:/usr/ports $ ls -l INDEX-14
-rw-r--r-- 1 root wheel 48207086 Dec 11 10:13 INDEX-14
root@freebsd14:/usr/ports $ make search name=postgresql | less
Port: R-cran-RPostgreSQL-0.7.7
Path: /usr/ports/databases/R-cran-RPostgreSQL
Info: R Interface to the 'PostgreSQL' Database System
Maint: tota@FreeBSD.org
...
Port: databases/postgresql11-pgtcl
Moved: databases/postgresql12-pgtcl
Date: 2023-12-31
Reason: Has expired: Security support ended on 2023-11-09
The README comes with additional hints and information:
root@freebsd14:/usr/ports $ cat README
This is the FreeBSD Ports Collection. For an easy to use
WEB-based interface to it, please see:
https://ports.FreeBSD.org
For general information on the Ports Collection, please see the
FreeBSD Handbook ports section which is available from:
https://docs.freebsd.org/en/books/handbook/ports/
for the latest official version
or:
The ports(7) manual page (man ports).
These will explain how to use ports and packages.
If you would like to search for a port, you can do so easily by
saying (in /usr/ports):
make search name="<name>"
or:
make search key="<keyword>"
which will generate a list of all ports matching <name> or <keyword>.
make search also supports wildcards, such as:
make search name="gtk*"
For information about contributing to FreeBSD ports, please see the Porter's
Handbook, available at:
https://docs.freebsd.org/en/books/porters-handbook/
NOTE: This tree will GROW significantly in size during normal usage!
The distribution tar files can and do accumulate in /usr/ports/distfiles,
and the individual ports will also use up lots of space in their work
subdirectories unless you remember to "make clean" after you're done
building a given port. /usr/ports/distfiles can also be periodically
cleaned without ill-effect.
Let’s say we want to install PostgreSQL 17. What do we need to do? Actually, this is quite easy. First, enter the ports directory:
root@freebsd14:/usr/ports $ cd databases/postgresql17-server/
root@freebsd14:/usr/ports/databases/postgresql17-server $
If you’re curious, have a look at the Makefile to see what will happen, otherwise just install the port (“clean” will clean up afterwards to save space on disk):
root@freebsd14:/usr/ports/databases/postgresql17-server $ make install clean
This will bring up a new window which gives you some options to either enable or disable:
Once you’re happy with it, let it build and install. This will start downloading the PostgreSQL sources and then ask for options for bison:
This goes on for all the tools required to build PostgreSQL:
… (and so on) and finally you’ll see the usual “configure” and “make” stuff on the screen (this will run the regression tests as well, so expect it to take some time). If you just want to get a feeling for ports, maybe don’t choose PostgreSQL as your first port, because this really takes time as all the dependencies (llvm, …) get built from source as well.
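If you want an idea up front of how much will be compiled, the ports framework can print the complete dependency tree of a port. A quick sketch (the output is one port origin per line, abbreviated here to two dependencies we already saw above):
root@freebsd14:/usr/ports/databases/postgresql17-server $ make all-depends-list
/usr/ports/devel/gettext-runtime
/usr/ports/devel/llvm15
...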
Once it is done you’ll find the binaries in “/usr/local/bin/”:
root@freebsd14:/usr/ports/databases/postgresql17-server $ ls /usr/local/bin/pg*
/usr/local/bin/pg_amcheck /usr/local/bin/pg_ctl /usr/local/bin/pg_restore
/usr/local/bin/pg_archivecleanup /usr/local/bin/pg_dump /usr/local/bin/pg_rewind
/usr/local/bin/pg_basebackup /usr/local/bin/pg_dumpall /usr/local/bin/pg_test_fsync
/usr/local/bin/pg_checksums /usr/local/bin/pg_isready /usr/local/bin/pg_test_timing
/usr/local/bin/pg_config /usr/local/bin/pg_receivewal /usr/local/bin/pg_upgrade
/usr/local/bin/pg_controldata /usr/local/bin/pg_recvlogical /usr/local/bin/pg_waldump
/usr/local/bin/pg_createsubscriber /usr/local/bin/pg_resetwal /usr/local/bin/pgbench
root@freebsd14:/usr/ports/databases/postgresql17-server $ /usr/local/bin/psql --version
psql (PostgreSQL) 17.2
All the questions which have been answered can also be given on the command line to automate the whole process (‘-‘ means do not include, ‘+’ means include):
root@freebsd14:/usr/ports/databases/postgresql17-server $ make pretty-print-config
-DEBUG +DOCS -DTRACE -GSSAPI -LDAP +LLVM +LZ4 +NLS -OPTIMIZED_CFLAGS -PAM +SSL -TZDATA +XML +ZSTD
A more verbose description of those can be shown like this:
root@freebsd14:/usr/ports/databases/postgresql17-server $ make showconfig
===> The following configuration options are available for postgresql17-server-17.2:
DEBUG=off: Build with debugging support
DOCS=on: Build and/or install documentation
DTRACE=off: Build with DTrace probes
GSSAPI=off: Build with GSSAPI support
LDAP=off: Build with LDAP authentication support
LLVM=on: Build with support for JIT-compiling expressions
LZ4=on: Build with LZ4 compression support
NLS=on: Use internationalized messages
OPTIMIZED_CFLAGS=off: Builds with compiler optimizations (-O3)
PAM=off: Build with PAM Support
SSL=on: Build with OpenSSL support
TZDATA=off: Use internal timezone database
XML=on: Build with XML data type
ZSTD=on: Build with ZSTD compression support
===> Use 'make config' to modify these settings
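Based on this, a fully unattended build is possible by passing those options as make variables; a minimal sketch, assuming you want to drop the LLVM dependency and otherwise accept the defaults (BATCH=yes suppresses the option dialogs for all dependencies as well):
root@freebsd14:/usr/ports/databases/postgresql17-server $ make BATCH=yes OPTIONS_UNSET="LLVM" install clean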
To summarize: A lot of additional stuff is available in the ports collection. If something is not available as a package, have a look there.
If you want to see what’s going on in the ports collection, have a look at FreshPorts.
Help yourself out with the ‘log_reuse_wait_desc’ column
A few days ago a customer called me to ask if I could help him with a problem with the log file.
He tried to shrink it, but the system wasn’t doing anything and he didn’t know why.
With the column ‘log_reuse_wait_desc’ you can find out why. When I asked him what was in the ‘log_reuse_wait_desc’ column for the database, there was silence at first, and then the question quickly arose as to where that column could be found at all.
But let’s start at the beginning.
I arranged a session with the customer and asked him to show me what he had in mind.
He tried to reduce the log file to the smallest possible size with the following command:
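A minimal sketch of such a shrink attempt, assuming the database is called MyDB and the logical name of its log file is MyDB_log (both names are placeholders):
USE MyDB;
GO
-- try to shrink the transaction log file down to 1 MB
DBCC SHRINKFILE (N'MyDB_log', 1);
GO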
The command completed successfully, but nothing happened.
But why?
I quickly found out that this was a database located in an AlwaysOn group.
Some time before his call, an application had mistakenly made large changes to the database. This caused the log file to grow enormously. Hence the desire to restore the old size.
In the well-known system table sys.databases there is a column with the name ‘log_reuse_wait_desc’ which tells us why it does not work.
So I executed a query based on the name of the database and the column:
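A minimal version of that query (the database name is a placeholder):
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';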
The reuse wait ‘AVAILABILITY_REPLICA’ was shown.
This means that the transactions in the log still have to be replicated to an availability replica. Until that has happened, the log file cannot be manipulated.
We therefore had to wait until all transactions had been replicated and the status had changed:
Once the status showed ‘ACTIVE_TRANSACTION’, the space from the previous enlargement could be reclaimed.
Then we could reduce the size of the file.
Are there any other log waits?
The following are listed in the official Microsoft documentation:
- NOTHING
- CHECKPOINT
- LOG_BACKUP
- ACTIVE_BACKUP_OR_RESTORE
- ACTIVE_TRANSACTION
- DATABASE_MIRRORING
- REPLICATION
- DATABASE_SNAPSHOT_CREATION
- LOG_SCAN
- AVAILABILITY_REPLICA
- OLDEST_PAGE
- XTP_CHECKPOINT
- SLOG_SCAN
Customers often ask me exactly the opposite. Why can’t I reduce the size of the log file even though I have only made small changes to the database?
The main problem lies with the VLFs. When the transaction log is not growing, and you’re taking regular log backups, but the log_reuse_wait_desc stays at LOG_BACKUP, the reason is that the previous log backup did not clear any Virtual Log Files (VLFs).
How can this occur?
Consider a database with minimal insert, update, delete, or DDL activity. In this scenario, only a few log records are generated between your regular log backups, and they all reside in the same Virtual Log File (VLF). When the next log backup runs, it backs up these few log records but cannot clear the current VLF, preventing the log_reuse_wait_desc from changing. Once enough changes are made in the database to fill the current VLF and activate the next one, the subsequent log backup should be able to clear the previous VLF, causing the log_reuse_wait_desc to change to NOTHING. However, if the next log backup cannot clear the current VLF, the log_reuse_wait_desc will revert to LOG_BACKUP again.
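If you want to check this on your own system, the VLFs and their state can be listed with sys.dm_db_log_info (available since SQL Server 2016 SP2; the database name is a placeholder):
-- one row per VLF; vlf_active = 1 means the VLF cannot be cleared yet
SELECT file_id, vlf_sequence_number, vlf_size_mb, vlf_active, vlf_status
FROM sys.dm_db_log_info(DB_ID(N'MyDB'));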
If you want to find out more about VLFs, take a look at my colleague’s blog entry. He has covered the topic in great detail.
Building multi-architecture images with GitLab CI/CD
Building multi-architecture images is becoming more and more useful. Indeed, many recent computers use the ARM processor architecture. Examples include MacBooks using M(x) processors, and Amazon EC2 instances using AWS Graviton processors.
However, the diversification of processor architectures adds a new level of complexity to the creation of container images. Indeed, the build has to cope with different instruction sets.
Docker buildx, the solution for building multi-architecture images
For the YaK project, we want to make amd64 (x86) and arm64 images available using GitLab CI/CD.
In order to create a build compatible with several architectures, I had to use “docker buildx” in my .gitlab-ci.yml file:
build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    - docker buildx create --name yakbuilder --use
  script:
    - docker buildx build --pull --builder=yakbuilder --platform linux/amd64,linux/arm64 -t [IMG]:[TAG] --push .
How it works:
- In the "before_script" section, I initialize a QEMU container to emulate the ARM architecture and create a buildx context using the QEMU container
- In the "script" section itself, instead of a simple "docker build", I use the "docker buildx build" command
- I also pass the buildx context created in the "before_script" with the --builder flag
- Finally, I add the list of architectures required for the build with the --platform flag
With this method, the build is slower. That’s normal as several images are created (one per architecture) instead of just one.
The result can be seen in the GitLab container registry:
Now, below the image tag, a small “index” label is shown. This refers to the fact that several images are available for this tag. During the image pull, the container engine will choose the image version corresponding to its architecture.
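If you want to verify which architectures sit behind such an index, you can inspect the manifest with buildx from any machine (the image reference is a placeholder):
docker buildx imagetools inspect registry.gitlab.com/yourgroup/yak:latest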
Conclusion
With buildx and QEMU in GitLab CI/CD, building multi-architecture images is easy. You can support different processor architectures, meet the needs of a wide range of users, and ensure the compatibility of your container images.
Oracle Database on Apple Silicon with Virtualbox 7.1
I blogged in the past about running an Oracle database for testing purposes on Apple Silicon. Initially with emulated Intel code here and then (when Oracle released its 19c database software for ARM) with UTM as a wrapper for the QEMU hypervisor here. In the meantime another possibility came up since Oracle released its type II hypervisor Virtualbox for ARM in version 7.1.
Below are the steps to set up an Oracle database from scratch in a Virtualbox VM on my MacBook Air M1.
Install Virtualbox and a VM with Oracle Enterprise Linux 8
First things first: Install Virtualbox. To do that go to www.virtualbox.org and click on the download link:
Click on “macOS / Apple Silicon hosts”, which downloads VirtualBox-7.1.4-165100-macOSArm64.dmg (at the time when writing this Blog):
In Finder double click the DMG to install Virtualbox. Go through the usual installation process until you see the following screen:
When starting Virtualbox the first time I set the Expert mode:
But before creating a VM I downloaded Oracle Enterprise Linux 8 for ARM from here:
Then I create a new VM by clicking “New” in Virtualbox:
Provided the information requested:
Then clicked “Finish”.
Before starting the VM I enabled host caching under Settings -> Storage -> Controller: VirtioSCSI:
I set the network to bridged, so that I also can reach the VM remotely for some tests (if you want to access your VM remotely, you have to adjust the firewall on your Mac):
Then I powered the machine on by clicking “Start”. I had to provide access to the keyboard and computer in the security settings of the Mac. The VM booted from the ISO and I could see the installation screen and choose to install Linux 8:
Then choose the language and other settings and begin the installation:
Then reboot the system:
Install Oracle 19c (19.25)
After the reboot I can log in to the machine through ssh as root and prepare for the Oracle installation:
dnf -y update
dnf -y install oracle-database-preinstall-19c.aarch64
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app
systemctl stop firewalld
passwd oracle
dnf -y install xterm.aarch64 tigervnc-server.aarch64
The Oracle software can be downloaded here. I uploaded it to the /tmp directory on my just created VM and installed Oracle 19c:
su - oracle
mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1
cd /u01/app/oracle/product/19.0.0/dbhome_1
unzip -oq /tmp/LINUX.ARM64_1919000_db_home.zip
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_HOSTNAME=`hostname`
export ORA_INVENTORY=/u01/app/oraInventory
./runInstaller -ignorePrereq -waitforcompletion -silent \
-responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=${ORA_INVENTORY} \
SELECTED_LANGUAGES=en,en_GB \
ORACLE_HOME=${ORACLE_HOME} \
ORACLE_BASE=${ORACLE_BASE} \
oracle.install.db.InstallEdition=EE \
oracle.install.db.OSDBA_GROUP=dba \
oracle.install.db.OSBACKUPDBA_GROUP=backupdba \
oracle.install.db.OSDGDBA_GROUP=dgdba \
oracle.install.db.OSKMDBA_GROUP=kmdba \
oracle.install.db.OSRACDBA_GROUP=racdba \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true
su - root
/u01/app/oraInventory/orainstRoot.sh
/u01/app/oracle/product/19.0.0/dbhome_1/root.sh
exit
Then install the 19.25 release update, which I copied to /tmp/19.25:
cd $ORACLE_HOME
mv OPatch OPatch_1919
unzip -oq /tmp/19.25/p6880880_190000_Linux-ARM-64.zip
cd /tmp/19.25
unzip -oq p36912597_190000_Linux-ARM-64.zip
cd 36912597/
$ORACLE_HOME/OPatch/opatch apply
Start a listener and install a DB
$ORACLE_HOME/bin/lsnrctl start
export ORACLE_SID=db19ee
export PDB_NAME=pdb1
export DATA_DIR=$ORACLE_BASE/oradata
mkdir -p $DATA_DIR
$ORACLE_HOME/bin/dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname ${ORACLE_SID} -sid ${ORACLE_SID} -responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword oracle_4U \
-systemPassword oracle_4U \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName ${PDB_NAME} \
-pdbAdminPassword oracle_4U \
-databaseType MULTIPURPOSE \
-memoryMgmtType auto_sga \
-totalMemory 2500 \
-storageType FS \
-datafileDestination "${DATA_DIR}" \
-redoLogFileSize 200 \
-emConfiguration NONE \
-ignorePreReqs
$ORACLE_HOME/bin/sqlplus / as sysdba <<EOF
alter system set db_create_file_dest='${DATA_DIR}';
alter pluggable database ${PDB_NAME} save state;
exit;
EOF
As root, adjust firewalld:
# systemctl start firewalld
# cd /etc/firewalld/services
# vi oranetwork.xml
# cat oranetwork.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>oranetwork</short>
  <description>Oracle Network communication</description>
  <port protocol="tcp" port="1521"/>
</service>
# firewall-cmd --permanent --add-service=oranetwork
# firewall-cmd --reload
# firewall-cmd --list-all
...
services: cockpit dhcpv6-client oranetwork ssh
...
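With the port open, a quick connection test from the Mac (or any other machine in the bridged network) could look like this; the IP address is a placeholder for whatever your VM got from the bridged network, and it assumes the PDB is registered with the service name pdb1:
sqlplus system/oracle_4U@//192.168.1.50:1521/pdb1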
Finally I can connect to my 19.25 DB:
[oracle@db19c-arm ~]$ . oraenv
ORACLE_SID = [db19ee] ? db19ee
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@db19c-arm ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Dec 5 19:35:34 2024
Version 19.25.0.0.0
Copyright (c) 1982, 2024, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
SQL>
Summary
Consultants and DBAs who are used to running Oracle Database test environments on Virtualbox can now do this on Apple Silicon as well. The steps to set up and run an Oracle DB on ARM are the same as on x86-64. Please consider that there are still some limitations on ARM, e.g. 19c is not supported on Oracle Enterprise Linux 9 (OEL9) on ARM.
FreeBSD basics – 5 – Providing services using Jails
This post builds on top of the previous post in which we’ve created a simple thin Jail. The goal of this post is to put a bit of automation around this and to provide a simple PostgreSQL as a service platform. We want to do this by creating one PostgreSQL instance per Jail (without any restrictions on resource usage for now, no monitoring, and so on). Before we start with this, let’s remove the sample thinjail1 we’ve created in the last post:
root@freebsd14:~ $ jls
JID IP Address Hostname Path
1 192.168.122.160 thinjail1 /usr/local/jails/containers/thinjail1
root@freebsd14:~ $ service jail stop thinjail1
Stopping jails: thinjail1.
root@freebsd14:~ $ chflags -R 0 /usr/local/jails/containers/thinjail1
root@freebsd14:~ $ rm -rf /usr/local/jails/containers/thinjail1/*
root@freebsd14:~ $ jls
JID IP Address Hostname Path
root@freebsd14:~ $ zfs destroy zroot/jails/containers/thinjail1
We’ll also remove the ZFS snapshot because we’ll be creating a new one containing additional packages and configuration:
root@freebsd14:~ $ zfs destroy zroot/jails/templates/14.1-RELEASE@base
root@freebsd14:~ $ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
zroot/ROOT/default@2024-11-26-10:56:43-0 394M - 1.46G -
zroot/ROOT/default@2024-12-04-08:41:41-0 2.23M - 2.49G -
zroot/ROOT/default@2024-12-04-08:42:46-0 1.86M - 2.58G -
Remember that this was our base for the Jail:
root@freebsd14:~ $ ls -al /usr/local/jails/templates/14.1-RELEASE/
total 95
drwxr-xr-x 18 root wheel 22 Dec 2 13:35 .
drwxr-xr-x 3 root wheel 3 Dec 2 13:29 ..
-rw-r--r-- 2 root wheel 1011 May 31 2024 .cshrc
-rw-r--r-- 2 root wheel 495 May 31 2024 .profile
-r--r--r-- 1 root wheel 6109 May 31 2024 COPYRIGHT
drwxr-xr-x 2 root wheel 49 Dec 2 13:40 bin
drwxr-xr-x 15 root wheel 69 Dec 2 13:40 boot
dr-xr-xr-x 2 root wheel 2 May 31 2024 dev
drwxr-xr-x 30 root wheel 105 Dec 2 13:40 etc
drwxr-xr-x 4 root wheel 78 Dec 2 13:40 lib
drwxr-xr-x 3 root wheel 5 May 31 2024 libexec
drwxr-xr-x 2 root wheel 2 May 31 2024 media
drwxr-xr-x 2 root wheel 2 May 31 2024 mnt
drwxr-xr-x 2 root wheel 2 May 31 2024 net
dr-xr-xr-x 2 root wheel 2 May 31 2024 proc
drwxr-xr-x 2 root wheel 150 Dec 2 13:40 rescue
drwxr-x--- 2 root wheel 7 May 31 2024 root
drwxr-xr-x 2 root wheel 150 Dec 2 13:40 sbin
lrwxr-xr-x 1 root wheel 11 May 31 2024 sys -> usr/src/sys
drwxrwxrwt 2 root wheel 2 May 31 2024 tmp
drwxr-xr-x 14 root wheel 14 May 31 2024 usr
drwxr-xr-x 24 root wheel 24 May 31 2024 var
What we’re doing now is to install PostgreSQL into this base so we’ll have it available immediately when we create new Jails afterwards. To do this we can ask pkg to perform its actions against a chroot environment instead of the host system by using the “-c” switch:
root@freebsd14:~ $ pkg -c /usr/local/jails/templates/14.1-RELEASE/ install postgresql17-server-17.2 postgresql17-contrib-17.2
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 17 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
gettext-runtime: 0.22.5
icu: 74.2_1,1
indexinfo: 0.3.1
libedit: 3.1.20240808,1
libffi: 3.4.6
liblz4: 1.10.0,1
libxml2: 2.11.9
llvm15: 15.0.7_10
lua53: 5.3.6_1
mpdecimal: 4.0.0
perl5: 5.36.3_2
postgresql17-client: 17.2
postgresql17-contrib: 17.2
postgresql17-server: 17.2
python311: 3.11.10
readline: 8.2.13_1
zstd: 1.5.6
Number of packages to be installed: 17
The process will require 1 GiB more space.
231 MiB to be downloaded.
Proceed with this action? [y/N]: y
[1/17] Fetching indexinfo-0.3.1.pkg: 100% 6 KiB 5.9kB/s 00:01
[2/17] Fetching libxml2-2.11.9.pkg: 100% 873 KiB 893.6kB/s 00:01
...
To use PostgreSQL, enable it in rc.conf using
sysrc postgresql_enable=yes
To initialize the database, run
service postgresql initdb
You can then start PostgreSQL by running:
service postgresql start
For postmaster settings, see ~postgres/data/postgresql.conf
NB. FreeBSD's PostgreSQL port logs to syslog by default
See ~postgres/data/postgresql.conf for more info
NB. If you're not using a checksumming filesystem like ZFS, you might
wish to enable data checksumming. It can be enabled during
the initdb phase, by adding the "--data-checksums" flag to
the postgresql_initdb_flags rcvar. Otherwise you can enable it later by
using pg_checksums. Check the initdb(1) manpage for more info
and make sure you understand the performance implications.
We can easily verify this by looking at /usr/local/bin of our base:
root@freebsd14:~ $ ls -al /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg*
-rwxr-xr-x 1 root wheel 99224 Dec 3 02:06 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_amcheck
-rwxr-xr-x 1 root wheel 43616 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_archivecleanup
-rwxr-xr-x 1 root wheel 161128 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_basebackup
-rwxr-xr-x 1 root wheel 78336 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_checksums
-rwxr-xr-x 1 root wheel 53784 Dec 3 02:06 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_config
-rwxr-xr-x 1 root wheel 57272 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_controldata
-rwxr-xr-x 1 root wheel 101656 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_createsubscriber
-rwxr-xr-x 1 root wheel 72136 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_ctl
-rwxr-xr-x 1 root wheel 422632 Dec 3 02:06 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_dump
-rwxr-xr-x 1 root wheel 121896 Dec 3 02:06 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_dumpall
-rwxr-xr-x 1 root wheel 67904 Dec 3 02:06 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_isready
-rwxr-xr-x 1 root wheel 103840 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_receivewal
-rwxr-xr-x 1 root wheel 101768 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_recvlogical
-rwxr-xr-x 1 root wheel 68312 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_resetwal
-rwxr-xr-x 1 root wheel 207752 Dec 3 02:06 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_restore
-rwxr-xr-x 1 root wheel 153560 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_rewind
-rwxr-xr-x 1 root wheel 49024 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_test_fsync
-rwxr-xr-x 1 root wheel 39088 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_test_timing
-rwxr-xr-x 1 root wheel 178256 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_upgrade
-rwxr-xr-x 1 root wheel 110984 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pg_waldump
-rwxr-xr-x 1 root wheel 207416 Dec 3 02:06 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pgbench
root@freebsd14:~ $ ls -al /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/po*
-rwxr-xr-x 1 root wheel 4149 Oct 31 02:09 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pod2html
-rwxr-xr-x 1 root wheel 15046 Oct 31 02:09 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pod2man
-rwxr-xr-x 1 root wheel 10815 Oct 31 02:09 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pod2text
-rwxr-xr-x 1 root wheel 4113 Oct 31 02:09 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/pod2usage
-rwxr-xr-x 1 root wheel 3664 Oct 31 02:09 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/podchecker
-rwxr-xr-x 1 root wheel 9548040 Dec 3 02:18 /usr/local/jails/templates/14.1-RELEASE/usr/local/bin/postgres
In the same way we can ask pkg to operate in a chroot, we can also ask sysrc to operate in a chroot and directly enable the PostgreSQL service:
root@freebsd14:~ $ sysrc -R /usr/local/jails/templates/14.1-RELEASE/ postgresql_enable=yes
postgresql_enable: -> yes
root@freebsd14:~ $ cat /usr/local/jails/templates/14.1-RELEASE/etc/rc.conf
postgresql_enable="yes"
Now we have everything we need and can create a new ZFS snapshot:
root@freebsd14:~ $ zfs snapshot zroot/jails/templates/14.1-RELEASE@postgresql17
root@freebsd14:~ $ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
zroot/ROOT/default@2024-11-26-10:56:43-0 394M - 1.46G -
zroot/ROOT/default@2024-12-04-08:41:41-0 2.23M - 2.49G -
zroot/ROOT/default@2024-12-04-08:42:46-0 1.86M - 2.58G -
zroot/jails/templates/14.1-RELEASE@postgresql17 0B - 1.34G -
Looking at the configuration for the Jail in the last post, it looks like this:
root@freebsd14:~ $ cat /etc/jail.conf.d/thinjail1.conf
thinjail1 {
# STARTUP/LOGGING
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.consolelog = "/var/log/jail_console_${name}.log";
# PERMISSIONS
allow.raw_sockets;
exec.clean;
mount.devfs;
# HOSTNAME/PATH
host.hostname = "${name}";
path = "/usr/local/jails/containers/${name}";
# NETWORK
ip4.addr = 192.168.122.160;
interface = vtnet0;
}
The dynamic parts in that configuration are the IP address, the path, the hostname and the name of the Jail. Everything else remains static for additional Jails. So we could create a simple script which creates a new Jail configuration based on a few parameters (there is no error handling, no sanity checks, nor anything else that makes a good script a good script; this really is just for demonstration purposes):
#!/usr/local/bin/bash
# The parameters for a new Jail
JAILNAME="$1"
IPADDR="$2"
# This is where all the Jails go to
BASEPATH="/usr/local/jails/containers/"
cat << EOF > /etc/jail.conf.d/${JAILNAME}.conf
${JAILNAME} {
# STARTUP/LOGGING
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.consolelog = "/var/log/jail_console_${JAILNAME}.log";
# PERMISSIONS
allow.raw_sockets;
allow.sysvipc = 1;
exec.clean;
mount.devfs;
# HOSTNAME/PATH
host.hostname = "${JAILNAME}.it.dbi-services.com";
path = "${BASEPATH}/${JAILNAME}";
# NETWORK
ip4.addr = ${IPADDR};
interface = vtnet0;
}
EOF
zfs clone zroot/jails/templates/14.1-RELEASE@postgresql17 zroot/jails/containers/${JAILNAME}
service jail start ${JAILNAME}
jls
jexec -U postgres ${JAILNAME} /usr/local/bin/initdb --pgdata=/var/db/postgres/data17
jexec -l ${JAILNAME} service postgresql start
What this script is doing:
- Create the Jail configuration based on the name and IP address
- Clone the ZFS snapshot we’ve created above
- Start the Jail
- Initialize PostgreSQL
- Start up PostgreSQL
If you run that it looks like this:
root@freebsd14:~ $ ./new_jail.sh PG1 192.168.122.130
Starting jails: PG1.
JID IP Address Hostname Path
11 192.168.122.130 PG1.it.dbi-services.com /usr/local/jails/containers/PG1
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "C.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /var/db/postgres/data17 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... Europe/Vaduz
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
/usr/local/bin/pg_ctl -D /var/db/postgres/data17 -l logfile start
2024-12-04 10:28:32.103 CET [4809] LOG: ending log output to stderr
2024-12-04 10:28:32.103 CET [4809] HINT: Future log output will go to log destination "syslog".
That’s it, PostgreSQL is up and running in the Jail:
root@freebsd14:~ $ psql -h 192.168.122.130
psql: error: connection to server at "192.168.122.130", port 5432 failed: FATAL: no pg_hba.conf entry for host "192.168.122.130", user "root", database "root", no encryption
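The connection is rejected only because there is no matching pg_hba.conf entry yet, which actually proves that the instance is up and reachable over the network. A quick (and very permissive) sketch to allow clients, assuming they come from the 192.168.122.0/24 network; for anything serious you’d use a proper authentication method instead of trust:
root@freebsd14:~ $ echo "host all all 192.168.122.0/24 trust" >> /usr/local/jails/containers/PG1/var/db/postgres/data17/pg_hba.conf
root@freebsd14:~ $ jexec -l PG1 service postgresql restart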
Pretty easy to do. Now additional PostgreSQL Jails can be added easily. What is really impressive is the size of the Jail on disk:
root@freebsd14:~ $ du -sh /usr/local/jails/containers/PG1
1.4G /usr/local/jails/containers/PG1
A bit more than a GB for a complete PostgreSQL service.
FreeBSD basics – 4 – Jails
Now that we know how to get started with FreeBSD, how to manage users and groups, and how to work with services, we’ll look a bit into Jails today. But before we do this, let’s look at what we have on Linux today when it comes to containerization and virtual machines. Obviously we can manage virtual machines, with either KVM or Xen. This is well known and there are several platforms around those which help in managing whole clusters of virtual machines, e.g. Proxmox. The more recent trend is containers, which also come with various options like LXC, Docker, Kubernetes, Rancher, OpenShift, and, and, and. This is somehow all summarized under “Cloud Native” and there is the Cloud Native Computing Foundation (CNCF) which is a kind of umbrella for all those projects. In FreeBSD, there are no containers but there are Jails, which you might see as sitting somewhere between virtual machines and containers, but you can also do virtual machines (or virtualization) on FreeBSD (this will be the topic of another post).
Jails build on top of chroot (which is also known on Linux) but do not only limit access to the file system but also other resources such as network or processes. In short, Jails provide isolation from the host system. How much isolation you’ll get depends on the type of Jail you’re using. The FreeBSD handbook provides a good overview of the advantages and disadvantages of the different types of Jails. In short, the types are:
- Thick – Highest level of isolation
- Thin – Reduced isolation
- Service – No file system isolation
The goal of this post is to create a “thin” Jail, because it is very lightweight and we’ll use thin Jails in a later post to provide PostgreSQL instances. Think of this as a simple PostgreSQL as a service.
Before we can create Jails, we need to enable them with “sysrc”:
root@freebsd14:~ $ sysrc jail_enable="YES"
jail_enable: NO -> YES
root@freebsd14:~ $ sysrc jail_parallel_start="YES"
jail_parallel_start: NO -> YES
The second parameter above is not strictly required for running Jails, but it enables parallel startup of the Jails when the system is booting, which is usually what you want (unless you have dependencies between the Jails).
It doesn’t matter where on disk you put the Jails, but it is recommended to create separate file systems for them, e.g.:
root@freebsd14:~ $ zfs create -o mountpoint=/usr/local/jails zroot/jails
root@freebsd14:~ $ zfs create zroot/jails/media
root@freebsd14:~ $ zfs create zroot/jails/templates
root@freebsd14:~ $ zfs create zroot/jails/containers
“/usr/local/jails/containers” will contain the Jails, and “/usr/local/jails/templates” will contain the template for the thin Jail(s) we’ll be creating (usually you would put those on separate disks, and not in the root zpool, of course). “media” will contain the download of the FreeBSD version we’ll be using. In real life you’d most probably want a file system per Jail and you’d also put some quotas on them, as sketched below.
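A quota on such a data set is a one-liner; a sketch, with the 100G value being just an example:
root@freebsd14:~ $ zfs set quota=100G zroot/jails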
The template for the new Jails is actually a ZFS snapshot, but before we can do this we need to prepare it. The first step is to create a new data set (or file system) which will contain the files for the version of FreeBSD we want the Jail to run on:
root@freebsd14:~ $ zfs create -p zroot/jails/templates/14.1-RELEASE
The next step is to download the userland (everything around, but not the kernel itself). This goes into the media data set:
root@freebsd14:~ $ fetch https://download.freebsd.org/ftp/releases/amd64/amd64/14.1-RELEASE/base.txz -o /usr/local/jails/media/14.1-RELEASE-base.txz
Extract that into the data set we’ve created for the template:
root@freebsd14:~ $ tar -xf /usr/local/jails/media/14.1-RELEASE-base.txz -C /usr/local/jails/templates/14.1-RELEASE --unlink
root@freebsd14:~ $ ls -l /usr/local/jails/templates/14.1-RELEASE
total 86
-rw-r--r-- 2 root wheel 1011 May 31 2024 .cshrc
-rw-r--r-- 2 root wheel 495 May 31 2024 .profile
-r--r--r-- 1 root wheel 6109 May 31 2024 COPYRIGHT
drwxr-xr-x 2 root wheel 49 May 31 2024 bin
drwxr-xr-x 14 root wheel 68 May 31 2024 boot
dr-xr-xr-x 2 root wheel 2 May 31 2024 dev
drwxr-xr-x 30 root wheel 103 May 31 2024 etc
drwxr-xr-x 4 root wheel 78 May 31 2024 lib
drwxr-xr-x 3 root wheel 5 May 31 2024 libexec
drwxr-xr-x 2 root wheel 2 May 31 2024 media
drwxr-xr-x 2 root wheel 2 May 31 2024 mnt
drwxr-xr-x 2 root wheel 2 May 31 2024 net
dr-xr-xr-x 2 root wheel 2 May 31 2024 proc
drwxr-xr-x 2 root wheel 150 May 31 2024 rescue
drwxr-x--- 2 root wheel 7 May 31 2024 root
drwxr-xr-x 2 root wheel 150 May 31 2024 sbin
lrwxr-xr-x 1 root wheel 11 May 31 2024 sys -> usr/src/sys
drwxrwxrwt 2 root wheel 2 May 31 2024 tmp
drwxr-xr-x 14 root wheel 14 May 31 2024 usr
drwxr-xr-x 24 root wheel 24 May 31 2024 var
What we need in addition are the DNS and timezone files (just copy them from the host system):
root@freebsd14:~ $ cp /etc/resolv.conf /usr/local/jails/templates/14.1-RELEASE/etc/resolv.conf
root@freebsd14:~ $ cp /etc/localtime /usr/local/jails/templates/14.1-RELEASE/etc/localtime
As we want the Jails to start from the latest patch level, let’s patch the template:
root@freebsd14:~ $ freebsd-update -b /usr/local/jails/templates/14.1-RELEASE/ fetch install
src component not installed, skipped
Looking up update.FreeBSD.org mirrors... 3 mirrors found.
Fetching metadata signature for 14.1-RELEASE from update2.freebsd.org... done.
Fetching metadata index... done.
Inspecting system... done.
Preparing to download files... done.
The following files will be updated as part of updating to
14.1-RELEASE-p6:
/bin/freebsd-version
/lib/libc++.so.1
...
Scanning /usr/local/jails/templates/14.1-RELEASE/usr/share/certs/trusted for certificates...
done.
Now we have the template ready and create a read only ZFS snapshot out of it:
root@freebsd14:~ $ zfs snapshot zroot/jails/templates/14.1-RELEASE@base
root@freebsd14:~ $ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
zroot/ROOT/default@2024-11-26-10:56:43-0 353M - 1.46G -
zroot/jails/templates/14.1-RELEASE@base 0B - 451M -
All that needs to be done to create a new Jail is to clone this snapshot, e.g.:
root@freebsd14:~ $ zfs clone zroot/jails/templates/14.1-RELEASE@base zroot/jails/containers/thinjail1
root@freebsd14:~ $ ls -l /usr/local/jails/containers/thinjail1/
total 86
-rw-r--r-- 2 root wheel 1011 May 31 2024 .cshrc
-rw-r--r-- 2 root wheel 495 May 31 2024 .profile
-r--r--r-- 1 root wheel 6109 May 31 2024 COPYRIGHT
drwxr-xr-x 2 root wheel 49 Dec 2 13:40 bin
drwxr-xr-x 15 root wheel 69 Dec 2 13:40 boot
dr-xr-xr-x 2 root wheel 2 May 31 2024 dev
drwxr-xr-x 30 root wheel 105 Dec 2 13:40 etc
drwxr-xr-x 4 root wheel 78 Dec 2 13:40 lib
drwxr-xr-x 3 root wheel 5 May 31 2024 libexec
drwxr-xr-x 2 root wheel 2 May 31 2024 media
drwxr-xr-x 2 root wheel 2 May 31 2024 mnt
drwxr-xr-x 2 root wheel 2 May 31 2024 net
dr-xr-xr-x 2 root wheel 2 May 31 2024 proc
drwxr-xr-x 2 root wheel 150 Dec 2 13:40 rescue
drwxr-x--- 2 root wheel 7 May 31 2024 root
drwxr-xr-x 2 root wheel 150 Dec 2 13:40 sbin
lrwxr-xr-x 1 root wheel 11 May 31 2024 sys -> usr/src/sys
drwxrwxrwt 2 root wheel 2 May 31 2024 tmp
drwxr-xr-x 14 root wheel 14 May 31 2024 usr
drwxr-xr-x 24 root wheel 24 May 31 2024 var
Both, the template and the new Jail just consume 451MB of disk space:
root@freebsd14:~ $ df -h | grep jails
zroot/jails 23G 96K 23G 0% /usr/local/jails
zroot/jails/media 24G 199M 23G 1% /usr/local/jails/media
zroot/jails/templates 23G 96K 23G 0% /usr/local/jails/templates
zroot/jails/containers 23G 96K 23G 0% /usr/local/jails/containers
zroot/jails/templates/14.1-RELEASE 24G 451M 23G 2% /usr/local/jails/templates/14.1-RELEASE
zroot/jails/containers/thinjail1 24G 451M 23G 2% /usr/local/jails/containers/thinjail1
The final step is the configuration of the Jail, but this is also pretty straight forward in simple cases. The configuration for our Jail looks like this:
root@freebsd14:~ $ cat /etc/jail.conf.d/thinjail1.conf
thinjail1 {
# STARTUP/LOGGING
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.consolelog = "/var/log/jail_console_${name}.log";
# PERMISSIONS
allow.raw_sockets;
exec.clean;
mount.devfs;
# HOSTNAME/PATH
host.hostname = "${name}";
path = "/usr/local/jails/containers/${name}";
# NETWORK
ip4.addr = 192.168.122.160;
interface = vtnet0;
}
That’s it. The most important point for now is the network configuration: What we want is a new virtual IP address on top of our “vtnet0” interface. This is the IP address clients will use to connect to this Jail. Of course the Jail needs a name, a location and we want the normal startup and shutdown to happen with rc (we’ll look at the other parameters in a later post).
Starting up the Jail:
root@freebsd14:~ $ service jail start thinjail1
Starting jails: thinjail1.
root@freebsd14:~ $ service jail status thinjail1
JID IP Address Hostname Path
thinjail1 192.168.122.160 thinjail1 /usr/local/jails/containers/thinjail1
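Because ip4.addr is added as an alias on vtnet0, the address is also visible on the host while the Jail is running:
root@freebsd14:~ $ ifconfig vtnet0 | grep 192.168.122.160
	inet 192.168.122.160 netmask 0xffffffff broadcast 192.168.122.160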
While it is recommended to manage a Jail from the host system, you can also jump into the Jail:
root@freebsd14:~ $ jexec -u root thinjail1
root@thinjail1:/ $ ifconfig
vtnet0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
options=4c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
ether 52:54:00:af:79:80
inet 192.168.122.160 netmask 0xffffffff broadcast 192.168.122.160
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
lo0: flags=1008049<UP,LOOPBACK,RUNNING,MULTICAST,LOWER_UP> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
groups: lo
root@thinjail1:/ $ freebsd-version ; uname -a
14.1-RELEASE-p6
FreeBSD thinjail1 14.1-RELEASE-p5 FreeBSD 14.1-RELEASE-p5 GENERIC amd64
Pretty easy to set up and really lightweight. Put a bit of automation around this and you should have a new Jail up and running in a few seconds.
In the next post we’ll build on this to create a simple PostgreSQL as a service platform.
Convert Single Instance database to RAC on an ODA
In this article, I will show you how to convert a SI (Single Instance) database to a RAC database on an ODA using rconfig. Of course this is only possible with the ODA 2-HA model as it comes with a two-node cluster. There is no way to run a RAC database on the ODA light models. I will also ensure that there is no impact on the other databases running in the same oracle home. This is why I will create 2 test databases with the same oracle home. Please note that there will be downtime, as the database needs to be restarted.
Preparation
I will first create a new oracle home on the existing ODA.
[root@node0 ~]# odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
03e59f95-e77f-4429-a9fc-466bea89545b     OraDB19000_home4     19.23.0.0.240416     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4     CONFIGURED

[root@node0 ~]# odacli create-dbhome -de EE -v 19.23.0.0.240416
Job details
----------------------------------------------------------------
            ID:  0f02be31-a2b5-4ba1-af66-76c83d9808f2
   Description:  Database Home OraDB19000_home5 creation with version :19.23.0.0.240416
        Status:  Created
       Created:  November 26, 2024 4:02:27 PM CET
       Message:  Create Database Home

Task Name                                Node Name  Start Time                        End Time                          Status
---------------------------------------- ---------- --------------------------------- --------------------------------- -------

[root@node0 ~]# odacli describe-job -i 0f02be31-a2b5-4ba1-af66-76c83d9808f2
Job details
----------------------------------------------------------------
            ID:  0f02be31-a2b5-4ba1-af66-76c83d9808f2
   Description:  Database Home OraDB19000_home5 creation with version :19.23.0.0.240416
        Status:  Success
       Created:  November 26, 2024 4:02:27 PM CET
       Message:  Create Database Home

Task Name                                Node Name  Start Time                        End Time                          Status
---------------------------------------- ---------- --------------------------------- --------------------------------- -------
Setting up SSH equivalence               node0      November 26, 2024 4:02:49 PM CET  November 26, 2024 4:02:54 PM CET  Success
Setting up SSH equivalence               node0      November 26, 2024 4:02:54 PM CET  November 26, 2024 4:02:58 PM CET  Success
Creating ACFS database home              node0      November 26, 2024 4:02:58 PM CET  November 26, 2024 4:02:58 PM CET  Success
Validating dbHome available space        node0      November 26, 2024 4:02:59 PM CET  November 26, 2024 4:02:59 PM CET  Success
Validating dbHome available space        node1      November 26, 2024 4:02:59 PM CET  November 26, 2024 4:02:59 PM CET  Success
Creating DbHome Directory                node1      November 26, 2024 4:03:01 PM CET  November 26, 2024 4:03:01 PM CET  Success
Create required directories              node0      November 26, 2024 4:03:01 PM CET  November 26, 2024 4:03:01 PM CET  Success
Extract DB clone                         node0      November 26, 2024 4:03:02 PM CET  November 26, 2024 4:05:00 PM CET  Success
ProvDbHome by using RHP                  node0      November 26, 2024 4:05:00 PM CET  November 26, 2024 4:08:20 PM CET  Success
Enable DB options                        node0      November 26, 2024 4:08:21 PM CET  November 26, 2024 4:08:48 PM CET  Success
Creating wallet for DB Client            node0      November 26, 2024 4:09:02 PM CET  November 26, 2024 4:09:03 PM CET  Success

[root@node0 ~]# odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
03e59f95-e77f-4429-a9fc-466bea89545b     OraDB19000_home4     19.23.0.0.240416     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4     CONFIGURED
1c6059a2-f6c7-4bca-a07a-8efc0757ed08     OraDB19000_home5     19.23.0.0.240416     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5     CONFIGURED
I will create 2 new databases on this new dbhome : TEST1 and TEST2.
[root@node0 ~]# odacli list-databases
ID                                       DB Name    DB Type  DB Version           CDB     Class    Edition  Shape    Storage  Status       DB Home ID
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
2d824a9f-735a-4e8d-b6c8-5393ddc894e9     DBSI6      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
0393d997-50aa-4511-b5b9-c4ff2da393db     DBGI3      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
712d542e-ded7-4d1a-9b9d-7c335042ffc0     DAWHT      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
28026894-0c2d-417b-b11a-d76516805247     DBSI1      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
11a12489-2483-4f8a-bb60-7145417181a1     DBSI4      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
cd183219-3daa-4154-b4a4-41b92d4f8155     DBBI1      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
fdfe3197-223f-4660-a834-4736f50110ef     DBSI2      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
7391380b-f609-4457-be6b-bd9afa51148c     DBBI2      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
ce4350ed-e291-4815-8c43-3c6716d6402f     DBGI6      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
274e6069-b174-43fb-8625-70e1e333f160     DBSI5      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
52eadf14-4d20-4910-91ca-a335361d53b2     RCDB       SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
ec54945d-d0de-4b92-8822-2bd0d31fe653     DBSI3      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
95420f39-db33-4c4d-8d85-a5f8d42945e6     DBBI3      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1     DBBI4      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
707251cc-f19a-4b8c-89cc-63477c5747d0     DBGI4      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
2bbd4391-5eed-4878-b2e5-3670587527f6     DBBI6      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
f540b5d1-c074-457a-85e2-d35240541efd     DBGI2      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
16a24733-cfba-4e75-a9ce-59b3779dc82e     DBGI1      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b

[root@node0 ~]# odacli create-database -n TEST1 -u TEST1 -y SI -g 0 -cl OLTP -no-c -no-co -cs UTF8 -ns AL16UTF16 -l AMERICAN -dt AMERICA -dh 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 -s odb1 -r ACFS
Enter SYS and SYSTEM user password:
Retype SYS and SYSTEM user password:

Job details
----------------------------------------------------------------
            ID:  88cd622d-7896-4a7c-b2d5-bb113438b2d2
   Description:  Database service creation with DB name: TEST1
        Status:  Created
       Created:  November 26, 2024 4:12:37 PM CET
       Message:

Task Name                                Start Time                        End Time                          Status
---------------------------------------- --------------------------------- --------------------------------- -------

[root@node0 ~]# odacli describe-job -i 88cd622d-7896-4a7c-b2d5-bb113438b2d2

Job details
----------------------------------------------------------------
            ID:  88cd622d-7896-4a7c-b2d5-bb113438b2d2
   Description:  Database service creation with DB name: TEST1
        Status:  Success
       Created:  November 26, 2024 4:12:37 PM CET
       Message:

Task Name                                Node Name  Start Time                        End Time                          Status
---------------------------------------- ---------- --------------------------------- --------------------------------- -------
Setting up SSH equivalence               node0      November 26, 2024 4:12:45 PM CET  November 26, 2024 4:12:49 PM CET  Success
Setting up SSH equivalence               node0      November 26, 2024 4:12:49 PM CET  November 26, 2024 4:12:52 PM CET  Success
Creating volume dclTEST1                 node0      November 26, 2024 4:12:53 PM CET  November 26, 2024 4:13:14 PM CET  Success
Creating volume datTEST1                 node0      November 26, 2024 4:13:14 PM CET  November 26, 2024 4:13:35 PM CET  Success
Creating ACFS filesystem for DATA        node0      November 26, 2024 4:13:36 PM CET  November 26, 2024 4:14:06 PM CET  Success
Database Service creation                node0      November 26, 2024 4:14:10 PM CET  November 26, 2024 4:27:39 PM CET  Success
Database Creation by RHP                 node0      November 26, 2024 4:14:10 PM CET  November 26, 2024 4:24:27 PM CET  Success
Change permission for xdb wallet files   node1      November 26, 2024 4:24:29 PM CET  November 26, 2024 4:24:32 PM CET  Success
Place SnapshotCtrlFile in sharedLoc      node0      November 26, 2024 4:24:33 PM CET  November 26, 2024 4:24:37 PM CET  Success
SqlPatch upgrade                         node0      November 26, 2024 4:26:06 PM CET  November 26, 2024 4:26:28 PM CET  Success
Running dbms_stats init_package          node0      November 26, 2024 4:26:29 PM CET  November 26, 2024 4:26:32 PM CET  Success
Set log_archive_dest for Database        node0      November 26, 2024 4:26:32 PM CET  November 26, 2024 4:26:34 PM CET  Success
Updating the Database version            node1      November 26, 2024 4:26:35 PM CET  November 26, 2024 4:26:40 PM CET  Success
Create Users tablespace                  node0      November 26, 2024 4:27:39 PM CET  November 26, 2024 4:27:42 PM CET  Success
Clear all listeners from Database        node0      November 26, 2024 4:27:43 PM CET  November 26, 2024 4:27:44 PM CET  Success
Copy Pwfile to Shared Storage            node0      November 26, 2024 4:27:47 PM CET  November 26, 2024 4:27:50 PM CET  Success
Configure All Candidate Nodes            node0      November 26, 2024 4:27:50 PM CET  November 26, 2024 4:27:52 PM CET  Success

[root@node0 ~]# odacli create-database -n TEST2 -u TEST2 -y SI -g 0 -cl OLTP -no-c -no-co -cs UTF8 -ns AL16UTF16 -l AMERICAN -dt AMERICA -dh 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 -s odb1 -r ACFS
Enter SYS and SYSTEM user password:
Retype SYS and SYSTEM user password:

Job details
----------------------------------------------------------------
            ID:  95f53872-96c8-4eb4-903f-f9e5a4701db5
   Description:  Database service creation with DB name: TEST2
        Status:  Created
       Created:  November 26, 2024 4:29:45 PM CET
       Message:

Task Name                                Start Time                        End Time                          Status
---------------------------------------- --------------------------------- --------------------------------- -------

[root@node0 ~]# odacli describe-job -i 95f53872-96c8-4eb4-903f-f9e5a4701db5

Job details
----------------------------------------------------------------
            ID:  95f53872-96c8-4eb4-903f-f9e5a4701db5
   Description:  Database service creation with DB name: TEST2
        Status:  Success
       Created:  November 26, 2024 4:29:45 PM CET
       Message:

Task Name                                Node Name  Start Time                        End Time                          Status
---------------------------------------- ---------- --------------------------------- --------------------------------- -------
Setting up SSH equivalence               node0      November 26, 2024 4:29:54 PM CET  November 26, 2024 4:29:58 PM CET  Success
Setting up SSH equivalence               node0      November 26, 2024 4:29:58 PM CET  November 26, 2024 4:30:02 PM CET  Success
Creating volume dclTEST2                 node0      November 26, 2024 4:30:03 PM CET  November 26, 2024 4:30:26 PM CET  Success
Creating volume datTEST2                 node0      November 26, 2024 4:30:26 PM CET  November 26, 2024 4:30:51 PM CET  Success
Creating ACFS filesystem for DATA        node0      November 26, 2024 4:30:51 PM CET  November 26, 2024 4:31:24 PM CET  Success
Database Service creation                node0      November 26, 2024 4:31:28 PM CET  November 26, 2024 4:44:52 PM CET  Success
Database Creation by RHP                 node0      November 26, 2024 4:31:28 PM CET  November 26, 2024 4:41:51 PM CET  Success
Change permission for xdb wallet files   node1      November 26, 2024 4:41:53 PM CET  November 26, 2024 4:41:55 PM CET  Success
Place SnapshotCtrlFile in sharedLoc      node0      November 26, 2024 4:41:55 PM CET  November 26, 2024 4:42:00 PM CET  Success
SqlPatch upgrade                         node0      November 26, 2024 4:43:26 PM CET  November 26, 2024 4:43:47 PM CET  Success
Running dbms_stats init_package          node0      November 26, 2024 4:43:47 PM CET  November 26, 2024 4:43:49 PM CET  Success
Set log_archive_dest for Database        node0      November 26, 2024 4:43:49 PM CET  November 26, 2024 4:43:52 PM CET  Success
Updating the Database version            node1      November 26, 2024 4:43:52 PM CET  November 26, 2024 4:43:57 PM CET  Success
Create Users tablespace                  node0      November 26, 2024 4:44:52 PM CET  November 26, 2024 4:44:56 PM CET  Success
Clear all listeners from Database        node0      November 26, 2024 4:44:57 PM CET  November 26, 2024 4:44:59 PM CET  Success
Copy Pwfile to Shared Storage            node0      November 26, 2024 4:45:02 PM CET  November 26, 2024 4:45:05 PM CET  Success
Configure All Candidate Nodes            node0      November 26, 2024 4:45:05 PM CET  November 26, 2024 4:45:07 PM CET  Success
So we have 2 new databases, TEST1 and TEST2, on the new oracle home dbhome_5.
[root@node0 ~]# odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
03e59f95-e77f-4429-a9fc-466bea89545b     OraDB19000_home4     19.23.0.0.240416     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4     CONFIGURED
1c6059a2-f6c7-4bca-a07a-8efc0757ed08     OraDB19000_home5     19.23.0.0.240416     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5     CONFIGURED

[root@node0 ~]# odacli list-databases
ID                                       DB Name    DB Type  DB Version           CDB     Class    Edition  Shape    Storage  Status       DB Home ID
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
2d824a9f-735a-4e8d-b6c8-5393ddc894e9     DBSI6      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
0393d997-50aa-4511-b5b9-c4ff2da393db     DBGI3      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
712d542e-ded7-4d1a-9b9d-7c335042ffc0     DAWHT      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
28026894-0c2d-417b-b11a-d76516805247     DBSI1      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
11a12489-2483-4f8a-bb60-7145417181a1     DBSI4      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
cd183219-3daa-4154-b4a4-41b92d4f8155     DBBI1      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
fdfe3197-223f-4660-a834-4736f50110ef     DBSI2      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
7391380b-f609-4457-be6b-bd9afa51148c     DBBI2      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
ce4350ed-e291-4815-8c43-3c6716d6402f     DBGI6      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
274e6069-b174-43fb-8625-70e1e333f160     DBSI5      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
52eadf14-4d20-4910-91ca-a335361d53b2     RCDB       SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
ec54945d-d0de-4b92-8822-2bd0d31fe653     DBSI3      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
95420f39-db33-4c4d-8d85-a5f8d42945e6     DBBI3      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1     DBBI4      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
707251cc-f19a-4b8c-89cc-63477c5747d0     DBGI4      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
2bbd4391-5eed-4878-b2e5-3670587527f6     DBBI6      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
f540b5d1-c074-457a-85e2-d35240541efd     DBGI2      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
16a24733-cfba-4e75-a9ce-59b3779dc82e     DBGI1      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   03e59f95-e77f-4429-a9fc-466bea89545b
4358287c-9cf0-45d4-a7e3-a59f933e86b2     TEST1      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   1c6059a2-f6c7-4bca-a07a-8efc0757ed08
110f26e7-f9f3-412e-9443-a201d24201a0     TEST2      SI       19.23.0.0.240416     false   OLTP     EE       odb1     ACFS     CONFIGURED   1c6059a2-f6c7-4bca-a07a-8efc0757ed08
[root@node0 ~]#

Check TEST1 and TEST2 Single Instance databases
The TEST1 database is open READ WRITE.
oracle@node0:~/ [rdbms1900] TEST1
******************************************************
INSTANCE_NAME   : TEST1
DB_NAME         : TEST1
DB_UNIQUE_NAME  : TEST1
STATUS          : OPEN READ WRITE
LOG_MODE        : ARCHIVELOG
USERS/SESSIONS  : Normal: 0/0, Oracle-maintained: 2/6
DATABASE_ROLE   : PRIMARY
FLASHBACK_ON    : NO
FORCE_LOGGING   : YES
VERSION         : 19.23.0.0.0
NLS_LANG        : AMERICAN_AMERICA.UTF8
CDB_ENABLED     : NO
******************************************************
Statustime: 2024-11-27 09:25:43
The name of the instance is the name of the database, TEST1. It is a single instance and we chose node0 as the node (option -g 0 of the odacli command). This is confirmed below: there is only one instance, running on node 0.
oracle@node0:~/ [TEST1] ps -ef | grep -i [p]mon | grep -i test1
oracle 22300 1 0 Nov26 ? 00:00:03 ora_pmon_TEST1

oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test
oracle@node1:~/ [rdbms192300_a]

oracle@node0:~/ [TEST1] srvctl status database -d TEST1
Instance TEST1 is running on node node0
Checking the grid cluster configuration, we see that we only have one instance and two configured nodes:
Database instance: TEST1
Configured nodes: node0,node1
oracle@node0:~/ [TEST1] srvctl config database -d TEST1 Database unique name: TEST1 Database name: TEST1 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST1/dbs/spfileTEST1.ora Password file: /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: DATA Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: SINGLE OSDBA group: dba OSOPER group: dbaoper Database instance: TEST1 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed
And we can see that the cluster_database instance parameter is set to FALSE.
oracle@node0:~/ [TEST1] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 09:32:33 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database FALSE SQL>
The same can be checked on the TEST2 database.
oracle@node0:~/ [TEST1] TEST2 ****************************************************** INSTANCE_NAME : TEST2 DB_NAME : TEST2 DB_UNIQUE_NAME : TEST2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/5 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 09:33:30 oracle@node0:~/ [TEST2] ps -ef | grep -i [p]mon | grep -i test2 oracle 89478 1 0 Nov26 ? 00:00:03 ora_pmon_TEST2 oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test oracle@node1:~/ [rdbms192300_a] oracle@node0:~/ [TEST2] srvctl status database -d TEST2 Instance TEST2 is running on node node0 oracle@node0:~/ [TEST2] srvctl config database -d TEST2 Database unique name: TEST2 Database name: TEST2 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST2/dbs/spfileTEST2.ora Password file: /u02/app/oracle/oradata/TEST2/dbs/orapwTEST2 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: DATA Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST2,/u03/app/oracle/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: SINGLE OSDBA group: dba OSOPER group: dbaoper Database instance: TEST2 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed oracle@node0:~/ [TEST2] oracle@node0:~/ [TEST2] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 09:34:40 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database FALSE SQL>
Convert TEST1 database to RAC using rconfig
rconfig template
rconfig uses an XML configuration file. Templates can be found in the Oracle dbhome.
oracle@node0:~/ [TEST1] ls -ltrh $ORACLE_HOME/assistants/rconfig/sampleXMLs total 8.0K -rw-r----- 1 oracle oinstall 2.6K Mar 9 2018 ConvertToRAC_PolicyManaged.xml -rw-r----- 1 oracle oinstall 2.5K Jul 16 2018 ConvertToRAC_AdminManaged.xml
We will use the ConvertToRAC_AdminManaged.xml template and adapt it to our needs in a new XML file, named ConvertToRAC_TEST1.xml, in order to convert the TEST1 database.
oracle@node0:~/mwagner/ [TEST1] mkdir rconfig_xml oracle@node0:~/mwagner/ [TEST1] cd rconfig_xml oracle@node0:~/mwagner/rconfig_xml/ [TEST1] cp -p $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml ./ConvertToRAC_TEST1.xml oracle@node0:~/mwagner/rconfig_xml/ [TEST1] vi ConvertToRAC_TEST1.xml
Let’s look at the updated XML file that will be used for the conversion.
oracle@node0:~/mwagner/rconfig_xml/ [TEST1] cat ConvertToRAC_TEST1.xml <?xml version="1.0" encoding="UTF-8"?> <n:RConfig xmlns:n="http://www.oracle.com/rconfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/rconfig rconfig.xsd"> <n:ConvertToRAC> <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY --> <n:Convert verify="ONLY"> <!--Specify current OracleHome of non-rac database for SourceDBHome --> <n:SourceDBHome>/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5</n:SourceDBHome> <!--Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome --> <n:TargetDBHome>/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5</n:TargetDBHome> <!--Specify SID of non-rac database --> <n:SourceDBInfo SID="TEST1"/> <!--Specify the list of nodes that should have rac instances running for the Admin Managed Cluster Database. LocalNode should be the first node in this nodelist. --> <n:NodeList> <n:Node name="node0"/> <n:Node name="node1"/> </n:NodeList> <!--Specify RacOneNode along with servicename to convert database to RACOne Node --> <!--n:RacOneNode servicename="salesrac1service"/--> <!--Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name.--> <n:InstancePrefix></n:InstancePrefix> <!-- Listener details are no longer needed starting 11.2. Database is registered with default listener and SCAN listener running from Oracle Grid Infrastructure home. --> <!--Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. ASM credentials are no needed for conversion. --> <n:SharedStorage type="ASM"> <!--Specify Database Area Location to be configured for rac database.If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. --> <n:TargetDatabaseArea></n:TargetDatabaseArea> <!--Specify Fast Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. --> <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea> </n:SharedStorage> </n:Convert> </n:ConvertToRAC> </n:RConfig>
Some of the XML tags are easy to understand; others need to be handled carefully:
- Convert verify: Can be YES, NO or ONLY. If set to ONLY, rconfig will only check whether the conversion is possible but will not run it. With YES or NO, the conversion will be run. I would strongly recommend using YES and avoiding NO.
- SourceDBHome: DB home used by the database to convert, in our case /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5.
- TargetDBHome: DB home the RAC database will use. We chose to keep the same one.
- SourceDBInfo SID: Database SID, in our case TEST1.
- NodeList / Node name: List of all node hostnames.
- RacOneNode servicename: N/A; as you can see, this tag has been commented out (note the !--).
- InstancePrefix: Prefix used to name the instances. In our case we leave it blank, so the db_unique_name is used as prefix and the instances are named db_unique_name[1-X]: TEST11 and TEST12.
- SharedStorage type: Here we need to pay attention. We are using ACFS, so we need to make sure we enter ASM. Configuring it incorrectly might delete all the database files.
- TargetDatabaseArea: Here we need to pay attention too. We are using ACFS, so we need to make sure to leave it blank. Pointing it at the ACFS database files directory would incorrectly delete all the database files.
- TargetFlashRecoveryArea: We keep it blank as we will reuse the same recovery area.
Please pay particular attention to the SharedStorage type and TargetDatabaseArea parameters.
Check/test conversion
We make sure that the Convert verify value is set to ONLY in the XML file, so rconfig will only check whether the conversion is possible and will not run any conversion.
oracle@node0:~/mwagner/rconfig_xml/ [TEST1] grep -i "Convert verify" ConvertToRAC_TEST1.xml <n:Convert verify="ONLY"> oracle@node0:~/mwagner/rconfig_xml/ [TEST1]
I will now run rconfig.
oracle@node0:~/mwagner/rconfig_xml/ [TEST1] cd $ORACLE_HOME/bin oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] ./rconfig ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml Specify sys user password for the database <?xml version="1.0" ?> <RConfig version="1.1" > <ConvertToRAC> <Convert> <Response> <Result code="0" > Operation Succeeded </Result> </Response> <ReturnValue type="object"> There is no return value for this step </ReturnValue> </Convert> </ConvertToRAC></RConfig>
The rconfig conversion check was executed successfully, and the database is ready to be converted. Note the result code of 0 and the message “Operation Succeeded”.
Convert the database
We can now convert the database. We change the XML file to set Convert verify to YES.
oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] vi ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] grep -i "Convert verify" ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml <n:Convert verify="YES">
And we can run the conversion.
oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] ./rconfig ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml Specify sys user password for the database Converting Database "TEST1.swisslos.local" to Cluster Database. Target Oracle Home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5. Database Role: PRIMARY. Setting Data Files and Control Files Adding Trace files Adding Database Instances Create temporary password file Adding Redo Logs Enabling threads for all Database Instances Setting TEMP tablespace Adding UNDO tablespaces Setting Fast Recovery Area Updating Oratab Creating Password file(s) Configuring related CRS resources Starting Cluster Database <?xml version="1.0" ?> <RConfig version="1.1" > <ConvertToRAC> <Convert> <Response> <Result code="0" > Operation Succeeded </Result> </Response> <ReturnValue type="object"> <Oracle_Home> /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 </Oracle_Home> <Database type="ADMIN_MANAGED" > <InstanceList> <Instance SID="TEST11" Node="node0" > </Instance> <Instance SID="TEST12" Node="node1" > </Instance> </InstanceList> </Database> </ReturnValue> </Convert> </ConvertToRAC></RConfig> oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1]
Note the result code of 0 and the message “Operation Succeeded”. This means the conversion was executed successfully.
UNDO and REDO creation
As expected, since we now have two instances, we need two UNDO tablespaces and two sets of online redo logs. We can see that an additional UNDO tablespace has been created.
We can check this on the file system and see the additional UNDO tablespace file.
Before:
oracle@node1:~/ [rdbms192300_a] ls -ltrh /u02/app/oracle/oradata/TEST1/TEST1/datafile/ total 2.4G -rw-r----- 1 oracle asmadmin 5.1M Nov 26 16:32 o1_mf_users_mnct7fwn_.dbf -rw-r----- 1 oracle asmadmin 252M Nov 26 22:06 o1_mf_temp_mncso2l7_.tmp -rw-r----- 1 oracle asmadmin 1011M Nov 27 10:18 o1_mf_sysaux_mncslcxc_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:18 o1_mf_undotbs1_mncslvbn_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 10:25 o1_mf_system_mncsk8kf_.dbf
After:
oracle@node1:~/ [rdbms192300_a] ls -ltrh /u02/app/oracle/oradata/TEST1/TEST1/datafile/ total 2.6G -rw-r----- 1 oracle asmadmin 5.1M Nov 26 16:32 o1_mf_users_mnct7fwn_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:18 o1_mf_undotbs1_mncslvbn_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:42 o1_mf_undotbs2_mnft7oh9_.dbf -rw-r----- 1 oracle asmadmin 1011M Nov 27 10:42 o1_mf_sysaux_mncslcxc_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 10:42 o1_mf_system_mncsk8kf_.dbf -rw-r----- 1 oracle asmadmin 284M Nov 27 10:42 o1_mf_temp_mncso2l7_.tmp
This can, of course, also be confirmed from the alert log file.
oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] grep -i UNDOTBS2 alert_TEST11.log create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED Completed: create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='TEST12'; oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] vi alert_TEST11.log ... ... ... 2024-11-27T10:40:05.316493+01:00 ALTER SYSTEM SET undo_tablespace='UNDOTBS1' SCOPE=SPFILE SID='TEST11'; create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED 2024-11-27T10:40:06.594880+01:00 Completed: create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED 2024-11-27T10:40:06.726472+01:00 ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='TEST12'; 2024-11-27T10:40:06.735353+01:00 ALTER SYSTEM RESET undo_tablespace SCOPE=SPFILE SID='*';
In the alert log we can also find the creation of the new redo log groups for the additional instance.
oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] grep -i "alter database add logfile" alert_TEST11.log alter database add logfile thread 2 group 4 size 4294967296 Completed: alter database add logfile thread 2 group 4 size 4294967296 alter database add logfile thread 2 group 5 size 4294967296 Completed: alter database add logfile thread 2 group 5 size 4294967296 alter database add logfile thread 2 group 6 size 4294967296 Completed: alter database add logfile thread 2 group 6 size 4294967296 oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] vi alert_TEST11.log ... ... ... 2024-11-27T10:34:46.267851+01:00 CJQ0 started with pid=67, OS id=69666 alter database add logfile thread 2 group 4 size 4294967296 2024-11-27T10:34:49.541664+01:00 Completed: alter database add logfile thread 2 group 4 size 4294967296 alter database add logfile thread 2 group 5 size 4294967296 2024-11-27T10:34:52.100314+01:00 Completed: alter database add logfile thread 2 group 5 size 4294967296 alter database add logfile thread 2 group 6 size 4294967296 2024-11-27T10:34:55.037728+01:00 Completed: alter database add logfile thread 2 group 6 size 4294967296
Check that TEST1 database is now a RAC database
Let’s check that TEST1 database is now a RAC database.
The database is open READ/WRITE.
oracle@node0:~/ [rdbms1900] TEST11 ****************************************************** INSTANCE_NAME : TEST11 DB_NAME : TEST1 DB_UNIQUE_NAME : TEST1 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 10:45:40
We now have two instances for the database, named db_unique_name[1-2], one running on each node.
oracle@node0:~/ [TEST11] ps -ef | grep -i [p]mon | grep -i test1 oracle 92045 1 0 10:41 ? 00:00:00 ora_pmon_TEST11 oracle@node0:~/ [TEST11] oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test oracle 6721 1 0 10:41 ? 00:00:00 ora_pmon_TEST12 oracle@node1:~/ [rdbms192300_a] oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is running on node node0 Instance TEST12 is running on node node1
The grid infrastructure configuration has been updated with the additional instance.
oracle@node0:~/ [TEST11] srvctl config database -d TEST1 Database unique name: TEST1 Database name: TEST1 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST1/dbs/spfileTEST1.ora Password file: /u02/app/oracle/oradata/TEST1//orapwTEST1 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: Mount point paths: /u01/app/odaorahome Services: Type: RAC Start concurrency: Stop concurrency: OSDBA group: dba OSOPER group: dbaoper Database instances: TEST11,TEST12 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed oracle@node0:~/ [TEST11]
As we can see, we will still need to fine-tune the “Password file” and “Mount point paths” settings.
I also checked that the cluster_database parameter is now set to TRUE for both instances, and confirmed that the datafiles, logfiles and tempfiles are still the ones expected in the respective ACFS file systems.
oracle@node0:~/ [TEST11] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 10:50:34 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database TRUE 2 cluster_database TRUE SQL> select name from v$datafile; NAME ------------------------------------------------------------------------------------------------------------------------ /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_system_mncsk8kf_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_undotbs2_mnft7oh9_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_sysaux_mncslcxc_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_undotbs1_mncslvbn_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_users_mnct7fwn_.dbf SQL> select member from v$logfile; MEMBER ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_3_mncsngg8_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_2_mncsngdv_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_1_mncsngbc_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_4_mnfsxpww_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_5_mnfsxsls_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_6_mnfsxw39_.log 6 rows selected. SQL> select name from v$tempfile; NAME -------------------------------------------------------------------------------- /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_temp_mncso2l7_.tmp SQL>
And the files exist in the appropriate folders. All good!
oracle@node0:~/ [TEST11] ls -ltrh /u02/app/oracle/oradata/TEST1/TEST1/datafile/ total 2.6G -rw-r----- 1 oracle asmadmin 5.1M Nov 27 10:41 o1_mf_users_mnct7fwn_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:41 o1_mf_undotbs2_mnft7oh9_.dbf -rw-r----- 1 oracle asmadmin 284M Nov 27 10:52 o1_mf_temp_mncso2l7_.tmp -rw-r----- 1 oracle asmadmin 1.1G Nov 27 10:52 o1_mf_system_mncsk8kf_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:52 o1_mf_undotbs1_mncslvbn_.dbf -rw-r----- 1 oracle asmadmin 1011M Nov 27 10:52 o1_mf_sysaux_mncslcxc_.dbf oracle@node0:~/ [TEST11] ls -ltrh /u04/app/oracle/redo/TEST1/onlinelog/ total 25G -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:34 o1_mf_5_mnfsxsls_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_2_mncsngdv_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_3_mncsngg8_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_4_mnfsxpww_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_6_mnfsxw39_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:52 o1_mf_1_mncsngbc_.log oracle@node0:~/ [TEST11] oracle@node0:~/ [TEST11] ls -ltrh /u04/app/oracle/redo/TEST1/controlfile/ total 11M -rw-r----- 1 oracle asmadmin 11M Nov 27 11:23 o1_mf_mncsnfst_.ctl
Check TEST2 database
I also checked and ensured that nothing was modified for the other TEST2 database running in the same Oracle dbhome.
oracle@node0:~/ [TEST11] TEST2 ****************************************************** INSTANCE_NAME : TEST2 DB_NAME : TEST2 DB_UNIQUE_NAME : TEST2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 11:18:49 oracle@node0:~/ [TEST2] ps -ef | grep -i [p]mon | grep -i test2 oracle 89478 1 0 Nov26 ? 00:00:04 ora_pmon_TEST2 oracle@node0:~/ [TEST2] oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test2 oracle@node1:~/ [rdbms192300_a] oracle@node0:~/ [TEST2] srvctl status database -d TEST2 Instance TEST2 is running on node node0 oracle@node0:~/ [TEST2] oracle@node0:~/ [TEST2] srvctl config database -d TEST2 Database unique name: TEST2 Database name: TEST2 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST2/dbs/spfileTEST2.ora Password file: /u02/app/oracle/oradata/TEST2/dbs/orapwTEST2 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: DATA Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST2,/u03/app/oracle/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: SINGLE OSDBA group: dba OSOPER group: dbaoper Database instance: TEST2 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed oracle@node0:~/ [TEST2] oracle@node0:~/ [TEST2] ls -ltrh /u02/app/oracle/oradata/TEST2/TEST2/datafile/ total 2.5G -rw-r----- 1 oracle asmadmin 5.1M Nov 26 16:50 o1_mf_users_mncv7rhb_.dbf -rw-r----- 1 oracle asmadmin 252M Nov 26 22:06 o1_mf_temp_mnctonly_.tmp -rw-r----- 1 oracle asmadmin 96M Nov 27 11:20 o1_mf_undotbs1_mnctmgrj_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 11:20 o1_mf_system_mnctkw20_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 11:20 o1_mf_sysaux_mnctlzhd_.dbf oracle@node0:~/ [TEST2] ls -ltrh /u04/app/oracle/redo/TEST2/onlinelog/ total 13G -rw-r----- 1 oracle asmadmin 4.1G Nov 26 16:44 o1_mf_2_mncto2qz_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 26 16:44 o1_mf_3_mncto2s1_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 11:21 o1_mf_1_mncto2nv_.log oracle@node0:~/ [TEST2] ls -ltrh /u04/app/oracle/redo/TEST2/controlfile/ total 11M -rw-r----- 1 oracle asmadmin 11M Nov 27 11:21 o1_mf_mncto22y_.ctl oracle@node0:~/ [TEST2] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 11:21:34 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. 
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database FALSE SQL> select name from v$datafile; NAME ------------------------------------------------------------------------------------------------------------------------ /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_system_mnctkw20_.dbf /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_sysaux_mnctlzhd_.dbf /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_undotbs1_mnctmgrj_.dbf /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_users_mncv7rhb_.dbf SQL> select name from v$tempfile; NAME ------------------------------------------------------------------------------------------------------------------------ /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_temp_mnctonly_.tmp SQL> select member from v$logfile; MEMBER ------------------------------------------------------------------------------------------------------------------------ /u04/app/oracle/redo/TEST2/onlinelog/o1_mf_3_mncto2s1_.log /u04/app/oracle/redo/TEST2/onlinelog/o1_mf_2_mncto2qz_.log /u04/app/oracle/redo/TEST2/onlinelog/o1_mf_1_mncto2nv_.log
Change grid infra password file for converted TEST1 database
As we could see in the checks, the grid infrastructure configuration references a password file created by the conversion, which is not stored in the appropriate subdirectory.
The conversion has created a new password file.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] ls -ltrh total 224K drwx------ 2 root root 64K Nov 26 16:14 lost+found drwxr-x--- 3 oracle asmadmin 20K Nov 26 16:15 TEST1 drwxrwx--- 2 oracle oinstall 20K Nov 26 16:26 arc10 drwxr-x--- 2 oracle oinstall 20K Nov 26 16:27 dbs -rw-r----- 1 oracle oinstall 2.0K Nov 27 10:41 orapwTEST1
Let’s confirm that the previous password file still exists.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] ls -ltrh /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 -rw-r----- 1 oracle asmadmin 2.0K Nov 26 16:17 /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1
I then updated the value in the grid infrastructure configuration.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Password Password file: /u02/app/oracle/oradata/TEST1//orapwTEST1 oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl modify database -d TEST1 -pwfile /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Password Password file: /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1
Change grid infra mount point paths for converted TEST1 database
All ACFS mount paths used by the database should be added to the grid infrastructure configuration again so the dependencies are correct.
Here is the list of the ACFS file systems the database will be using.
oracle@node1:~/ [rdbms192300_a] df -h /u01/app/odaorahome Filesystem Size Used Avail Use% Mounted on /dev/asm/orahome_sh-446 80G 30G 51G 37% /u01/app/odaorahome oracle@node1:~/ [rdbms192300_a] df -h /u02/app/oracle/oradata/TEST1 Filesystem Size Used Avail Use% Mounted on /dev/asm/dattest1-446 100G 3.0G 98G 3% /u02/app/oracle/oradata/TEST1 oracle@node1:~/ [rdbms192300_a] df -h /u03/app/oracle/ Filesystem Size Used Avail Use% Mounted on /dev/asm/reco-348 7.5T 18G 7.5T 1% /u03/app/oracle oracle@node1:~/ [rdbms192300_a] df -h /u04/app/oracle/redo Filesystem Size Used Avail Use% Mounted on /dev/asm/redo-195 240G 93G 148G 39% /u04/app/oracle/redo oracle@node1:~/ [rdbms192300_a] df -h /u01/app/odaorabase0 Filesystem Size Used Avail Use% Mounted on /dev/asm/odabase_n0-446 100G 11G 90G 11% /u01/app/odaorabase0 oracle@node1:~/ [rdbms192300_a] df -h /u01/app/odaorabase1 Filesystem Size Used Avail Use% Mounted on /dev/asm/odabase_n1-446 100G 3.4G 97G 4% /u01/app/odaorabase1
I then updated the grid infrastructure mount point paths value.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Mount Mount point paths: /u01/app/odaorahome oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl modify database -d TEST1 -acfspath "/u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle,/u04/app/oracle/redo,/u01/app/odaorabase0,/u01/app/odaorabase1" oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Mount Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle/,/u04/app/oracle/redo/,/u01/app/odaorabase0,/u01/app/odaorabase1
Final check and database restart
I did some final grid infrastructure configuration checks.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 Database unique name: TEST1 Database name: TEST1 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST1/dbs/spfileTEST1.ora Password file: /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle/,/u04/app/oracle/redo/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: RAC Start concurrency: Stop concurrency: OSDBA group: dba OSOPER group: dbaoper Database instances: TEST11,TEST12 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed
And I tested stopping and starting the database with srvctl.
oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is running on node node0 Instance TEST12 is running on node node1 oracle@node0:~/ [TEST11] srvctl stop database -d TEST1 oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is not running on node node0 Instance TEST12 is not running on node node1 oracle@node0:~/ [TEST11] ps -ef | grep -i [p]mon | grep -i test1 oracle@node0:~/ [TEST11] oracle@node1:~/ [TEST12] ps -ef | grep -i [p]mon | grep -i test oracle@node1:~/ [TEST12] oracle@node0:~/ [TEST11] TEST11 ************************* INSTANCE_NAME : TEST11 STATUS : DOWN ************************* Statustime: 2024-11-27 11:15:51 oracle@node1:~/ [TEST12] TEST12 ************************* INSTANCE_NAME : TEST12 STATUS : DOWN ************************* Statustime: 2024-11-27 11:16:01 oracle@node0:~/ [TEST11] srvctl start database -d TEST1 oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is running on node node0 Instance TEST12 is running on node node1 oracle@node0:~/ [TEST11] ps -ef | grep -i [p]mon | grep -i test1 oracle 60785 1 0 11:16 ? 00:00:00 ora_pmon_TEST11 oracle@node0:~/ [TEST11] oracle@node1:~/ [TEST12] ps -ef | grep -i [p]mon | grep -i test oracle 83531 1 0 11:16 ? 00:00:00 ora_pmon_TEST12 oracle@node1:~/ [TEST12] oracle@node0:~/ [TEST11] TEST11 ****************************************************** INSTANCE_NAME : TEST11 DB_NAME : TEST1 DB_UNIQUE_NAME : TEST1 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 11:17:48 oracle@node1:~/ [TEST12] TEST12 ****************************************************** INSTANCE_NAME : TEST12 DB_NAME : TEST1 DB_UNIQUE_NAME : TEST1 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 11:17:59
Update ODA metadata
I checked the ODA metadata and could see that the database is still referenced as a SI database.
[root@node0 ~]# odacli list-dbhomes ID Name DB Version DB Edition Home Location Status ---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ---------- 03e59f95-e77f-4429-a9fc-466bea89545b OraDB19000_home4 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 OraDB19000_home5 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 CONFIGURED [root@node0 ~]# odacli list-databases ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID ---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ---------------------------------------- 2d824a9f-735a-4e8d-b6c8-5393ddc894e9 DBSI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 0393d997-50aa-4511-b5b9-c4ff2da393db DBGI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 712d542e-ded7-4d1a-9b9d-7c335042ffc0 DAWHT SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 28026894-0c2d-417b-b11a-d76516805247 DBSI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 11a12489-2483-4f8a-bb60-7145417181a1 DBSI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b cd183219-3daa-4154-b4a4-41b92d4f8155 DBBI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b fdfe3197-223f-4660-a834-4736f50110ef DBSI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 7391380b-f609-4457-be6b-bd9afa51148c DBBI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ce4350ed-e291-4815-8c43-3c6716d6402f DBGI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 274e6069-b174-43fb-8625-70e1e333f160 DBSI5 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 52eadf14-4d20-4910-91ca-a335361d53b2 RCDB SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ec54945d-d0de-4b92-8822-2bd0d31fe653 DBSI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 95420f39-db33-4c4d-8d85-a5f8d42945e6 DBBI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1 DBBI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 707251cc-f19a-4b8c-89cc-63477c5747d0 DBGI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2bbd4391-5eed-4878-b2e5-3670587527f6 DBBI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b f540b5d1-c074-457a-85e2-d35240541efd DBGI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 16a24733-cfba-4e75-a9ce-59b3779dc82e DBGI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 4358287c-9cf0-45d4-a7e3-a59f933e86b2 TEST1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 110f26e7-f9f3-412e-9443-a201d24201a0 TEST2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 [root@node0 ~]#
I updated the ODA metadata.
[root@node0 ~]# odacli update-registry -n db Job details ---------------------------------------------------------------- ID: 650c800a-823c-4e57-afcc-3a6530eb402c Description: Discover Components : db Status: Created Created: November 27, 2024 11:26:39 AM CET Message: Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- [root@node0 ~]# odacli describe-job -i 650c800a-823c-4e57-afcc-3a6530eb402c Job details ---------------------------------------------------------------- ID: 650c800a-823c-4e57-afcc-3a6530eb402c Description: Discover Components : db Status: Success Created: November 27, 2024 11:26:39 AM CET Message: Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- Discover DBHome node0 November 27, 2024 11:26:56 AM CET November 27, 2024 11:27:02 AM CET Success Discover DBHome node0 November 27, 2024 11:27:02 AM CET November 27, 2024 11:27:07 AM CET Success Discover DB: DAWHT node1 November 27, 2024 11:27:07 AM CET November 27, 2024 11:27:19 AM CET Success Discover DB: DBBI1 node0 November 27, 2024 11:27:19 AM CET November 27, 2024 11:27:32 AM CET Success Discover DB: DBBI2 node0 November 27, 2024 11:27:32 AM CET November 27, 2024 11:27:45 AM CET Success Discover DB: DBBI3 node0 November 27, 2024 11:27:45 AM CET November 27, 2024 11:27:58 AM CET Success Discover DB: DBBI4 node1 November 27, 2024 11:27:58 AM CET November 27, 2024 11:28:10 AM CET Success Discover DB: DBBI6 node0 November 27, 2024 11:28:10 AM CET November 27, 2024 11:28:26 AM CET Success Discover DB: DBGI1 node0 November 27, 2024 11:28:26 AM CET November 27, 2024 11:28:38 AM CET Success Discover DB: DBGI2 node0 November 27, 2024 11:28:38 AM CET November 27, 2024 11:28:50 AM CET Success Discover DB: DBGI3 node0 November 27, 2024 11:28:50 AM CET November 27, 2024 11:29:02 AM CET Success Discover DB: DBGI4 node1 November 27, 2024 11:29:02 AM CET November 27, 2024 11:29:14 AM CET Success Discover DB: DBGI6 node0 November 27, 2024 11:29:14 AM CET November 27, 2024 11:29:30 AM CET Success Discover DB: DBSI1 node0 November 27, 2024 11:29:30 AM CET November 27, 2024 11:29:42 AM CET Success Discover DB: DBSI2 node0 November 27, 2024 11:29:42 AM CET November 27, 2024 11:29:54 AM CET Success Discover DB: DBSI3 node0 November 27, 2024 11:29:54 AM CET November 27, 2024 11:30:06 AM CET Success Discover DB: DBSI4 node1 November 27, 2024 11:30:07 AM CET November 27, 2024 11:30:18 AM CET Success Discover DB: DBSI5 node1 November 27, 2024 11:30:19 AM CET November 27, 2024 11:30:30 AM CET Success Discover DB: DBSI6 node0 November 27, 2024 11:30:30 AM CET November 27, 2024 11:30:43 AM CET Success Discover DB: RCDB node0 November 27, 2024 11:30:43 AM CET November 27, 2024 11:31:01 AM CET Success Discover DB: TEST1 node0 November 27, 2024 11:31:01 AM CET November 27, 2024 11:31:15 AM CET Success Discover DB: TEST2 node0 November 27, 2024 11:31:15 AM CET November 27, 2024 11:31:29 AM CET Success [root@node0 ~]#
I could confirm that all is good now: the TEST1 database is seen as a RAC database.
[root@node0 ~]# odacli list-databases ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID ---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ---------------------------------------- 712d542e-ded7-4d1a-9b9d-7c335042ffc0 DAWHT SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b cd183219-3daa-4154-b4a4-41b92d4f8155 DBBI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 7391380b-f609-4457-be6b-bd9afa51148c DBBI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 95420f39-db33-4c4d-8d85-a5f8d42945e6 DBBI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1 DBBI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2bbd4391-5eed-4878-b2e5-3670587527f6 DBBI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 16a24733-cfba-4e75-a9ce-59b3779dc82e DBGI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b f540b5d1-c074-457a-85e2-d35240541efd DBGI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 0393d997-50aa-4511-b5b9-c4ff2da393db DBGI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 707251cc-f19a-4b8c-89cc-63477c5747d0 DBGI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ce4350ed-e291-4815-8c43-3c6716d6402f DBGI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 28026894-0c2d-417b-b11a-d76516805247 DBSI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b fdfe3197-223f-4660-a834-4736f50110ef DBSI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ec54945d-d0de-4b92-8822-2bd0d31fe653 DBSI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 11a12489-2483-4f8a-bb60-7145417181a1 DBSI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 274e6069-b174-43fb-8625-70e1e333f160 DBSI5 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2d824a9f-735a-4e8d-b6c8-5393ddc894e9 DBSI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 52eadf14-4d20-4910-91ca-a335361d53b2 RCDB SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 4358287c-9cf0-45d4-a7e3-a59f933e86b2 TEST1 RAC 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 110f26e7-f9f3-412e-9443-a201d24201a0 TEST2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 [root@node0 ~]#
To wrap up
As we can see, it is quite easy to convert a Single Instance database to a RAC database on an ODA. There are a few rconfig XML template parameters that need attention to ensure there is no data loss. I would strongly recommend testing the procedure first on a TEST database created just for this purpose.
Save Report to Power BI Report Server: Resolving the ‘Unexpected Error Encountered’
Hello, Readers!
Recently, I was tasked with preparing a demo for Power BI Report Server. Everything proceeded smoothly until I reached the step of deploying a report to the server, where I encountered an interesting authentication challenge that highlights the complexities of working with mixed domain environments. The report deployment phase revealed an unexpected authentication error that required a deeper understanding of Windows authentication mechanisms and network name resolution.
This blog provides a detailed solution to avoid this error, ensuring seamless deployment.
2 Context
To simulate a production-like environment, I built my lab with the following components:
- Development Environment: A domain-joined laptop running Power BI Desktop
- Database Server: A Hyper-V virtual machine (WIN-LAB1) hosting the database infrastructure
- Power BI Report Server: A separate Hyper-V virtual machine (WIN-LAB3) for report hosting
The workflow was designed to:
- Develop a report on the laptop using Power BI Desktop.
- Save the report to the Power BI Report Server to enable on-premises sharing of reports and dashboards.
3 Implementation
The implementation involved several steps:
3.1 Development environment
My development environment consisted of a domain-joined laptop running Power BI Desktop. It’s crucial to note that Power BI Desktop comes in two distinct flavors:
Standard Power BI Desktop
- Designed for cloud service deployment
- Features monthly release cycles
- Includes preview features
Power BI Desktop for Report Server
- Specifically optimized for on-premises deployment
- Follows a triannual release schedule (January, May, September)
- Limited to generally available features
The required application was Power BI Desktop for Power BI Report Server (September 2024 version), ensuring compatibility with the installed server version. You can download this version from here.
3.2 Database Server
WIN-LAB1 is a VM running Windows Server 2022, with SQL Server 2022 and SSMS 20.2 installed. This VM is not part of an Active Directory domain. The AdventureWorksDW2022 database backup, available here, was used as the data source for report development.
The Developer Edition of SQL Server 2022, licensed for non-production use, can be downloaded here.
3.3 Power BI Report Server
WIN-LAB3 is another VM running Power BI Report Server in Developer mode, which can be downloaded here. Like the database server, this VM is not part of any Active Directory domain.
The server’s database, which resides on WIN-LAB1, is named ReportServer_PBIRS (instead of the standard ReportServer) because WIN-LAB1 also hosts the database of an SSRS instance (named ReportServer_SSRS).
The following pictures describe the settings of the Report Server:
3.4 Report development
The report itself was developed using the AdventureWorksDW2022 sample database, analyzing sales trends by year and weekday.
3.5 Report deployment
The deployment process appeared straightforward: File → Save as → Power BI Report Server. However, this led to an “Unexpected error encountered” message that required further investigation.
4 Root Cause Analysis
On WIN-LAB3, neither the Event Viewer logs nor the server log files (C:\Program Files\Microsoft Power BI Report Server\PBIRS\LogFiles) provided useful details.
However …
The development machine’s Event Viewer exposed two critical issues:
- Name Resolution failure on the domain controller
- Active Directory Replication Latency issues
The error occurs because Power BI Report Server defaults to Windows Authentication. This requires:
- Both machines to be in a trusted domain.
- Valid credentials for access.
Since WIN-LAB3 is not part of a domain but my laptop is, the system failed to resolve WIN-LAB3’s name and authenticate using valid credentials.
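This is easy to confirm from the development machine before applying any fix: as long as no DNS record or hosts file entry exists for the standalone server, basic name resolution checks against it fail (WIN-LAB3 being the server name used in this lab).
ping WIN-LAB3
nslookup WIN-LAB3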
5 Solution
To resolve the issue, I implemented the following steps:
Step 1: Retrieve the IP Address of WIN-LAB3
- Connect to WIN-LAB3 and execute the ipconfig command to obtain its IP address.
Step 2: Update the Laptop’s Hosts File
- Add an entry mapping the IP address of WIN-LAB3 to its name in the hosts file (C:\Windows\System32\drivers\etc\hosts).
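As an illustration, assuming the ipconfig output of Step 1 returned the (hypothetical) address 192.168.1.30, the hosts file entry would look like this:
# C:\Windows\System32\drivers\etc\hosts
192.168.1.30    WIN-LAB3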
Step 3: Configure Network Credentials
- Open Credential Manager (Control Panel > Credential Manager > Windows Credentials).
- Add a new Windows credential:
- Internet or network address: WIN-LAB3.
- Username: An existing account on WIN-LAB3 (e.g., WIN-LAB3\Administrator).
- Password: Corresponding password for the above account.
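If you prefer the command line, the same credential can be stored with the built-in cmdkey utility; the account is just the example used in this lab:
cmdkey /add:WIN-LAB3 /user:WIN-LAB3\Administrator /pass:<password>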
Following these steps, I successfully saved the report to the Power BI Report Server.
Finally, the report was available on the Report Server.
6 Conclusion
When working with Power BI Report Server in an environment where the development machine and server are not in the same domain, success depends on properly addressing two key technical requirements:
- Name Resolution: Ensure proper DNS resolution between development and server machines
- Authentication: Configure appropriate credential mapping in Windows Credential Manager
This implementation demonstrates how to bridge the authentication gap between domain-joined and standalone systems in a Power BI Report Server deployment.
When developing a Power BI report on a separate machine from the Power BI Report Server, you should ensure that the two machines are able to communicate, mainly:
- The Development machine (where Power BI Desktop is installed) can resolve the Power BI Report Server name
- A valid Power BI Report Server account is defined on the development machine via the Credential Manager to allow seamless communication between the two machines.
Stay tuned for more insights in future posts!
Offloading PostgreSQL Backups to Azure Blob Storage Using PGBackRest, Managed Identity, and SAS Tokens
One of the nice things with cloud providers is the low cost of their storage. Whenever you want to optimize your long-term backups and push them to Azure, you might face a wall of questions about how to do it securely with PostgreSQL and whether it would align with your disaster recovery plans.
The following setup is an example of how you can keep local PITR backups while securely offloading your long-term full backups to Azure Blob Storage, reducing storage costs and extending your backup infrastructure to the cloud.
The architecture consists of the following components:
- PostgreSQL Standalone Server (PG1): An Azure VM hosting PostgreSQL 17.2
- Local PGBackRest Repository: Stores backups locally on the PG1 server for quick access.
- Azure Blob Storage Account: Acts as a secondary, offsite repository for backups.
- Managed Identity: Enables the VM to authenticate with Azure services without storing credentials on the VM.
- SAS Token: Provides secure, time-limited access to the Blob Storage container.
- Azure Key Vault: Securely stores sensitive information like the SAS token (optional but recommended).
+--------------------+ +------------------------+
| Azure VM (PG1) | | Azure Blob Storage |
| PostgreSQL 17 |<------> Repo 2 (Remote Backup)|
| | | (Secondary Storage) |
| +--------------+ | +------------------------+
| | PGBackRest | |
| | Repo 1 | | ^
| | (Local | | |
| | Storage) | | |
| +--------------+ | |
| | |
| | |
| +---------------+ | Managed Identity |
| | Azure CLI |<----------------------+
| +---------------+ |
| | SAS Token | |
| +---------------+ |
+--------------------+
- Azure Subscription: Active subscription to deploy resources.
- Azure VM (PG1): Running a compatible Linux distribution with PostgreSQL installed.
- PGBackRest Installed: On the server.
- Azure CLI Installed: Version 2.0 or higher on the server.
- Managed Identity Enabled: On the Azure VM.
- Azure Blob Storage Account and Container: Created for storing backups.
- Navigate to Your VM in Azure Portal:
- Go to Azure Portal > Virtual Machines > Your VM (PG1).
- Enable System-Assigned Managed Identity:
- Click on Identity in the left-hand menu.
- Under the System assigned tab, set the status to On.
- Click Save.
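If you prefer the command line, the same can be achieved with Azure CLI; the resource group name below is a placeholder for this lab:
# Enable the system-assigned managed identity on the PG1 VM
az vm identity assign --resource-group <resource-group> --name PG1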
- Create a Storage Account:
- Go to Azure Portal > Storage Accounts > Create.
- Provide a unique Storage Account Name
- Choose the appropriate Subscription, Resource Group, and Location.
- Select Standard performance and BlobStorage account kind.
- Click Review + create and then Create.
- Create a Blob Container:
- Navigate to your storage account.
- Under Data storage, select Containers.
- Click + Container.
- Name the container
- Set the Public access level to Private.
- Click Create.
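A possible CLI equivalent of these two steps is sketched below; the account and container names match the ones used later in this post, while the resource group and location are placeholders:
# Create the storage account and the private container for the backups
az storage account create --name pgbackreststorage --resource-group <resource-group> \
  --location westeurope --sku Standard_LRS --kind BlobStorage --access-tier Hot
az storage container create --name backups --account-name pgbackreststorage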
- Assign Role to Managed Identity:
- Go to your Storage Account > Access Control (IAM).
- Click + Add > Add role assignment.
- In the Role dropdown, select Storage Blob Data Contributor.
- In the Assign access to dropdown, select Managed identity.
- Click + Select members, search for your VM’s name, and select it.
- Click Select and then Review + assign.
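From the CLI, the role assignment could look like this sketch; the VM's managed identity principal ID is retrieved first, and the scope must point to your own subscription and resource group:
# Look up the VM's managed identity, then grant it access to the storage account
PRINCIPAL_ID=$(az vm show --resource-group <resource-group> --name PG1 \
  --query identity.principalId --output tsv)
az role assignment create --assignee "$PRINCIPAL_ID" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/pgbackreststorage"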
Adapt the required parameters according to your infrastructure.
This works well for standalone servers. It can also serve as a base for a Patroni setup; in that case, you would need to modify the local pgBackRest configuration and push it to the other nodes as well, for example with scp, as sketched below.
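As a minimal sketch, assuming two additional Patroni nodes named node2 and node3 (hypothetical names), the push could be as simple as:
# Distribute the updated pgBackRest configuration to the other cluster nodes
scp /etc/pgbackrest.conf postgres@node2:/etc/pgbackrest.conf
scp /etc/pgbackrest.conf postgres@node3:/etc/pgbackrest.conf
The following script renews the SAS token and updates the pgBackRest configuration accordingly: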
#!/bin/bash
## Renew Azure Blob Storage SAS Token for PGBackRest
# Authenticate Azure CLI
az login --identity
# Variables
STORAGE_ACCOUNT_NAME="storage-account"
CONTAINER_NAME="container-name"
PG_BACKREST_CONF="/etc/pgbackrest.conf"
EXPIRY_DATE=$(date -u -d "+6 hours" +"%Y-%m-%dT%H:%M:%SZ")
# Retrieve the Storage Account Key
ACCOUNT_KEY=$(az storage account keys list \
--resource-group "cloud-shell-storage-westeurope" \
--account-name "$STORAGE_ACCOUNT_NAME" \
--query "[0].value" \
--output tsv)
if [ -z "$ACCOUNT_KEY" ]; then
echo "Failed to retrieve storage account key."
exit 1
fi
# Generate SAS Token
SAS_TOKEN=$(az storage container generate-sas \
--account-name "$STORAGE_ACCOUNT_NAME" \
--name "$CONTAINER_NAME" \
--permissions acdlrw \
--expiry "$EXPIRY_DATE" \
--https-only \
--account-key "$ACCOUNT_KEY" \
--output tsv)
if [ -z "$SAS_TOKEN" ]; then
echo "Failed to generate SAS token."
exit 1
fi
echo "SAS Token generated successfully. It will expire on $EXPIRY_DATE."
# Update PGBackRest configuration with the SAS token
echo "Updating PGBackRest configuration on backup server..."
sudo awk -v sas_token="$SAS_TOKEN" '/^repo2-azure-key=/ {$0="repo2-azure-key=" sas_token} {print}' "$PG_BACKREST_CONF" > /tmp/pgbackrest.conf.tmp && sudo mv /tmp/pgbackrest.conf.tmp "$PG_BACKREST_CONF"
if [ $? -eq 0 ]; then
echo "PGBackRest configuration updated successfully."
else
echo "Failed to update PGBackRest configuration."
exit 1
fi
echo "SAS token update completed."
Update your /etc/pgbackrest.conf file to include the Azure Blob Storage as a secondary repository.
[global]
repo1-path=/u99/pgbackrest
repo1-retention-full=2
repo2-type=azure
repo2-path=/
repo2-azure-container=backups
repo2-azure-account=pgbackreststorage
repo2-azure-key=se=2024-12-02T05%3A36%3A34Z&sp=racwdl&spr=https&sv=2022-11-02&sr=c&sig=d16aYx67bpUT/MVNerRQG9cnIQYJ46%2Bggt3jeEC8fnQ%3D
repo2-azure-key-type=sas
repo2-retention-full=2
[pg17]
pg1-path=/u02/pgdata/17/PG17
pg1-port=5435
db-socket-path=/tmp
Note: The repo2-azure-key value (the SAS token) will be dynamically updated by your script.
- Create your stanza and run a Manual Backup:
postgres@PG1:/home/postgres/renewSAS/ [PG17] pgbackrest --config=/etc/pgbackrest.conf --stanza=pg17 --log-level-console=info stanza-create
2024-12-01 17:36:56.834 P00 INFO: stanza-create command begin 2.54.0: --config=/etc/pgbackrest.conf --exec-id=10839-70ab76ac --log-level-console=info --pg1-path=/u02/pgdata/17/PG17 --pg1-port=5435 --pg1-socket-path=/tmp --repo2-azure-account=<redacted> --repo2-azure-container=backups --repo2-azure-key=<redacted> --repo2-azure-key-type=sas --repo1-path=/u99/pgbackrest --repo2-path=/ --repo2-type=azure --stanza=pg17
2024-12-01 17:36:56.844 P00 INFO: stanza-create for stanza 'pg17' on repo1
2024-12-01 17:36:56.844 P00 INFO: stanza 'pg17' already exists on repo1 and is valid
2024-12-01 17:36:56.844 P00 INFO: stanza-create for stanza 'pg17' on repo2
2024-12-01 17:36:57.473 P00 INFO: stanza-create command end: completed successfully (642ms)
postgres@PG1:/home/postgres/renewSAS/ [PG17] pgbackrest --config=/etc/pgbackrest.conf --stanza=pg17 --log-level-console=info --repo=2 --archive-copy backup
2024-12-01 17:37:22.303 P00 INFO: backup command begin 2.54.0: --archive-copy --config=/etc/pgbackrest.conf --exec-id=10844-31614f9c --log-level-console=info --pg1-path=/u02/pgdata/17/PG17 --pg1-port=5435 --pg1-socket-path=/tmp --repo=2 --repo2-azure-account=<redacted> --repo2-azure-container=backups --repo2-azure-key=<redacted> --repo2-azure-key-type=sas --repo1-path=/u99/pgbackrest --repo2-path=/ --repo1-retention-full=2 --repo2-retention-full=2 --repo2-type=azure --stanza=pg17
2024-12-01 17:37:22.550 P00 WARN: no prior backup exists, incr backup has been changed to full
2024-12-01 17:37:22.550 P00 INFO: execute non-exclusive backup start: backup begins after the next regular checkpoint completes
2024-12-01 17:37:22.793 P00 INFO: backup start archive = 00000001000000010000008F, lsn = 1/8F000028
2024-12-01 17:37:22.793 P00 INFO: check archive for prior segment 00000001000000010000008E
2024-12-01 17:39:10.512 P00 INFO: execute non-exclusive backup stop and wait for all WAL segments to archive
2024-12-01 17:39:10.685 P00 INFO: backup stop archive = 00000001000000010000008F, lsn = 1/8F000158
2024-12-01 17:39:10.727 P00 INFO: check archive for segment(s) 00000001000000010000008F:00000001000000010000008F
2024-12-01 17:39:11.750 P00 INFO: new backup label = 20241201-173722F
2024-12-01 17:39:12.556 P00 INFO: full backup size = 1.5GB, file total = 1592
2024-12-01 17:39:12.556 P00 INFO: backup command end: completed successfully (110257ms)
2024-12-01 17:39:12.556 P00 INFO: expire command begin 2.54.0: --config=/etc/pgbackrest.conf --exec-id=10844-31614f9c --log-level-console=info --repo=2 --repo2-azure-account=<redacted> --repo2-azure-container=backups --repo2-azure-key=<redacted> --repo2-azure-key-type=sas --repo1-path=/u99/pgbackrest --repo2-path=/ --repo1-retention-full=2 --repo2-retention-full=2 --repo2-type=azure --stanza=pg17
2024-12-01 17:39:12.708 P00 INFO: expire command end: completed successfully (152ms)
postgres@PG1:/home/postgres/renewSAS/ [PG17] pgbackrest --config=/etc/pgbackrest.conf --stanza=pg17 --log-level-console=info --repo=1 backup
2024-12-01 17:45:32.275 P00 INFO: backup command begin 2.54.0: --config=/etc/pgbackrest.conf --exec-id=10957-81c3e6e7 --log-level-console=info --pg1-path=/u02/pgdata/17/PG17 --pg1-port=5435 --pg1-socket-path=/tmp --repo=1 --repo2-azure-account=<redacted> --repo2-azure-container=backups --repo2-azure-key=<redacted> --repo2-azure-key-type=sas --repo1-path=/u99/pgbackrest --repo2-path=/ --repo1-retention-full=2 --repo2-retention-full=2 --repo2-type=azure --stanza=pg17
2024-12-01 17:45:32.300 P00 INFO: last backup label = 20241201-164054F_20241201-164358I, version = 2.54.0
2024-12-01 17:45:32.300 P00 INFO: execute non-exclusive backup start: backup begins after the next regular checkpoint completes
2024-12-01 17:45:32.564 P00 INFO: backup start archive = 000000010000000100000091, lsn = 1/91000028
2024-12-01 17:45:32.564 P00 INFO: check archive for prior segment 000000010000000100000090
2024-12-01 17:45:34.211 P00 INFO: execute non-exclusive backup stop and wait for all WAL segments to archive
2024-12-01 17:45:34.392 P00 INFO: backup stop archive = 000000010000000100000091, lsn = 1/91000120
2024-12-01 17:45:34.422 P00 INFO: check archive for segment(s) 000000010000000100000091:000000010000000100000091
2024-12-01 17:45:34.790 P00 INFO: new backup label = 20241201-164054F_20241201-174532I
2024-12-01 17:45:35.022 P00 INFO: incr backup size = 605.7KB, file total = 1591
2024-12-01 17:45:35.022 P00 INFO: backup command end: completed successfully (2750ms)
2024-12-01 17:45:35.022 P00 INFO: expire command begin 2.54.0: --config=/etc/pgbackrest.conf --exec-id=10957-81c3e6e7 --log-level-console=info --repo=1 --repo2-azure-account=<redacted> --repo2-azure-container=backups --repo2-azure-key=<redacted> --repo2-azure-key-type=sas --repo1-path=/u99/pgbackrest --repo2-path=/ --repo1-retention-full=2 --repo2-retention-full=2 --repo2-type=azure --stanza=pg17
2024-12-01 17:45:35.023 P00 INFO: expire command end: completed successfully (1ms)
Note: I use the "--archive-copy" option here to back up the required WAL together with the FULL backup for repo 2. This avoids having parts of the WAL chain spread across both repositories. The idea here is that you do your local (repo 1) backups for PITR purposes and your long-term backups (weekly/monthly/yearly) on repo 2.
Verify Backups:
In the output below, note that each backup set has a repo line specifying where its files are stored.
postgres@PG1:/home/postgres/renewSAS/ [PG17] pgbackrest --config=/etc/pgbackrest.conf --stanza=pg17 --log-level-console=info info
stanza: pg17
status: ok
cipher: none
db (current)
wal archive min/max (17): 000000010000000100000087/000000010000000100000091
full backup: 20241201-164054F
timestamp start/stop: 2024-12-01 16:40:54+00 / 2024-12-01 16:41:49+00
wal start/stop: 000000010000000100000089 / 000000010000000100000089
database size: 1.5GB, database backup size: 1.5GB
repo1: backup set size: 91MB, backup size: 91MB
incr backup: 20241201-164054F_20241201-164358I
timestamp start/stop: 2024-12-01 16:43:58+00 / 2024-12-01 16:44:00+00
wal start/stop: 00000001000000010000008D / 00000001000000010000008D
database size: 1.5GB, database backup size: 604.5KB
repo1: backup set size: 91MB, backup size: 27.6KB
backup reference total: 1 full
full backup: 20241201-173722F
timestamp start/stop: 2024-12-01 17:37:22+00 / 2024-12-01 17:39:10+00
wal start/stop: 00000001000000010000008F / 00000001000000010000008F
database size: 1.5GB, database backup size: 1.5GB
repo2: backup set size: 91MB, backup size: 91MB
incr backup: 20241201-164054F_20241201-174532I
timestamp start/stop: 2024-12-01 17:45:32+00 / 2024-12-01 17:45:34+00
wal start/stop: 000000010000000100000091 / 000000010000000100000091
database size: 1.5GB, database backup size: 605.7KB
repo1: backup set size: 91MB, backup size: 27.7KB
backup reference total: 1 full, 1 incr
Verify Remote Backup Files:
Use Azure Portal or Azure CLI to check the Blob Storage container for uploaded backup files.
az storage blob list --account-name pgbackreststorage --container-name backups --output table
Step 7: Automate the Script Execution
Schedule the script to run periodically to renew the SAS token before it expires.
Edit Crontab:
sudo crontab -e
Add Cron Job:
0 */11 * * * /path/to/renew_sas_token.sh >> /var/log/pgbackrest_sas_renew.log 2>&1
- Runs the script at 00:00, 11:00 and 22:00 (at most 11 hours apart), so the token is always renewed well within its 12-hour validity.
- Redirects output to a log file for troubleshooting.
You can also trigger the script from any automation platform you’d like.
While there are alternative methods, using Managed Identity to generate short-lived SAS tokens provides a good balance between security and manageability.
pgBackRest does not support Managed Identity yet, and it might not be a priority for the project (link), but this solution is valid and secure from a compliance point of view.
This solution integrates seamlessly with pgBackRest: no code changes are required, and it is quite flexible, since you can align the token validity with how often you push to Azure Blob Storage and the permissions are customizable. Automation is easy with a crontab, AAP, etc.
Management is also simplified: token generation is automated, access control is centralized (with Azure AD RBAC policies), and the administrative overhead is reduced by avoiding key management.
Regarding security standards, using short-lived tokens and Managed Identities aligns with security best practices and standards like ISO 27001, SOC 2, GDPR, and revDSG/nLPD. Additionally, actions performed using Managed Identity are logged in Azure AD, providing audit logs for compliance and security reviews, and access to the Blob Storage account can be monitored using Azure Storage analytics to detect unauthorized access attempts.
Obviously, some enhancements could be made: you could configure TLS for pgBackRest and backup encryption, even though the added value is limited here (the connection is local and Azure Blob Storage is already encrypted at rest); a sketch follows below. Overall, I feel this offers a good foundation to help you set up and offload your backup repository to Azure.
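If you later decide you do want client-side encryption for the Azure repository, pgBackRest supports it natively. A minimal sketch of the additional pgbackrest.conf lines (the passphrase is a placeholder; note that the cipher settings must be in place before the stanza is created, an existing repository cannot be converted in place):
repo2-cipher-type=aes-256-cbc
repo2-cipher-pass=change-this-passphrase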
Appendix: Full Script and config
17:45:39 postgres@PG1:/home/postgres/renewSAS/ [PG17] cat renewSASToken.bash
#!/bin/bash
## Renew Azure Blob Storage SAS Token for PGBackRest
# Authenticate Azure CLI
az login --identity
# Variables
STORAGE_ACCOUNT_NAME="pgbackreststorage"
CONTAINER_NAME="backups"
PG_BACKREST_CONF="/etc/pgbackrest.conf"
EXPIRY_DATE=$(date -u -d "+12 hours" +"%Y-%m-%dT%H:%M:%SZ")
# Retrieve the Storage Account Key
ACCOUNT_KEY=$(az storage account keys list \
--resource-group "cloud-shell-storage-westeurope" \
--account-name "$STORAGE_ACCOUNT_NAME" \
--query "[0].value" \
--output tsv)
if [ -z "$ACCOUNT_KEY" ]; then
echo "Failed to retrieve storage account key."
exit 1
fi
# Generate SAS Token
SAS_TOKEN=$(az storage container generate-sas \
--account-name "$STORAGE_ACCOUNT_NAME" \
--name "$CONTAINER_NAME" \
--permissions acdlrw \
--expiry "$EXPIRY_DATE" \
--https-only \
--account-key "$ACCOUNT_KEY" \
--output tsv)
if [ -z "$SAS_TOKEN" ]; then
echo "Failed to generate SAS token."
exit 1
fi
echo "SAS Token generated successfully. It will expire on $EXPIRY_DATE."
# Update PGBackRest configuration with the SAS token
echo "Updating PGBackRest configuration on backup server..."
sudo awk -v sas_token="$SAS_TOKEN" '/^repo2-azure-key=/ {$0="repo2-azure-key=" sas_token} {print}' "$PG_BACKREST_CONF" > /tmp/pgbackrest.conf.tmp && sudo mv /tmp/pgbackrest.conf.tmp "$PG_BACKREST_CONF"
if [ $? -eq 0 ]; then
echo "PGBackRest configuration updated successfully."
else
echo "Failed to update PGBackRest configuration."
exit 1
fi
echo "SAS token update completed."
18:16:54 postgres@PG1:/home/postgres/renewSAS/ [PG17] cat /etc/pgbackrest.conf
[global]
repo1-path=/u99/pgbackrest
repo1-retention-full=2
repo2-type=azure
repo2-path=/
repo2-azure-container=backups
repo2-azure-account=pgbackreststorage
repo2-azure-key=se=2024-12-02T05%3A36%3A34Z&sp=racwdl&spr=https&sv=2022-11-02&sr=c&sig=d16aYx67bpUT/MVNerRQG9cnIQYJ46%2Bggt3jeEC8fnQ%3D
repo2-azure-key-type=sas
repo2-retention-full=2
[pg17]
pg1-path=/u02/pgdata/17/PG17
pg1-port=5435
db-socket-path=/tmp
18:17:19 postgres@PG1:/home/postgres/renewSAS/ [PG17] sqh
psql (17.2 dbi services build)
Type "help" for help.
postgres=# show archive_mode;
archive_mode
--------------
on
(1 row)
postgres=# show archive_command;
archive_command
------------------------------------------------------------------------
pgbackrest --config=/etc/pgbackrest.conf --stanza=pg17 archive-push %p
(1 row)
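Should you ever need to restore from the long-term Azure repository, pointing pgBackRest at repo 2 is all it takes. A minimal sketch of a point-in-time restore (the target time is a placeholder):
pgbackrest --config=/etc/pgbackrest.conf --stanza=pg17 --repo=2 \
  --type=time --target="2024-12-01 17:40:00+00" --delta restore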
Project: Modeling a Data Warehouse
Context: My client works in the trading of goods and primary resources, and its business relies on a variety of IT tools. In order to consolidate its data and generate global activity reports, dbi-services was asked to build a Data Warehouse (DWH) and to help set up a data integration process.
Confidentiality: Very high. This level of confidentiality obliges me to anonymize all the data models I am going to present.
Working environment:
- The client's various production tools
- SQL Server Management Studio
- DBSchema
- Azure SQL Data Warehouse
- Azure Synapse pipeline
- Power BI
Project duration: about 30 days
Study of the existing system:
Getting to know the business: First, in order to understand the business as well as possible, I had to look at the client's context. To do so, the lead developer gave me a demo of the main tool, and discussions with the users allowed me to get a better grasp of a business that had been unknown to me until then.
OLTP model of the main tool: While discovering the tool, I discovered in parallel the data model it is built on.
- As explained above, I had to anonymize my client's model as much as possible to respect the confidentiality clauses signed when I joined.
- Therefore, I can only show the schema as a whole and not in detail. The same applies to the DWH.
Business requirements
The model above, together with the study of the business, allowed me to identify the key points of the tool:
- What are my client's main business activities? (KPIs)
- What level of granularity does my client need in order to build solid, meaningful reports?
- What are the key data my client needs in its future reports?
I am often asked to produce data models without the client necessarily having a precise idea of its needs. In this case, my client already had a precise idea of the end result, and we had long discussions about what was feasible and what was not.
During the project, I worked closely with the head of development. The idea was to hand over all the basics of my trade, so that he can maintain the project on a daily basis. Of course, I remain at his disposal in case of a major issue.
Fact table design:
Based on the OLTP model as well as on the many discussions with my client, I was initially able to identify 3 fact tables.
- In order to satisfy a specific request as well as business constraints, I decided to separate some fact tables even though they looked "identical" at first glance.
In the end, I identified 7 key business points that serve as fact tables.
Each of these key points is turned into a fact table. Even though they seem to carry the same information, I opted for a constellation around these 7 fact tables in order to meet performance and information-segregation requirements.
My client's source database contains hundreds of millions of rows in these main tables. On top of that there is a hard constraint, the cloud: costs have to be kept as low as possible, i.e. tables have to stay small enough to avoid moving to a higher subscription tier.
Each fact table represents a different purchase type for my client. We could have used a dimension holding the list of products; however, that would not represent the business well enough. Each element is independent, and each trade on a product type is a fact in itself.
Dimension design:
Once the fact tables have been identified, it is essential to identify all of their constituent elements: the dimensions.
As you can see in the model above, there are about ten dimensions and between 1 and 4 links per table.
- The date dimension is used heavily in my client's business, and there can be up to 6 links per fact table (from the creation dates of the rows in the tool to the purchase and delivery dates). This dimension is a pillar of the client's needs.
- Note that this is a recurring pattern in most DWHs, with a few exceptions.
An OLTP model is based on relations between tables and strong integrity constraints, with the goal of guaranteeing performance on small transactions carrying little information.
Note:
In a DWH, we do not have the same kind of constraints. We process data in bulk and, depending on the client's objectives, one to several times a day. Designing a dimension in a DWH relies on our ability to identify the elements that can be grouped into a single table.
We are working from an OLTP model that is fairly easy to read. Despite a few modeling errors, the facts and dimensions can clearly be identified after a few reading passes. This allowed us to focus on other topics.
Using DBSchema also greatly simplified the task. Its layout concept lets us work on a global DWH while displaying only the information related to one activity (in our case, one layout per fact table and its associated dimensions).
In the end, we have 7 fact tables and about 30 dimensions. Modeling a constellation takes on a whole different shape thanks to this tool.
Relations between dimensions and facts
As seen above, we identified 30 dimensions, of which only 14 are shown in our schema. This is one of the elements, besides the original OLTP model, that pushed me to create 7 fact tables. Some information is common to all facts, but a single fact table linked to all 30 dimensions would have made exploitation far too complex; with one star per fact, managing the model in a tool like Power BI is much simpler.
- One star schema per fact table matches the model of a shared dataset in Power BI Online.
We will cover the setup of the data integration process via an ETL in an upcoming blog.
FreeBSD basics – 3 – Services
This is the third post in the series about getting started with FreeBSD. We’ve looked at the first steps here, and users and groups here. Today we’re looking at how FreeBSD handles services. For those who still don’t like systemd the good news is: There is nothing like systemd in FreeBSD. For those who do like systemd the maybe bad news is: This is not Linux, do not expect things to work the same way in FreeBSD. This is more or less the same when it comes to comparing PostgreSQL to other database systems: They are not the same. Some things might work the same, some features might be the same, but still they are separate systems and you need to adapt what you already know to the specific product. Using and working with FreeBSD as a Linux user is the same story.
We’ve already used the “service” command to restart the network stack in the first post. This is the tool on FreeBSD to interact with services and if you don’t know which services are enabled on a FreeBSD system, the service command can answer that question:
root@freebsd14:~ $ service -e
/etc/rc.d/hostid
/etc/rc.d/zpool
/etc/rc.d/zpoolupgrade
/etc/rc.d/zpoolreguid
/etc/rc.d/zvol
/etc/rc.d/hostid_save
/etc/rc.d/kldxref
/etc/rc.d/zfsbe
/etc/rc.d/zfs
/etc/rc.d/devmatch
/etc/rc.d/cleanvar
/etc/rc.d/rctl
/etc/rc.d/ip6addrctl
/etc/rc.d/mixer
/etc/rc.d/netif
/etc/rc.d/resolv
/etc/rc.d/devd
/etc/rc.d/motd
/etc/rc.d/os-release
/etc/rc.d/virecover
/etc/rc.d/dmesg
/etc/rc.d/gptboot
/etc/rc.d/newsyslog
/etc/rc.d/syslogd
/etc/rc.d/savecore
/etc/rc.d/ntpd
/etc/rc.d/utx
/etc/rc.d/sendmail
/etc/rc.d/sshd
/etc/rc.d/cron
/etc/rc.d/bgfsck
The list of services printed here is already in the right order: the first service in the list is the first one started when the system boots. The question is where that order is defined, and how the rc system knows how to order those scripts for startup, but also for a shutdown of the system. This information comes out of the header of those scripts, e.g. if we take SSH as an example:
root@freebsd14:~ $ head /etc/rc.d/sshd
#!/bin/sh
#
#
# PROVIDE: sshd
# REQUIRE: LOGIN FILESYSTEMS
# KEYWORD: shutdown
. /etc/rc.subr
The “REQUIRE” directive defines what needs to be there before sshd can be started and this is how the order is determined. Internally this is done by rcorder (there are additional directives, not just REQUIRE). When the system is starting up “/etc/rc” checks what needs to be started and then calls the corresponding scripts in “/etc/rc.d” and “/usr/local/etc/rc.d”.
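You can reproduce this ordering yourself by running rcorder against the rc directories. A quick sketch (it prints every script in dependency order, so the output is long and omitted here):
rcorder /etc/rc.d/* /usr/local/etc/rc.d/*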
Starting, stopping, restarting and, depending on the service, other tasks are all done with the “service” command. An easy way to get all the options for a specific service is to give a command you know is not existing:
root@freebsd14:~ $ service sshd a
/etc/rc.d/sshd: unknown directive 'a'.
Usage: /etc/rc.d/sshd [fast|force|one|quiet](start|stop|restart|rcvar|enable|disable|delete|enabled|describe|extracommands|configtest|keygen|reload|status|poll)
“describe” is quite good if you don’t know what a service is about:
root@freebsd14:~ $ service sshd describe
Secure Shell Daemon
We have also already seen in the first post how a service gets enabled. Either you do it using the sysrc utility (which is recommended):
root@freebsd14:~ $ sysrc ssh_enable=yes
ssh_enable: -> yes
… or directly in “/etc/rc.conf”:
root@freebsd14:~ $ grep enable /etc/rc.conf | egrep -v "^#"
sshd_enable="YES"
ntpd_enable="YES"
moused_nondefault_enable="NO"
zfs_enable="YES"
ssh_enable="yes"
The prefix is always the name of the service (or, more precisely, the name of the rc script) followed by an underscore and "enable". Quite easy and simple. Note that this is why the "ssh_enable" line set above has no effect: the rc script is named sshd, so the variable that actually controls the service is "sshd_enable", which is already present in rc.conf.
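A short sketch of tidying this up and verifying the result with sysrc and service (sysrc with just a variable name prints its value, -x removes a variable, and the "enabled" directive returns the status):
sysrc sshd_enable              # print the current value
sysrc -x ssh_enable            # remove the ineffective ssh_enable line
service sshd enabled && echo "sshd is enabled"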
In the next post we’ll look into Jails, which provide a lightweight virtualization layer we then use for installing PostgreSQL instances in a later post.
FreeBSD basics – 2 – Users & Groups
In the last post, we’ve initially configured FreeBSD so that networking is up and running, additional packages can be installed, and the system was patched to the latest release. In this post we’ll look at how users and groups are managed in FreeBSD and what FreeBSD provides when it comes to additional security mechanisms.
On Linux systems users are usually created with useradd, and groups are created with groupadd. There is also usermod and groupmod, which are used to modify users and groups. With FreeBSD you can do all those tasks with pw, no matter if you want to manage users or groups:
root@freebsd14:~ $ pw
usage:
pw [user|group|lock|unlock] [add|del|mod|show|next] [help|switches/values]
By giving the “help” switch to a sub command you may easily check the syntax (or read the man page, of course):
root@freebsd14:~ $ pw group help
usage:
pw group [add|del|mod|show|next] [help|switches/values]
root@freebsd14:~ $ pw group add help
usage: pw groupadd [group|gid] [switches]
-V etcdir alternate /etc location
-R rootdir alternate root directory
-C config configuration file
-q quiet operation
-n group group name
-g gid group id
-M usr1,usr2 add users as group members
-o duplicate gid ok
-Y update NIS maps
-N no update
A typical group would be created like this:
root@freebsd14:~ $ pw group add -n group1 -g 2001
A new user with that group as its primary group would be created like this:
root@freebsd14:~ $ pw user add -n user1 -u 2001 -c "a sample user" -d /home/user1 -g group1 -m -s /usr/local/bin/bash
root@freebsd14:~ $ id -a user1
uid=2001(user1) gid=2001(group1) groups=2001(group1)
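For completeness, modifying and deleting users with pw follows the same pattern. A short sketch, not executed here since user1 shows up again below (the -r switch also removes the home directory):
pw user mod user1 -c "an updated comment"
pw user del user1 -r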
Another option you have is adduser. This is basically a wrapper around “pw” and gives you an interactive way of creating users:
root@freebsd14:~ $ adduser
Username: user2
Full name: second sample user
Uid (Leave empty for default): 2002
Login group [user2]: group2
Group group2 does not exist!
Login group [user2]:
Login group is user2. Invite user2 into other groups? []: group1
Login class [default]:
Shell (sh csh tcsh bash rbash nologin) [sh]: bash
Home directory [/home/user2]:
Home directory permissions (Leave empty for default):
Enable ZFS encryption? (yes/no) [no]:
Use password-based authentication? [yes]:
Use an empty password? (yes/no) [no]:
Use a random password? (yes/no) [no]:
Enter password:
Enter password again:
Lock out the account after creation? [no]:
Username : user2
Password : *****
Full Name : second sample user
Uid : 2002
ZFS dataset : zroot/home/user2
Class :
Groups : user2 group1
Home : /home/user2
Home Mode :
Shell : /usr/local/bin/bash
Locked : no
OK? (yes/no) [yes]: yes
adduser: INFO: Successfully created ZFS dataset (zroot/home/user2).
adduser: INFO: Successfully added (user2) to the user database.
Add another user? (yes/no) [no]: no
Goodbye!
root@freebsd14:~ $ id -a user2
uid=2002(user2) gid=2002(user2) groups=2002(user2),2001(group1)
adduser also created a new ZFS file system for the new user (this did not happen with pw, and this is only done when the parent of the home directory is also ZFS):
root@freebsd14:~ $ df -h | grep user
zroot/home/user2 24G 128K 24G 0% /home/user2
All those defaults can be controlled, as you can ask adduser to create a template:
root@freebsd14:~ $ adduser -C
Uid (Leave empty for default): 2003
Login group []: group1
Enter additional groups []: user2
Login class [default]:
Shell (sh csh tcsh bash rbash nologin) [sh]: bash
Home directory [/home/]:
Home directory permissions (Leave empty for default):
Enable ZFS encryption? (yes/no) [no]:
Use password-based authentication? [yes]:
Use an empty password? (yes/no) [no]:
Use a random password? (yes/no) [no]:
Lock out the account after creation? [no]:
Pass Type : yes
Class :
Groups : group1 user2
Home : /home/
Home Mode :
Shell : /usr/local/bin/bash
Locked : no
OK? (yes/no) [yes]: yes
Re-edit the default configuration? (yes/no) [no]:
Goodbye!
This created “/etc/adduser.conf”, which will be the template for new users created with adduser:
root@freebsd14:~ $ cat /etc/adduser.conf
# Configuration file for adduser(8).
# NOTE: only *some* variables are saved.
# Last Modified on Tue Nov 26 16:04:34 CET 2024.
defaultHomePerm=
defaultLgroup=group1
defaultclass=
defaultgroups=user2
passwdtype=yes
homeprefix=/home
defaultshell=/usr/local/bin/bash
udotdir=/usr/share/skel
msgfile=/etc/adduser.msg
disableflag=
uidstart=2003
So far there is nothing special: the commands are not the same as on Linux, but the concepts are very similar. What you might have noticed is that there is something called a "login class". Login classes are used to set up user environments and, optionally, put restrictions on resource usage. Those classes are defined in "/etc/login.conf" and the default class looks like this:
default:\
:passwd_format=sha512:\
:copyright=/etc/COPYRIGHT:\
:welcome=/var/run/motd:\
:setenv=BLOCKSIZE=K:\
:mail=/var/mail/$:\
:path=/sbin /bin /usr/sbin /usr/bin /usr/local/sbin /usr/local/bin ~/bin:\
:nologin=/var/run/nologin:\
:cputime=unlimited:\
:datasize=unlimited:\
:stacksize=unlimited:\
:memorylocked=64K:\
:memoryuse=unlimited:\
:filesize=unlimited:\
:coredumpsize=unlimited:\
:openfiles=unlimited:\
:maxproc=unlimited:\
:sbsize=unlimited:\
:vmemoryuse=unlimited:\
:swapuse=unlimited:\
:pseudoterminals=unlimited:\
:kqueues=unlimited:\
:umtxp=unlimited:\
:priority=0:\
:ignoretime@:\
:umask=022:\
:charset=UTF-8:\
:lang=C.UTF-8:
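If you change anything in this file, the new values only take effect once the capability database has been rebuilt with "cap_mkdb". A minimal sketch (the "staff" class used in the assignment is just an assumption for illustration):
cap_mkdb /etc/login.conf
pw user mod user1 -L staff    # assign a login class to a user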
There are many more examples in that file, take your time and have a look at them to get an idea. If you change something in that file, you'll need to re-generate the database with "cap_mkdb", as sketched above. There is the same concept for the password files: if you check "/etc/master.passwd" you see the hashed passwords (on Linux this is /etc/shadow), but if you check "/etc/passwd" there are no hashes anymore:
root@freebsd14:~ $ egrep "^root|^user1|^user2" /etc/master.passwd
root:$6$rwisdlYvWBHh7XE/$UjQ7zuTqdDXKSjxCjNph6KBWAn.S5lfwal4FsZGWBJsKDsvbfWSJ3asp7BOa9o09iRVNWrpgXKqxh2J9RnUZs/:0:0::0:0:Charlie &:/root:/bin/sh
user1:*:2001:2001::0:0:a sample user:/home/user1:/usr/local/bin/bash
user2:$6$HdjGHoWTrlJChtkn$dLkcSNPn8.O98/rjm91GhGM7lxHb1rrumK0.SSXtO./5jr0LqddyG7Es8ijqVuge9cDdmwz0BF3q5uGq7ERDn/:2002:2002::0:0:second sample user:/home/user2:/usr/local/bin/bash
root@freebsd14:~ $ egrep "^root|^user1|^user2" /etc/passwd
root:*:0:0:Charlie &:/root:/bin/sh
user1:*:2001:2001:a sample user:/home/user1:/usr/local/bin/bash
user2:*:2002:2002:second sample user:/home/user2:/usr/local/bin/bash
The reason is that normal users are not allowed to read “/etc/master.passwd” but still can get basic account information out of “/etc/passwd”. So there needs to be a way to generate the second one out of the first one, and this is done with “pwd_mkdb“. In the same way as “cap_mkdb” does it for the login classes, this one does it for the password databases:
root@freebsd14:~ $ ls -l /etc/*pwd*.db
-rw-r--r-- 1 root wheel 40960 Nov 26 15:58 /etc/pwd.db
-rw------- 1 root wheel 40960 Nov 26 15:58 /etc/spwd.db
The result is a pair of binary databases, which are faster to process than the text representation of the base files. In addition, the "/etc/passwd" file is created out of the master file without the sensitive information.
Let's do a simple test to show how this works. The password database files currently have these timestamps:
root@freebsd14:~ $ ls -la /etc/*pwd*.db
-rw-r--r-- 1 root wheel 40960 Nov 27 14:06 /etc/pwd.db
-rw------- 1 root wheel 40960 Nov 27 14:06 /etc/spwd.db
root@freebsd14:~ $ ls -la /etc/*passwd*
-rw------- 1 root wheel 2124 Nov 27 14:01 /etc/master.passwd
-rw-r--r-- 1 root wheel 1781 Nov 26 15:58 /etc/passwd
Changing the shell of “user2” using “pw” will update all the files at once:
root@freebsd14:~ $ grep user2 /etc/passwd
user2:*:2002:2002:second sample user:/home/user2:/usr/local/bin/bash
root@freebsd14:~ $ pw user mod user2 -s /bin/sh
root@freebsd14:~ $ grep user2 /etc/master.passwd
user2:$6$HdjGHoWTrlJChtkn$dLkcSNPn8.O98/rjm91GhGM7lxHb1rrumK0.SSXtO./5jr0LqddyG7Es8ijqVuge9cDdmwz0BF3q5uGq7ERDn/:2002:2002::0:0:second sample user:/home/user2:/bin/sh
root@freebsd14:~ $ ls -la /etc/*passwd*
-rw------- 1 root wheel 2112 Nov 27 14:16 /etc/master.passwd
-rw-r--r-- 1 root wheel 1757 Nov 27 14:16 /etc/passwd
root@freebsd14:~ $ ls -la /etc/*pwd*.db
-rw-r--r-- 1 root wheel 40960 Nov 27 14:16 /etc/pwd.db
-rw------- 1 root wheel 40960 Nov 27 14:16 /etc/spwd.db
On the other hand, if we change the shell of “user2” back to bash manually in “/etc/master.passwd”:
# open, change manually back to /usr/local/bin/bash and save
root@freebsd14:~ $ grep user2 /etc/master.passwd
user2:$6$HdjGHoWTrlJChtkn$dLkcSNPn8.O98/rjm91GhGM7lxHb1rrumK0.SSXtO./5jr0LqddyG7Es8ijqVuge9cDdmwz0BF3q5uGq7ERDn/:2002:2002::0:0:second sample user:/home/user2:/usr/local/bin/bash
… the other files have not been touched and still show the old timestamp:
root@freebsd14:~ $ ls -l /etc/*pwd*
-rw-r--r-- 1 root wheel 40960 Nov 27 14:16 /etc/pwd.db
-rw------- 1 root wheel 40960 Nov 27 14:16 /etc/spwd.db
root@freebsd14:~ $ ls -l /etc/*passwd*
-rw------- 1 root wheel 2124 Nov 27 14:47 /etc/master.passwd
-rw-r--r-- 1 root wheel 1757 Nov 27 14:16 /etc/passwd
In this case we need to run “pwd_mkdb” manually to get this done:
root@freebsd14:~ $ pwd_mkdb -p /etc/master.passwd
root@freebsd14:~ $ ls -l /etc/*pwd*
-rw-r--r-- 1 root wheel 40960 Nov 27 14:52 /etc/pwd.db
-rw------- 1 root wheel 40960 Nov 27 14:52 /etc/spwd.db
root@freebsd14:~ $ ls -l /etc/*passwd*
-rw------- 1 root wheel 2124 Nov 27 14:47 /etc/master.passwd
-rw-r--r-- 1 root wheel 1769 Nov 27 14:52 /etc/passwd
root@freebsd14:~ $ grep user2 /etc/passwd
user2:*:2002:2002:second sample user:/home/user2:/usr/local/bin/bash
At least from my point of view, it is not advisable to do it like this. If you want to edit users manually, instead of editing the master file directly you should use "vipw". It at least does some sanity checks for you and automatically rebuilds the password databases. Otherwise, use "pw" or "chpass".
The final topic for this post is something that very much reminds me of PostgreSQL's pg_hba.conf: what FreeBSD calls the "login access control table". As in PostgreSQL, you can define who is allowed to log in or not, and from where (networked or not). Have a look at "/etc/login.access" to get an idea of what that looks like. Here are some examples from that file:
# Disallow console logins to all but a few accounts.
#
#-:ALL EXCEPT wheel shutdown sync:console
#
# Disallow non-local logins to privileged accounts (group wheel).
#
#-:wheel:ALL EXCEPT LOCAL .win.tue.nl
#
# Some accounts are not allowed to login from anywhere:
#
#-:wsbscaro wsbsecr wsbspac wsbsym wscosor wstaiwde:ALL
#
# All other accounts are allowed to login from anywhere.
That's it for today. In the next post we'll look at how services are managed in FreeBSD.
FreeBSD basics – 1 – The first steps
In case you follow the blogs here, you might have noticed that I got interested in FreeBSD over the last months (or even years). The reasons for this are multiple: FreeBSD development is organized in very much the same way as PostgreSQL's. The license is very much the same. The history is really interesting, as it goes back to the roots of Unix. A lot of companies trust FreeBSD, but nobody really knows who they are and for what reasons (this should sound familiar to PostgreSQL people). There are public references (e.g. here, or here), but as it is "do with it what you like" (same as PostgreSQL), there is no single source of reference. You are free to do with it whatever you want.
The installation is straightforward, so I am not going to look into that here (you might want to check this old post, which has detailed screenshots for installing FreeBSD in a virtualized environment). In this first post, we're going to look at the very basic differences between FreeBSD and a common Linux distribution. The last sentence already contains a huge difference between the two: FreeBSD is a complete operating system (including packages and ports), while Linux usually refers to the kernel only. In the Linux world it is up to the distributions to bundle all the stuff you usually need for running a Linux server or workstation. FreeBSD is different: it is not only the kernel, it comes with almost everything you need by default.
Assuming you've installed FreeBSD in one way or the other (and you did not create additional users during the installation process), it looks like this once you log in (over ssh in this example):
FreeBSD 14.1-RELEASE GENERIC
Welcome to FreeBSD!
Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories: https://www.FreeBSD.org/security/
FreeBSD Handbook: https://www.FreeBSD.org/handbook/
FreeBSD FAQ: https://www.FreeBSD.org/faq/
Questions List: https://www.FreeBSD.org/lists/questions/
FreeBSD Forums: https://forums.FreeBSD.org/
Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with: pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.
Show the version of FreeBSD installed: freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages: man man
FreeBSD directory layout: man hier
To change this login announcement, see motd(5).
root@freebsd14:~ #
The message of the day gives you some hints for getting more information about FreeBSD. This is not commonly done by default in Linux distributions (except Ubuntu, which even comes with a dynamic motd). If you don't like this, have a look at "man motd", which points you to "/etc/motd.template", the template for the message of the day.
One of the first things you'll probably notice is that you are not dropped into a bash shell (but the Bourne shell), as is usually the case with Linux distributions:
root@freebsd14:~ $ echo $SHELL
/bin/sh
root@freebsd14:~ $ ls -la /bin/sh
-r-xr-xr-x 1 root wheel 168344 May 31 11:00 /bin/sh
If you want, you can get bash by installing the corresponding package using pkg:
root@freebsd14:~ $ pkg install bash
Doing that comes with another difference compared to Linux distributions: additional programs do not go to either "/bin/" or "/usr/bin"; they go to "/usr/local/bin":
root@freebsd14:~ $ which bash
/usr/local/bin/bash
root@freebsd14:~ $ ls /usr/local/bin/
drwxr-xr-x 2 root wheel 10 Nov 26 10:12 .
drwxr-xr-x 9 root wheel 9 Nov 26 10:12 ..
-rwxr-xr-x 1 root wheel 968408 Oct 31 02:13 bash
-r-xr-xr-x 1 root wheel 6902 Oct 31 02:13 bashbug
-rwxr-xr-x 1 root wheel 42640 Oct 31 02:05 envsubst
-rwxr-xr-x 1 root wheel 42728 Oct 31 02:05 gettext
-r-xr-xr-x 1 root wheel 5190 Oct 31 02:05 gettext.sh
-rwxr-xr-x 1 root wheel 10656 Oct 31 02:04 indexinfo
-rwxr-xr-x 1 root wheel 42496 Oct 31 02:05 ngettext
lrwxr-xr-x 1 root wheel 4 Oct 31 02:14 rbash -> bash
The same is true for package configuration files: they do not go into "/etc" but into "/usr/local/etc/":
root@freebsd14:~ $ ls -la /usr/local/etc/
total 27
drwxr-xr-x 4 root wheel 6 Nov 26 10:12 .
drwxr-xr-x 9 root wheel 9 Nov 26 10:12 ..
drwxr-xr-x 2 root wheel 3 Nov 26 10:12 bash_completion.d
drwxr-xr-x 5 root wheel 5 Nov 26 10:12 periodic
-rw-r--r-- 1 root wheel 2392 Oct 31 02:04 pkg.conf
-rw-r--r-- 1 root wheel 2392 Oct 31 02:04 pkg.conf.sample
Before you can install additional packages, you need to set up your network (if that was not done automatically during the installation). Doing that on Linux varies from distribution to distribution, and in FreeBSD it is also different. If you try to get the current configuration using the "ip" command, you'll notice that it is not available:
root@freebsd14:~ $ ip
-sh: ip: not found
You need to do that using “ifconfig” (as in the old days of Linux or maybe even today with some distributions):
root@freebsd14:~ $ ifconfig
vtnet0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
options=4c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
ether 52:54:00:af:79:80
inet 192.168.122.34 netmask 0xffffff00 broadcast 192.168.122.255
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=1008049<UP,LOOPBACK,RUNNING,MULTICAST,LOWER_UP> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
Adding a permanent configuration would look like this, using the sysrc utility:
root@freebsd14:~ $ sysrc ifconfig_vtnet0="inet 192.168.122.34 netmask 255.255.255.0"
ifconfig_vtnet0: inet 192.168.122.34 netmask 255.255.255.0 -> inet 192.168.122.34 netmask 255.255.255.0
The syntax is easy to understand: there is an "ifconfig" prefix (because this is what we want to do), followed by an underscore and the name of the interface, followed by the configuration for that interface.
What this does is add the configuration to "/etc/rc.conf" (you could also add it manually, but it is recommended to use sysrc as it does some sanity checks):
root@freebsd14:~ $ grep vtnet0 /etc/rc.conf
ifconfig_vtnet0="inet 192.168.122.34 netmask 255.255.255.0"
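If the interface should get its address via DHCP instead, the rc.conf syntax is even shorter. A sketch, using the same interface name:
sysrc ifconfig_vtnet0="DHCP"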
The same procedure for the default router:
root@freebsd14:~ $ sysrc defaultrouter="192.168.122.1"
defaultrouter: 192.168.122.1 -> 192.168.122.1
root@freebsd14:~ $ grep defaultrouter /etc/rc.conf
defaultrouter="192.168.122.1"
The last step is to configure name resolution, and this goes into "/etc/resolv.conf", as it did on Linux before systemd:
root@freebsd14:~ $ cat /etc/resolv.conf
nameserver 192.168.122.1
nameserver 8.8.8.8
As there is no systemd in FreeBSD, services are not controlled with systemctl. This is done using the "service" command (restarting the network stack in this case):
root@freebsd14:~ $ service netif restart && service routing restart
Stopping Network: lo0 vtnet0.
lo0: flags=8048<LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
vtnet0: flags=1008802<BROADCAST,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
options=4c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
ether 52:54:00:af:79:80
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
Starting Network: lo0 vtnet0.
lo0: flags=1008049<UP,LOOPBACK,RUNNING,MULTICAST,LOWER_UP> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
vtnet0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
options=4c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
ether 52:54:00:af:79:80
inet 192.168.122.34 netmask 0xffffff00 broadcast 192.168.122.255
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
delete host 127.0.0.1: gateway lo0 fib 0: gateway uses the same route
delete net default: gateway 192.168.122.1 fib 0: not in table
delete host ::1: gateway lo0 fib 0: gateway uses the same route
delete net fe80::: gateway ::1
delete net ff02::: gateway ::1
delete net ::ffff:0.0.0.0: gateway ::1
delete net ::0.0.0.0: gateway ::1
add host 127.0.0.1: gateway lo0 fib 0: route already in table
add net default: gateway 192.168.122.1
add host ::1: gateway lo0 fib 0: route already in table
add net fe80::: gateway ::1
add net ff02::: gateway ::1
add net ::ffff:0.0.0.0: gateway ::1
add net ::0.0.0.0: gateway ::1
Once this is ready, you should have a fully functional network:
root@freebsd14:~ $ host dbi-services.com
dbi-services.com has address 188.114.96.3
dbi-services.com has address 188.114.97.3
dbi-services.com has IPv6 address 2a06:98c1:3121::3
dbi-services.com has IPv6 address 2a06:98c1:3120::3
dbi-services.com mail is handled by 0 dbiservices-com0i.mail.protection.outlook.com.
root@freebsd14:~ $ ping www.dbi-services.com
PING www.dbi-services.com (188.114.96.3): 56 data bytes
64 bytes from 188.114.96.3: icmp_seq=0 ttl=57 time=16.670 ms
64 bytes from 188.114.96.3: icmp_seq=1 ttl=57 time=17.779 ms
^C
--- www.dbi-services.com ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.670/17.224/17.779/0.554 ms
The first thing you should do once you have the network running is to patch the system to the latest release. This is done using "freebsd-update" ("fetch" will inspect the system and download what is required, "install" will apply the patches):
root@freebsd14:~ $ uname -a
FreeBSD freebsd14.it.dbi-services.com 14.1-RELEASE FreeBSD 14.1-RELEASE releng/14.1-n267679-10e31f0946d8 GENERIC amd64
root@freebsd14:~ $ freebsd-update fetch
Looking up update.FreeBSD.org mirrors... 3 mirrors found.
Fetching public key from update2.freebsd.org... done.
Fetching metadata signature for 14.1-RELEASE from update2.freebsd.org... done.
Fetching metadata index... done.
Fetching 2 metadata files... done.
Inspecting system... done.
Preparing to download files... done.
Fetching 115 patches.....10....20....30....40....50....60....70....80....90....100....110.. done.
Applying patches... done.
The following files will be updated as part of updating to
14.1-RELEASE-p6:
/bin/freebsd-version
/boot/kernel/cfiscsi.ko
...
root@freebsd14:~ $ freebsd-update install
Creating snapshot of existing boot environment... done.
Installing updates...
Restarting sshd after upgrade
Performing sanity check on sshd configuration.
Stopping sshd.
Waiting for PIDS: 857.
Performing sanity check on sshd configuration.
Starting sshd.
Scanning /usr/share/certs/untrusted for certificates...
Scanning /usr/share/certs/trusted for certificates...
done.
root@freebsd14:~ $ reboot
root@freebsd14:~ $ uname -a
FreeBSD freebsd14.it.dbi-services.com 14.1-RELEASE-p5 FreeBSD 14.1-RELEASE-p5 GENERIC amd64
root@freebsd14:~ $ cat /etc/os-release
NAME=FreeBSD
VERSION="14.1-RELEASE-p6"
VERSION_ID="14.1"
ID=freebsd
ANSI_COLOR="0;31"
PRETTY_NAME="FreeBSD 14.1-RELEASE-p6"
CPE_NAME="cpe:/o:freebsd:freebsd:14.1"
HOME_URL="https://FreeBSD.org/"
BUG_REPORT_URL="https://bugs.FreeBSD.org/"
This brought the system to the latest release (there is much more to learn from the output above, but this will be a topic for a later post).
Updating (or managing) packages is done with "pkg", but as we do not have anything installed except bash, there is not much to see right now:
root@freebsd14:~ $ pkg update
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
root@freebsd14:~ $ pkg upgrade
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking for upgrades (1 candidates): 100%
Processing candidates (1 candidates): 100%
Checking integrity... done (0 conflicting)
Your packages are up to date.
Now we’re ready to go a step further and add users and groups, but this will be the topic for the next post.
PostgreSQL: Maybe we should give ZFS a chance (3) – testing with an aligned record size
In the last post about PostgreSQL on ZFS, the pgbench tests showed that ZFS was slower for the standard tests compared to ext4 on Debian. A comment on that post made me wonder if changing the ZFS record size would change the results, so this blog post is about exactly that. There is plenty of information out there when it comes to ZFS for PostgreSQL, and most of it recommends aligning the ZFS record size and the PostgreSQL block size in some way or the other (e.g. please see here). Reading that, there are basically three options: either change the ZFS record size, change the PostgreSQL block size, or change both of them.
As I don't like changing the PostgreSQL block size (as far as I know, everybody runs with 8kB), the first test runs against a ZFS dataset whose record size is aligned with the PostgreSQL block size:
[postgres@freebsd ~]$ geom disk list
Geom name: nda0
Providers:
1. Name: nda0
Mediasize: 10737418240 (10G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e6
descr: Amazon Elastic Block Store
ident: vol0a89ebfc6cc9173ff
rotationrate: 0
fwsectors: 0
fwheads: 0
Geom name: nda1
Providers:
1. Name: nda1
Mediasize: 21474836480 (20G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
descr: Amazon Elastic Block Store
ident: vol0a6dcd2a6a1a1b898
rotationrate: 0
fwsectors: 0
fwheads: 0
[postgres@freebsd ~]$ sudo zpool create pgpool /dev/nda1
[postgres@freebsd ~]$ sudo zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pgpool 19.5G 360K 19.5G - - 0% 0% 1.00x ONLINE -
[postgres@freebsd ~]$ sudo zfs create pgpool/pgzfs
[postgres@freebsd ~]$ sudo zfs set mountpoint=/u02/pgdata/zfs pgpool/pgzfs
[postgres@freebsd ~]$ sudo zfs set compression=zstd pgpool/pgzfs
[postgres@freebsd ~]$ sudo zfs set recordsize=8K pgpool/pgzfs
[postgres@freebsd ~]$ df -h | grep zfs
df: minimum blocksize is 512
pgpool/pgzfs 19G 96K 19G 0% /u02/pgdata/zfs
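Before initializing PostgreSQL, it's worth double-checking the dataset settings; the same query will also show the achieved compression ratio once data has been loaded. A quick sketch:
zfs get recordsize,compression,compressratio pgpool/pgzfs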
For PostgreSQL, exactly the same setup as before (no checksums, and the same three parameter changes):
$ initdb --pgdata=/u02/pgdata/zfs/pg
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "C.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /u02/pgdata/zfs/pg ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
pg_ctl -D /u02/pgdata/zfs/pg -l logfile start
$ echo "wal_init_zero='off'" >> /u02/pgdata/zfs/pg/postgresql.auto.conf
$ echo "full_page_writes='off'" >> /u02/pgdata/zfs/pg/postgresql.auto.conf
$ echo "port=5433" >> /u02/pgdata/zfs/pg/postgresql.auto.conf
$ pg_ctl --pgdata=/u02/pgdata/zfs/pg/ --log=/u02/pgdata/zfs/pg/log.log start
waiting for server to start.... done
$ export PGPORT=5433
$ psql
psql (17.1 dbi services build)
Type "help" for help.
postgres=# \q
We’ll use the same data set as in the last post:
$ pgbench -p 5433 -i -s 1000 postgres
Same test as before:
$ for i in {1..3}; do pgbench --client=4 --jobs=4 --time=900 --port=5433; done
pgbench (17.1 dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1000
query mode: simple
number of clients: 4
number of threads: 4
maximum number of tries: 1
duration: 900 s
number of transactions actually processed: 778622
number of failed transactions: 0 (0.000%)
latency average = 4.624 ms
initial connection time = 36.558 ms
tps = 865.137263 (without initial connection time)
pgbench (17.1 dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1000
query mode: simple
number of clients: 4
number of threads: 4
maximum number of tries: 1
duration: 900 s
number of transactions actually processed: 812209
number of failed transactions: 0 (0.000%)
latency average = 4.432 ms
initial connection time = 11.599 ms
tps = 902.449933 (without initial connection time)
pgbench (17.1 dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1000
query mode: simple
number of clients: 4
number of threads: 4
maximum number of tries: 1
duration: 900 s
number of transactions actually processed: 828141
number of failed transactions: 0 (0.000%)
latency average = 4.347 ms
initial connection time = 11.673 ms
tps = 920.148011 (without initial connection time)
Comparing that to the previous numbers, this is almost double the speed.
Same test with the simple-update run:
$ for i in {1..3}; do pgbench --client=4 --jobs=4 --time=900 --port=5433 --builtin=simple-update; done
pgbench (17.1 dbi services build)
starting vacuum...end.
transaction type: <builtin: simple update>
scaling factor: 1000
query mode: simple
number of clients: 4
number of threads: 4
maximum number of tries: 1
duration: 900 s
number of transactions actually processed: 900677
number of failed transactions: 0 (0.000%)
latency average = 3.997 ms
initial connection time = 14.278 ms
tps = 1000.689595 (without initial connection time)
pgbench (17.1 dbi services build)
starting vacuum...end.
transaction type: <builtin: simple update>
scaling factor: 1000
query mode: simple
number of clients: 4
number of threads: 4
maximum number of tries: 1
duration: 900 s
number of transactions actually processed: 928162
number of failed transactions: 0 (0.000%)
latency average = 3.879 ms
initial connection time = 10.493 ms
tps = 1031.289907 (without initial connection time)
pgbench (17.1 dbi services build)
starting vacuum...end.
transaction type: <builtin: simple update>
scaling factor: 1000
query mode: simple
number of clients: 4
number of threads: 4
maximum number of tries: 1
duration: 900 s
number of transactions actually processed: 939785
number of failed transactions: 0 (0.000%)
latency average = 3.831 ms
initial connection time = 16.170 ms
tps = 1044.205709 (without initial connection time)
Same picture here: considerably faster. This means that aligning the PostgreSQL block size and the ZFS record size indeed matters a lot.
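One thing to keep in mind before the next test: the PostgreSQL block size is a compile-time option, not something initdb or a configuration parameter can change. A sketch of building a 32kB PostgreSQL from source (the installation prefix is an assumption):
./configure --with-blocksize=32 --prefix=/u01/app/postgres/product/17/db_32k
make && make install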
What does it look like if we increase the PostgreSQL block size and align the ZFS record size accordingly? Same test with a 32kB block size in PostgreSQL and a 32kB record size in ZFS:
$ sudo zfs set recordsize=32K pgpool/pgzfs
$ zfs get recordsize /u02/pgdata/zfs
NAME PROPERTY VALUE SOURCE
pgpool/pgzfs recordsize 32K local
$ initdb -D /u02/pgdata/zfs/pg/
$ echo "wal_init_zero='off'" >> /u02/pgdata/zfs/pg/postgresql.auto.conf
$ echo "full_page_writes='off'" >> /u02/pgdata/zfs/pg/postgresql.auto.conf
$ echo "port=5433" >> /u02/pgdata/zfs/pg/postgresql.auto.conf
$ pg_ctl --pgdata=/u02/pgdata/zfs/pg/ --log=/u02/pgdata/zfs/pg/log.log start
$ psql -c "show block_size"
block_size
------------
32768
$ pgbench -p 5433 -i -s 1000 postgres
$ psql -c "\l+"
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges | Size | Tablespace | Description
-----------+----------+----------+-----------------+---------+---------+--------+-----------+-----------------------+-------+------------+--------------------------------------------
postgres | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | | 14 GB | pg_default | default administrative connection database
template0 | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | =c/postgres +| 16 MB | pg_default | unmodifiable empty database
| | | | | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | =c/postgres +| 16 MB | pg_default | default template for new databases
| | | | | | | | postgres=CTc/postgres | | |
(3 rows)
Same standard test again (summaries only):
$ for i in {1..3}; do pgbench --client=4 --jobs=4 --time=900 --port=5433; done
# run 1
number of transactions actually processed: 964541
tps = 1071.697122 (without initial connection time)
# run 2
number of transactions actually processed: 1045888
tps = 1162.073678 (without initial connection time)
# run 3
number of transactions actually processed: 1063922
tps = 1182.126126 (without initial connection time)
Even better than with the aligned 8kB block and record size. Same test with the simple-update run:
$ for i in {1..3}; do pgbench --client=4 --jobs=4 --time=900 --port=5433 --builtin=simple-update; done
# run 1
number of transactions actually processed: 1233855
tps = 1370.926077 (without initial connection time)
# run 2
number of transactions actually processed: 1242710
tps = 1380.790526 (without initial connection time)
# run 3
number of transactions actually processed: 1251885
tps = 1390.963532 (without initial connection time)
Same picture: using a 32kB block and record size gives better results.
To summarize: if you want to go for ZFS with PostgreSQL, it indeed matters that the PostgreSQL block size and the ZFS record size are aligned. I am still not sure whether I would go for a 32kB block size in PostgreSQL; this requires more testing. We haven't even looked at other knobs to adapt (shared_buffers, checkpointing, ...), but I think that ZFS today is a valid option for running PostgreSQL workloads. But, again: please test on your own, using your workload, before taking any decisions.
DPR and Dbvisit Standby on ODA
Since patch version 19.21 (the current one is 19.25), Oracle Database Appliance (ODA) X7, X8 and X9 require the use of Data Preserving Reprovisioning (DPR) to apply a patch when coming from 19.20 or earlier releases. Unlike traditional patching, DPR erases the system disks but keeps the data on ASM/ACFS disks. Data means databases, ACFS volumes, DB homes, metadata of DB Systems and VMs, and vDisks for DB Systems and VMs. But it doesn't keep your system settings: specific OS configurations, additional tools, monitoring, users and groups, and so on.
The reason for not being able to use the classic patching method is the mandatory OS upgrade from Linux 7 to Linux 8, which is only possible with a fresh setup. Actually, it's much easier to set up a brand new OS than to apply a major upgrade onto an existing Linux system. Therefore, using DPR requires a longer downtime of your ODA, several hours at least, meaning that you will need to switch your databases over to another ODA with Data Guard or Dbvisit Standby.
Dbvisit Standby and Oracle Database Standard Edition
Data Guard is not available on Oracle Database Standard Edition. This means that you cannot configure a Disaster Recovery setup with the embedded binaries. But a database can still be a standby, as this does not rely on the database edition. A standby database is mostly a primary that cannot be updated on its own with SQL statements. If you could compare each file between a primary and a standby, the only difference you would see is a flag in the controlfile. This flag defines the role of your database. And you can still convert a primary to a standby and vice versa with a pair of SQL commands.
Dbvisit Standby relies on this fact and comes with the tooling for transporting archivelogs, managing the sync, switching the roles and performing a failover. Dbvisit Standby has always been the smartest solution for those needing a Disaster Recovery setup without having to buy Enterprise Edition. Among the companies I work for, 95% of the owners of an ODA plus Standard Edition use Dbvisit Standby for at least one database. When the time comes to patch an ODA, usually once a year, primaries are switched over to the standby server, just as would be done with Data Guard. And until now, this has brought the same flexibility and convenience as Data Guard.
Best practice for Dbvisit Standby on ODA
Classic patching does not require any additional operation when using Dbvisit Standby compared to Data Guard. You just need to change the DB home location in the Dbvisit configuration file after patching the DB home, since patching a DB home actually means moving the database to a new one.
Data Preserving Reprovisioning has more impact on Dbvisit Standby: it erases the system, including the Dbvisit binaries and configuration files if they were located on the OS disks.
The best practice to prevent this drawback is to set up Dbvisit Standby in dedicated ACFS filesystems. I usually create two of them: one for the binaries, logfiles and configuration files, and another one for the archivelogs. The latter is sized depending on the number of protected databases and on the activity of those databases:
su - oracle
mkdir /u01/app/dbvisit/standbymp
mkdir /u01/app/dbvisit/arc
exit
su - grid
sqlplus / as sysasm
ALTER DISKGROUP RECO ADD VOLUME dbvbin SIZE 50G;
ALTER DISKGROUP RECO ADD VOLUME dbvarc SIZE 200G;
col volume_device for a40
select volume_name, VOLUME_DEVICE from v$asm_volume where volume_name='DBVBIN' or volume_name='DBVARC';
VOLUME_NAME VOLUME_DEVICE
------------------------------ ----------------------------------------
DBVBIN /dev/asm/dbvbin-156
DBVARC /dev/asm/dbvarc-156
exit
su - root
/sbin/mkfs -t acfs /dev/asm/dbvbin-156
/sbin/mkfs -t acfs /dev/asm/dbvarc-156
/u01/app/19.24.0.0/grid/bin/srvctl add filesystem -d /dev/asm/dbvbin-156 -g RECO -v DBVBIN -m /u01/app/dbvisit/standbymp -u oracle
/u01/app/19.24.0.0/grid/bin/srvctl start filesystem -d /dev/asm/dbvbin-156
/u01/app/19.24.0.0/grid/bin/srvctl add filesystem -d /dev/asm/dbvarc-156 -g RECO -v DBVARC -m /u01/app/dbvisit/arc -u oracle
/u01/app/19.24.0.0/grid/bin/srvctl start filesystem -d /dev/asm/dbvarc-156
df -h /u01/app/dbvisit/standbymp /u01/app/dbvisit/arc
Filesystem Size Used Avail Use% Mounted on
/dev/asm/dbvbin-156 50G 1.1G 49G 1% /u01/app/dbvisit/standbymp
/dev/asm/dbvarc-156 200G 1.1G 199G 1% /u01/app/dbvisit/arc
Once done, you can do the setup and configuration on these filesystems; they will be preserved when using DPR. The only task you will have to do again after a DPR is recreating the Dbvisit service, that's it:
/u01/app/dbvisit/standbymp/bin/dbvagentmanager service install --user oracle
/u01/app/dbvisit/standbymp/bin/dbvagentmanager service start
Problem you may encounter when using DPR with Dbvisit
I have already hit this problem twice, on two different configurations: I was unable to get a successful preupgradereport because of inconsistent metadata in my standby databases.
odacli describe-preupgradereport -i 917d6120-5837-4130-912a-9dc79faae324
...
__DB__
...
Validate Database Status Success Database 'PCPDBI' is running and is None
in 'CONFIGURED' state
Validate Database Version Success Version '19.20.0.0.230718' for None
database 'PCPDBI' is supported
Validate Database Datapatch Success Role of database 'PCPDBI' is not None
Application Status 'Primary'. This check is skipped.
Validate TDE wallet presence Success Database 'PCPDBI' is not TDE enabled. None
Skipping TDE wallet presence check.
Validate Database Home Success Database home location check passed None
location for database PCPDBI_52
Validate Database Service Failed The following services [PCPDBI_RW] These services should be stopped and
presence created on database 'PCPDBI' can removed, then rerun
result in a failure in 'detach-node'. 'create-preupgradereport'. After
'restore-node -d' these services
should be restored manually
Validate Database metadata Failed Internal error encountered Please check dcs-agent logs
...
The first failure is easy to fix: just remove the service associated with the database role. It's easy to put it back after the DPR, as shown below:
. oraenv <<< PCPDBI
srvctl stop service -db PCPDBI_52 -service PCPDBI_RW
srvctl remove service -db PCPDBI_52 -service PCPDBI_RW
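For reference, putting the service back after the DPR looks like this (a hedged sketch: the role and any other options must match how the service was originally created, which I'm assuming was role-based):
srvctl add service -db PCPDBI_52 -service PCPDBI_RW -role PRIMARY
# start it once the database actually runs with the matching role:
srvctl start service -db PCPDBI_52 -service PCPDBI_RW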
The second failure does need some troubleshooting. Here is an extract from the dcs-agent.log:
2024-11-14 17:43:37,414 DEBUG [Validate Database metadata : PCPDBI : JobId=a19cb6d3-0acf-4f82-95b1-0e459b7cd732] [] c.o.d.c.u.CommonsUtils: Output :
Database unique name: PCPDBI_52
Database name: PCPDBI
Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_30
Oracle user: oracle
Spfile: +DATA/PCPDBI_52/parameterfile/spfilePCPDBI.ora
Password ******** /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_30/dbs/orapwPCPDBI ********
Domain:
Start options: mount
Stop options: abort
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA,
RECO
Mount point paths: /u01/app/odaorahome,
/u01/app/odaorabase0
Services:
Type: SINGLE
OSDBA group: dba
OSOPER group: dbaoper
Database instance: PCPDBI
Configured nodes: rtssgtumitoda52
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
2024-11-14 17:43:37,414 ERROR [Validate Database metadata : PCPDBI : JobId=a19cb6d3-0acf-4f82-95b1-0e459b7cd732] [] c.o.d.a.d.DetachRpcHandler: Exception
java.lang.NullPointerException: null
at com.oracle.dcs.agent.dpr.prechecks.DatabasePrechecks$DatabaseMetadataPrecheck.validateDatabaseDbRole(DatabasePrechecks.java:779)
at com.oracle.dcs.agent.dpr.prechecks.DatabasePrechecks$DatabaseMetadataPrecheck.execute(DatabasePrechecks.java:701)
at com.oracle.dcs.agent.dpr.DetachRpcHandler.executePreUpgradecheck(DetachRpcHandler.java:276)
at sun.reflect.GeneratedMethodAccessor248.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.oracle.dcs.commons.jrpc.JsonRequestProcessor.invokeRequest(JsonRequestProcessor.java:246)
at com.oracle.dcs.commons.jrpc.JsonRequestProcessor.process(JsonRequestProcessor.java:305)
at com.oracle.dcs.agent.task.TaskDcsJsonRpcExt.callInternal(TaskDcsJsonRpcExt.java:106)
at com.oracle.dcs.agent.task.TaskDcsJsonRpc.call(TaskDcsJsonRpc.java:303)
at com.oracle.dcs.agent.task.TaskDcsJsonRpc.call(TaskDcsJsonRpc.java:79)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:118)
at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:63)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:65)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:36)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:118)
at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:63)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:65)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:36)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:118)
at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:63)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:65)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:36)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:118)
at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:63)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:65)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:36)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:118)
at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:63)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:65)
at com.oracle.dcs.agent.task.TaskLockWrapper.call(TaskLockWrapper.java:136)
at com.oracle.dcs.agent.task.TaskLockWrapper.call(TaskLockWrapper.java:59)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:118)
at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:63)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:65)
at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:36)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:118)
at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Metadata is not OK on this database, but which metadata?
As the standby is only mounted, it cannot be metadata inside the database itself. I had no problem when using DPR on databases protected by Data Guard, including standby databases, so it should behave the same with Dbvisit Standby.
Inconsistent metadata in the OCR would make sense, but after checking this database, apart from the database role and the startup mode, it looks similar to the primary databases. I temporarily changed the database role from STANDBY to PRIMARY:
. oraenv <<< PCPDBI
srvctl modify database -db PCPDBI_52 -role PRIMARY
srvctl stop database -db PCPDBI_52 ; sleep 10 ; srvctl start database -db PCPDBI_52 -o mount
But it didn't change anything.
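If you try the same, don't forget to revert the role afterwards. A hedged sketch mirroring the commands above (PHYSICAL_STANDBY being the value shown in the dcs-agent.log extract):
srvctl modify database -db PCPDBI_52 -role PHYSICAL_STANDBY
srvctl stop database -db PCPDBI_52 ; sleep 10 ; srvctl start database -db PCPDBI_52 -o mount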
It should then be related to metadata in the ODA registry. Let's have a look at the MySQL database where the registry is stored. Stop the DCS agent before touching anything:
systemctl stop initdcsagent
cd /opt/oracle/dcs/mysql/bin/
./mysql -u root --socket=/opt/oracle/dcs/mysql/log/mysqldb.sock
use dcsagentdb;
select name, dbrole from db where name='PCPDBI' or name='TSTDBI';
+----------+---------+
| name | dbrole |
+----------+---------+
| PCPDBI | NULL |
| TSTDBI | PRIMARY |
+----------+---------+
2 rows in set (0.00 sec)
For some reason, there is no database role for my standby in the ODA registry. Let's fix this and restart the odacli stack:
Create table db_20241115 as select * from db;
Update db set dbrole='STANDBY' where name='PCPDBI';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
Commit;
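-- hedged sanity check (my addition: worth verifying in the same session before restarting the agent):
select name, dbrole from db where name='PCPDBI';
-- dbrole should now show 'STANDBY'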
exit;
systemctl start initdcsagent
sleep 120 ; odacli create-preupgradereport -bm
...
This solved my problem.
Conclusion
Using DPR on an ODA running Dbvisit Standby works fine, but don't hesitate to check and fix the role of your databases in the ODA registry if you're stuck at the preupgradereport step.