Yann Neuhaus
Customer case study – SQL Server table partitioning
A client came to us with the following issues:
- Their database hosts multiple tables, including one of approximately 1 TB consisting of several billion rows. This table is continuously fed by various data sources and is also used to generate reports and charts based on the stored data.
- The client wanted to archive certain data to be able to delete it (selectively) without affecting the applications that use the table.
- They also wanted to reduce the duration of maintenance jobs for indexes, statistics, and those responsible for verifying data integrity.
During an audit of their environment, we also noticed that the database data files were hosted on a volume using an MBR partition type (which is limited to 2 TB).
Two tables were involved (though only one, the “Data” table, needed to be partitioned) because a foreign key constraint linked them. Here is what the tables looked like:
- Mode: ID
- Data: Date, Value, Mode ID

Possible solution: table partitioning

Partitioning in SQL Server is a technique used to divide large tables or indexes into smaller, more manageable pieces called partitions, while still being treated as a single logical entity. This improves performance, manageability, and scalability, especially for large datasets. In SQL Server, partitioning is done using a partition function, which defines how data is distributed based on a column (e.g., date, ID), and a partition scheme, which is a mapping mechanism that determines where the partitions of a partitioned table or index will be physically stored. Queries benefit from partition elimination, meaning SQL Server scans only relevant partitions instead of the entire table, optimizing execution. This is widely used in data warehousing, archiving, and high-transaction databases to enhance query performance and simplify maintenance.
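As a quick, hedged illustration (the names PF_MyData and dbo.MyPartitionedTable below are placeholders, not the objects created later in this post), the $PARTITION function shows which partition a value maps to, and sys.partitions shows how the rows are distributed:

-- Which partition would a given date land in? (hypothetical partition function)
SELECT $PARTITION.PF_MyData('2015-06-01') AS partition_number;

-- Row count per partition of a hypothetical partitioned table (heap or clustered index only)
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID(N'dbo.MyPartitionedTable')
  AND p.index_id IN (0, 1)
ORDER BY p.partition_number;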
Production implementation?

How can we implement this with minimal downtime, since we cannot modify the main table, which is used continuously?
In our case, we decided to create a new table that is an exact copy of the source table, except that this new table is partitioned. Here is how we proceeded:
Creating the corresponding filegroups

- We create one filegroup per year, from 2010 to 2060. We extend up to 2060 to avoid having to create a job that dynamically generates filegroups based on certain criteria.
- The files contained in these filegroups will be created in a new volume using the GPT partition type, allowing us to move the table to the new volume.
- The partition function will be based on a datetime column, which determines the partition ID according to the input value.
- The partition scheme maps each partition ID to a filegroup, which defines where the data is physically stored.
From here, we have at least two possibilities:

Create a partitioned (copy) table
- This table will have the same structure as the source table but without indexes.
- The new table will be partitioned using CREATE TABLE() ON partition_scheme().
- Initially, we copy data from the source table to the destination table up to a fixed date. This limit allows us to define a delta.
- We then build the indexes.
- The remaining data to be copied is the delta. The delta can be determined by selecting all rows with a date strictly greater than the last copied date. This is easier when the right indexes are in place.
Using indexes (same process but without the last step)
- We build the corresponding indexes (before copying data), which are an exact copy of those in the source table.
- We copy data from the source table to the destination table up to a fixed date.
- We copy the delta.
- Finally, we switch the tables using the sp_rename stored procedure.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/s1-1024x600.jpg)
In our case, we stopped the application for a few minutes and then imported all rows from the fixed date onward (since writes were paused for a specific period, we copied the delta).
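A minimal sketch of such a delta copy, using the demo objects created later in this post (dbo.T_Data as source and dbo.T_Data_Staging as destination) and an example cut-off date, could look like this:

DECLARE @CutOffDate DATETIME = '20241231 23:00:00'; -- example fixed date, adjust to your own cut-off

INSERT INTO dbo.T_Data_Staging (Data_Date, Data_Value, FK_Mode_ID)
SELECT Data_Date, Data_Value, FK_Mode_ID
FROM dbo.T_Data
WHERE Data_Date > @CutOffDate; -- strictly greater than the cut-off: only the delta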
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/s2-2-1024x644.jpg)
In our client’s case, there is a dependency on another table through a foreign key constraint. This is why two tables appear in the scripts (only T_Data is partitioned).
The code below reproduces the case we worked on.
Database creation:
USE master
GO
IF DB_ID('partitioning_demo') IS NOT NULL
BEGIN
ALTER DATABASE partitioning_demo SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE partitioning_demo
END
CREATE DATABASE partitioning_demo
ON PRIMARY
(
NAME = PARTITIONING_0_Dat,
FILENAME = 'S:\Data\partitioning_demo_0_Dat.mdf',
SIZE = 512MB,
FILEGROWTH = 512MB
)
LOG ON
(
name = PARTITIONING_0_Log,
filename = 'S:\Logs\partitioning_demo_0_Log.ldf',
size = 1024MB,
filegrowth = 512MB
)
GO
Table creation:
USE partitioning_demo
GO
CREATE TABLE dbo.T_Mode
(
Mode_ID INT NOT NULL
CONSTRAINT PK_Mode_ID PRIMARY KEY(Mode_ID)
)
GO
CREATE TABLE dbo.T_Data
(
Data_Date DATETIME NOT NULL,
Data_Value DECIMAL(18,3) NOT NULL,
FK_Mode_ID INT NOT NULL
CONSTRAINT PK_T_Data_1 PRIMARY KEY(Data_Date, FK_Mode_ID)
CONSTRAINT FK_T_Data_T_Mode FOREIGN KEY(FK_Mode_ID) REFERENCES dbo.T_Mode(Mode_ID)
)
GO
CREATE NONCLUSTERED INDEX [NCI-1] ON dbo.T_Data(Data_Value)
GO
Generate some data for the T_Mode table:
USE partitioning_demo
GO
SET NOCOUNT ON
DECLARE @i INT = 0, @Min INT = 1, @Max INT = 300
WHILE @i <= @Max
BEGIN
INSERT INTO dbo.T_Mode (Mode_ID) VALUES (@i)
SET @i = @i + 1
END
GO
Generate some data for the T_Data table:
USE partitioning_demo
GO
SET NOCOUNT ON
DECLARE
@i BIGINT,
@NbLinesMax BIGINT,
@StartDateTime DATETIME,
@DataDate DATETIME,
@DataValue DECIMAL(18,3),
@NbLinesFKModeID INT,
@FKModeID INT
SET @i = 0
SET @NbLinesMax = 7884000 --7884000 = number of minutes in 15 years (2010 through 2024)
SET @StartDateTime = DATEADD(yy, DATEDIFF(yy, 0, DATEADD(year, -15, GETDATE())), 0) --Start of the year 15 years ago, e.g. 2010-01-01 00:00:00
SET @NbLinesFKModeID = (SELECT COUNT(*) FROM partitioning_demo.dbo.T_Mode) - 1
WHILE @i <= @NbLinesMax
BEGIN
SET @DataDate = DATEADD(mi, @i, @StartDateTime)
SET @DataValue = ROUND(RAND(CHECKSUM(NEWID())) * (100000000000000), 3) --Generate random values for the Data_Value column
SET @FKModeID = ABS(CHECKSUM(NEWID()) % (@NbLinesFKModeID - 1 + 1)) + 1 --Generate random values for the FK_Mode_ID column
INSERT INTO dbo.T_Data (Data_Date, Data_Value, FK_Mode_ID)
VALUES (@DataDate, @DataValue, @FKModeID)
SET @i = @i + 1
END
GO
Here is what our data looks like:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/image-3.png)
We create the corresponding filegroups. We create more filegroups than necessary to anticipate future insertions:
USE master
GO
DECLARE @Year INT = 2010, @YearLimit INT = 2060, @SQLCmd NVARCHAR(max) = ''
WHILE (@Year <= @YearLimit)
BEGIN
SET @SQLCmd = @SQLCmd+'ALTER DATABASE partitioning_demo ADD FILEGROUP PARTITIONING_FG_'+CAST(@Year AS NVARCHAR(4))+'; '
SET @SQLCmd = @SQLCmd+'ALTER DATABASE partitioning_demo ADD FILE (NAME = PARTITIONING_F_'+CAST(@Year AS NVARCHAR(4))+'_Dat, FILENAME = ''S:\Data\PARTITIONING_F_'+CAST(@Year AS NVARCHAR(4))+'_Dat.mdf'', SIZE = 64MB, FILEGROWTH = 64MB) TO FILEGROUP PARTITIONING_FG_'+CAST(@Year AS NVARCHAR(4))+';'
SET @Year = @Year + 1
END
--PRINT @SQLCMD
EXEC(@SQLCMD)
We create our partition function to process data from 2010 to 2060:
USE partitioning_demo
GO
DECLARE
@i INT = -15,
@PartitionYear DATETIME = 0,
@PartitionFunctionName NVARCHAR(50) = '',
@SQLCMD NVARCHAR(MAX) = ''
SET @PartitionFunctionName = 'PF_T_Data'
SET @SQLCMD = 'CREATE PARTITION FUNCTION ' + @PartitionFunctionName + ' (DATETIME) AS RANGE RIGHT FOR VALUES ('
WHILE (@i <= 35)
BEGIN
SET @PartitionYear = (SELECT DATEADD(yy, DATEDIFF(yy, 0, DATEADD(year, @i, GETDATE())), 0)) -- Start of a year, e.g. 2010-01-01 00:00:00.000
IF (@i <> 35)
BEGIN
SET @SQLCMD = @SQLCMD + '''' + CAST(@PartitionYear AS NVARCHAR(150)) + ''', '
END
ELSE
BEGIN
SET @SQLCMD = @SQLCMD + '''' + CAST(@PartitionYear AS NVARCHAR(150)) + ''')'
END
SET @i = @i + 1
END
--PRINT @SQLCMD
EXEC(@SQLCMD)
We create our partition scheme:
USE partitioning_demo
GO
DECLARE
@PartitionFunctionName NVARCHAR(50) = '',
@PartitionSchemeName NVARCHAR(50) = '',
@SQLCMD NVARCHAR(MAX) = '',
@FGName NVARCHAR(100) = '',
@FGNames NVARCHAR(MAX) = ''
SET @PartitionFunctionName = 'PF_T_Data'
SET @PartitionSchemeName = 'PSCH_T_Data'
SET @SQLCMD = 'CREATE PARTITION SCHEME ' + @PartitionSchemeName + ' AS PARTITION ' + @PartitionFunctionName + ' TO ('
DECLARE filegroup_cursor CURSOR FOR
SELECT [name] FROM partitioning_demo.sys.filegroups ORDER BY data_space_id ASC
OPEN filegroup_cursor
FETCH filegroup_cursor INTO @FGName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @FGNames = @FGNames + '[' + @FGName + '],'
FETCH filegroup_cursor INTO @FGName
END
CLOSE filegroup_cursor
DEALLOCATE filegroup_cursor
SET @FGNames = LEFT(@FGNames, LEN(@FGNames) - 1) --Remove the ',' character at the end
SET @SQLCMD = @SQLCMD + @FGNames + ')'
--PRINT @SQLCMD
EXEC(@SQLCMD)
We will now create the new table. This is the one that will be partitioned and to which we will switch later:
USE partitioning_demo
GO
CREATE TABLE dbo.T_Data_Staging
(
Data_Date DATETIME NOT NULL,
Data_Value DECIMAL(18,3) NOT NULL,
FK_Mode_ID INT NOT NULL
) ON PSCH_T_Data(Data_Date)
GO
We now copy the data from the source table to the destination table. Since our destination table is a heap, we can use the TABLOCK table hint. This enables minimal logging, optimized locking and parallel inserts (provided the recovery model is bulk-logged or simple). In the example below, we only copy the year 2010:
USE partitioning_demo
GO
DECLARE @YearToProcess INT, @BeginDataDate DATETIME, @EndDataDate DATETIME;
SET @YearToProcess = 2010
SET @BeginDataDate = DATEADD(yy, DATEDIFF(yy, 0, CAST(@YearToProcess AS NVARCHAR)), 0)
SET @EndDataDate = DATEADD(ms, -3, DATEADD(yy, DATEDIFF(yy, 0, CAST(@YearToProcess AS NVARCHAR)) + 1, 0))
INSERT INTO dbo.T_Data_Staging WITH (TABLOCK)
(
Data_Date,
Data_Value,
FK_Mode_ID
)
SELECT * FROM dbo.T_Data WHERE Data_Date BETWEEN @BeginDataDate AND @EndDataDate
GO
The value of the @EndDataDate variable can be adjusted to copy the data up to the desired point.
We now create the corresponding indexes, matching those present in the source table. These indexes are themselves partitioned.
USE partitioning_demo
GO
ALTER TABLE dbo.T_Data_Staging ADD CONSTRAINT PK_T_Data_2 PRIMARY KEY(Data_Date, FK_Mode_ID)
GO
ALTER TABLE dbo.T_Data_Staging ADD CONSTRAINT FK_T_Data_T_Mode_2 FOREIGN KEY(FK_Mode_ID) REFERENCES dbo.T_Mode(Mode_ID)
GO
CREATE NONCLUSTERED INDEX [NCI-1-2] ON dbo.T_Data_Staging(Data_Value) ON PSCH_T_Data(Data_Date)
GO
Another possible strategy
From this point, another strategy is possible, which is as follows:
- We create the partitioned table.
- We create the corresponding indexes.
- We copy the data.
However, with this strategy the TempDB database may grow significantly. In that case, the following query hints can help (see the sketch after this list):
- min_grant_percent
- max_grant_percent
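The sketch below illustrates how these hints could be applied to the copy statement; the percentage values are examples only and need to be tuned to your workload:

INSERT INTO dbo.T_Data_Staging WITH (TABLOCK) (Data_Date, Data_Value, FK_Mode_ID)
SELECT Data_Date, Data_Value, FK_Mode_ID
FROM dbo.T_Data
OPTION (MIN_GRANT_PERCENT = 5, MAX_GRANT_PERCENT = 25); -- cap the memory grant of this statement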
We can also temporarily update statistics asynchronously. Once the vast majority of the data has been copied, we simply need to retrieve the delta and switch the tables. In our case, we had to stop the application for a few minutes before performing the switch as follows:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/s3-2.jpg)
USE partitioning_demo
GO
BEGIN TRANSACTION Table_Switch
EXEC sp_rename 'dbo.T_Data', 'T_Data_Old'
EXEC sp_rename 'dbo.T_Data_Staging', 'T_Data'
COMMIT TRANSACTION Table_Switch
We can then verify that our data has been properly distributed among the different filegroups (adjust the table name if you run this after the rename):
SELECT
OBJECT_NAME(p.object_id) as obj_name,
f.name,
p.partition_number,
p.rows,
p.index_id,
CASE
WHEN p.index_id = 1 THEN 'CLUSTERED INDEX'
WHEN p.index_id >= 2 THEN 'NONCLUSTERED INDEX'
END AS index_info
FROM sys.system_internals_allocation_units a
JOIN sys.partitions p
ON p.partition_id = a.container_id
JOIN sys.filegroups f
on a.filegroup_id = f.data_space_id
WHERE p.object_id = OBJECT_ID (N'dbo.T_Data_Staging')
ORDER BY f.name ASC
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/image-4.png)
For database maintenance, we can use Ola Hallengren’s solution (https://ola.hallengren.com).
Database integrity check

This solution allows us to verify data integrity and filter by filegroup. In our case, we have defined one filegroup per year, and the filegroup we write to regularly is the one for the current year.
Thus, we could implement the following strategy to reduce the time needed for data integrity verification:
- Create a job that checks the integrity of the current filegroup once per day.
- Create a job that checks the integrity of other filegroups once per week.
USE partitioning_demo
GO
DECLARE @Databases NVARCHAR(100), @FilegroupsToCheck NVARCHAR(max)
SET @Databases = 'partitioning_demo'
SET @FilegroupsToCheck = @Databases + '.PARTITIONING_FG_' + CAST(YEAR(GETDATE()) AS NVARCHAR)
EXECUTE [dba_tools].[dbo].[DatabaseIntegrityCheck]
@Databases = @Databases,
@CheckCommands = 'CHECKFILEGROUP',
@FileGroups = @FilegroupsToCheck,
@LogToTable = 'Y'
GO
The verification of the other filegroups can be done as follows:
DECLARE @Database NVARCHAR(250), @FilegroupsToCheck NVARCHAR(MAX)
SET @Database = 'partitioning_demo'
SET @FilegroupsToCheck = 'ALL_FILEGROUPS, -' + @Database + '.PARTITIONING_FG_' + CAST(YEAR(GETDATE()) AS NVARCHAR)
EXECUTE [dba_tools].[dbo].[DatabaseIntegrityCheck]
@Databases = 'partitioning_demo',
@CheckCommands = 'CHECKFILEGROUP',
@FileGroups = @FilegroupsToCheck,
@LogToTable = 'Y'
GO
Index maintenance:
Index maintenance can be a long and resource-intensive operation in terms of CPU and I/O. Based on our research, there is no way to rebuild or reorganize a specific partition using Ola Hallengren’s solution.
It may therefore be beneficial to maintain only the current partition (the one where data is updated) and exclude the other partitions. To achieve this, the following example can be used:
USE partitioning_demo
GO
DECLARE
@IndexName NVARCHAR(250),
@IndexId INT,
@ObjectId INT,
@PartitionNumber INT,
@SchemaName NVARCHAR(50),
@TableName NVARCHAR(100),
@IndexFragmentationValue INT,
@IndexFragmentationLowThreshold INT,
@IndexFragmentationHighThreshold INT,
@FilegroupToCheck NVARCHAR(250),
@PartitionToCheck INT,
@SQLCMD NVARCHAR(MAX) = ''
SET @FilegroupToCheck = 'PARTITIONING_FG_' + CAST(YEAR(GETDATE()) AS NVARCHAR)
SET @PartitionNumber = (
SELECT
DISTINCT
p.partition_number
FROM sys.system_internals_allocation_units a
JOIN sys.partitions p
ON p.partition_id = a.container_id
JOIN sys.filegroups f on a.filegroup_id = f.data_space_id
WHERE f.[name] = @FilegroupToCheck)
DECLARE index_cursor CURSOR FOR
SELECT DISTINCT idx.[name], idx.index_id, idx.[object_id], pts.partition_number, scs.[name], obj.[name]
FROM sys.indexes idx
INNER JOIN sys.partitions pts
ON idx.object_id = pts.object_id
INNER JOIN sys.objects obj
ON idx.object_id = obj.object_id
INNER JOIN sys.schemas scs
ON obj.schema_id = scs.schema_id
WHERE pts.partition_number = @PartitionNumber
OPEN index_cursor
FETCH index_cursor INTO @IndexName, @IndexId, @ObjectId, @PartitionNumber, @SchemaName, @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @IndexFragmentationValue = MAX(avg_fragmentation_in_percent)
FROM sys.dm_db_index_physical_stats(DB_ID('partitioning_demo'), @ObjectId, @IndexId, @PartitionNumber, 'LIMITED')
WHERE alloc_unit_type_desc = 'IN_ROW_DATA' AND index_level = 0
IF (@IndexFragmentationValue < 5)
BEGIN
PRINT 'No action to perform for the index : [' + @IndexName + '] ON [' + @SchemaName + '].[' + @TableName + ']'
END
ELSE IF (@IndexFragmentationValue BETWEEN 5 AND 20)
BEGIN
SET @SQLCMD = @SQLCMD + 'ALTER INDEX [' + @IndexName + '] ON [' + @SchemaName + '].[' + @TableName + '] REORGANIZE PARTITION = ' + CAST(@PartitionNumber AS NVARCHAR) + ';' + CHAR(13)
END
ELSE IF (@IndexFragmentationValue > 20)
BEGIN
SET @SQLCMD = @SQLCMD + 'ALTER INDEX [' + @IndexName + '] ON [' + @SchemaName + '].[' + @TableName + '] REBUILD PARTITION = ' + CAST(@PartitionNumber AS NVARCHAR) + ';' + CHAR(13)
END
FETCH index_cursor INTO @IndexName, @IndexId, @ObjectId, @PartitionNumber, @SchemaName, @TableName
END
CLOSE index_cursor
DEALLOCATE index_cursor
--PRINT @SQLCMD
EXEC(@SQLCMD)
Once the partition maintenance is completed, we can then maintain all other indexes (such as those of other tables) in the following way:
EXECUTE [dba_tools].[dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@UpdateStatistics = 'ALL',
@Indexes = 'ALL_INDEXES,-[partitioning_demo].[dbo].[PK_T_Data_2],-[partitioning_demo].[dbo].[NCI-1-2]',
@LogToTable = 'Y'
PostgreSQL 18: Introduce autovacuum_vacuum_max_threshold
Vacuum/Autovacuum is one of the critical parts of every PostgreSQL installation. When autovacuum is not configured properly for your workload, you’ll suffer from bloat and performance issues sooner or later. Most of the installations we’ve seen run with the defaults just fine, and a lot of people probably never need to adjust any of the autovacuum parameters. On the other hand, there are workloads where the defaults do not work nicely anymore and you need to adjust how autovacuum deals with specific tables. PostgreSQL 18 will come with a new parameter called “autovacuum_vacuum_max_threshold”, which gives you one more option to deal with a specific issue.
Before we look at the new parameter, let's take a look at when autovacuum kicks in with the default configuration. This is controlled by two parameters:
postgres=# show autovacuum_vacuum_threshold;
autovacuum_vacuum_threshold
-----------------------------
50
(1 row)
postgres=# show autovacuum_vacuum_scale_factor;
autovacuum_vacuum_scale_factor
--------------------------------
0.2
(1 row)
This means that approximately 20% of the table (the 0.2 of autovacuum_vacuum_scale_factor) plus 50 tuples (autovacuum_vacuum_threshold) need to change before autovacuum is triggered. Given this simple table with one million rows:
postgres=# \d t
Table "public.t"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+---------
a | integer | | |
b | text | | |
postgres=# select count(*) from t;
count
---------
1000000
(1 row)
… this can easily be triggered by changing more than 20% of the table:
postgres=# select last_autovacuum from pg_stat_all_tables where relname = 't';
last_autovacuum
------------------------------
2025-02-07 07:24:58.40076+01
(1 row)
postgres=# select now();
now
-------------------------------
2025-02-07 07:26:48.333006+01
(1 row)
postgres=# update t set b = 'xxx' where a < 250000;
UPDATE 249999
postgres=# select pg_sleep('60');
pg_sleep
----------
(1 row)
postgres=# select last_autovacuum from pg_stat_all_tables where relname = 't';
last_autovacuum
-------------------------------
2025-02-07 07:27:58.356337+01
(1 row)
The consequence is that the more rows you have in a table, the longer it takes for autovacuum to kick in. You can already deal with this today by adjusting “autovacuum_vacuum_threshold”, “autovacuum_vacuum_scale_factor”, or both, either on the table or globally at the instance level. If, for example, you want autovacuum to kick in after 10,000 rows have been changed in the above table, you can do it like this:
postgres=# alter table t set ( autovacuum_vacuum_threshold = 10000 );
ALTER TABLE
postgres=# alter table t set ( autovacuum_vacuum_scale_factor = 0 );
ALTER TABLE
Doing the same test as above but only changing 10001 rows:
postgres=# update t set b = 'aaa' where a < 10002;
UPDATE 10001
postgres=# select now();
now
-------------------------------
2025-02-07 07:54:35.295413+01
(1 row)
postgres=# select last_autovacuum from pg_stat_all_tables where relname = 't';
last_autovacuum
-------------------------------
2025-02-07 07:27:58.356337+01
(1 row)
postgres=# select pg_sleep(60);
pg_sleep
----------
(1 row)
postgres=# select last_autovacuum from pg_stat_all_tables where relname = 't';
last_autovacuum
------------------------------
2025-02-07 07:54:58.69969+01
(1 row)
The downside is that you need to deal with this manually. With the introduction of “autovacuum_vacuum_max_threshold”, PostgreSQL will handle those cases in a more “by default” way. The default for this parameter is quite high:
postgres=# show autovacuum_vacuum_max_threshold;
autovacuum_vacuum_max_threshold
---------------------------------
100000000
(1 row)
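To estimate when autovacuum would kick in for a given table under these settings, a query along the following lines can be used (a sketch based on the documented formula; it reads the global settings via current_setting() only, ignores per-table reloptions, and reltuples is just an estimate):

select relname
     , reltuples::bigint as estimated_rows
     , least( current_setting('autovacuum_vacuum_threshold')::numeric
              + current_setting('autovacuum_vacuum_scale_factor')::numeric * reltuples
            , current_setting('autovacuum_vacuum_max_threshold')::numeric
            )::bigint as rows_changed_to_trigger
  from pg_class
 where relname = 't';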
To see it in action, let's reset the table-level settings we applied above and set autovacuum_vacuum_max_threshold instead:
postgres=# alter table t reset ( autovacuum_vacuum_scale_factor );
ALTER TABLE
postgres=# alter table t reset ( autovacuum_vacuum_threshold );
ALTER TABLE
postgres=# alter table t set ( autovacuum_vacuum_max_threshold = 10000 );
ALTER TABLE
This will have exactly the same effect:
postgres=# update t set b = 'qqq' where a < 10002;
UPDATE 10001
postgres=# select now();
now
-------------------------------
2025-02-07 08:02:51.582044+01
(1 row)
postgres=# select last_autovacuum from pg_stat_all_tables where relname = 't';
last_autovacuum
------------------------------
2025-02-07 07:54:58.69969+01
(1 row)
postgres=# select pg_sleep(60);
pg_sleep
----------
(1 row)
postgres=# select last_autovacuum from pg_stat_all_tables where relname = 't';
last_autovacuum
-------------------------------
2025-02-07 08:02:58.809895+01
(1 row)
Nice, and as always, thanks to everyone involved.
SQL Server AlwaysOn – Failover does not work but everything is green on the cluster
Recently I was called by a customer because failover on a two-node SQL Server AlwaysOn cluster was not working.
I connect to the first node of the AlwaysOn cluster, which is the primary, and check the cluster with the Failover Cluster Manager.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/image.png)
Nothing indicates that it is not working. Everything is green, which is a good sign!
I also connect to the instance with SSMS and check all availability groups through the dashboards of the listeners. Everything is green and the databases are synchronized.
First, I try to do a failover through the Failover Cluster Manager:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/image-1.png)
Strange, it is not working: the group moves back to the primary…
As you can see in the screenshot, I have the message “The action ‘Move’ did not complete. For more data, see ‘Information Details’”
If you click on “Information Details”, you have the error message:
error code 0x80071398
The operation failed because either the specified cluster node is not the owner of the group, or the node is not a possible owner of the group
Ok, let’s try also by script with SSMS:
ALTER AVAILABILITY GROUP [xxx] FAILOVER;
GO
I have also an error message:
Failed to perform a manual failover of the availability group ‘xxx’ to server instance ‘xxx’. (Microsoft.SqlServer.Management.HadrModel)
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Failed to move a Windows Server Failover Clustering (WSFC) group to the local node (Error code 5016). If this is a WSFC availability group, the WSFC service may not be running or may not be accessible in its current state, or the specified cluster group or node handle is invalid. Otherwise, contact your primary support provider. For information about this error code, see “System Error Codes” in the Windows Development documentation.
Failed to designate the local availability replica of availability group ‘xxx’ as the primary replica. The operation encountered SQL Server error 41018 and has been terminated. Check the preceding error and the SQL Server error log for more details about the error and corrective actions. (Microsoft SQL Server, Error: 41018)
I begin to check all the parameters of the cluster…
After some minutes (well, hours…), I found that two little checkboxes were missing in the Failover Cluster Manager…
In the Failover Cluster Manager, on the listener, check the Resources, and specifically the Advanced Policies of the “Other Resources” and of the Server Name (both the Name and the IP address). In my case, the second server was not ticked as a possible owner for the server name and for the Other Resources:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/02/image-2.png)
After ticking the boxes, the failover was possible.
My advice is to always check these “Advanced Policies” on the cluster resources.
I hope this post helps you if you ever face this problem.
See you soon, as we continue to share our experience with you!
PostgreSQL: Indexes and casting
This is a small reminder to be careful with casting one data type to another in your queries when you want to have an index access rather than a sequential scan. Here is a small example of what can happen:
postgres=# create table t ( a int );
CREATE TABLE
postgres=# insert into t select * from generate_series(1,1000000);
INSERT 0 1000000
Creating an index on column “a” will speed up queries like this:
postgres=# create index i1 on t(a);
CREATE INDEX
postgres=# explain select * from t where a = 1;
QUERY PLAN
-----------------------------------------------------------------
Index Only Scan using i1 on t (cost=0.42..4.44 rows=1 width=4)
Index Cond: (a = 1)
(2 rows)
If, however, you add a cast to your query, this disables the index access:
postgres=# explain select * from t where a::text = '1';
QUERY PLAN
-----------------------------------------------------------------------
Gather (cost=1000.00..13216.67 rows=5000 width=4)
Workers Planned: 2
-> Parallel Seq Scan on t (cost=0.00..11716.67 rows=2083 width=4)
Filter: ((a)::text = '1'::text)
(4 rows)
If you really need the cast and still want an index access, then you need to create an index for this as well:
postgres=# create index i2 on t ( cast ( a as text ) );
CREATE INDEX
postgres=# explain select * from t where a::text = '1';
QUERY PLAN
---------------------------------------------------------------------
Bitmap Heap Scan on t (cost=95.17..4818.05 rows=5000 width=4)
Recheck Cond: ((a)::text = '1'::text)
-> Bitmap Index Scan on i2 (cost=0.00..93.92 rows=5000 width=0)
Index Cond: ((a)::text = '1'::text)
(4 rows)
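Often the expression index is not even needed: if the cast can be applied to the constant instead of the column, the original index stays usable (a quick sketch):

-- casting the literal, not the column, keeps index i1 usable
explain select * from t where a = '1'::int;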
This might seem obvious, but we still see it from time to time.
PostgreSQL 18: Per-relation cumulative statistics for [auto]vacuum and [auto]analyze
This is about another feature which will most likely show up in PostgreSQL 18 later this year. The statistics system gains more detail with almost every release of PostgreSQL, and PostgreSQL 18 will be no exception.
When you take a look at pg_stat_all_tables (or pg_stat_user_tables) in PostgreSQL 17, it looks like this:
postgres=# select version();
version
-----------------------------------------------------------------------------------------------------------------------------
PostgreSQL 17.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 14.2.1 20250110 (Red Hat 14.2.1-7), 64-bit
(1 row)
postgres=# \d pg_stat_all_tables
View "pg_catalog.pg_stat_all_tables"
Column | Type | Collation | Nullable | Default
---------------------+--------------------------+-----------+----------+---------
relid | oid | | |
schemaname | name | | |
relname | name | | |
seq_scan | bigint | | |
last_seq_scan | timestamp with time zone | | |
seq_tup_read | bigint | | |
idx_scan | bigint | | |
last_idx_scan | timestamp with time zone | | |
idx_tup_fetch | bigint | | |
n_tup_ins | bigint | | |
n_tup_upd | bigint | | |
n_tup_del | bigint | | |
n_tup_hot_upd | bigint | | |
n_tup_newpage_upd | bigint | | |
n_live_tup | bigint | | |
n_dead_tup | bigint | | |
n_mod_since_analyze | bigint | | |
n_ins_since_vacuum | bigint | | |
last_vacuum | timestamp with time zone | | |
last_autovacuum | timestamp with time zone | | |
last_analyze | timestamp with time zone | | |
last_autoanalyze | timestamp with time zone | | |
vacuum_count | bigint | | |
autovacuum_count | bigint | | |
analyze_count | bigint | | |
autoanalyze_count | bigint | | |
There already are statistics for [auto]vacuum and [auto]analyze, but there is no information about how much time the system spent in total on vacuum and analyze operations. This is now available:
postgres=# select version();
version
--------------------------------------------------------------------
PostgreSQL 18devel on x86_64-linux, compiled by gcc-14.2.1, 64-bit
(1 row)
postgres=# \d pg_stat_all_tables
View "pg_catalog.pg_stat_all_tables"
Column | Type | Collation | Nullable | Default
------------------------+--------------------------+-----------+----------+---------
relid | oid | | |
schemaname | name | | |
relname | name | | |
seq_scan | bigint | | |
last_seq_scan | timestamp with time zone | | |
seq_tup_read | bigint | | |
idx_scan | bigint | | |
last_idx_scan | timestamp with time zone | | |
idx_tup_fetch | bigint | | |
n_tup_ins | bigint | | |
n_tup_upd | bigint | | |
n_tup_del | bigint | | |
n_tup_hot_upd | bigint | | |
n_tup_newpage_upd | bigint | | |
n_live_tup | bigint | | |
n_dead_tup | bigint | | |
n_mod_since_analyze | bigint | | |
n_ins_since_vacuum | bigint | | |
last_vacuum | timestamp with time zone | | |
last_autovacuum | timestamp with time zone | | |
last_analyze | timestamp with time zone | | |
last_autoanalyze | timestamp with time zone | | |
vacuum_count | bigint | | |
autovacuum_count | bigint | | |
analyze_count | bigint | | |
autoanalyze_count | bigint | | |
total_vacuum_time | double precision | | |
total_autovacuum_time | double precision | | |
total_analyze_time | double precision | | |
total_autoanalyze_time | double precision | | |
To see that in action, let's create a small table and populate it:
postgres=# create table t ( a int, b text);
CREATE TABLE
postgres=# insert into t select i, i::text from generate_series(1,1000000) i;
INSERT 0 1000000
This triggers autovacuum (you’ll have to wait up to a minute before you see something, because of autovacuum_naptime) and you get the total time in milliseconds spent for autovacuum and autoanalyze:
postgres=# select last_autovacuum
, last_autoanalyze
, total_autovacuum_time
, total_autoanalyze_time
from pg_stat_all_tables
where relname = 't';
last_autovacuum | last_autoanalyze | total_autovacuum_time | total_autoanalyze_time
-------------------------------+-------------------------------+-----------------------+------------------------
2025-01-31 11:12:09.809252+01 | 2025-01-31 11:12:09.942748+01 | 187 | 134
(1 row)
The same happens if you manually trigger either vacuum or analyze:
postgres=# select last_vacuum
, last_analyze
, total_vacuum_time
, total_analyze_time
from pg_stat_all_tables
where relname = 't';
last_vacuum | last_analyze | total_vacuum_time | total_analyze_time
-------------+--------------+-------------------+--------------------
| | 0 | 0
(1 row)
postgres=# analyze t;
ANALYZE
postgres=# select last_vacuum
, last_analyze
, total_vacuum_time
, total_analyze_time
from pg_stat_all_tables
where relname = 't';
last_vacuum | last_analyze | total_vacuum_time | total_analyze_time
-------------+-------------------------------+-------------------+--------------------
| 2025-01-31 11:23:07.102182+01 | 0 | 52
(1 row)
postgres=# vacuum t;
VACUUM
postgres=# select last_vacuum
, last_analyze
, total_vacuum_time
, total_analyze_time
from pg_stat_all_tables
where relname = 't';
last_vacuum | last_analyze | total_vacuum_time | total_analyze_time
-------------------------------+-------------------------------+-------------------+--------------------
2025-01-31 11:23:12.286613+01 | 2025-01-31 11:23:07.102182+01 | 1 | 52
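Based on these new columns, a quick ranking of the relations by total [auto]vacuum time could look like this (a sketch, using pg_stat_user_tables):

select relname
     , vacuum_count + autovacuum_count as vacuum_runs
     , total_vacuum_time + total_autovacuum_time as total_vacuum_ms
     , round( ( (total_vacuum_time + total_autovacuum_time)
                / nullif(vacuum_count + autovacuum_count, 0) )::numeric, 2) as avg_ms_per_run
  from pg_stat_user_tables
 order by total_vacuum_ms desc
 limit 10;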
Nice, this really helps in identifying the relations on which [auto]vacuum and [auto]analyze spent most of their time. Thanks to all involved.
Managing multilingual documents including version control
- Manage version controlled documents
- The documents must be available in different languages
- Each document has its own initial language version
- The translated documents must be in relationship with the initial document
- Should a document version be subject to change, it is imperative that all related documents are updated accordingly
- After all document versions are updated and approved, the language collection becomes effective again
- Reviews and approvals are managed and logged
A dedicated folder system is in place to manage all documentation and versions, ensuring compliance with requirements. The system incorporates a process for document approval and review, with tracking capabilities. The language of the documents is a key component of their metadata, with relevant information found on the metadata card and in the documents themselves.
The language collection container and document state are displayed on a timeline

The diagram below illustrates the various states of the owning collection container. Please note that a change to the state of the language collection container is triggered automatically when a document is updated.
List of the possible states:
- All language documents are in the effective state = Language collection container is also effective
- One of the documents is in a review state = Language collection container is also under review
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/timeleine_controlled_documnets-removebg-preview-1.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/controlled_documnets_1-removebg-preview.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/controlled_documnets_2-removebg-preview.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/controlled_documnets_3-removebg-preview-1.png)
This diagram is great because it shows us that the language collection container and all the documents are in the ‘under review’ state.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/controlled_documnets_4-removebg-preview.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/controlled_documnets_5-removebg-preview.png)
Should you require further information on the version control of documents in M-Files, please refer to one of our recent webinars. They are available in German and French.
Conclusion

This solution is ideal for companies operating within a regulated and multilingual environment. It provides comprehensive tracking of document changes and ensures the integrity of the documents. Additionally, it offers extended logging capabilities to control document access.
The requirements below can all be satisfied, especially with the use of the QMS template.
- Compliance with FDA 21 CFR Part 11 or GDPR, for example
- Qualified electronic signature
- Share documents with customers or partners using M-Files Hubshare
- No-code document automation with M-Files Ment
Should you require further information on integrations and interfaces, please do not hesitate to visit the M-Files website or contact us to arrange a demonstration and preliminary business analysis. We would be delighted to discuss how we can support you.
Issue with external connector in M-Files Server 25.1 (January 2025)
I encountered an issue with the configuration of an external connector in M-Files Server 25.1 (January 2025). In my case, this was the M-Files Network Folder Connector. The installation of the Vault Application proceeded smoothly, as usual. I then attempted to configure the connector in the M-Files Admin Tool, but encountered a problem: the configuration tab was not available. Restarting the vault and refreshing the M-Files Admin Tool did not resolve the issue.
I began to investigate the issue by checking various M-Files sources, including the M-Files community, the Partner space, and the support case portal. However, I was unable to find a solution that addressed this particular scenario. I did come across one article that described the prerequisites for the connector. After checking my new M-Files Server 25.1 (January 2025), I was able to confirm that all prerequisites are fulfilled.
The next step in the process was to install the connector in our test environments. It was discovered that one of our test servers was still on the December release, as automatic updates had not been enabled. It was then realised that the connector was working fine in the previous release.
I submitted a support case with M-Files and provided a detailed description of the issue with the new release. After a few hours, I received feedback with a possible solution. They also created a knowledge-base article describing the solution.
Solution

The issue is related to the .Net 4.7.2 Runtime when used with Windows Server 2019. To resolve the issue, I installed the .Net 4.8 Runtime on the M-Files Server 25.1 (January 2025).
Should you require further details, please refer to the M-Files support article.
Conclusion

After implementing the workaround proposed by M-Files, the Network Folder Connector was working as expected and I was able to continue with the configuration of M-Files Server 25.1 (January 2025).
Furthermore, the issue is not exclusive to the M-Files Network Folder Connector. The same symptoms can also be observed in external databases utilising the OleDB driver.
In the event that your environment is hosted on the M-Files Cloud, the issue may occur on the local Ground Link proxy server.
In some cases, the intelligence services from M-Files can also be affected. Please refer to the image below, which is from the M-Files article.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-80.png)
Should you encounter any issues or require support, we are here to assist you.
How to clone remote PDB with dbaascli on ExaCC?
When your database infrastructure moves to ExaCC, it is a good idea to get familiar with the tooling it offers to make your life easier. On ExaCC, this tool is dbaascli. It manages many aspects of the ExaCC layers, and in this blog we focus on cloning a remote PDB with dbaascli!
The environment used for this blog is the following: a source PDB (PDBORA1) hosted in CDBORA1 on ExaCC-1, which we clone into CDBORA2 on ExaCC-2 as PDBORA2.
The target PDB (PDBORA2) must not already exist on the target CDB (CDBORA2), otherwise an error will be raised.
In addition, the SYS password of the remote (source) CDB will be requested.
You must connect to the target server with the root or oracle OS account. In my case, I use the oracle account.
The dbaascli syntax is simple and friendly:
dbaascli pdb remoteClone --pdbName <value> --dbName <value> --sourceDBConnectionString <value> [--targetPDBName <value>] [--powerLimit <value>] [--maxCPU <value>] [--maxSize <value>] [--resume [--sessionID <value>]] [--executePrereqs] [--waitForCompletion <value>] [--sourcePDBExportedTDEKeyFile <value>]
{
[--blobLocation <value>]
| [--standbyBlobFromPrimary <value>]
}
[--excludeUserTablespaces <value>]
[--excludePDBData <value>]
[--pdbAdminUserName <value>]
[--lockPDBAdminAccount <value>]
[--sourcePDBServiceConvertList <value>]
[--refreshablePDB --refreshMode <value> [--refreshIntervalInMinutes <value>] --dblinkUsername <value>
[--honorCaseSensitiveUserName]]
[--updateDBBlockCacheSize]
The most important parameters are:
- --pdbName: specifies the name of the source PDB that you want to clone
- --dbName: specifies the name (DB_NAME) of the CDB that hosts the newly cloned PDB
- --sourceDBConnectionString: specifies the source database connection string in the format scan_name:scan_port/database_service_name
- --targetPDBName: specifies the name of the target PDB (the new cloned PDB)
For this blog, we use the following command:
oracle@ExaCC-2:~/ [CDBORA2] dbaascli pdb remoteclone --pdbname PDBORA1 --dbname CDBORA2 --targetPDBName PDBORA2 --sourceDBConnectionString ExaCC-1-scan.mydomain.ch:1521/CDBORA1.mydomain.ch
DBAAS CLI version 24.3.2.0.0
Executing command pdb remoteclone --pdbname PDBORA1 --dbname CDBORA2 --targetPDBName PDBORA2 --sourceDBConnectionString ExaCC-1-scan.mydomain.ch:1521/CDBORA1.mydomain.ch
Job id: 688a8b27-40b1-4115-9fbb-deabc741a235
Session log: /var/opt/oracle/log/CDBORA2/pdb/remoteClone/dbaastools_2025-01-28_11-27-30-AM_153668.log
Enter REMOTE_DB_SYS_PASSWORD:
Enter REMOTE_DB_SYS_PASSWORD (reconfirmation):
Loading PILOT...
Session ID of the current execution is: 689
Log file location: /var/opt/oracle/log/CDBORA2/pdb/remoteClone/pilot_2025-01-28_11-33-38-AM_178552
-----------------
Running Plugin_initialization job
Enter TDE_PASSWORD:
****************
Enter REMOTE_DB_SYS_PASSWORD
****************
Completed Plugin_initialization job
-----------------
Running Validate_input_params job
Completed Validate_input_params job
-----------------
Running Validate_target_pdb_service_name job
Completed Validate_target_pdb_service_name job
-----------------
Running Perform_dbca_prechecks job
Completed Perform_dbca_prechecks job
-----------------
Running Perform_pdb_cross_release_prechecks job
Completed Perform_pdb_cross_release_prechecks job
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Acquiring read lock: CDBORA2
Acquiring write lock: PDBORA2
-----------------
Running PDB_creation job
Completed PDB_creation job
-----------------
Running Load_pdb_details job
Completed Load_pdb_details job
-----------------
Running Configure_pdb_service job
Completed Configure_pdb_service job
-----------------
Running Configure_tnsnames_ora job
Completed Configure_tnsnames_ora job
-----------------
Running Set_pdb_admin_user_profile job
Completed Set_pdb_admin_user_profile job
-----------------
Running Lock_pdb_admin_user job
Completed Lock_pdb_admin_user job
-----------------
Running Register_ocids job
Skipping. Job is detected as not applicable.
-----------------
Running Prepare_blob_for_standby_in_primary job
Skipping. Job is detected as not applicable.
Releasing lock: PDBORA2
Releasing lock: CDBORA2
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
-----------------
Running Generate_dbsystem_details job
Acquiring native write lock: global_dbsystem_details_generation
Releasing native lock: global_dbsystem_details_generation
Completed Generate_dbsystem_details job
dbaascli execution completed
A new PDB has now been created on the target CDB:
oracle@ExaCC-2:~/ [] CDBORA2
*******************************************************
INSTANCE_NAME : CDBORA21
DB_NAME : CDBORA2
DB_UNIQUE_NAME : CDBORA2
STATUS : OPEN READ WRITE
LOG_MODE : ARCHIVELOG
USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/16
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : YES
FORCE_LOGGING : YES
VERSION : 19.25.0.0.0
NLS_LANG : AMERICAN_AMERICA.WE8ISO8859P15
CDB_ENABLED : YES
PDBs : PDBORA2 PDB$SEED
*******************************************************
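As a final, generic verification (not part of the dbaascli output above), the new PDB can be checked from SQL*Plus on the target CDB:

-- connected to CDBORA2 as SYSDBA
select con_id, name, open_mode, restricted
  from v$pdbs
 where name = 'PDBORA2';

-- if required, open the PDB and persist its state across restarts
-- alter pluggable database PDBORA2 open;
-- alter pluggable database PDBORA2 save state;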
Creating your private cloud using OpenStack – (7) – Horizon, the OpenStack dashboard
We’ve finished the last post with a working Network (Neutron) service on the controller and compute node. In this post we’ll set up the final service, Horizon, the OpenStack dashboard. Once more, looking back at what we need at a minimum, we’ve done most of it by now:
- Keystone: Identity service (done)
- Glance: Image service (done)
- Placement: Placement service (done)
- Nova: Compute service (done)
- Neutron: Network service (done)
- Horizon: The OpenStack dashboard
Currently, the overview of our playground looks like this:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack12-1-1024x448.png)
Today we’re going to setup Horizon on the controller node, which is the final part of this little blog series about creating your own OpenStack playground.
Compared to the installation and configuration of the other services, getting Horizon up and running is pretty simple. Horizon comes as a Django web application, and the web server is already running on the controller node anyway.
All we need to do is to install the operating system packages and do a little bit of configuration afterwards.
There are only two packages to install on the controller node:
[root@controller ~]$ dnf install python3-django openstack-dashboard -y
… and there are only two configuration files to adapt:
- /etc/openstack-dashboard/local_settings
- /etc/httpd/conf.d/openstack-dashboard.conf
Here is the content of the “local_settings” file (the “LOGGING” and “SECURITY_GROUP_RULES” blocks are kept out to keep this small):
[root@controller ~]$ egrep -v "^#|^$" /etc/openstack-dashboard/local_settings
import os
from django.utils.translation import gettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
ALLOWED_HOSTS = ['*']
LOCAL_PATH = '/tmp'
SECRET_KEY='903bceffe2420fd6b5ac'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
'LOCATION': 'controller:11211',
},
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/identity/v3" % OPENSTACK_HOST
TIME_ZONE = "Europe/Zurich"
LOGGING = {
...
}
SECURITY_GROUP_RULES = {
...
},
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_fip_topology_check': False
}
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
WEBROOT = '/dashboard'
The configuration for the web server is:
[root@controller ~]$ cat /etc/httpd/conf.d/openstack-dashboard.conf
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
Alias /dashboard/static /usr/share/openstack-dashboard/static
<Directory /usr/share/openstack-dashboard/openstack_dashboard/>
Options All
AllowOverride All
Require all granted
</Directory>
<Directory /usr/share/openstack-dashboard/static>
Options All
AllowOverride All
Require all granted
</Directory>
The only thing left to do is to restart the web server and MemcacheD:
[root@controller ~]$ systemctl restart httpd.service memcached.service
The dashboard is now available at http://controller/dashboard:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack13.png)
As we did not configure additional domains or users, the login credentials are (if you used the same passwords as in this blog series, the password is “admin”):
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack14.png)
This will bring you to the “Overview”
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack16-1024x475.png)
… and from there you can dive into the various sections and have a look at the currently available components, e.g. the image we uploaded when we set up Glance (the Image service):
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack17-1024x475.png)
… or the service project we’ve created previously:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack18-1024x475.png)
To finalize this series, here is the complete overview of what we did:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack19-1024x523.png)
Happy testing with OpenStack.
Data historization
Data historization, a key process in any BI project, consists of keeping and managing different versions of the data to enable historical analysis, traceability and trend tracking.
1 – Why keep history?
Historization answers key questions:
- What was the value of an indicator at a given point in time?
- Which data changed, when, and why?
Depending on the client's needs, historization may not be necessary if historical analysis or trends are not a priority.
2 – Why keep a history of my data?
Data historization brings many benefits, in particular in the following areas:
- Temporal analysis
In some sectors, it is necessary to be able to consult old data, in order to read trends more clearly and identify the strengths and weaknesses of the business.
Formally identifying which product works and which does not makes it possible to steer the company's strategy accordingly.
- Traceability
Identifying all the stages in the life of a stored piece of information makes it possible, through analysis, to understand its evolution. It complements temporal analysis but does not have the same objectives. Understanding all the stages in a product's evolution makes it possible, for example, to focus on production methods in order to improve the cycle.
- Audit
With the introduction of standards on data management, drilling into the data history, that is, exploring the temporal evolution of information in depth, makes it possible to extract trends and to take decisions that comply with the applicable standards and laws.
- Comparison
Comparing data across years, quarters and months is an important element of historization. Complementary to temporal analysis, following the evolution and identifying trends is a key point of historization.
- More reliable analyses
An analysis based on incomplete data could be biased, potentially leading to hasty decisions intended to fix a problem or reinforce a positive trend.
3 – Historization approaches and techniques
There are multiple historization strategies, each based on a different approach to managing your data (a small SQL sketch follows the list below).
- Overwriting data
This method consists of overwriting the existing data on update, without keeping any history. Each piece of information is replaced based on its key, which makes the method simple and inexpensive. On the other hand, it prevents any traceability and any in-depth analysis of the data.
- Full historization
No data is ever deleted; the information is generally timestamped, and more or less complex queries make it possible to identify the current version. The advantage of this method is that analyses will be very relevant; however, it has a cost in terms of volume and overall query performance, and it requires architectural effort to sustain the ever-growing volume of information.
- Historization through a transaction journal
This method consists of moving the data that has just been modified into a separate table containing timestamps and a tag describing the type of action performed. On update or delete, the information is moved to this dedicated table. The advantage is that the data is stored in its entirety, allowing temporal drill-downs and analyses without cluttering the current table with past information. The drawback is that comparisons become more difficult, since they span several tables.
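As an illustration of the full-historization approach, a minimal "one row per version" table and the corresponding update logic could look like this (generic names, to be adapted to your own model):

-- one row per version; the current version is the one with valid_to IS NULL
CREATE TABLE product_history (
    product_id  INT            NOT NULL,
    price       DECIMAL(10,2)  NOT NULL,
    valid_from  TIMESTAMP      NOT NULL,
    valid_to    TIMESTAMP      NULL,
    PRIMARY KEY (product_id, valid_from)
);

-- close the current version ...
UPDATE product_history
   SET valid_to = CURRENT_TIMESTAMP
 WHERE product_id = 42
   AND valid_to IS NULL;

-- ... and insert the new one
INSERT INTO product_history (product_id, price, valid_from, valid_to)
VALUES (42, 19.90, CURRENT_TIMESTAMP, NULL);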
5 – Constraints and costs
As always in a BI project (and in IT in general), it is essential to structure and frame the process properly upfront.
Managing history has a cost in resources, processing time, storage, and performance when extracting data. It is therefore crucial to take the time to inform the client about the various side effects.
- Infrastructure and storage: whether in the cloud or on-premises, data storage has a cost, and processing time can impact the processes running in parallel. Data growth affects the overall scalability of every application that uses the data. A sound analysis is needed to propose the infrastructure that will require the least maintenance and thus reduce costs.
- Overall performance: a process must not suffer from the existing data. Loading new data into an already full table must not get a little slower every day. The resulting technical choices are essential:
- ETL or ELT
- incremental load or truncate-and-reload
- overwrite or tag the information.
6 – Important questions
- What to historize: keeping history, yes, but of what? Some data has no historical meaning, while other data absolutely must be kept. A good analysis of your client's business will allow you to make the most relevant proposals on this subject.
- Granularity: keeping a history of every minute of activity is rarely relevant. Prefer a frequency adapted to the business needs.
- Optimization: purge and archiving mechanisms, the limits on how the data can be used, and the value of a record that is 20 years old.
- Documentation: define the historization rules and the reasoning behind the implementation in order to make the process sustainable.
7 – Examples
Finance:
In the financial sector, historized data makes it possible to:
- Detect fraud: identify suspicious transactions by spotting abnormal behaviour compared to history or suspicious processing patterns.
- Analyse risk: assess customers' creditworthiness by studying their payment history and past financial behaviour.
- Optimize investment strategies: identify stock-market trends or forecast market evolution.
Retail:
In retail, historization is used to:
- Forecast demand: anticipate stock needs based on seasonal trends or past promotions.
- Analyse customer behaviour: study purchase history to personalize offers and maximize sales.
- Optimize promotions: identify the periods when promotions had the biggest impact on sales.
Human resources:
In talent management, historical data can help to:
- Predict attrition: identify employees at risk of leaving by analysing data such as absences or performance.
- Optimize recruitment: spot candidate profiles that have historically performed better in similar positions.
- Analyse skills evolution: track training and performance to anticipate future skills needs.
Healthcare:
In healthcare, historized data enables:
- Hospitalization forecasting: estimating bed or staffing needs based on past epidemiological data.
- Chronic patient follow-up: identifying trends in the evolution of a patient's health to adjust treatments.
- Medical resource optimization: forecasting activity peaks in medical departments to adjust schedules.
Conclusion:
AI applied to historized data is a powerful lever, a cornerstone, for turning simple observations into predictive and prescriptive analyses. By combining Databricks, Power BI and a touch of Python, you have a modern and accessible solution to explore the rich and exciting world that is AI.
Creating your private cloud using OpenStack – (6) – The Networking service
We’ve finished the last post with a working Compute (Nova) service on the controller and compute node. While only the compute node(s) actually run compute resources, Nova also runs on the controller for the management tasks, and libvirt runs only on the compute node. Once more, looking back at what we need at a minimum, we’ve done most of it by now:
- Keystone: Identity service (done)
- Glance: Image service (done)
- Placement: Placement service (done)
- Nova: Compute service (done)
- Neutron: Network service
- Horizon: The OpenStack dashboard
The Network service (Neutron), at least for me, is the hardest part to get right because it requires the most configuration. As a small refresher, this is the setup we have right now:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack11-1-1024x408.png)
Today, we’re adding the next bit to this: The Network service. Neutron provides “network connectivity as a service” between interfaces, and those are managed by other OpenStack services such as Nova (Compute). You have two choices for implementing this: Provider Networks and Self-service Networks. To keep it as simple as possible for the scope of this blog series, we’ll go for a Provider Network.
As with most of the other services, Neutron needs a database:
[root@controller ~]$ su - postgres -c "psql -c \"create user neutron with login password 'admin'\""
CREATE ROLE
[root@controller ~]$ su - postgres -c "psql -c 'create database neutron with owner=neutron'"
CREATE DATABASE
[root@controller ~]$ su - postgres -c "psql -l"
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
------------+-----------+----------+-----------------+-------------+-------------+------------+-----------+-----------------------
glance | glance | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
keystone | keystone | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
neutron | neutron | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
nova | nova | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
nova_api | nova | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
nova_cell0 | nova | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
placement | placement | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
(10 rows)
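If you want to be sure the new role can actually connect with the password we just set, a quick check could look like the following. This is optional and assumes PostgreSQL on the controller accepts password authentication over TCP, as configured in the earlier posts of this series:
[root@controller ~]$ PGPASSWORD=admin psql -h localhost -U neutron -d neutron -c "select current_user, current_database()"
If this fails, it is better to fix pg_hba.conf or the password now, because the Neutron database migration further down will hit the same issue.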
As with most of the other services, we need to install the packages, set up the service, and create credentials and API endpoints. For the packages, this is what we need on the controller node:
[root@controller ~]$ dnf install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
For the service, credentials and API endpoints it is more or less the same as for the other services:
[root@controller ~]$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
No password was supplied, authentication will fail when a user does not have a password.
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | None |
| domain_id | default |
| email | None |
| enabled | True |
| id | 9c9c6e6b622a4e31a176636f4ac0d8d3 |
| name | neutron |
| description | None |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@controller ~]$ openstack role add --project service --user neutron admin
[root@controller ~]$ openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| id | 870999a2fb914fe4b9fd8a24a330215f |
| name | neutron |
| type | network |
| enabled | True |
| description | OpenStack Networking |
+-------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | fb73d8b1db254fea9e4e12f61f4316a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 870999a2fb914fe4b9fd8a24a330215f |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c5a7982953e44ea0b612488bebbb88d8 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 870999a2fb914fe4b9fd8a24a330215f |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ed9b1e2ebf8b4c7daa9b944b710b8ade |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 870999a2fb914fe4b9fd8a24a330215f |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
Neutron is one of the services which needs configuration on both the controller and the compute nodes. For the controller, this is what we need (more on what all of this means in a later post, when we go into the details):
[root@controller ~]$ egrep -v "^#|^$" /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
dhcp_agents_per_network = 2
transport_url = rabbit://openstack:admin@controller:5672/
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
use_helper_for_ns_read = true
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
[cache]
[cors]
[database]
backend = postgresql
connection=postgresql+psycopg2://neutron:admin@localhost/neutron
[designate]
[experimental]
[healthcheck]
[ironic]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = admin
[nova]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = admin
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[placement]
[privsep]
[profiler]
[profiler_jaeger]
[profiler_otlp]
[quotas]
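If you prefer not to edit these files by hand, the same settings can be scripted. This is just a sketch and assumes the crudini utility is available in your repositories (it is not part of the Neutron packages themselves):
[root@controller ~]$ dnf install crudini -y
[root@controller ~]$ crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@controller ~]$ crudini --set /etc/neutron/neutron.conf database connection "postgresql+psycopg2://neutron:admin@localhost/neutron"
[root@controller ~]$ crudini --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000/
The remaining parameters can be set the same way, which makes the whole configuration reproducible across nodes.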
Configure the Modular Layer 2 (ML2) plugin on the controller node:
[root@controller ~]$ egrep -v "^#|^$" /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = provider
[ml2_type_vxlan]
[ovn]
[ovn_nb_global]
[ovs]
[ovs_driver]
[securitygroup]
[sriov_driver]
Populate the database (we already know this step from the previous services that use the database as a backend):
[root@controller ~]$ /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Running upgrade for neutron ...
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> kilo
INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225
INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151
...
INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
OK
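To confirm that the migration really created the schema, you can list a few of the tables directly in PostgreSQL. This is purely optional:
[root@controller ~]$ su - postgres -c "psql -d neutron -c '\dt'" | head -20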
Start the server on the controller node (notice the systemd output for the ovs-vswitchd service):
[root@controller ~]$ systemctl enable neutron-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/neutron-server.service → /usr/lib/systemd/system/neutron-server.service.
[root@controller ~]$ systemctl enable ovs-vswitchd.service
The unit files have no installation config (WantedBy=, RequiredBy=, Also=,
Alias= settings in the [Install] section, and DefaultInstance= for template
units). This means they are not meant to be enabled or disabled using systemctl.
Possible reasons for having this kind of units are:
• A unit may be statically enabled by being symlinked from another unit's
.wants/ or .requires/ directory.
• A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
• A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
• In case of template units, the unit is meant to be enabled with some
instance name specified.
[root@controller ~]$ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]$ systemctl start neutron-server.service
[root@controller ~]$ systemctl start ovs-vswitchd.service
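At this point the Neutron API should be listening on port 9696 (the port we used for the endpoints above). A simple way to confirm this, assuming curl is installed, is to ask for the API versions:
[root@controller ~]$ curl -s http://controller:9696 | python3 -m json.tool
This should return a small JSON document describing the available API versions; if the connection is refused, check the neutron-server logs (typically under /var/log/neutron/).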
You may already have noticed that Open vSwitch is used here. Now is the time to configure it:
[root@controller ~]$ egrep -v "^#|^$" /etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[agent]
[dhcp]
[metadata]
[network_log]
[ovs]
bridge_mappings = provider:br-provider
ovsdb_connection = tcp:127.0.0.1:6641
[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
Have a close look at the bridge_mappings parameter, because it defines the bridge we’re going to create right now, and this is where the second interface (enp7s0) on the nodes comes into play:
[root@controller ~]$ ovs-vsctl add-br br-provider
[root@controller ~]$ ovs-vsctl add-port br-provider enp7s0
[root@controller ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:38:00:73 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.90/24 brd 192.168.122.255 scope global noprefixroute enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe38:73/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
link/ether 52:54:00:81:1d:26 brd ff:ff:ff:ff:ff:ff
inet6 fe80::9f14:4737:4cf7:d88f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether aa:e4:50:d2:36:f9 brd ff:ff:ff:ff:ff:ff
5: br-provider: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether fe:84:3e:98:e4:45 brd ff:ff:ff:ff:ff:ff
Set the controller and the manager for the “br-provider” bridge:
[root@controller ~]$ ovs-vsctl set-controller br-provider ptcp:6640
[root@controller ~]$ ovs-vsctl set-manager ptcp:6641
[root@controller ~]$ ovs-vsctl show
a60d818d-3738-4c8e-a9ce-40b84163b14e
Manager "tcp:localhost:6641"
Bridge br-provider
Controller "ptcp:6640"
Port enp7s0
Interface enp7s0
Port br-provider
Interface br-provider
type: internal
ovs_version: "3.3.4-71.el9s"
Because we want the instances we’ll create later on to get their IP addresses automatically, we need to configure the DHCP agent on the controller node:
[root@controller ~]$ egrep -v "^#|^$" /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[metadata_rate_limiting]
[ovs]
… and finally the Metadata Agent on the controller node:
[root@controller ~]$ egrep -v "^#|^$" /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = admin
[agent]
[cache]
Switching over to the compute node, we must configure Nova (Compute) to use the Networking Service (Neutron). Before we do this, install the required packages on the compute node and configure Open vSwitch in the same way as previously:
[root@compute ~]$ dnf install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch python3-psycopg2 -y
[root@compute ~]$ systemctl start ovs-vswitchd.service
[root@compute ~]$ ovs-vsctl add-br br-provider
[root@compute ~]$ ovs-vsctl add-port br-provider enp7s0
[root@compute ~]$ ovs-vsctl set-manager ptcp:6641
[root@compute ~]$ ovs-vsctl set-controller br-provider ptcp:6640
To configure Nova to use Neutron, add the “[neutron]” section to “/etc/nova/nova.conf”:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = admin
service_metadata_proxy = true
metadata_proxy_shared_secret = admin
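Changes to /etc/nova/nova.conf are only picked up when the service starts, so since the Compute service is already running on this node (we enabled and started it in the previous post), it needs a restart after adding the “[neutron]” section:
[root@compute ~]$ systemctl restart openstack-nova-compute.service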
Configure Neutron:
[root@compute ~]$ egrep -v "^#|^$" /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
transport_url = rabbit://openstack:admin@controller
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
use_helper_for_ns_read = true
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
[cache]
[cors]
[database]
backend = postgresql
connection=postgresql+psycopg2://neutron:admin@controller/neutron
[designate]
[experimental]
[healthcheck]
[ironic]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = admin
[nova]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = admin
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[placement]
[privsep]
[profiler]
[profiler_jaeger]
[profiler_otlp]
[quotas]
[ssl]
… and the Open vSwitch agent:
[root@compute ~]$ egrep -v "^#|^$" /etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[agent]
[dhcp]
[metadata]
[network_log]
[ovs]
bridge_mappings = provider:br-provider
ovsdb_connection = tcp:127.0.0.1:6641
[securitygroup]
firewall_driver =
… and the Metadata Agent:
[root@compute ~]$ egrep -v "^#|^$" /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = admin
[agent]
[cache]
[root@compute ~]$ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Start the services on the controller and the compute nodes:
# controller
[root@controller ~]$ systemctl enable neutron-server.service \
neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service → /usr/lib/systemd/system/neutron-openvswitch-agent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service → /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service → /usr/lib/systemd/system/neutron-metadata-agent.service.
# compute
[root@compute ~]$ systemctl enable neutron-server.service \
neutron-openvswitch-agent.service \
neutron-metadata-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/neutron-server.service → /usr/lib/systemd/system/neutron-server.service.
Created symlink /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service → /usr/lib/systemd/system/neutron-openvswitch-agent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service → /usr/lib/systemd/system/neutron-metadata-agent.service.
# controller
[root@controller ~]$ systemctl start neutron-server.service \
neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# compute
[root@compute ~]$ systemctl start neutron-server.service \
neutron-openvswitch-agent.service \
neutron-metadata-agent.service
With that done, we need to create the network we want to use, on the controller node (adapt the network to your own setup):
[root@controller ~]$ openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2025-01-22T13:25:21Z |
| description | |
| dns_domain | None |
| id | aa8bd4f9-4d89-4c7f-803c-c56aaf8f8f57 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 920bf34a6c88454f90d405124ca1076d |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2025-01-22T13:25:21Z |
+---------------------------+--------------------------------------+
[root@controller ~]$ openstack subnet create --network provider \
--allocation-pool start=10.0.0.101,end=10.0.0.250 \
--dns-nameserver 8.8.4.4 --gateway 10.0.0.1 \
--subnet-range 10.0.0.0/24 provider
+----------------------+--------------------------------------+
| Field | Value |
+----------------------+--------------------------------------+
| allocation_pools | 10.0.0.101-10.0.0.250 |
| cidr | 10.0.0.0/24 |
| created_at | 2025-01-22T13:25:54Z |
| description | |
| dns_nameservers | 8.8.4.4 |
| dns_publish_fixed_ip | None |
| enable_dhcp | True |
| gateway_ip | 10.0.0.1 |
| host_routes | |
| id | 77ba8f00-edeb-4555-8c2a-be48b24f0320 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | aa8bd4f9-4d89-4c7f-803c-c56aaf8f8f57 |
| project_id | 920bf34a6c88454f90d405124ca1076d |
| revision_number | 0 |
| router:external | True |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2025-01-22T13:25:54Z |
+----------------------+--------------------------------------+
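As a quick, optional sanity check (the output is not shown here), the network and subnet should now be listed, and the DHCP agent we configured earlier should usually have created a port on the provider network:
[root@controller ~]$ openstack network list
[root@controller ~]$ openstack subnet list
[root@controller ~]$ openstack port list --network provider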
Finally, verify that everything is working as expected:
[root@controller ~]$ openstack extension list --network
+-----------------------------------------------------------+---------------------------------------------+-----------------------------------------------------------+
| Name | Alias | Description |
+-----------------------------------------------------------+---------------------------------------------+-----------------------------------------------------------+
| Address group | address-group | Support address group |
| Address scope | address-scope | Address scopes extension. |
| agent | agent | The agent management extension. |
| Agent's Resource View Synced to Placement | agent-resources-synced | Stores success/failure of last sync to Placement |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| Availability Zone | availability_zone | The availability zone extension. |
| Availability Zone Filter Extension | availability_zone_filter | Add filter parameters to AvailabilityZone resource |
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the |
| | | default. |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Empty String Filtering Extension | empty-string-filtering | Allow filtering by attributes with empty string value |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE |
| | | boot options to DHCP clients can be specified (e.g. tftp- |
| | | server, server-ip-address, bootfile-name) |
| Filter parameters validation | filter-validation | Provides validation on filter parameters. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |
| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing |
| | | ports |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical |
| | | networks |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and |
| | | subnet. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Port device profile | port-device-profile | Expose the port device profile (Cyborg) |
| Neutron Port MAC address override | port-mac-override | Allow overriding the MAC address of a direct-physical |
| | | Port via the active binding profile |
| Neutron Port MAC address regenerate | port-mac-address-regenerate | Network port MAC address regenerate |
| Port NUMA affinity policy | port-numa-affinity-policy | Expose the port NUMA affinity policy |
| Port NUMA affinity policy "socket" | port-numa-affinity-policy-socket | Adds "socket" to the supported port NUMA affinity |
| | | policies |
| Port Binding | binding | Expose port bindings of a virtual port to external |
| | | application |
| Port Bindings Extended | binding-extended | Expose port bindings of a virtual port to external |
| | | application |
| Port Security | port-security | Provides port security |
| project_id field enabled | project-id | Extension that indicates that project_id field is |
| | | enabled. |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Quota engine limit check | quota-check-limit | Support for checking the resource usage before applying a |
| | | new quota limit |
| Quota management support | quotas | Expose functions for quotas management per project |
| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control |
| | | tenant access to resources. |
| Add address_group type to RBAC | rbac-address-group | Add address_group type to network RBAC |
| Add address_scope type to RBAC | rbac-address-scope | Add address_scope type to RBAC |
| Add security_group type to network RBAC | rbac-security-groups | Add security_group type to network RBAC |
| Add subnetpool type to RBAC | rbac-subnetpool | Add subnetpool type to RBAC |
| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on |
| | | revision_number is supported. |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of |
| | | neutron resources. |
| Default rules for security groups | security-groups-default-rules | Configure set of security group rules used as default |
| | | rules for every new security group |
| Normalized CIDR field for security group rules | security-groups-normalized-cidr | Add new field with normalized remote_ip_prefix cidr in SG |
| | | rule |
| Port filtering on security groups | port-security-groups-filtering | Provides security groups filtering when listing ports |
| Remote address group id field for security group rules | security-groups-remote-address-group | Add new field of remote address group id in SG rules |
| Security group rule belongs to the project's default | security-groups-rules-belongs-to-default-sg | Flag to determine if the security group rule belongs to |
| security group | | the project's default security group |
| Security group filtering on the shared field | security-groups-shared-filtering | Support filtering security groups on the shared field |
| security-group | security-group | The security groups extension. |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced |
| | | services |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| Stateful security group | stateful-security-group | Indicates if the security group is stateful or not |
| Subnet belongs to an external network | subnet-external-network | Informs if the subnet belongs to an external network |
| Subnet Onboard | subnet_onboard | Provides support for onboarding subnets into subnet pools |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| Subnet Pool Prefix Operations | subnetpool-prefix-ops | Provides support for adjusting the prefix list of subnet |
| | | pools |
| Tag creation extension | tag-creation | Allow to create multiple tags for a resource |
| Tag support for resources with standard attribute: port, | standard-attr-tag | Enables to set tag on resources with standard attribute. |
| subnet, subnetpool, network, security_group, router, | | |
| floatingip, policy, trunk, network_segment_range | | |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron |
| | | resources that have Neutron standard attributes. |
+-----------------------------------------------------------+---------------------------------------------+-----------------------------------------------------------+
[root@controller ~]$ openstack network agent list
+--------------------------------------+--------------------+--------------------------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+--------------------------------+-------------------+-------+-------+---------------------------+
| 0fff6416-31eb-4fbd-b8a6-dfd8f52acb6d | DHCP agent | controller.it.dbi-services.com | nova | :-) | UP | neutron-dhcp-agent |
| 2fd7ed1b-d53a-47a5-ad60-9ff95bda4f51 | Metadata agent | controller.it.dbi-services.com | None | :-) | UP | neutron-metadata-agent |
| 7b1b2385-612e-47e8-8f31-dfb78afa0b0b | Metadata agent | compute.it.dbi-services.com | None | :-) | UP | neutron-metadata-agent |
| 7855461f-a5c0-4b90-b52c-d9695b92107d | Open vSwitch agent | controller.it.dbi-services.com | None | :-) | UP | neutron-openvswitch-agent |
| dac53d0d-e24c-4e98-938a-7f480b457486 | Open vSwitch agent | compute.it.dbi-services.com | None | :-) | UP | neutron-openvswitch-agent |
+--------------------------------------+--------------------+--------------------------------+-------------------+-------+-------+---------------------------+
Done. This leaves us with the following components for today:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack12-1024x448.png)
In the next post, we’ll set up the final service for our playground, Horizon (the OpenStack dashboard).
The article Creating your private cloud using OpenStack – (6) – The Networking service appeared first on the dbi Blog.
Creating your private cloud using OpenStack – (5) – The Compute service
We’re now coming closer to the final setup of the OpenStack test environment. Again, looking at the minimum services we need, there are only three of them left:
- Keystone: Identity service (done)
- Glance: Image service (done)
- Placement: Placement service (done)
- Nova: Compute service
- Neutron: Network service
- Horizon: The OpenStack dashboard
In this post we’ll continue with the Compute Service, which is called Nova. As a short reminder, if you followed the last posts, this is what we have currently:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack10-1-1024x420.png)
In addition to that, we need something which actually provides compute resources, and this is Nova. Nova “interacts with OpenStack Identity for authentication, OpenStack Placement for resource inventory tracking and selection, OpenStack Image service for disk and server images, and OpenStack Dashboard for the user and administrative interface”.
Nova, like the Glance and Placement services, needs the database backend so we’re going to prepare this on the controller node:
[root@controller ~]$ su - postgres -c "psql -c \"create user nova with login password 'admin'\""
CREATE ROLE
[root@controller ~]$ su - postgres -c "psql -c 'create database nova_api with owner=nova'"
CREATE DATABASE
[root@controller ~]$ su - postgres -c "psql -c 'create database nova with owner=nova'"
CREATE DATABASE
[root@controller ~]$ su - postgres -c "psql -c 'create database nova_cell0 with owner=nova'"
CREATE DATABASE
Again, we need to create the service credentials and the endpoints for the service:
[root@controller ~]$ . admin-openrc
[root@controller ~]$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
No password was supplied, authentication will fail when a user does not have a password.
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | None |
| domain_id | default |
| email | None |
| enabled | True |
| id | 5f096012f1d74232a55d0bd76faad3e5 |
| name | nova |
| description | None |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@controller ~]$ openstack role add --project service --user nova admin
[root@controller ~]$ openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| id | bcbf978986834cbca430a1a5c7b6d9b3 |
| name | nova |
| type | compute |
| enabled | True |
| description | OpenStack Compute |
+-------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09cb340bc9014ee69435360e8555a998 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bcbf978986834cbca430a1a5c7b6d9b3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7e90b9431c48442d805e43e9af65ef8e |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bcbf978986834cbca430a1a5c7b6d9b3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a04227f6f4d84011a186d2eda98b0c8b |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bcbf978986834cbca430a1a5c7b6d9b3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
Having this ready, we can continue to install the required packages on the controller node:
[root@controller ~]$ dnf install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
The configuration for Nova is a bit longer than for the previous services, but there is nothing special in this simple case: we need the connection to the Message Queue (RabbitMQ), the details of the controller node (the same host), the database connection details for Nova itself, the details of how to reach the Keystone service, the details for the Placement service, and a few VNC-related settings:
[root@controller ~]$ egrep -v "^#|^$" /etc/nova/nova.conf
[DEFAULT]
my_ip=192.168.122.90
enabled_apis = osapi_compute,metadata
transport_url=rabbit://openstack:admin@controller:5672/
[api]
auth_strategy = keystone
[api_database]
connection=postgresql+psycopg2://nova:admin@localhost/nova_api
[barbican]
[barbican_service_user]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[cyborg]
[database]
connection=postgresql+psycopg2://nova:admin@localhost/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[image_cache]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = admin
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[os_vif_linux_bridge]
[os_vif_ovs]
[oslo_concurrency]
[oslo_limit]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = admin
[privsep]
[profiler]
[profiler_jaeger]
[profiler_otlp]
[quota]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = admin
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[zvm]
Time to populate the databases for Nova (the following will also create a Cell, which is a concept used by Nova for sharding):
[root@controller ~]$ sh -c "nova-manage api_db sync" nova
2025-01-22 08:42:39.389 7200 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-22 08:42:39.390 7200 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2025-01-22 08:42:39.397 7200 INFO alembic.runtime.migration [-] Running upgrade -> d67eeaabee36, Initial version
2025-01-22 08:42:39.670 7200 INFO alembic.runtime.migration [-] Running upgrade d67eeaabee36 -> b30f573d3377, Remove unused build_requests columns
2025-01-22 08:42:39.673 7200 INFO alembic.runtime.migration [-] Running upgrade b30f573d3377 -> cdeec0c85668, Drop legacy migrate_version table
[root@controller ~]$ sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]$ sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
--transport-url not provided in the command line, using the value [DEFAULT]/transport_url from the configuration file
--database_connection not provided in the command line, using the value [database]/connection from the configuration file
804d576e-ac59-4fc5-b83a-018088ea8c11
[root@controller ~]$ sh -c "nova-manage db sync" nova
2025-01-22 08:44:07.177 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Context impl PostgresqlImpl.
2025-01-22 08:44:07.178 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Will assume transactional DDL.
2025-01-22 08:44:07.192 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade -> 8f2f1571d55b, Initial version
2025-01-22 08:44:07.819 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 8f2f1571d55b -> 16f1fbcab42b, Resolve shadow table diffs
2025-01-22 08:44:07.821 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 16f1fbcab42b -> ccb0fa1a2252, Add encryption fields to BlockDeviceMapping
2025-01-22 08:44:07.823 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade ccb0fa1a2252 -> 960aac0e09ea, de-duplicate_indexes_in_instances__console_auth_tokens
2025-01-22 08:44:07.824 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 960aac0e09ea -> 1b91788ec3a6, Drop legacy migrate_version table
2025-01-22 08:44:07.825 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 1b91788ec3a6 -> 1acf2c98e646, Add compute_id to instance
2025-01-22 08:44:07.829 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 1acf2c98e646 -> 13863f4e1612, create_share_mapping_table
2025-01-22 08:44:07.855 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Context impl PostgresqlImpl.
2025-01-22 08:44:07.856 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Will assume transactional DDL.
2025-01-22 08:44:07.867 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade -> 8f2f1571d55b, Initial version
2025-01-22 08:44:08.706 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 8f2f1571d55b -> 16f1fbcab42b, Resolve shadow table diffs
2025-01-22 08:44:08.707 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 16f1fbcab42b -> ccb0fa1a2252, Add encryption fields to BlockDeviceMapping
2025-01-22 08:44:08.709 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade ccb0fa1a2252 -> 960aac0e09ea, de-duplicate_indexes_in_instances__console_auth_tokens
2025-01-22 08:44:08.710 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 960aac0e09ea -> 1b91788ec3a6, Drop legacy migrate_version table
2025-01-22 08:44:08.712 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 1b91788ec3a6 -> 1acf2c98e646, Add compute_id to instance
2025-01-22 08:44:08.716 7262 INFO alembic.runtime.migration [None req-69377931-e25d-40d5-afe0-0eadfadd9ed4 - - - - - -] Running upgrade 1acf2c98e646 -> 13863f4e1612, create_share_mapping_table
[root@controller ~]$ sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------------+------------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------------+------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | postgresql+psycopg2://nova:****@localhost/nova_cell0 | False |
| cell1 | 804d576e-ac59-4fc5-b83a-018088ea8c11 | rabbit://openstack:****@controller:5672/ | postgresql+psycopg2://nova:****@localhost/nova | False |
+-------+--------------------------------------+------------------------------------------+------------------------------------------------------+----------+
The final steps on the controller node are to enable and start the services:
[root@controller ~]$ systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Created symlink /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service → /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service → /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service → /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service → /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@controller ~]$ systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
As the controller node will not run any compute resources, Nova is the first service which needs to be configured on the compute node(s) as well. In our configuration, KVM will be used to run the virtual machines. Before we start with that, make sure that the host running the compute node supports hardware acceleration:
[root@compute ~]$ egrep -c '(vmx|svm)' /proc/cpuinfo
8
Check the documentation for additional details. If the output is greater than 0, you should be fine. If it is 0, there is more configuration to do, which is not in the scope of this post.
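If the count is 0 and you cannot enable VT-x/AMD-V or nested virtualization on the host, the usual fallback (at the cost of much slower instances) is to tell Nova to use plain QEMU emulation instead of KVM, in the [libvirt] section of /etc/nova/nova.conf on the compute node. This is only a sketch of that fallback; we don’t need it here, since the check above returned 8:
[libvirt]
# software emulation instead of hardware-accelerated KVM (slow, but works everywhere)
virt_type = qemu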
Of course, we need to install the Nova package on the compute node, before we can do anything:
[root@compute ~]$ dnf install openstack-nova-compute -y
The configuration of Nova on the compute node is not much different from the one we’ve done on the controller node, but we do not need to specify any connection details for the Nova databases. We need, however, to tell Nova how to virtualize by setting the driver and the type of virtualization:
[root@compute ~]$ egrep -v "^#|^$" /etc/nova/nova.conf
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
my_ip=192.168.122.91
state_path=/var/lib/nova
enabled_apis = osapi_compute,metadata
transport_url=rabbit://openstack:admin@controller
[api]
auth_strategy = keystone
[api_database]
[barbican]
[barbican_service_user]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[cyborg]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[image_cache]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = admin
[libvirt]
virt_type=kvm
[metrics]
[mks]
[neutron]
[notifications]
[os_vif_linux_bridge]
[os_vif_ovs]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_limit]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = admin
[privsep]
[profiler]
[profiler_jaeger]
[profiler_otlp]
[quota]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = admin
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[zvm]
As we want to use libvirt, this needs to be installed as well (this was not required on the controller node):
[root@compute ~]$ dnf install libvirt -y
Enable and start the services on the compute node:
[root@compute ~]$ systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd.socket → /usr/lib/systemd/system/libvirtd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket → /usr/lib/systemd/system/libvirtd-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-admin.socket → /usr/lib/systemd/system/libvirtd-admin.socket.
Created symlink /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service → /usr/lib/systemd/system/openstack-nova-compute.service.
[root@compute ~]$ systemctl start libvirtd.service openstack-nova-compute.service
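Before heading back to the controller, it does not hurt to confirm that both services came up cleanly (the log path below is the default used by the RDO packages; adapt it if yours differs):
[root@compute ~]$ systemctl is-active libvirtd.service openstack-nova-compute.service
[root@compute ~]$ grep -i error /var/log/nova/nova-compute.log
If nova-compute cannot reach RabbitMQ or Keystone on the controller, it will show up here long before it shows up in the service list on the controller.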
Compute nodes need to be attached to a cell (see above), so let’s do this on the controller node:
[root@controller ~]$ . admin-openrc
[root@controller ~]$ openstack compute service list --service nova-compute
+--------------------------------------+--------------+-----------------------------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+--------------------------------------+--------------+-----------------------------+------+---------+-------+----------------------------+
| 3997f0ab-f9b1-4a7a-b635-01d71c805220 | nova-compute | compute.it.dbi-services.com | nova | enabled | up | 2025-01-22T08:41:19.727723 |
+--------------------------------------+--------------+-----------------------------+------+---------+-------+----------------------------+
[root@controller ~]$ sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 804d576e-ac59-4fc5-b83a-018088ea8c11
Checking host mapping for compute host 'compute.it.dbi-services.com': 04bfd9d9-df04-4479-843e-e97457c0ab67
Creating host mapping for compute host 'compute.it.dbi-services.com': 04bfd9d9-df04-4479-843e-e97457c0ab67
Found 1 unmapped computes in cell: 804d576e-ac59-4fc5-b83a-018088ea8c11
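Instead of running the discovery manually every time a compute node is added, Nova can do this periodically. This is optional; if you want it, set the interval (in seconds) in the [scheduler] section of /etc/nova/nova.conf on the controller and restart the scheduler:
[scheduler]
discover_hosts_in_cells_interval = 300
[root@controller ~]$ systemctl restart openstack-nova-scheduler.service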
If all is fine, the compute node should show up when we ask for the compute service list:
[root@controller ~]$ openstack compute service list
+--------------------------------------+----------------+--------------------------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+--------------------------------------+----------------+--------------------------------+----------+---------+-------+----------------------------+
| 3997f0ab-f9b1-4a7a-b635-01d71c805220 | nova-compute | compute.it.dbi-services.com | nova | enabled | up | 2025-01-22T08:43:49.725537 |
| 29cc3084-1066-4a1f-b9f6-6f0c2187d6b5 | nova-scheduler | controller.it.dbi-services.com | internal | enabled | up | 2025-01-22T08:43:51.402825 |
| aea131c6-96b6-466f-802e-58018c931ec1 | nova-conductor | controller.it.dbi-services.com | internal | enabled | up | 2025-01-22T08:43:51.702004 |
+--------------------------------------+----------------+--------------------------------+----------+---------+-------+----------------------------+
Nova should show up in the catalog:
[root@controller ~]$ openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| keystone | identity | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | |
| glance | image | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | RegionOne |
| | | admin: http://controller:9292 |
| | | |
| placement | placement | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
| nova | compute | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | |
| glance | image | |
| glance | image | |
| glance | image | |
+-----------+-----------+-----------------------------------------+
Once more (see the last post), verify the image service (Glance):
[root@controller ~]$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 150fd48b-8ed4-4170-ad98-213d9eddcba0 | cirros | active |
+--------------------------------------+--------+--------+
… and finally check that the cells and placement API are working properly:
[root@controller ~]$ nova-status upgrade check
+-------------------------------------------+
| Upgrade Check Results |
+-------------------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Older than N-1 computes |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: hw_machine_type unset |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Service User Token Configuration |
| Result: Success |
| Details: None |
+-------------------------------------------+
That’s it for configuring Nova on the controller and the compute nodes, and this leaves us with the following setup:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack11-1024x408.png)
As you can see, this is getting more and more complex as there are many pieces which make up our final OpenStack deployment. In the next post we’ll create and configure the Network Service (Neutron).
The article Creating your private cloud using OpenStack – (5) – The Compute service appeared first on the dbi Blog.
Creating your private cloud using OpenStack – (4) – The Image and Placement services
By the end of the last post we finally got the first OpenStack service up and running: Keystone, the Identity Service. Going back to the list of services we need at a minimum, this still leaves us with some more to set up:
- Keystone: Identity service (done)
- Glance: Image service
- Placement: Placement service
- Nova: Compute service
- Neutron: Network service
- Horizon: The OpenStack dashboard
In this post we’ll setup the Image Service (Glance) and the Placement service. The Glance service is responsible for providing virtual machine images and enables users to discover, register, and retrieve them. Glance (as well as Keystone) exposes an API to query image metadata and to retrieve images.
There are several options for storing those virtual machine images (including block storage), but to keep this as simple as possible we’ll use a directory on the controller node.
Before we start, this is how our setup currently looks:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack7-1-1024x416.png)
We’ve done almost all the work on the controller node, and Glance is no exception. As with Keystone, Glance needs to store some data in PostgreSQL (remember that you could also use MySQL or SQLite), in this case metadata about the images. Much the same as we did for Keystone, we’ll create a dedicated user and database for it:
[root@controller ~]$ su - postgres -c "psql -c \"create user glance with login password 'admin'\""
CREATE ROLE
[root@controller ~]$ su - postgres -c "psql -c 'create database glance with owner=glance'"
CREATE DATABASE
[root@controller ~]$ su - postgres -c "psql -l"
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
-----------+----------+----------+-----------------+-------------+-------------+------------+-----------+-----------------------
glance | glance | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
keystone | keystone | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
(5 rows)
The next step is creating the Glance service credentials:
[root@controller ~]$ . admin-openrc
[root@controller ~]$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
No password was supplied, authentication will fail when a user does not have a password.
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | None |
| domain_id | default |
| email | None |
| enabled | True |
| id | eea5924c69c040428c5c4ef82f46c61b |
| name | glance |
| description | None |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@controller ~]$ openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| id | db04f3f7c7014eb7883e074625d31391 |
| name | glance |
| type | image |
| enabled | True |
| description | OpenStack Image |
+-------------+----------------------------------+
Create the endpoints:
[root@controller ~]$ openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | bde2499cd6c34c19aa221d64b9d0f2a0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | db04f3f7c7014eb7883e074625d31391 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 046e64889cda43c2bfc36471e40a7a2d |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | db04f3f7c7014eb7883e074625d31391 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c8ddc792772048a18179877355fdbd3f |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | db04f3f7c7014eb7883e074625d31391 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
Install the operating system package:
[root@controller ~]$ dnf install openstack-glance -y
And finally the configuration of the Glance API service:
[root@ostack-controller ~]$ egrep -v "^#|^$" /etc/glance/glance-api.conf
[DEFAULT]
enabled_backends=fs:file
[barbican]
[barbican_service_user]
[cinder]
[cors]
[database]
connection = postgresql+psycopg2://glance:admin@localhost/glance
[file]
[glance.store.http.store]
[glance.store.rbd.store]
[glance.store.s3.store]
[glance.store.swift.store]
[glance.store.vmware_datastore.store]
[glance_store]
default_backend = fs
[healthcheck]
[image_format]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = admin
[os_brick]
[oslo_concurrency]
[oslo_limit]
auth_url = http://controller:5000
auth_type = password
user_domain_id = default
username = glance
system_scope = all
password = admin
endpoint_id = 1e7748b2d7d44fb6a0c17edb3c68c4de
region_name = RegionOne
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[paste_deploy]
flavor = keystone
[profiler]
[task]
[taskflow_executor]
[vault]
[wsgi]
[fs]
filesystem_store_datadir = /var/lib/glance/images/
The endpoint_id is the public image endpoint ID:
[root@controller ~]$ openstack endpoint list --service glance --region RegionOne
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------+
| bde2499cd6c34c19aa221d64b9d0f2a0 | RegionOne | glance | image | True | public | http://controller:9292 |
| 046e64889cda43c2bfc36471e40a7a2d | RegionOne | glance | image | True | internal | http://controller:9292 |
| c8ddc792772048a18179877355fdbd3f | RegionOne | glance | image | True | admin | http://controller:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------+
Make sure that the glance account has reader access to system-scope resources (like limits):
[root@controller ~]$ openstack role add --user glance --user-domain Default --system all reader
Populate the Image service database:
[root@controller ~]$ /bin/sh -c "glance-manage db_sync" glance
2025-01-20 14:20:25.319 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.319 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2025-01-20 14:20:25.327 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.328 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2025-01-20 14:20:25.340 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.340 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2025-01-20 14:20:25.350 35181 INFO alembic.runtime.migration [-] Running upgrade -> liberty, liberty initial
2025-01-20 14:20:25.526 35181 INFO alembic.runtime.migration [-] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
2025-01-20 14:20:25.534 35181 INFO alembic.runtime.migration [-] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
2025-01-20 14:20:25.571 35181 INFO alembic.runtime.migration [-] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
2025-01-20 14:20:25.575 35181 INFO alembic.runtime.migration [-] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
2025-01-20 14:20:25.576 35181 INFO alembic.runtime.migration [-] Running upgrade pike_expand01 -> queens_expand01
2025-01-20 14:20:25.576 35181 INFO alembic.runtime.migration [-] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table
2025-01-20 14:20:25.580 35181 INFO alembic.runtime.migration [-] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table
2025-01-20 14:20:25.584 35181 INFO alembic.runtime.migration [-] Running upgrade rocky_expand02 -> train_expand01, empty expand for symmetry with train_contract01
2025-01-20 14:20:25.585 35181 INFO alembic.runtime.migration [-] Running upgrade train_expand01 -> ussuri_expand01, empty expand for symmetry with ussuri_expand01
2025-01-20 14:20:25.586 35181 INFO alembic.runtime.migration [-] Running upgrade ussuri_expand01 -> wallaby_expand01, add image_id, request_id, user columns to tasks table
2025-01-20 14:20:25.598 35181 INFO alembic.runtime.migration [-] Running upgrade wallaby_expand01 -> xena_expand01, empty expand for symmetry with 2023_1_expand01
2025-01-20 14:20:25.600 35181 INFO alembic.runtime.migration [-] Running upgrade xena_expand01 -> yoga_expand01, empty expand for symmetry with 2023_1_expand01
2025-01-20 14:20:25.602 35181 INFO alembic.runtime.migration [-] Running upgrade yoga_expand01 -> zed_expand01, empty expand for symmetry with 2023_1_expand01
2025-01-20 14:20:25.603 35181 INFO alembic.runtime.migration [-] Running upgrade zed_expand01 -> 2023_1_expand01, empty expand for symmetry with 2023_1_expand01
2025-01-20 14:20:25.605 35181 INFO alembic.runtime.migration [-] Running upgrade 2023_1_expand01 -> 2024_1_expand01, adds cache_node_reference and cached_images table(s)
2025-01-20 14:20:25.677 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.677 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
Upgraded database to: 2024_1_expand01, current revision(s): 2024_1_expand01
2025-01-20 14:20:25.681 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.681 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2025-01-20 14:20:25.684 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.684 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
Database migration is up to date. No migration needed.
2025-01-20 14:20:25.692 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.692 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2025-01-20 14:20:25.699 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.699 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2025-01-20 14:20:25.702 35181 INFO alembic.runtime.migration [-] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
2025-01-20 14:20:25.705 35181 INFO alembic.runtime.migration [-] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
2025-01-20 14:20:25.711 35181 INFO alembic.runtime.migration [-] Running upgrade pike_contract01 -> queens_contract01
2025-01-20 14:20:25.711 35181 INFO alembic.runtime.migration [-] Running upgrade queens_contract01 -> rocky_contract01
2025-01-20 14:20:25.712 35181 INFO alembic.runtime.migration [-] Running upgrade rocky_contract01 -> rocky_contract02
2025-01-20 14:20:25.712 35181 INFO alembic.runtime.migration [-] Running upgrade rocky_contract02 -> train_contract01
2025-01-20 14:20:25.712 35181 INFO alembic.runtime.migration [-] Running upgrade train_contract01 -> ussuri_contract01
2025-01-20 14:20:25.713 35181 INFO alembic.runtime.migration [-] Running upgrade ussuri_contract01 -> wallaby_contract01
2025-01-20 14:20:25.713 35181 INFO alembic.runtime.migration [-] Running upgrade wallaby_contract01 -> xena_contract01
2025-01-20 14:20:25.713 35181 INFO alembic.runtime.migration [-] Running upgrade xena_contract01 -> yoga_contract01
2025-01-20 14:20:25.714 35181 INFO alembic.runtime.migration [-] Running upgrade yoga_contract01 -> zed_contract01
2025-01-20 14:20:25.714 35181 INFO alembic.runtime.migration [-] Running upgrade zed_contract01 -> 2023_1_contract01
2025-01-20 14:20:25.715 35181 INFO alembic.runtime.migration [-] Running upgrade 2023_1_contract01 -> 2024_1_contract01
2025-01-20 14:20:25.717 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.717 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
Upgraded database to: 2024_1_contract01, current revision(s): 2024_1_contract01
2025-01-20 14:20:25.719 35181 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2025-01-20 14:20:25.719 35181 INFO alembic.runtime.migration [-] Will assume transactional DDL.
Database is synced successfully.
Create the policy definition to allow creating an image:
[root@controller ~]$ cat /etc/glance/policy.yaml
{
"default": "",
"add_image": "role:admin",
"modify_image": "role:admin",
"delete_image": "role:admin"
}
Enable and start the service:
[root@controller ~]$ systemctl enable openstack-glance-api.service
[root@controller ~]$ systemctl start openstack-glance-api.service
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment:
[root@controller ~]$ mkdir -p /var/cache/glance/api/
[root@controller ~]$ . admin-openrc
[root@controller ~]$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
[root@controller ~]$ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2025-01-20T14:15:21Z |
| disk_format | qcow2 |
| id | 150fd48b-8ed4-4170-ad98-213d9eddcba0 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| os_hash_algo | sha512 |
| os_hash_value | 6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e |
| | 2161b5b5186106570c17a9e58b64dd39390617cd5a350f78 |
| os_hidden | False |
| owner | 920bf34a6c88454f90d405124ca1076d |
| protected | False |
| size | 12716032 |
| status | active |
| stores | fs |
| tags | [] |
| updated_at | 2025-01-20T14:15:22Z |
| virtual_size | 46137344 |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
[root@controller ~]$ glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 150fd48b-8ed4-4170-ad98-213d9eddcba0 | cirros |
+--------------------------------------+--------+
Fine, the image is available and our setup now looks like this:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack8-1024x476.png)
The next service we need to deploy is the Placement service. This service is used by other services to manage and allocate their resources. Like the Keystone and Glance services, this service needs a database backend:
[root@controller ~]$ su - postgres -c "psql -c \"create user placement with login password 'admin'\""
CREATE ROLE
[root@controller ~]$ su - postgres -c "psql -c 'create database placement with owner=placement'"
CREATE DATABASE
[root@controller ~]$ su - postgres -c "psql -l"
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
-----------+-----------+----------+-----------------+-------------+-------------+------------+-----------+-----------------------
glance | glance | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
keystone | keystone | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
placement | placement | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
(6 rows)
In the same way as with the Glance service, the user, role, service and endpoints need to be created:
[root@controller ~]$ . admin-openrc
[root@controller ~]$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
No password was supplied, authentication will fail when a user does not have a password.
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | None |
| domain_id | default |
| email | None |
| enabled | True |
| id | 9d1de7fda54a441b9c1289f8dc520e2b |
| name | placement |
| description | None |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@controller ~]$ openstack role add --project service --user placement admin
[root@controller ~]$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| id | 43865e881fb84730b2bfc7c099c8038d |
| name | placement |
| type | placement |
| enabled | True |
| description | Placement API |
+-------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 5dac42d9532e4fd7b34a8e990ec5d408 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 43865e881fb84730b2bfc7c099c8038d |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 535c652f6e654f56843076cf91a0fe84 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 43865e881fb84730b2bfc7c099c8038d |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 58370b01f8904e1f8aab8c718c8a4cb6 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 43865e881fb84730b2bfc7c099c8038d |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Install the package which brings the Placement API and configure the service:
[root@controller ~]$ dnf install openstack-placement-api -y
[root@controller ~]$ egrep -v "^#|^$" /etc/placement/placement.conf
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = admin
www_authenticate_uri = http://controller:5000
[oslo_middleware]
[oslo_policy]
[placement]
[placement_database]
connection = postgresql+psycopg2://placement:admin@localhost/placement
[profiler]
[profiler_jaeger]
[profiler_otlp]
Populate the database (this does not produce any output if successful):
[root@controller ~]$ sh -c "placement-manage db sync" placement
… and restart the web server after adding this block:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
… to the end of “/etc/httpd/conf.d/00-placement-api.conf”.
[root@controller ~]$ systemctl restart httpd
If everything went fine and is configured correctly, you can verify that the service is working with:
[root@controller ~]$ placement-status upgrade check
+-------------------------------------------+
| Upgrade Check Results |
+-------------------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success |
| Details: None |
+-------------------------------------------+
[root@controller ~]$ dnf install python3-osc-placement -y
[root@controller ~]$ openstack --os-placement-api-version 1.2 resource class list
+----------------------------------------+
| name |
+----------------------------------------+
| VCPU |
| MEMORY_MB |
| DISK_GB |
| PCI_DEVICE |
| SRIOV_NET_VF |
| NUMA_SOCKET |
| NUMA_CORE |
| NUMA_THREAD |
| NUMA_MEMORY_MB |
| IPV4_ADDRESS |
| VGPU |
| VGPU_DISPLAY_HEAD |
| NET_BW_EGR_KILOBIT_PER_SEC |
| NET_BW_IGR_KILOBIT_PER_SEC |
| PCPU |
| MEM_ENCRYPTION_CONTEXT |
| FPGA |
| PGPU |
| NET_PACKET_RATE_KILOPACKET_PER_SEC |
| NET_PACKET_RATE_EGR_KILOPACKET_PER_SEC |
| NET_PACKET_RATE_IGR_KILOPACKET_PER_SEC |
+----------------------------------------+
Fine, now we have the Image and Placement services up and running and our setup looks like this:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack10-1024x420.png)
In the next post we’ll continue with setting up the compute service (Nova).
L’article Creating your private cloud using OpenStack – (4) – The Image and Placement services est apparu en premier sur dbi Blog.
Automate your Deployments in Azure with Terraform!
Terraform is a powerful open-source, declarative and platform-agnostic infrastructure as code (IaC) tool developed by HashiCorp. It facilitates the deployment and overall management of infrastructure. In this hands-on blog I will show you how you can use Terraform to automate your cloud deployments in Azure.
Initial Setup:For this blog I’m using an Ubuntu server as an automation server where I’m running Terraform. You can install Terraform on different operating systems. For instructions on how to install Terraform, check out this link from HashiCorp.
Starting with the hands on part I’m creating a new dedicated directory for my new Terraform project:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-20.png)
Within this new directory I’m creating the following files which will hold my configuration code:
- main.tf
- providers.tf
- variables.tf
- outputs.tf
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-21.png)
For authentication with Azure I’m using the Azure CLI command line tool. You can install the Azure CLI on Ubuntu with a single command which downloads a script provided by Microsoft from the internet and executes it on your system:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
To get more information about how to install the Azure CLI on your system, check out this link from Microsoft.
Once the installation has completed successfully, you can verify it with the following command:
az --version
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-22.png)
Then use the following command for connecting to Azure:
az login
This command will open a browser window where you can sign in to Azure:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-23.png)
After you have successfully authenticated to Azure, you can check your available subscriptions with the following command:
az account list
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-24.png)
Now that the Azure CLI is installed on the system and we are successfully authenticated to Azure, we can start with the configuration of Terraform and the required provider for interacting with the Azure cloud platform.
Therefore I add the code block below to the providers.tf file, which tells Terraform to install and initialize the azurerm provider with the specific version 4.10.0, the latest at the moment:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-25.png)
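Since the actual code is only visible in the screenshot above, here is a minimal sketch of what such a providers.tf block typically looks like, with the version pinned to 4.10.0 as mentioned in the text:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "4.10.0"
    }
  }
}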
To configure the azurerm provider, I additionally add the provider code block below to the providers.tf file.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-26.png)
You can find your subscription ID in the output from the “az account list” command above.
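A sketch of the corresponding provider block could look as follows; the placeholder subscription ID is an assumption and has to be replaced with your own value:
provider "azurerm" {
  features {}

  # Assumption: replace with the subscription ID returned by "az account list"
  subscription_id = "00000000-0000-0000-0000-000000000000"
}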
After inserting those code blocks into the providers.tf file, we can install the defined azurerm provider and initialize Terraform by running the below command in our project directory:
terraform init
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-27.png)
As Terraform is now successfully initialized and the required provider is installed, we can start with the development of our infrastructure code.
But before doing so I would like to cover the concept of workspaces in Terraform. Workspaces enable you to use the same configuration code for multiple environments through separate state files. You can think of a workspace as a separate deployment environment and of your Terraform code as an independent plan or image of your infrastructure. As an example, imagine you added a new virtual machine to your Terraform code and deployed it in production. If you now want to have the same virtual machine for test purposes, you just have to switch into your test workspace and run the Terraform code again. You will have the exact same virtual machine within a few minutes!
To check the workspaces you have in your Terraform project, use the following command:
terraform workspace list
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-28.png)
As you can see, we just have the default workspace in our new Terraform project. In this hands-on blog post I want to deploy my infrastructure for multiple environments, therefore I will create some new workspaces. Let’s assume we have a development, test and production stage for our infrastructure. I will create the corresponding workspaces with the commands below:
terraform workspace new development
terraform workspace new test
terraform workspace new production
After executing these commands, we can check the available workspaces in our Terraform project again:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-29.png)
Note that Terraform indicates your current workspace with the “*” symbol next to the particular workspace. We want to deploy our infrastructure for development first, so I will switch back into the development workspace with the following command:
terraform workspace select development
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-30.png)
As the workspaces are now successfully created, we can start with our configuration code.
First of all I go into the variables.tf file and add the variable code block below to that file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-31.png)
I will use this “env” variable as a suffix or prefix for the names of the resources I deploy, to easily recognize which environment these resources belong to.
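As the variable definition itself is only shown in the screenshot, here is a sketch of how such an “env” variable could be declared; the map keys and values are assumptions that simply mirror the naming used later in this post (e.g. “RG_DEV”):
variable "env" {
  type        = map(string)
  description = "Short environment name per workspace"
  default = {
    development = "DEV"
    test        = "TST"
    production  = "PRD"
  }
}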
Next I will create a resource group in Azure. Therefore I add the code block below to the main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-32.png)
As you can see, I set the name of the resource group dynamically, composed of the prefix “RG_” and the value of the “env” variable, which I’ve defined before in the variables.tf file. “terraform.workspace” is a built-in value which refers to the current workspace.
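A minimal sketch of such a resource group definition, assuming the “env” variable from above; the location and resource names are assumptions, the real values are only visible in the screenshot:
resource "azurerm_resource_group" "rg" {
  # Resolves to "RG_DEV" in the development workspace
  name     = "RG_${var.env[terraform.workspace]}"
  location = "westeurope" # assumption
}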
To check which resources Terraform would create if we applied the current configuration code, we can run the following command:
terraform plan
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-33.png)
We can see that terraform would create a new resource group with the name “RG_DEV”.
Create a Virtual Network and Subnets:Next I will create a virtual network. Therefore I add the variable code block below to the variables.tf file. This variable defines a separate address space for each environment stage:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-34.png)
I now add the code block below to the main.tf file to create a virtual network:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-35.png)
As you can see, I’m again referencing the “env” variable to dynamically set the suffix of the network name, as well as the new “cidr” variable to set the address space of the virtual network.
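For illustration, a sketch of the “cidr” variable and the virtual network resource; the address spaces and resource names are assumptions, the real ones are in the screenshots:
variable "cidr" {
  type = map(string)
  default = {
    development = "10.10.0.0/16"
    test        = "10.20.0.0/16"
    production  = "10.30.0.0/16"
  }
}

resource "azurerm_virtual_network" "vnet" {
  name                = "VNET_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = [var.cidr[terraform.workspace]]
}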
Next I will create some subnets within the virtual network. I want to create 4 subnets in total:
- A front tier subnet
- A middle tier subnet
- A backend tier subnet
- A bastion subnet for the administration
Therefore I add the variable below to my variables.tf file, which defines an address space for each environment stage and subnet:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-36.png)
Next I will add a new resource block for each subnet to the main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-37.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-38.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-39.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-40.png)
Note that I enabled the “private_endpoint_network_policies” option on the backend tier subnet. This option enforces network security groups on the private endpoints in that subnet. Check out this link from Microsoft for more information about this option.
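As an illustration, a sketch of what the backend subnet could look like with this option set; the “subnet_cidr” variable and the resource names are assumptions mirroring the screenshots, and in azurerm 4.x the attribute takes a string value such as "Enabled":
resource "azurerm_subnet" "backend" {
  name                 = "SNET_BACKEND_${var.env[terraform.workspace]}"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [var.subnet_cidr[terraform.workspace]["backend"]]

  # Enforce NSG rules on private endpoints placed in this subnet
  private_endpoint_network_policies = "Enabled"
}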
Create an Azure SQL Database:Next I will create an Azure SQL Server. Therefore I add the variable below to my variables.tf file. This variable holds the admin password of the Azure SQL Server. I set the sensitive option for this variable, which prevents the password from being exposed in the terminal output or in the Terraform logs:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-41.png)
I also did not set any value in the configuration files; instead I will set the variable value as an environment variable before applying the configuration.
Next I add the code block below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-42.png)
As you can see, I referenced the “sqlserver_password” variable to set the password for the “sqladmin” user. I also disabled public network access to prevent database access over the public endpoint of the server. I will instead create a private endpoint later on.
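A sketch of the password variable and the SQL Server resource under these assumptions (the server name is illustrative and must be globally unique in Azure):
variable "sqlserver_password" {
  type      = string
  sensitive = true
}

resource "azurerm_mssql_server" "sql" {
  name                          = "sqlsrv-${lower(var.env[terraform.workspace])}" # assumption
  resource_group_name           = azurerm_resource_group.rg.name
  location                      = azurerm_resource_group.rg.location
  version                       = "12.0"
  administrator_login           = "sqladmin"
  administrator_login_password  = var.sqlserver_password
  public_network_access_enabled = false
}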
Next I will create the Azure SQL Database. Therefore I add the variable below to my variables.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-43.png)
The idea behind this variable is that we have different requirements for the different stages. The General Purpose SKU is sufficient for the non-production databases, but for the production one we want the Business Critical service tier. Likewise, we want 30 days of point-in-time recovery for our production data while 7 days is sufficient for non-production, and we want to store our production database backups on geo-zone-redundant storage while zone-redundant storage is sufficient for the non-production databases.
Then I add the resource block below into my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-44.png)
As you can see, I’m referencing my “database_settings” variable to set the configuration options dynamically.
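A possible shape for the “database_settings” variable and the database resource, again as a sketch: the SKU names, database name and retention values are assumptions that match the description above:
variable "database_settings" {
  type = map(object({
    sku_name             = string
    storage_account_type = string
    retention_days       = number
  }))
  default = {
    development = { sku_name = "GP_Gen5_2", storage_account_type = "Zone", retention_days = 7 }
    test        = { sku_name = "GP_Gen5_2", storage_account_type = "Zone", retention_days = 7 }
    production  = { sku_name = "BC_Gen5_2", storage_account_type = "GeoZone", retention_days = 30 }
  }
}

resource "azurerm_mssql_database" "db" {
  name                 = "appdb" # assumption
  server_id            = azurerm_mssql_server.sql.id
  sku_name             = var.database_settings[terraform.workspace].sku_name
  storage_account_type = var.database_settings[terraform.workspace].storage_account_type

  # Point-in-time recovery window per environment
  short_term_retention_policy {
    retention_days = var.database_settings[terraform.workspace].retention_days
  }
}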
Create a DNS Zone and a Private Endpoint:For name resolution I will next create a private DNS zone. For that I add the resource block below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-45.png)
To associate this private DNS zone with my virtual network, I will next create a virtual network link. Therefore I add the resource block below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-46.png)
To be able to connect securely to my Azure SQL database, I will now create a private endpoint in my backend subnet. Therefore I add the resource block below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-47.png)
With this configuration code, I create a private endpoint named after the Azure SQL Server with the suffix “-endpoint”. Through the “subnet_id” option I place this endpoint in the backend subnet, with a private service connection to the Azure SQL Server. I also associate the endpoint with the private DNS zone I’ve just created, for name resolution.
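Put together, the DNS zone, the network link and the private endpoint could be sketched like this; the resource names are assumptions, while "privatelink.database.windows.net" is the standard zone name for Azure SQL private endpoints:
resource "azurerm_private_dns_zone" "sql" {
  name                = "privatelink.database.windows.net"
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "sql" {
  name                  = "vnet-link-${lower(var.env[terraform.workspace])}"
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.sql.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}

resource "azurerm_private_endpoint" "sql" {
  name                = "${azurerm_mssql_server.sql.name}-endpoint"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = azurerm_subnet.backend.id

  private_service_connection {
    name                           = "${azurerm_mssql_server.sql.name}-connection"
    private_connection_resource_id = azurerm_mssql_server.sql.id
    subresource_names              = ["sqlServer"]
    is_manual_connection           = false
  }

  # Register the endpoint in the private DNS zone for name resolution
  private_dns_zone_group {
    name                 = "sql-dns-zone-group"
    private_dns_zone_ids = [azurerm_private_dns_zone.sql.id]
  }
}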
Create an Azure Bastion:Let’s now continue and create an Azure Bastion host for the administration of our environment. Therefore I first create a public IP address by adding the resource block below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-48.png)
Next I create the bastion host itself. For that I add the code block below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-49.png)
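A sketch of the public IP and the Bastion host; note that Azure requires the dedicated subnet to be named "AzureBastionSubnet", and the resource names here are assumptions:
resource "azurerm_public_ip" "bastion" {
  name                = "PIP_BASTION_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_bastion_host" "bastion" {
  name                = "BASTION_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                 = "bastion-ipcfg"
    # Assumption: the bastion subnet resource creates "AzureBastionSubnet"
    subnet_id            = azurerm_subnet.bastion.id
    public_ip_address_id = azurerm_public_ip.bastion.id
  }
}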
Now I will add a virtual machine to my middle tier subnet. Therefore I first need to create a network interface for that virtual machine. The resource block below creates the needed network interface:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-50.png)
As the virtual machine I intend to create needs an admin password, just like the Azure SQL Server, I will create an additional password variable. Therefore I add the code block below to my variables.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-51.png)
To create the virtual machine itself, I add the resource block below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-52.png)
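For illustration, a condensed sketch of the password variable, the network interface and a Windows virtual machine; the VM size, image, names and the "midtier" subnet reference are assumptions:
variable "vm_password" {
  type      = string
  sensitive = true
}

resource "azurerm_network_interface" "vm" {
  name                = "NIC_VM_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.midtier.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "vm" {
  name                  = "vm-${lower(var.env[terraform.workspace])}"
  resource_group_name   = azurerm_resource_group.rg.name
  location              = azurerm_resource_group.rg.location
  size                  = "Standard_B2s" # assumption
  admin_username        = "vmadmin"
  admin_password        = var.vm_password
  network_interface_ids = [azurerm_network_interface.vm.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-datacenter-g2"
    version   = "latest"
  }
}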
Next I want to secure my subnets. Therefore I create a network security group for each of the front tier, middle tier and backend tier subnets by adding the resource blocks below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-53.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-54.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-55.png)
Next I create specific rules for each network security group.
Starting with the front tier subnet, I want to block all inbound traffic except traffic over HTTPS. Therefore I add the two resource blocks below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-56.png)
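A sketch of what such a rule pair could look like, assuming a network security group resource named "front" as created above; rule names and priorities are illustrative:
resource "azurerm_network_security_rule" "front_allow_https" {
  name                        = "Allow-HTTPS-Inbound"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.front.name
}

resource "azurerm_network_security_rule" "front_deny_all" {
  name                        = "Deny-All-Inbound"
  priority                    = 4096
  direction                   = "Inbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.front.name
}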
Continuing with the middle tier subnet, I want to block all inbound traffic, allowing HTTP traffic only from the front tier subnet and RDP traffic only from the bastion subnet. Therefore I add the three resource blocks below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-57.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-58.png)
Last but not least, I want to block all inbound traffic to my backend tier subnet except traffic to the SQL Server port from the middle tier subnet. In addition, I want to explicitly block internet access from this subnet.
You may be wondering why I’m explicitly blocking internet access from this subnet when it has no public IP address or NAT gateway. That’s because Microsoft provides access to the internet through a default outbound IP address when no explicit way is defined. This feature will be deprecated on 30 September 2025. To get more information about it, check out this link from Microsoft.
To create the rules for the backend tier subnet I add the three resource blocks below to my main.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-59.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-60.png)
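For illustration, a sketch of the two most interesting backend rules: allowing SQL Server traffic (port 1433) from the middle tier address space and explicitly denying outbound internet traffic. The names, priorities, the "backend" NSG reference and the "subnet_cidr" variable are assumptions:
resource "azurerm_network_security_rule" "backend_allow_sql" {
  name                        = "Allow-SQL-From-MidTier"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "1433"
  source_address_prefix       = var.subnet_cidr[terraform.workspace]["midtier"]
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.backend.name
}

resource "azurerm_network_security_rule" "backend_deny_internet" {
  name                        = "Deny-Internet-Outbound"
  priority                    = 4000
  direction                   = "Outbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "Internet"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.backend.name
}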
I will stop creating resources at this point and finally show you how you can define outputs. For example, let’s assume we want to extract the name of the Azure SQL Server and the IP address of the virtual machine after the deployment. Therefore I add the two output variables below to the outputs.tf file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-61.png)
Outputs are especially useful when you need to pass information from the deployment up to a higher context, for example when you are working with modules in Terraform and want to pass information from a child module to a parent module. In our case the outputs will just be printed to the command line after the deployment.
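A sketch of two such output definitions, under the naming assumptions used in the previous snippets:
output "sql_server_name" {
  value = azurerm_mssql_server.sql.name
}

output "vm_private_ip" {
  value = azurerm_network_interface.vm.private_ip_address
}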
Apply the Configuration Code:As I am now done with the definition of the configuration code for this blog post, I will plan and apply my configuration for each stage. Before doing so, I first need to set a value for my password variables. On Ubuntu this can be done with these commands:
export TF_VAR_sqlserver_password="your password"
export TF_VAR_vm_password="your password"
After I’ve set the variables, I run the terraform plan command and we can see that terraform would create 29 resources:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-62.png)
This looks good to me, so I run the terraform apply command to deploy my infrastructure:
terraform apply
After some minutes of patience terraform applied the configuration code successfully:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-63.png)
When I sign in to the Azure portal, I can see my development resource group with all the resources inside:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-64.png)
I want to have my test and production resources as well, so I switch the Terraform workspace to test and then to production and run the terraform apply command again in both workspaces.
After some additional minutes of patience we can see in the Azure portal that we have now all resources for each environment stage:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-65.png)
Let’s now compare the settings of the production database with those of the development database: we can see that the SKU of the production one is Business Critical while the SKU of the non-production one is General Purpose:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-66.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-67.png)
The Backup storage has also been set according to the “database_settings” variable:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-68.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-71.png)
We can see the same for the point in time recovery option:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-72.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-73.png)
We can see that our subnets are also all in place with the corresponding address space and network security group associated:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-74.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-75.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-76.png)
Let’s check the private endpoint of the Azure SQL Server. We can see that we have a private IP address within our backend subnet which is linked to the Azure SQL Server:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-77.png)
Let’s connect to the virtual machine and try a name resolution. You can see that we were able to resolve the FQDN successfully:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-78.png)
After installing SQL Server Management Studio on the virtual machine, we can also connect to the Azure SQL Server through the FQDN of the private endpoint:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/image-79.png)
To avoid getting a high bill for something I don’t use, I will now delete all resources which I’ve created with Terraform. This is very simple and can be done by running the terraform destroy command in each workspace:
terraform destroy
I hope this gave you some interesting examples and ideas about Terraform and Azure! Feel free to share your questions and thoughts about Terraform and Azure with me in the comment section below.
L’article Automate your Deployments in Azure with Terraform! est apparu en premier sur dbi Blog.
Creating your private cloud using OpenStack – (1) – Introduction
While public clouds have been a trend for several years now, some companies are also looking into self-hosted solutions to build a private cloud. Some do this because of costs, others because they don’t want to be dependent on one or multiple public cloud providers, and others because they want to keep their data locally. There are several solutions for this and, depending on the requirements, they might or might not be an option. Some of the more popular ones are:
- VMware: Obviously, one of the long-term players and well known, but it was sold to Broadcom, which not everybody is happy with.
- Proxmox: A complete open source virtualization solution based on Debian and KVM, can also deploy containerized workloads based on LXC.
- Nutanix: A hyper converged platform that comes with its own hypervisor, the Acropolis Hypervisor (AHV).
- Red Hat Virtualization: Red Hat’s solution for virtualized infrastructures, but this product is in maintenance mode and Red Hat fully goes for Red Hat OpenShift Virtualization nowadays.
- Harvester: A complete open source hyper converged infrastructure solution. SUSE comes with a commercial offering for this, which is called SUSE virtualization.
- … and many others
The other major player which is not in the list above is OpenStack, which started in 2010 already. OpenStack is not a single product, but more a set of products combined together to provide a computing platform to deploy your workloads on top of either virtual machines, or containers, or a mix of both. There are plenty of sub projects which bring in additional functionality, check here for a list. The project itself is hosted and supported by the OpenInfra Foundation, which should give sufficient trust that it will stay as a pure open source project (maybe have a look at the OpenInfra supporting organizations as well, to get an idea of how widely it is adopted and supported).
The main issue with OpenStack is, that it is kind of hard to start with. There are so many services you might want to use that you probably get overwhelmed at the beginning of your journey. To help you a bit out of this, we’ll create a minimal, quick and dirty OpenStack setup on virtual machines with just the core services:
- Keystone: Identity service
- Glance: Image service
- Placement: Placement service
- Nova: Compute service
- Neutron: Network service
- Horizon: The OpenStack dashboard
We’ll do that step by step, because we believe that you should know the components which finally make up the OpenStack platform, or any other stack you’re planning to deploy. There is also DevStack, which is a set of scripts for the same purpose, but as it is scripted you’ll probably not gain the same knowledge as by doing it manually. There is OpenStack-Helm in addition, which deploys OpenStack on top of Kubernetes, but this as well is out of scope for this series of blog posts. Canonical offers MicroStack, which can also be used to set up a test environment quickly.
Automation is great and necessary, but it also comes with a potential downside: The more you automate, the more people you’ll potentially have who don’t know what is happening in the background. This is usually fine as long as the people with the background knowledge stay in the company, but if they leave you might have an issue.
As there are quite some steps to follow, this will not be single blog post, but split into parts:
- Introduction (this blog post)
- Preparing the controller and the compute node
- Setting up and configuring Keystone
- Setting up and configuring Glance and the Placement service
- Setting up and configuring Nova
- Setting up and configuring Neutron
- Setting up and configuring Horizon, the Openstack dashboard
In the simplest configuration, the OpenStack platform consists of two nodes: a controller node and at least one compute node. Both of them require two network interfaces, one for the so-called management network (as the name implies, this is for the management of the stack and communication with the internet), and the other one for the so-called provider network (this is the internal network, e.g. what the virtualized machines will be using to communicate with each other).
When it comes to your choice of the Linux distribution you want to deploy OpenStack on, this is merely a matter of taste. OpenStack can be deployed on many distributions, the official documentation comes with instructions for Red Hat based distributions (which usually includes Alma Linux, Rocky Linux and Oracle Linux), SUSE based distributions (which includes openSUSE Leap), and Ubuntu (which also should work on Debian). For the scope of this blog series we’ll go with a minimal installation of Rocky Linux 9, just because I haven’t used it since some time.
OpenStack itself is released in a six month release cycle and we’ll go with 2024.2 (Dalmatian), which will be supported until the beginning of 2026. As always, you should definitely go with the latest supported release so you have the most time to test and plan for future upgrades.
To give you an idea of what we’ve going to start with, here is a graphical overview:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/01/ostack1-1024x443.png)
Of course this is very simplified, but it is enough to know for the beginning:
- We have two nodes, one controller node and one compute node.
- Both nodes have two network interfaces. The first one is configured using a 192.168.122.0/24 subnet and connected to the internet. The second one is not configured.
- Both nodes are installed with a Rocky Linux 9 (9.5 as of today) minimal installation
We’ll add all the bits and pieces to this graphic while we’ll be installing and configuring the complete stack, don’t worry.
That’s it for the introduction. In the next post we’ll prepare the two nodes so we can continue to install and configure the OpenStack services on top of them.
L’article Creating your private cloud using OpenStack – (1) – Introduction est apparu en premier sur dbi Blog.
Introducing YaK 2.0: The future of effortless PaaS deployments across Clouds and On-Premises
![YaK 2.0 Automated multi-cloud PaaS deployment](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/YAK-logotype-CMJN-2022-1-1024x278.png)
Hello, dear tech enthusiasts and cloud aficionados!
We’ve got some news that’s about to make your life —or your deployments, at least— a whole lot easier. Meet YaK 2.0, the latest game-changer in the world of automated multi-cloud PaaS deployment. After months of development, testing, troubleshooting, a fair share of meetings and way too much coffee, YaK is officially launching today.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/11/YaK-Launch1.jpg)
Because we believe IT professionals should spend time on complex, value-added tasks, but not on repetitive setups, we have decided to develop the YaK.
YaK is a framework that allows anyone to deploy any type of component on any platform, while ensuring quality, cost efficiency and reducing deployment time.
YaK 2.0 is your new best friend when it comes to deploying infrastructure that’s not just efficient but also identical across every platform you’re working with – be it multi-cloud or on-premises. Originating from the need to deploy multi-technology infrastructures quickly and effortlessly, YaK ensures your setup is consistent, whether you’re working with AWS, Azure, Oracle Cloud, or your own on-prem servers.
In simpler terms, YaK makes sure your deployment process is consistent and reliable, no matter where. Whether you’re scaling in the cloud or handling things in-house, YaK’s got your back.
Why you should have a look at YaK 2.0?Here’s why we think YaK is going to become your favorite pet:
- Flexibility: Deploy across AWS, Azure, OCI, or your own servers—YaK adapts to your infrastructure, making every platform feel like home.
- Automation: Eliminate repetitive setups with automated deployments, saving you time and headaches.
- Cost efficiency & speed: YaK cuts time-to-market, streamlining deployments for fast, standardized rollouts that are both cost-effective and secure.
- Freedom from vendor lock-in: YaK is vendor-neutral, letting you deploy on your terms, across any environment.
- Swiss software backed up by a consulting company (dbi services) with extensive expertise in deployments.
With this release, we’re excited to announce a major upgrade:
- Sleek new user interface: YaK 2.0 now comes with a user-friendly interface, making it easier than ever to manage your deployments. Say hello to intuitive navigation.
![YaK 2.0 Automated multi-cloud PaaS deployment](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/YaK-UI_main_screen-1024x621.png)
- Components: We’ve got components on our roadmap (available with an annual subscription), and we’ll be announcing them shortly : Oracle Database, PostgreSQL, MongoDB, and Kubernetes are already on the list and will be released soon.
Many more will follow… Stay tuned!
How does it work?YaK Core is the open-source part and is the heart of our product, featuring Ansible playbooks and a custom plugin that provides a single inventory for all platforms, making your server deployments seamless across clouds like AWS, Azure, and OCI.
If you want to see for yourself, our GitLab project is available here!
![YaK 2.0 Automated multi-cloud PaaS deployment](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/10/YaK_core-1024x645.png)
YaK Components are the value-added part of the product and bring you expert-designed modules for deploying databases and application servers, with an annual subscription to dbi services.
Join the YaK packExplore the power of automated multi-cloud PaaS deployment with YaK 2.0 and experience a new level of efficiency and flexibility. We can’t wait for you to try it out and see just how much it can streamline your deployment process. Whether you’re a startup with big dreams or an established enterprise looking to optimize, YaK is here to make your life easier.
Our YaK deserved its own web page, check it out for more information, to contact us or to try it out (free demo environments will be available soon): yak4all.io
Wanna ride the YaK? Check out our user documentation to get started!
We promise it’ll be the smoothest ride you’ve had in a while.
We’re not just launching a product; we’re building a community. We’d love for you to chime in, share your experiences, and help us make YaK even better. Follow us on LinkedIn, join our community on GitLab, and let’s create something amazing together.
Feel free to reach out to us for more details or for a live presentation: info@dbi-services.com
Thanks for being part of this exciting journey. We can’t wait to see what you build with YaK.
The YaK Team
—
P.S. If you’re wondering about the name, well, yaks are known for being hardy, reliable, and able to thrive in any environment. Plus, they look pretty cool, don’t you think?
L’article Introducing YaK 2.0: The future of effortless PaaS deployments across Clouds and On-Premises est apparu en premier sur dbi Blog.
New Oracle Database Appliance X11 series for 2025
Oracle Database Appliance X10 is not so old, but X11 is already out, available to order.
Let’s find out what’s new for this 2025 series.
What is an Oracle Database Appliance?ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it’s an x86-64 server with a dedicated software distribution including Linux, Oracle Grid Infrastructure (GI) including Automatic Storage Management and Real Application Cluster, Oracle database software, a Command Line Interface (CLI), a Browser User Interface (BUI) and a virtualization layer. The goal being to simplify database lifecycle and maximize performance. Market position is somewhere between OCI (the Oracle public Cloud) and Exadata (the highest level engineered system – a kind of big and rather expensive ODA). For most clients, ODA brings both simplification and performance they just need. For me, ODA has always been one of my favorite solutions, and undoubtedly a solution to consider. X11 doesn’t change the rules regarding my recommendations.
To address a large range of clients, ODA is available in 3 models: S, L and HA.
For Enterprise Edition (EE) users, as well as for Standard Edition 2 (SE2) users, ODA has a strong advantage over its competitors: capacity on demand licensing. With EE you can start with 1x EE processor license (2 enabled cores). With SE2 you can start with 1x SE2 processor license (8 enabled cores). You can later scale up by enabling additional cores according to your needs.
On the processor sideX11 still relies on the Epyc series for its processors, in line with Oracle’s recent long-term commitment to AMD.
Is the X11 CPU better than the X10 one? According to the data sheets, ODA moves from the Epyc 9334 to the Epyc 9J15. This latest version may be specific to Oracle as it doesn’t appear on the AMD website. Looking at the speed, the Epyc 9334 is clocked from 2.7GHz to 3.9GHz, and the Epyc 9J15 is clocked from 2.95GHz to 4.4GHz. As a consequence, you should probably expect a 10% performance increase per core. Not a huge bump, but X10 was quite a big improvement over the X9-2 Xeon processors. Each processor has 32 cores, and there is still 1 processor on the X11-S and 2 on the X11-L. As the X11-HA is basically two X11-L nodes without local disks but connected to a disk enclosure, each node also has 2 Epyc processors.
Having a better CPU means better performance, but also fewer processor licenses needed for the same workload. It’s always something to keep in mind.
RAM and disks: same configuration as outgoing X10Nothing new about RAM on X11, the same configurations are available, from 256GB on X11-S, and from 512GB on X11-L and each node of the X11-HA. You can double or triple the RAM size if needed on each server.
On X11-S and L models, data disks have the same size as X10 series: 6.8TB NVMe disks. X11-S has the same limitation as X10-S, only 2 disks and no possible expansion.
X11-L also comes with 2 disks, but you can add pairs of disks up to 8 disks, meaning 54TB of RAW storage. Be aware that only 4 disk slots are available on the front panel. Therefore, starting from the third pair of disks, disks are different: they are Add-In-Cards (AIC). It means that you will need to open your server to add or replace these disks, with a downtime for your databases.
X11-HA is not different compared to X10-HA, there is still a High Performance (HP) version and a High Capacity (HC) version, the first one being only composed of SSDs, the second one being composed of a mix of SSDs and HDDs. SSDs are 7.68TB each, and HDDs are 22TB each.
Network interfacesNothing new regarding network interfaces. You can have up to 3 of them (2 are optional), and you will choose for each between a quad-port 10GBase-T (copper) or a two-port 10/25GbE (SFP28). You should know that SFP28 won’t connect to 1Gbps fiber network. But using SFPs for a network limited to 1Gbps would not make sense.
Software bundleLatest software bundle for ODA is 19.25, so you will use this latest one on X11. This software bundle is also compatible with X10, X9-2, X8-2 and X7-2 series. This bundle is the same for SE2 and EE editions.
What are the differences between the 3 models?The X11-S is an entry level model for a small number of small databases.
The X11-L is much more capable and can take disk expansions. A big infrastructure with hundreds of databases can easily fit on several X11-L systems.
The X11-HA is for RAC users because High Availability is included. The disk capacity is much higher than single node models, and HDDs are still an option. With X11-HA, big infrastructures can be consolidated with a very small number of HA ODAs.
| Model | DB Edition | Nodes | U | RAM | RAM max | RAW TB | RAW TB max | Base price |
|---|---|---|---|---|---|---|---|---|
| ODA X11-S | EE and SE2 | 1 | 2 | 256GB | 768GB | 13.6 | 13.6 | 24’816$ |
| ODA X11-L | EE and SE2 | 1 | 2 | 512GB | 1536GB | 13.6 | 54.4 | 40’241$ |
| ODA X11-HA (HP) | EE and SE2 | 2 | 8/12 | 2x 512GB | 2x 1536GB | 46 | 368 | 112’381$ |
| ODA X11-HA (HC) | EE and SE2 | 2 | 8/12 | 2x 512GB | 2x 1536GB | 390 | 792 | 112’381$ |
You can run SE2 on X11-HA, but it’s much more an appliance dedicated to EE clients.
I’m not so sure that X11-HA still makes sense today compared to Exadata Cloud@Customer: study both options carefully if you need this kind of platform.
In the latest engineered systems price list (search exadata price list and you will easily find it), you will see X11 series alongside X10 series. Prices are the same, so there is no reason to order the old ones.
Which one should you choose?If your databases can comfortably fit on the storage of the S model, don’t hesitate as you will probably never need more.
Most interesting model is still the new X11-L. L is quite affordable, has a great storage capacity, and is upgradable if you don’t buy the full system at first.
If you still want/need RAC and its associated complexity, the HA may be for you but take a look at Exadata Cloud@Customer and compare the costs.
Don’t forget that you will need at least 2 ODAs for Disaster Recovery purposes, using Data Guard (EE) or Dbvisit Standby (SE2). No one would recommend buying a single ODA. Mixing S and L is OK, but I would not recommend mixing L and HA ODAs, simply because some operations are handled differently when using RAC.
I would still prefer buying 2x ODA X11-L compared to 1x ODA X11-HA. NVMe speed, no RAC and the simplicity of a single server is definitely better in my opinion. Extreme consolidation is not always the best solution.
ConclusionODA X11 series is a slight refresh of X10 series, but if you were previously using older generations (for example X7-2 that comes to end of life this year) switching to X11 will make a significant difference. In 2025, ODA is still a good platform for database simplification and consolidation. And it’s still very popular among our clients.
Useful linksSE2 licensing rules on ODA X10 (apply to X11)
Storage and ASM on ODA X10-L (apply to X11)
L’article New Oracle Database Appliance X11 series for 2025 est apparu en premier sur dbi Blog.
PostgreSQL 18: Change the maximum number of autovacuum workers on the fly
For PostgreSQL it is critical that autovacuum is able to keep up with the changes to the instance. One of the parameters you can adapt for this is autovacuum_max_workers. This parameter controls how many worker processes can be started in parallel by the autovacuum launcher process. By default, this is set to 3, which means that a maximum of three worker processes can run in parallel to do the work. While you can increase this parameter easily, it requires a restart of the instance to become active. Starting with PostgreSQL 18 (scheduled to be released later this year), you’ll be able to change this on the fly.
The default configuration for PostgreSQL 18 is still three worker processes:
postgres=# select version();
version
------------------------------------------------------------------------
PostgreSQL 18devel on x86_64-freebsd, compiled by clang-18.1.6, 64-bit
(1 row)
postgres=# show autovacuum_max_workers ;
autovacuum_max_workers
------------------------
3
(1 row)
But now, you can increase that on the fly without restarting the instance (sighup means reload):
postgres=# select context from pg_settings where name = 'autovacuum_max_workers';
context
---------
sighup
(1 row)
postgres=#
On a PostgreSQL 17 (and before) instance, it looks like this (postmaster means restart):
postgres=# select version();
version
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 17.2 on x86_64-unknown-freebsd14.2, compiled by FreeBSD clang version 18.1.6 , 64-bit
(1 row)
postgres=# select context from pg_settings where name = 'autovacuum_max_workers';
context
------------
postmaster
(1 row)
So, let’s assume we have an instance where a lot of stuff is going on and we want to increase the worker processes to 32 on a PostgreSQL 18 instance:
postgres=# alter system set autovacuum_max_workers = 32;
ALTER SYSTEM
postgres=# select pg_reload_conf();
pg_reload_conf
----------------
t
(1 row)
postgres=# show autovacuum_max_workers ;
autovacuum_max_workers
------------------------
32
(1 row)
That seems to have worked, but if you take a look at the PostgreSQL log file you’ll notice this:
2025-01-08 11:48:59.504 CET - 1 - 4411 - [local] - postgres@postgres - 0LOG: statement: alter system set autovacuum_max_workers = 32;
2025-01-08 11:49:05.210 CET - 8 - 4174 - - @ - 0LOG: received SIGHUP, reloading configuration files
2025-01-08 11:49:05.212 CET - 9 - 4174 - - @ - 0LOG: parameter "autovacuum_max_workers" changed to "32"
2025-01-08 11:49:05.221 CET - 1 - 4180 - - @ - 0WARNING: "autovacuum_max_workers" (32) should be less than or equal to "autovacuum_worker_slots" (16)
This means there is still a limit, which is defined by “autovacuum_worker_slots”, and its default is 16. For most instances this is probably fine: you can go from the default (3) anywhere up to 16 without restarting the instance. If you think you’ll need more from time to time, then you should also increase autovacuum_worker_slots, but this does require a restart of the instance.
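As a minimal sketch (assuming a PostgreSQL 18 instance and the rights to restart it), combining both parameters could look like this:
-- autovacuum_worker_slots has postmaster context, so this one still needs a restart
alter system set autovacuum_worker_slots = 32;
-- restart the instance (e.g. pg_ctl restart), then the worker count can be tuned on the fly
alter system set autovacuum_max_workers = 32;
select pg_reload_conf();
The idea is to reserve enough slots up front, so that later adjustments of autovacuum_max_workers only need a reload.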
Here is the link to the commit.
The article PostgreSQL 18: Change the maximum number of autovacuum workers on the fly appeared first on dbi Blog.
M-Files Ment: no code document automation solution in practice
Who has never faced the repetitive task of writing documents based on similar layouts and content, while making sure everything stays compliant? Don’t you think this activity uses a lot of your precious knowledge-worker time and often introduces small errors or typos?
Let’s see how the M-Files Ment solution can help you achieve this goal, assuming your environment has been properly installed and set up (take a look here).
1. “Hiring contract” use case
In order to edit, send, sign and register a brand-new hiring contract for a future employee, the C-Management and HR Administration teams have to create such a document and fill in several pieces of information, depending on the candidate’s residence, the subsidiary, and so on.
Before entering this procedure, the applicant will have already completed all prerequisite job interviews and sent their CV to the company. This part is out of the scope of this blog, but it could of course be managed with a dedicated M-Files workflow.
Hence, as soon as the C-Management team members agree to proceed with the chosen candidate, they inform the responsible HR Administration team member, who creates a new hiring contract with all the required applicant and job information.
While drafting the Word document, the responsible HR Administration team member will fill in the information below in the applicant’s mother tongue, say Swiss French or Swiss German for simplicity:
- Applicant Name
- Applicant Address
- Applicant Email Address
- Job title category (such as Back office, Consultant or SD-Consultant)
- Subsidiaries incorporation
As soon as you reach your M-Files Ment company welcome page, you can choose between different menus depending on your user profile. At most, with “Admin” user rights, you will get access to all the functionalities below:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-86.png)
Additionally, an extra “Videos” menu is provided so you can watch quick short videos explaining how to handle the different software features.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-91-1024x329.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-114-300x247.png)
As you can guess, let’s start at the beginning and automate a new template.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-87.png)
Fill in the different fields. Some are mandatory, such as “Template name” (clear enough), “Document styles” (see section 3.11 for details) and “Available for”.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-88.png)
Let’s set “Template Name” field to “Hiring Contract – English Question(s)” in our example.
It is possible to categorize your template using specific “Category” keyword(s) so you can filter based on these categories and retrieve the right template when you want to generate new document(s). Similarly, you can also tag your template to make your search in archives easier.
The “Available for” field defines which Ment user profiles (Admin, Author, Manager or User) can use the template:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-89.png)
Below is a short description helping to understand what this template relates to:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-90-671x1024.png)
You may have noticed that you can “Use M-Files vault as the datasource” to rely on its content for your purpose. We are using the “dbi – Ment” vault, where several object types, classes and properties were created.
3.2 Document styles
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-116.png)
As soon as you start creating a template, you will have to use a dedicated document “Style”. Depending on your needs, and most probably on the Microsoft Word text formatting you are targeting, you will either use the default “Basic 2.0” style or have to create your own.
See “Basic 2.0” example:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-110-1024x295.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-112-1024x498.png)
Amend the different elements of your “Document style” (such as headings and paragraphs) from the right-hand side drop-down menu and check the preview on the left, which reflects the changes you make. Don’t forget to save your changes, as documents that use your style will be modified accordingly. It is entirely possible to change a document style after the one you originally chose during the first template upload step. Moreover, if you create a template with several automated documents, it is also possible to use a specific “Document style” for each of them:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-113.png)
At this stage, we need to take a quick look at the M-Files “dbi – Ment” vault to avoid any confusion about the objects we will interact with.
4. M-Files “dbi – Ment” vault objects
For the exercise, we will use the “Sandra Applicant” person object with the metadata properties below:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-120-1024x806.png)
And the dbi services Delémont Office metadata properties:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-119-849x1024.png)
All the objects above have been created in the M-Files vault by the accountable user as a prerequisite.
Now that everything is in order, let’s go back to M-Files Ment and complete our task.
Set a document name.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-122-1024x467.png)
Then you can either choose to upload your Word .docx source file:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-123-1024x449.png)
Or click on “Add document” and start writing your text directly.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-124-1024x798.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-125-1024x590.png)
Let’s choose the second option and add all the necessary document text content.
Remember, the end result is to produce either a Swiss French version of the content or a Swiss German one.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-126-1024x755.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-127-1024x884.png)
Hence, the order of appearance is not really important, since it will be managed later through the block automation steps.
Save your changes.
Before we move on, you have certainly observed that our style is not yet applied.
To do so, we need to select each text element (title, header, table, lists)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-128-1024x354.png)
and use the proper menu corresponding to our needs (adjusting fonts and margins accordingly).
Example with “Heading 1”:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-129-1024x391.png)
Once this action is completed, you can preview your document by clicking the “Preview” button to visualize the final styled version of your document:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-130-1024x890.png)
Let’s assume our formatting is completed successfully for both sections:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-131-1024x838.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-132-1024x805.png)
Now here comes the power of Ment.
5.2 One, two, three – add automation
All automations are done in three steps:
- Select the text and automation
- Define the question
- Combine the text and the answer
These steps are available in the menu below:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-133-1024x120.png)
“Free text field(s)” gives users the possibility to add any text to the document.
“Inline automation” does not let users enter free text, but lets them select between the options you have created.
“Block automation” gives users the possibility to select between the options you have created.
Let’s define a block automation by selecting the whole Swiss German text section first.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-134-1024x857.png)
Click on the “Block automation” menu:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-136-1024x466.png)
A new panel opens, asking the author to define a question which, depending on the answer(s) defined, will drive the content filled into the target document.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-137-1024x858.png)
Select “Add a new question” and the type corresponding to your needs
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-140-1024x487.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-141-1024x663.png)
Finally, click “Next” and combine the answer with the selected text.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-143-1024x688.png)
Then “Done”, and here you are
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-145-1024x781.png)
Repeat these steps with the second (Swiss French) section to combine the second answer of this first question. This time, however, we will “choose an existing question”:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-146-1024x454.png)
and select the corresponding answer “FR” to combine it with the text section previously selected:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-147-1024x850.png)
Then “Done”. In the Ment author interface, the result now shows two yellow blocks bound to one question, “Hiring contract language version”, which is used twice since the text generated in the target document will be either in French or in German.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-148-1024x806.png)
Click on “Preview” to visualize and test your automated document
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-151-1024x740.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-152-1024x739.png)
From here, I’m pretty sure you are starting to perceive the potential of this tool, and hopefully this is just the beginning.
Let’s carry on with a “Free text field(s)” automation. This time, we will rely on the M-Files vault metadata introduced in section 4.
Select the text “Person Full Name” and the “Free text field(s)” automation:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-153-1024x622.png)
Select the “dbi – Ment” M-Files vault data source, where our metadata is located:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-155-1024x759.png)
Add a new question. The tip here is to avoid creating repetitive questions to fill different parts of the document. Instead, anticipate and group answer elements so that all of this information is updated in one go. In this example, the job applicant’s (Person) “Address Full Name” will be set and provided together with his/her “Person Full Name”.
Select “Person” M-Files vault object class
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-156.png)
Choose “Person Full Name” property
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-157-1024x879.png)
And additionally “Address Full Name” property
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-158.png)
Click “Next”.
Finally, combine the answer to the text field originally selected
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-159-1024x808.png)
click “Done”.
The first half of this “Select Applicant / Person details” question is now completed.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-160-1024x731.png)
Repeat this procedure for “Address Full Name” as mentioned before and choose an existing question (you have already guessed which one):
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-161-1024x519.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-162-1024x449.png)
And “Done”.
As a result, both fields are now automated as expected. Notice that the question / automation usage counter has incremented:
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-163-1024x527.png)
Now, if you remember, we would like to have this information in the Swiss French version of the document as well. So let’s go for it. Not a big deal, since a simple copy/paste of our automated fields should do the job.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-164-1024x530.png)
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-165-1024x715.png)
Do not forget to save your changes. Notice again that the question / automation usage counter has incremented.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-166-1024x718.png)
All good. What about the preview of these automations?
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-167-1024x748.png)
- Swiss German version
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-168-1024x756.png)
- Swiss French version
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-169-1024x755.png)
Finally, in order to let Ment users generate documents based on a template, an admin or the author has to publish it.
![](https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2024/12/image-170.png)
All these steps let you foresee what you can achieve with M-Files Ment.
6. What’s next
This brief introduction to M-Files Ment shows you what an author or admin can do to automate document generation based on templates for knowledge workers.
This is not all. Combining these document templates with M-Files workflows can help your business organization automate complex tasks with fine-grained control and consistent results.
I encourage you to watch our recordings (FR version or DE version).
Enjoy watching and do not hesitate to contact us for any project integration or support.
The article M-Files Ment: no code document automation solution in practice appeared first on dbi Blog.
Documentum – Login through OTDS without oTExternalID3
As you might know, Documentum “deprecated” (in reality completely disabled) the different Authentication Plugins that were bundled with a Documentum Server. That means that with recent versions of Documentum, you cannot log in to your LDAP-managed account anymore without having configured an OTDS and integrated it with your Documentum Server. After you have installed the OTDS and configured it to work with Documentum, you might be faced with an annoying behavior that makes it impossible to log in. This is because, by default, it only supports one specific configuration for the user_login_name (i.e. oTExternalID3). There is a workaround, but it’s not documented, as far as I know, so I’m writing this blog to share that information.
When logging in to a Documentum Server, using the “connect” iAPI command, the Repository will verify that the user_login_name exists. If it does, it will send the Authentication request to the JMS, which will contact the OTDS with the details provided. The OTDS will perform the authentication with whatever Identity Provider you configured inside it and return the result to the JMS, which will then confirm the details to the Repository to either allow or deny the login. In this case, it doesn’t matter if the user_source of the dm_user is configured with “LDAP” or “OTDS”. Both will behave in the same way and the request will be sent to the JMS and then the OTDS. That’s the theory, but there are some bugs / caveats that I might cover in another blog.
I. OTDS Synchronization with default configuration
To do some testing, or if you are setting up a freshly installed Documentum Repository (i.e. no previous LDAP integrations), you might want to keep things simple and would therefore most probably end up using the default configuration.
The default User Mapping configuration for an OTDS Resource, for Documentum, might be something like:
Resource Attribute >> OTDS Attribute >> Format
__NAME__ >> cn >> %s
AccountDisabled >> ds-pwp-account-disabled >> %s
client_capability >> >> 0
create_default_cabinet >> >> F
user_address >> mail >> %s
user_global_unique_id >> oTObjectGUID >> %s
user_login_name >> oTExternalID3 >> %s
user_name >> cn >> %s
user_privileges >> >> 0
user_rename_enabled >> >> F
user_rename_unlock_locked_obj >> >> T
user_type >> >> dm_user
user_xprivileges >> >> 0
Please note that the default value for “user_login_name” is “oTExternalID3”. In addition to mapped attributes from the AD / LDAP, OTDS defines some internal attributes that you can use, and this one is one of those. For example, if a cn/sAMAccountName has a value of “MYUSERID”, then:
- oTExternalID1 == MYUSERID
- oTExternalID2 == MYUSERID@OTDS-PARTITION-NAME
- oTExternalID3 == MYUSERID@DOMAIN-NAME.COM
- oTExternalID4 == DOMAIN\MYUSERID
Therefore, in this case, with the default configuration, you would need to use “MYUSERID@DOMAIN-NAME.COM” to be able to log in to Documentum. Nothing else would work, as your dm_user would be synchronized/created/modified to have a user_login_name value of “MYUSERID@DOMAIN-NAME.COM”. As a side note, the “%s” in the Format column means keeping the formatting/case of the source attribute. In most AD / LDAP setups, the cn/sAMAccountName would be in uppercase, so you would only be able to log in with the uppercase details. There is a parameter that you can set in the server.ini to have a case-insensitive Repository, and another one in the JMS, so you might want to take a look at that, for example.
Here, I’m setting an AD password in an environment variable and then fetching a dm_user’s details to show you the current content, before triggering a login attempt (using the “connect” iAPI command):
[dmadmin@cs-0 logs]$ read -s -p " --> Please enter the AD Password: " ad_passwd
--> Please enter the AD Password:
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> retrieve,c,dm_user where upper(user_login_name) like 'MYUSERID%'
> get,c,l,user_name
> get,c,l,user_login_name
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2020. OpenText Corporation
All rights reserved.
Client Library Release 20.2.0000.0082
Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info: "Session 011234568006fe39 started for user dmadmin."
Connected to OpenText Documentum Server running Release 20.2.00013.0135 Linux64.Oracle
Session id is s0
API> ...
1112345680001d00
API> ...
MYUSERID
API> ...
MYUSERID@DOMAIN-NAME.COM
API> Bye
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,T
> connect,REPO_NAME,MYUSERID@DOMAIN-NAME.COM,dm_otds_password=${ad_passwd}
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,F
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2020. OpenText Corporation
All rights reserved.
Client Library Release 20.2.0000.0082
Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info: "Session 011234568006fe40 started for user dmadmin."
Connected to OpenText Documentum Server running Release 20.2.00013.0135 Linux64.Oracle
Session id is s0
API> ...
q0
API> ...
s1
API> ...
q0
API> Bye
[dmadmin@cs-0 logs]$
As you can see above, the result of the “connect” command is “s1”, which means the session is opened and Documentum was able to verify through the OTDS that the login is correct. On the JMS, there is an “otdsauth.log” file, that gives you this kind of information (might give a bit more information depending on the Documentum Server version used):
[dmadmin@cs-0 logs]$ cat otdsauth.log
...
2025-01-01 13:37:26,417 UTC DEBUG [root] (default task-6) In com.documentum.cs.otds.OTDSAuthenticationServlet
2025-01-01 13:37:26,780 UTC DEBUG [root] (default task-6) userId: MYUSERID@DOMAIN-NAME.COM
2025-01-01 13:37:26,782 UTC DEBUG [root] (default task-6) Password Auth Success: MYUSERID@DOMAIN-NAME.COM
[dmadmin@cs-0 logs]$
The Repository logs will also show the trace_authentication details and the OTDS will also have a successful authentication attempt in its logs. So, all is well in a perfect world, right?
II. OTDS Synchronization with updated configuration
When working with an existing Repository that was initially set up with LDAP Sync and Auth, you might have a “simple” configuration defining that the user_login_name is the cn/sAMAccountName attribute from the Active Directory. In this case, you probably don’t want to change anything after the integration of the OTDS… After all, the OTDS is supposed to simplify the configuration, not complexify it. Therefore, you would set up the OTDS to integrate (Synchronized Partition or Non-Synchronized one) with your AD / LDAP and then create a Resource that replicates and matches the exact details of your existing users. Even on a freshly installed Repository without a previous LDAP integration, you might choose to log in with “MYUSERID” (or “myuserid”) instead of “MYUSERID@DOMAIN-NAME.COM”. The OTDS allows you to configure that, so users can be synchronized to Documentum however you want.
To achieve that, you would need to change the User Mapping configuration a bit, to keep your previous login information and avoid messing with the existing dm_user details. For example, you might want to change client_capability, user_login_name, user_name and some other things. Here is an example configuration that synchronizes the users with the cn/sAMAccountName from your AD / LDAP, in lowercase; please note the changes marked with an asterisk (*):
Resource Attribute >> OTDS Attribute >> Format
__NAME__ >> cn >> %l (*)
AccountDisabled >> ds-pwp-account-disabled >> %s
client_capability >> >> 2 (*)
create_default_cabinet >> >> F
user_address >> mail >> %s
user_global_unique_id >> oTObjectGUID >> %s
user_login_name >> cn (*) >> %l (*)
user_name >> displayName (*) >> %s
user_privileges >> >> 0
user_rename_enabled >> >> T (*)
user_rename_unlock_locked_obj >> >> T
user_type >> >> dm_user
user_xprivileges >> >> 32 (*)
The documentation mentions in some places that both __NAME__ and user_name should have the same value, but I’m not sure that’s really required, as I have some customers with different values and it works anyway. It’s pretty common for customers to have the same value for cn and sAMAccountName and to store the display name into, well, the displayName attribute… On the Documentum side, some customers will use cn as the user_name, while others will use displayName instead. The user_name is, after all, a kind of display name, so I don’t really understand why OTDS would require both __NAME__ and user_name to be the same. It should instead rely on the user_login_name, no?
After consolidating the OTDS Resource, you should be able to see the correct user_login_name as it was before (with the LDAP Sync job). What’s the purpose of this blog then? Well, the OTDS allows you to change the mapping as you see fit, so that you can replicate exactly what you used to have with an LDAP Sync. But you cannot log in anymore…
After the modification of the OTDS Resource User Mapping and its consolidation, here I’m trying to log in again (with “myuserid” instead of “MYUSERID@DOMAIN-NAME.COM”) to show the difference in behavior:
[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> retrieve,c,dm_user where upper(user_login_name) like 'MYUSERID%'
> get,c,l,user_name
> get,c,l,user_login_name
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2020. OpenText Corporation
All rights reserved.
Client Library Release 20.2.0000.0082
Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info: "Session 011234568006fe48 started for user dmadmin."
Connected to OpenText Documentum Server running Release 20.2.00013.0135 Linux64.Oracle
Session id is s0
API> ...
1112345680001d00
API> ...
LastName (Ext) FirstName
API> ...
myuserid
API> Bye
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,T
> connect,REPO_NAME,myuserid,dm_otds_password=${ad_passwd}
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,F
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2020. OpenText Corporation
All rights reserved.
Client Library Release 20.2.0000.0082
Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info: "Session 011234568006fe4f started for user dmadmin."
Connected to OpenText Documentum Server running Release 20.2.00013.0135 Linux64.Oracle
Session id is s0
API> ...
q0
API> ...
[DM_SESSION_E_AUTH_FAIL]error: "Authentication failed for user myuserid with docbase REPO_NAME."
API> ...
q1
API> Bye
[dmadmin@cs-0 logs]$
This time the authentication fails. If you look at the Repository logs, you can see that the user is detected properly and the Repository starts the authentication with the OTDS (1st line below). But when the result comes back (2nd and 3rd lines below), it says that it failed:
2025-01-01T13:46:16.446426 188808[188808] 011234568006fe50 [AUTH] Start-AuthenticateUserByOTDSPassword:UserLoginName(myuserid)
2025-01-01T13:46:16.815111 188808[188808] 011234568006fe50 [AUTH] otds_password_authentication = false:
2025-01-01T13:46:16.815159 188808[188808] 011234568006fe50 [AUTH] End-AuthenticateUserByOTDSPassword: 0
2025-01-01T13:46:17.174676 188808[188808] 011234568006fe50 [AUTH] Final Auth Result=F, LOGON_NAME=myuserid, ...
The JMS otdsauth.log file will have similar content: it starts the OTDS communication (1st line below), but the userId returned (2nd line below) is not the Documentum user_login_name. Instead, it’s the value of oTExternalID3, and the JMS then says that the authentication failed (3rd line below):
2025-01-01 13:46:16,671 UTC DEBUG [root] (default task-6) In com.documentum.cs.otds.OTDSAuthenticationServlet
2025-01-01 13:46:16,813 UTC DEBUG [root] (default task-6) userId: MYUSERID@DOMAIN-NAME.COM
2025-01-01 13:46:16,814 UTC DEBUG [root] (default task-6) Password Auth Failed: myuserid
On the OTDS side, no problem: the authentication was successful when it was received (see the directory-access.log):
2025-01-01 13:46:16.777|INFO ||0|0|Authentication Service|Success Access|27,Initial authentication successful|172.0.0.10|""|OTDS-PARTITION-NAME|"MYUSERID@DOMAIN-NAME.COM"|"Authentication success: MYUSERID@DOMAIN-NAME.COM using authentication handler OTDS-PARTITION-NAME for resource __OTDS_AS__"
If you look at the exact timestamps of the messages, you can see the exact flow of how things went. In short, the OTDS says that it’s OK and sends some information back to the JMS. But because the information returned is oTExternalID3, there is a mismatch with the value of the user_login_name, and the JMS/Repository then concludes that the authentication failed, which isn’t true…
Therefore, using any user_login_name value other than oTExternalID3 isn’t a problem from a synchronization point of view, but you still cannot log in anyway.
III. Workaround
As mentioned in the introduction of this blog, there is a workaround, which is to set the parameter “synced_user_login_name=sAMAccountName” in the otdsauth.properties file that configures how the JMS talks to the OTDS. I looked at all the OTDS and Documentum documentation, for several versions, as well as KBs, but I couldn’t find this workaround mentioned anywhere. Maybe I’m the one who doesn’t know how to search (don’t blame the search on the OT Support website :D). The one and only reference to this parameter is in the Documentum Server Admin & Config doc, but it tells you that it’s optional and only for OTDS token-based authentication. Here, we are doing password-based auth, we don’t have any OTDS oAuth Client ID/Secret, so this section shouldn’t be required at all. You don’t need the other parameters from this section, but you DO need “synced_user_login_name” if you would like to log in with the cn/sAMAccountName/oTExternalID1/oTSAMAccountName value.
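To make this concrete, here is a minimal sketch of the change; the exact location of otdsauth.properties and the rest of its content depend on your JMS deployment (treat everything except the last line as an assumption), and only the parameter itself comes from the workaround. A JMS restart is typically needed for property changes to be picked up, so plan for that as well.
# otdsauth.properties (exact location depends on your JMS / ServerApps deployment - assumption)
# keep your existing OTDS connection settings as they are and only add the following line,
# so that the JMS matches the login against the synced cn/sAMAccountName instead of oTExternalID3:
synced_user_login_name=sAMAccountName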
However, there is an additional catch… The parameter was apparently only introduced in 20.3. For any older Documentum Server, you will need to check with OT if they have a fix available. I know there is one for 20.2, but it’s only for Windows (c.f. here). Now, you know that you can also use this parameter for that purpose.
The article Documentum – Login through OTDS without oTExternalID3 appeared first on dbi Blog.