Feed aggregator
How to get Hash Index Join Before Table Access?
Apple MLX Vision LLM Server with Ngrok, FastAPI and Sparrow
Automate your Deployments in Azure with Terraform!
Terraform is a powerful open-source, declarative and platform-agnostic infrastructure as code (IaC) tool developed by HashiCorp. It facilitates the deployment and the whole lifecycle management of infrastructure. In this hands-on blog I will show you how you can use Terraform to automate your cloud deployments in Azure.
Initial Setup:
For this blog I'm using an Ubuntu server as an automation server on which I'm running Terraform. You can install Terraform on different operating systems. For instructions on how to install Terraform, check out this link from HashiCorp.
Starting with the hands-on part, I create a new dedicated directory for my new Terraform project:
Within this new directory I’m creating the following files which will hold my configuration code:
- main.tf
- providers.tf
- variables.tf
- outputs.tf
For the authentication with Azure I'm using the Azure CLI command line tool. You can install the Azure CLI on Ubuntu with a single command which downloads a script, provided by Microsoft, from the internet and executes it on your system:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
To get more information about how to install the Azure CLI on your system, check out this link from Microsoft.
Once the installation has completed successfully, you can verify it with the following command:
az --version
Then use the following command for connecting to Azure:
az login
This command will open a browser window where you can sign in to Azure:
After you have successfully authenticated to Azure, you can check your available subscriptions with the following command:
az account list
Initialize Terraform:
As the Azure CLI is now installed on the system and we are successfully authenticated to Azure, we can start with the configuration of Terraform and the required provider for interacting with the Azure cloud platform.
Therefore I add the code block below to the providers.tf file, which tells Terraform to install and initialize the azurerm provider with the specific version 4.10.0, which is the latest at the moment:
To configure the azurerm provider, I additionally add the provider code block below to the providers.tf file.
You can find your subscription ID in the output from the “az account list” command above.
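A minimal sketch of how such a providers.tf could look, based on the description above (the subscription ID below is just a placeholder):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "4.10.0"
    }
  }
}

provider "azurerm" {
  features {}
  # replace with your own subscription ID from "az account list"
  subscription_id = "00000000-0000-0000-0000-000000000000"
}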
After inserting those code blocks into the providers.tf file, we can install the defined azurerm provider and initialize Terraform by running the command below in our project directory:
terraform init
Terraform Workspaces:
As Terraform is now successfully initialized and the required provider is installed, we can start with the development of our infrastructure code.
But before doing so, I would like to touch on the concept of workspaces in Terraform. Workspaces enable you to use the same configuration code for multiple environments through separate state files. You can think of a workspace as a separate deployment environment and of your Terraform code as an independent plan or image of your infrastructure. As an example, imagine you added a new virtual machine to your Terraform code and deployed it in production. If you now want to have the same virtual machine for test purposes, you just have to switch into your test workspace and run the Terraform code again. You will have the exact same virtual machine within a few minutes!
To check the workspaces you have in your Terraform project, use the following command:
terraform workspace list
As you can see, we just have the default workspace in our new Terraform project. In this hands-on blog post I want to deploy my infrastructure for multiple environments, therefore I will create some new workspaces. Let's assume we have a development, test and production stage for our infrastructure. I will therefore create the corresponding workspaces with the commands below:
terraform workspace new development
terraform workspace new test
terraform workspace new production
After executing these commands, we can check the available workspaces in our Terraform project again:
Note that Terraform indicates your current workspace with the "*" symbol next to the particular workspace. We want to deploy our infrastructure for development first, so I will switch back into the development workspace with the following command:
terraform workspace select development
Create an Azure Resource Group:
As the workspaces are now successfully created, we can start with our configuration code.
First of all I go into the variables.tf file and add the variable code block below to that file:
I will use this "env" variable for the suffix or prefix of the resource names which I will deploy, to easily recognize to which environment these resources belong.
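A sketch of what such an "env" variable might look like; only the "DEV" code is confirmed by the resource group name shown later, the other short codes are assumptions:

variable "env" {
  description = "Short environment code per workspace, used in resource names"
  type        = map(string)
  default = {
    development = "DEV"
    test        = "TST" # assumed short code
    production  = "PRD" # assumed short code
  }
}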
Next I will create a resource group in Azure. Therefore I add the code block below to the main.tf file:
As you can see, I set the name of the resource group dynamically with the prefix "RG_" and the value for the current workspace from the "env" variable, which I've defined before in the variables.tf file. The expression "terraform.workspace" is a built-in value which refers to the current workspace.
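A sketch of such a resource group block (the Azure region and the Terraform resource name "rg" are assumptions):

resource "azurerm_resource_group" "rg" {
  # evaluates to "RG_DEV" in the development workspace
  name     = "RG_${var.env[terraform.workspace]}"
  location = "westeurope" # region is an assumption, pick your own
}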
To check which resources Terraform would create if we applied the current configuration code, we can run the following command:
terraform plan
We can see that terraform would create a new resource group with the name “RG_DEV”.
Create a Virtual Network and Subnets:
Next I will create a virtual network. Therefore I add the variable code block below to the variables.tf file. This variable defines a dedicated address space for each environment stage:
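A sketch of such a "cidr" variable (the address ranges are illustrative assumptions):

variable "cidr" {
  description = "Address space of the virtual network per workspace"
  type        = map(string)
  default = {
    development = "10.10.0.0/16"
    test        = "10.20.0.0/16"
    production  = "10.30.0.0/16"
  }
}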
I now add the code block below to the main.tf file to create a virtual network:
As you can see, I'm referencing the "env" variable here as well to dynamically set the suffix of the network name, and the new "cidr" variable to set the address space of the virtual network.
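A sketch of the virtual network resource (the "VNET_" naming prefix and the Terraform resource name are assumptions):

resource "azurerm_virtual_network" "vnet" {
  name                = "VNET_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = [var.cidr[terraform.workspace]]
}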
Next I will create some subnets within the virtual network. I want to create 4 subnets in total:
- A front tier subnet
- A middle tier subnet
- A backend tier subnet
- A bastion subnet for the administration
Therefore I add the variable below to my variables.tf file, which defines an address space for each environment stage and subnet:
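A sketch of such a subnet variable, shown only for the development stage (the variable name, key names and address ranges are assumptions; test and production would follow the same pattern):

variable "subnet_cidr" {
  description = "Address space per workspace and subnet"
  type        = map(map(string))
  default = {
    development = {
      frontend = "10.10.1.0/24"
      midtier  = "10.10.2.0/24"
      backend  = "10.10.3.0/24"
      bastion  = "10.10.4.0/24"
    }
    # test and production follow the same pattern with their own ranges
  }
}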
Next I will add for each subnet a new resource block to the main.tf file:
Note that I enabled the option "private_endpoint_network_policies" in the backend tier subnet. This option enforces network security groups to take effect on the private endpoints in the particular subnet. Check out this link from Microsoft for more information about this option.
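A sketch of two of those subnet blocks, the front tier and the backend tier one (subnet names are assumptions; the middle tier subnet looks like the front tier one, and the bastion subnet must literally be named "AzureBastionSubnet" for the Azure Bastion host created later):

resource "azurerm_subnet" "frontend" {
  name                 = "SNET_FRONT_${var.env[terraform.workspace]}" # naming is an assumption
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [var.subnet_cidr[terraform.workspace]["frontend"]]
}

resource "azurerm_subnet" "backend" {
  name                 = "SNET_BACK_${var.env[terraform.workspace]}"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [var.subnet_cidr[terraform.workspace]["backend"]]

  # "Enabled" assumed; lets NSG/route table policies take effect on private endpoints
  private_endpoint_network_policies = "Enabled"
}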
Create an Azure SQL Database:
Next I will create an Azure SQL Server. Therefore I add the variable below to my variables.tf file. This variable is supposed to hold the admin password of the Azure SQL Server. I set the sensitive option for this variable, which prevents the password from being exposed in the terminal output or in the Terraform logs:
I also did not set any value in the configuration files; instead I will set the variable value as an environment variable before applying the configuration.
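A sketch of this password variable (the variable name matches the TF_VAR_sqlserver_password environment variable set later):

variable "sqlserver_password" {
  description = "Admin password for the Azure SQL Server"
  type        = string
  sensitive   = true
}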
Next I add the code block below to my main.tf file:
As you can see, I referenced the "sqlserver_password" variable to set the password for the "sqladmin" user. I also disabled public network access to prevent database access over the public endpoint of the server. Instead, I will create a private endpoint later on.
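A sketch of such an Azure SQL Server resource (the server name and the Terraform resource name are assumptions; the server name must be globally unique and lower case):

resource "azurerm_mssql_server" "sql" {
  name                          = "sqlsrv-demo-${lower(var.env[terraform.workspace])}" # name is an assumption
  resource_group_name           = azurerm_resource_group.rg.name
  location                      = azurerm_resource_group.rg.location
  version                       = "12.0"
  administrator_login           = "sqladmin"
  administrator_login_password  = var.sqlserver_password
  public_network_access_enabled = false
}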
Next I will create the Azure SQL Database. Therefore I add the variable below to my variables.tf file:
The thought behind this variable is that we have different requirements for the different stages. The general purpose SKU is sufficient for the non-productive databases, but for the productive one we want the business critical service tier. Likewise, we want 30 days of point-in-time recovery for our productive data while 7 days is sufficient for non-productive, and we want to store our productive database backups on geo-zone-redundant storage while zone-redundant storage is sufficient for the non-productive databases.
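A sketch of what such a "database_settings" variable could look like; the variable structure and the concrete SKU names are assumptions that match the described tiers:

variable "database_settings" {
  type = map(object({
    sku_name       = string
    retention_days = number
    backup_storage = string
  }))
  default = {
    development = { sku_name = "GP_Gen5_2", retention_days = 7, backup_storage = "Zone" }
    test        = { sku_name = "GP_Gen5_2", retention_days = 7, backup_storage = "Zone" }
    production  = { sku_name = "BC_Gen5_2", retention_days = 30, backup_storage = "GeoZone" }
  }
}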
Then I add the resource block below into my main.tf file:
As you can see, I'm referencing my "database_settings" variable to set the configuration options dynamically.
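A sketch of the database resource (the database name is an assumption, the settings come from the variable above):

resource "azurerm_mssql_database" "db" {
  name                 = "sqldb-demo-${lower(var.env[terraform.workspace])}" # name is an assumption
  server_id            = azurerm_mssql_server.sql.id
  sku_name             = var.database_settings[terraform.workspace].sku_name
  storage_account_type = var.database_settings[terraform.workspace].backup_storage

  short_term_retention_policy {
    retention_days = var.database_settings[terraform.workspace].retention_days
  }
}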
Create a DNS Zone and a Private Endpoint:
For name resolution I will next create a private DNS zone. For that I add the resource block below to my main.tf file:
To associate this private DNS zone with my virtual network, I will next create a virtual network link. Therefore I add the resource block below to my main.tf file:
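A sketch of both blocks, the private DNS zone and its virtual network link (the link name is an assumption; the zone name is the one Azure expects for Azure SQL private endpoints):

resource "azurerm_private_dns_zone" "sql" {
  name                = "privatelink.database.windows.net"
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "sql" {
  name                  = "dns-link-${lower(var.env[terraform.workspace])}" # name is an assumption
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.sql.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}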
To be able to securely connect to my Azure SQL database, I will now create a private endpoint in my backend subnet. Therefore I add the resource block below to my main.tf file:
With this configuration code, I create a private endpoint named after the Azure SQL Server with the suffix "-endpoint". Through the option "subnet_id" I place this endpoint in the backend subnet, with a private service connection to the Azure SQL Server. I also associate the endpoint with the private DNS zone, which I've created just before, for name resolution.
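A sketch of the private endpoint resource (connection and zone group names are assumptions):

resource "azurerm_private_endpoint" "sql" {
  name                = "${azurerm_mssql_server.sql.name}-endpoint"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = azurerm_subnet.backend.id

  private_service_connection {
    name                           = "${azurerm_mssql_server.sql.name}-connection" # name is an assumption
    private_connection_resource_id = azurerm_mssql_server.sql.id
    subresource_names              = ["sqlServer"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "sql-dns-zone-group" # name is an assumption
    private_dns_zone_ids = [azurerm_private_dns_zone.sql.id]
  }
}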
Create an Azure Bastion:
Let's now continue and create an Azure Bastion host for the administration of our environment. Therefore I first create a public IP address by adding the resource block below to my main.tf file:
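A sketch of the public IP resource; Azure Bastion requires a static, Standard SKU public IP (the name is an assumption):

resource "azurerm_public_ip" "bastion" {
  name                = "PIP_BASTION_${var.env[terraform.workspace]}" # naming is an assumption
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
  sku                 = "Standard"
}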
Next I create the bastion host itself. For that I add the code block below to my main.tf file:
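A sketch of the bastion host itself, assuming the bastion subnet was created under the Terraform resource name "bastion" and the mandatory Azure name "AzureBastionSubnet":

resource "azurerm_bastion_host" "bastion" {
  name                = "BASTION_${var.env[terraform.workspace]}" # naming is an assumption
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                 = "bastion-ipconfig"
    subnet_id            = azurerm_subnet.bastion.id # subnet must be named "AzureBastionSubnet" in Azure
    public_ip_address_id = azurerm_public_ip.bastion.id
  }
}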
Create a Virtual Machine:
Now I will add a virtual machine to my middle tier subnet. Therefore I first need to create a network interface for that virtual machine. The resource block below creates the needed network interface:
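A sketch of the network interface, placed in the middle tier subnet (resource names are assumptions):

resource "azurerm_network_interface" "vm_nic" {
  name                = "NIC_VM_${var.env[terraform.workspace]}" # naming is an assumption
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.midtier.id # assumed subnet resource name
    private_ip_address_allocation = "Dynamic"
  }
}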
As the virtual machine which I intend to create needs an admin password, just like the Azure SQL Server, I will create an additional password variable. Therefore I add the code block below to my variables.tf file:
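A sketch of that variable, analogous to the SQL Server one (the name matches the TF_VAR_vm_password environment variable set later):

variable "vm_password" {
  description = "Admin password for the virtual machine"
  type        = string
  sensitive   = true
}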
To create the virtual machine itself, I add the resource block below to my main.tf file:
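A sketch of a Windows virtual machine resource using that network interface; the VM size, image reference and names are assumptions:

resource "azurerm_windows_virtual_machine" "vm" {
  name                  = "VM-${var.env[terraform.workspace]}" # name is an assumption
  resource_group_name   = azurerm_resource_group.rg.name
  location              = azurerm_resource_group.rg.location
  size                  = "Standard_B2ms" # size is an assumption
  admin_username        = "vmadmin"       # user name is an assumption
  admin_password        = var.vm_password
  network_interface_ids = [azurerm_network_interface.vm_nic.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-datacenter-azure-edition"
    version   = "latest"
  }
}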
Create Network Security Groups and Rules:
Next I want to secure my subnets. Therefore I create a network security group for each of my front tier, middle tier and backend tier subnets by adding the resource blocks below to my main.tf file:
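A sketch of one of those network security groups together with its subnet association; the middle tier and backend tier groups would follow the same pattern (names are assumptions):

resource "azurerm_network_security_group" "frontend" {
  name                = "NSG_FRONT_${var.env[terraform.workspace]}" # naming is an assumption
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet_network_security_group_association" "frontend" {
  subnet_id                 = azurerm_subnet.frontend.id
  network_security_group_id = azurerm_network_security_group.frontend.id
}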
Next I create specific rules for each network security group.
Starting with the front tier subnet, I want to block all inbound traffic except traffic over HTTPS. Therefore I add the two resource blocks below to my main.tf file:
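A sketch of those two rules: allow HTTPS with a low priority number, then deny everything else with a high one (rule names and priorities are assumptions):

resource "azurerm_network_security_rule" "front_allow_https" {
  name                        = "Allow_HTTPS_Inbound"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.frontend.name
}

resource "azurerm_network_security_rule" "front_deny_all" {
  name                        = "Deny_All_Inbound"
  priority                    = 4096
  direction                   = "Inbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.frontend.name
}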
Continuing with the middle tier subnet, I want to block all inbound traffic but allow HTTP traffic only from the front tier subnet and RDP traffic only from the bastion subnet. Therefore I add the three resource blocks below to my main.tf file:
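A sketch of the HTTP rule; the RDP rule from the bastion subnet (port 3389) and the final deny-all rule would follow the same pattern as shown for the front tier (names, priorities and the NSG resource name are assumptions):

resource "azurerm_network_security_rule" "mid_allow_http_from_front" {
  name                        = "Allow_HTTP_From_FrontTier"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "80"
  source_address_prefix       = var.subnet_cidr[terraform.workspace]["frontend"]
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.midtier.name # assumed NSG resource name
}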
Last but not least, I want to block all inbound traffic to my backend tier subnet except traffic to the SQL Server port from the middle tier subnet. In addition I want to explicitly block internet access from this subnet.
You may be wondering why I'm explicitly blocking internet access from this subnet although I don't have any public IP address or NAT gateway for it. That's because Microsoft provides access to the internet through a default outbound IP address in case no explicit way is defined. This is a feature which will be deprecated on 30 September 2025. To get more information about this feature, check out this link from Microsoft.
To create the rules for the backend tier subnet I add the three resource blocks below to my main.tf file:
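A sketch of the SQL allow rule and the explicit outbound internet deny; the inbound deny-all rule again follows the pattern shown for the front tier (names, priorities and the NSG resource name are assumptions):

resource "azurerm_network_security_rule" "back_allow_sql_from_mid" {
  name                        = "Allow_SQL_From_MidTier"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "1433"
  source_address_prefix       = var.subnet_cidr[terraform.workspace]["midtier"]
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.backend.name # assumed NSG resource name
}

resource "azurerm_network_security_rule" "back_deny_internet_out" {
  name                        = "Deny_Internet_Outbound"
  priority                    = 4000
  direction                   = "Outbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "Internet"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.backend.name
}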
Define Output Variables:
I will stop with the creation of resources for this blog post and will finally show you how you can define outputs. For example, let's assume we want to extract the name of the Azure SQL Server and the IP address of the virtual machine after the deployment. Therefore I add the two output variables below to the outputs.tf file:
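A sketch of those two outputs (the output names are assumptions; the referenced resource names match the earlier sketches):

output "sqlserver_name" {
  value = azurerm_mssql_server.sql.name
}

output "vm_private_ip" {
  value = azurerm_windows_virtual_machine.vm.private_ip_address
}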
Outputs are especially useful when you need to pass information from the deployment up to a higher context, for example when you are working with modules in Terraform and you want to pass information from a child module to a parent module. In our case the outputs will just be printed to the command line after the deployment.
Apply the Configuration Code:
As I am now done with the definition of the configuration code for this blog post, I will plan and apply my configuration for each stage. Before doing so, I first need to set a value for my password variables. On Ubuntu this can be done with these commands:
export TF_VAR_sqlserver_password="your password"
export TF_VAR_vm_password="your password"
After I've set the variables, I run the terraform plan command, and we can see that Terraform would create 29 resources:
This looks good to me, so I run the terraform apply command to deploy my infrastructure:
terraform apply
After some minutes of patience, Terraform applied the configuration code successfully:
Check the Infrastructure in the Azure Portal:
When I sign in to the Azure portal, I can see my development resource group with all the resources inside:
I want to have my test and production resources as well, so I switch the Terraform workspace to test and production and run the terraform apply command again in both workspaces.
After some additional minutes of patience we can see in the Azure portal that we have now all resources for each environment stage:
Let's now compare the settings of the production database with the development database. We can see that the SKU for the production one is business critical while the SKU for the non-productive one is general purpose:
The Backup storage has also been set according to the “database_settings” variable:
We can see the same for the point in time recovery option:
We can see that our subnets are also all in place with the corresponding address space and network security group associated:
Let's check the private endpoint of the Azure SQL Server. We can see that we have a private IP address within our backend subnet which is linked to the Azure SQL Server:
Let's connect to the virtual machine and try a name resolution. You can see that we were able to successfully resolve the FQDN:
After installing SQL Server Management Studio on the virtual machine, we can also connect to the Azure SQL Server through the FQDN of the private endpoint:
Delete the Deployed Resources:
To prevent getting a high bill for something I don't use, I will now delete all resources which I've created with Terraform. This is very simple and can be done by running the terraform destroy command in each workspace:
terraform destroy
I hope this gave you some interesting examples and ideas about Terraform and Azure! Feel free to share your questions and thoughts about Terraform and Azure with me in the comment section below.
The article Automate your Deployments in Azure with Terraform! first appeared on dbi Blog.
Creating your private cloud using OpenStack – (1) – Introduction
While public clouds have been a trend for several years now, some companies are also looking into self-hosted solutions to build a private cloud. Some do this because of costs, others because they don't want to be dependent on one or multiple public cloud providers, and others because they want to keep their data locally. There are several solutions for this, and depending on the requirements they might or might not be an option. Some of the more popular ones are:
- VMware: Obviously one of the long-term players and well known, but sold to Broadcom, which not everybody is happy with.
- Proxmox: A complete open source virtualization solution based on Debian and KVM, can also deploy containerized workloads based on LXC.
- Nutanix: A hyper converged platform that comes with its own hypervisor, the Acropolis Hypervisor (AHV).
- Red Hat Virtualization: Red Hat's solution for virtualized infrastructures, but this product is in maintenance mode and Red Hat is fully focused on Red Hat OpenShift Virtualization nowadays.
- Harvester: A complete open source hyper converged infrastructure solution. SUSE comes with a commercial offering for this, which is called SUSE virtualization.
- … and many others
The other major player not in the list above is OpenStack, which started back in 2010. OpenStack is not a single product, but rather a set of products combined to provide a computing platform for deploying your workloads on top of either virtual machines, or containers, or a mix of both. There are plenty of sub-projects which bring in additional functionality; check here for a list. The project itself is hosted and supported by the OpenInfra Foundation, which should give sufficient trust that it will stay a pure open source project (maybe have a look at the OpenInfra supporting organizations as well, to get an idea of how widely it is adopted and supported).
The main issue with OpenStack is that it is kind of hard to start with. There are so many services you might want to use that you will probably get overwhelmed at the beginning of your journey. To help you out of this a bit, we'll create a minimal, quick and dirty OpenStack setup on virtual machines with just the core services:
- Keystone: Identity service
- Glance: Image service
- Placement: Placement service
- Nova: Compute service
- Neutron: Network service
- Horizon: The OpenStack dashboard
We'll do that step by step, because we believe that you should know the components which finally make up the OpenStack platform, or any other stack you're planning to deploy. There is also DevStack, which is a set of scripts for the same purpose, but as it is scripted you'll probably not gain the same knowledge as by doing it manually. There is also OpenStack-Helm, which deploys OpenStack on top of Kubernetes, but this as well is out of scope for this series of blog posts. Canonical offers MicroStack, which can also be used to set up a test environment quickly.
Automation is great and necessary, but it also comes with a potential downside: The more you automate, the more people you’ll potentially have who don’t know what is happening in the background. This is usually fine as long as the people with the background knowledge stay in the company, but if they leave you might have an issue.
As there are quite some steps to follow, this will not be a single blog post, but split into parts:
- Introduction (this blog post)
- Preparing the controller and the compute node
- Setting up and configuring Keystone
- Setting up and configuring Glance and the Placement service
- Setting up and configuring Nova
- Setting up and configuring Neutron
- Setting up and configuring Horizon, the Openstack dashboard
In the simplest configuration, the OpenStack platform consists of two nodes: a controller node and at least one compute node. Both of them require two network interfaces: one for the so-called management network (as the name implies, this is for the management of the stack and communication with the internet), and the other one for the so-called provider network (this is the internal network which e.g. the virtualized machines will use to communicate with each other).
When it comes to your choice of the Linux distribution you want to deploy OpenStack on, this is merely a matter of taste. OpenStack can be deployed on many distributions; the official documentation comes with instructions for Red Hat based distributions (which usually includes Alma Linux, Rocky Linux and Oracle Linux), SUSE based distributions (which includes openSUSE Leap), and Ubuntu (which should also work on Debian). For the scope of this blog series we'll go with a minimal installation of Rocky Linux 9, just because I haven't used it for some time.
OpenStack itself is released in a six month release cycle and we’ll go with 2024.2 (Dalmatian), which will be supported until the beginning of 2026. As always, you should definitely go with the latest supported release so you have the most time to test and plan for future upgrades.
To give you an idea of what we're going to start with, here is a graphical overview:
Of course this is very simplified, but it is enough to know for the beginning:
- We have two nodes, one controller node and one compute node.
- Both nodes have two network interfaces. The first one is configured using a 192.168.122.0/24 subnet and connected to the internet. The second one is not configured.
- Both nodes are installed with a Rocky Linux 9 (9.5 as of today) minimal installation
We'll add all the bits and pieces to this graphic while we install and configure the complete stack, don't worry.
That’s it for the introduction. In the next post we’ll prepare the two nodes so we can continue to install and configure the OpenStack services on top of them.
The article Creating your private cloud using OpenStack – (1) – Introduction first appeared on dbi Blog.
Audit weirdness
REGEXP_COUNT and REGEXP_LIKE and the search for a whitespace
Introducing YaK 2.0: The future of effortless PaaS deployments across Clouds and On-Premises
Hello, dear tech enthusiasts and cloud aficionados!
We’ve got some news that’s about to make your life —or your deployments, at least— a whole lot easier. Meet YaK 2.0, the latest game-changer in the world of automated multi-cloud PaaS deployment. After months of development, testing, troubleshooting, a fair share of meetings and way too much coffee, YaK is officially launching today.
Because we believe IT professionals should spend their time on complex, value-added tasks and not on repetitive setups, we decided to develop the YaK.
YaK is a framework that allows anyone to deploy any type of component on any platform, while ensuring quality, cost efficiency and reducing deployment time.
YaK 2.0 is your new best friend when it comes to deploying infrastructure that’s not just efficient but also identical across every platform you’re working with – be it multi-cloud or on-premises. Originating from the need to deploy multi-technology infrastructures quickly and effortlessly, YaK ensures your setup is consistent, whether you’re working with AWS, Azure, Oracle Cloud, or your own on-prem servers.
In simpler terms, YaK makes sure your deployment process is consistent and reliable, no matter where. Whether you’re scaling in the cloud or handling things in-house, YaK’s got your back.
Why you should have a look at YaK 2.0?
Here's why we think YaK is going to become your favorite pet:
- Flexibility: Deploy across AWS, Azure, OCI, or your own servers—YaK adapts to your infrastructure, making every platform feel like home.
- Automation: Eliminate repetitive setups with automated deployments, saving you time and headaches.
- Cost efficiency & speed: YaK cuts time-to-market, streamlining deployments for fast, standardized rollouts that are both cost-effective and secure.
- Freedom from vendor lock-in: YaK is vendor-neutral, letting you deploy on your terms, across any environment.
- Swiss software backed by a consulting company (dbi services) with extensive expertise in deployments.
With this release, we’re excited to announce a major upgrade:
- Sleek new user interface: YaK 2.0 now comes with a user-friendly interface, making it easier than ever to manage your deployments. Say hello to intuitive navigation.
- Components: We've got components on our roadmap (available with an annual subscription), and we'll be announcing them shortly: Oracle Database, PostgreSQL, MongoDB, and Kubernetes are already on the list and will be released soon.
Many more will follow… Stay tuned!
How does it work?
YaK Core is the open-source part and the heart of our product, featuring Ansible playbooks and a custom plugin that provides a single inventory for all platforms, making your server deployments seamless across clouds like AWS, Azure, and OCI.
If you want to see for yourself, our GitLab project is available here!
YaK Components are the value-added part of the product and bring you expert-designed modules for deploying databases and application servers, with an annual subscription to dbi services.
Join the YaK pack
Explore the power of automated multi-cloud PaaS deployment with YaK 2.0 and experience a new level of efficiency and flexibility. We can’t wait for you to try it out and see just how much it can streamline your deployment process. Whether you’re a startup with big dreams or an established enterprise looking to optimize, YaK is here to make your life easier.
Our YaK deserved its own web page, check it out for more information, to contact us or to try it out (free demo environments will be available soon): yak4all.io
Wanna ride the YaK? Check out our user documentation to get started!
We promise it’ll be the smoothest ride you’ve had in a while.
We’re not just launching a product; we’re building a community. We’d love for you to chime in, share your experiences, and help us make YaK even better. Follow us on LinkedIn, join our community on GitLab, and let’s create something amazing together.
Feel free to reach out to us for more details or for a live presentation: info@dbi-services.com
Thanks for being part of this exciting journey. We can’t wait to see what you build with YaK.
The YaK Team
—
P.S. If you’re wondering about the name, well, yaks are known for being hardy, reliable, and able to thrive in any environment. Plus, they look pretty cool, don’t you think?
The article Introducing YaK 2.0: The future of effortless PaaS deployments across Clouds and On-Premises first appeared on dbi Blog.
New Oracle Database Appliance X11 series for 2025
Oracle Database Appliance X10 is not so old, but X11 is already out, available to order.
Let’s find out what’s new for this 2025 series.
What is an Oracle Database Appliance?
ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it's an x86-64 server with a dedicated software distribution including Linux, Oracle Grid Infrastructure (GI) with Automatic Storage Management and Real Application Cluster, Oracle database software, a Command Line Interface (CLI), a Browser User Interface (BUI) and a virtualization layer. The goal is to simplify the database lifecycle and maximize performance. Its market position is somewhere between OCI (the Oracle public cloud) and Exadata (the highest-level engineered system, a kind of big and rather expensive ODA). For most clients, ODA brings both the simplification and the performance they need. For me, ODA has always been one of my favorite solutions, and undoubtedly a solution to consider. X11 doesn't change the rules regarding my recommendations.
To address a large range of clients, ODA is available in 3 models: S, L and HA.
For Enterprise Edition (EE) users, as well as for Standard Edition 2 (SE2) users, ODA has a strong advantage over its competitors: capacity on demand licensing. With EE you can start with 1x EE processor license (2 enabled cores). With SE2 you can start with 1x SE2 processor license (8 enabled cores). You can later scale up by enabling additional cores according to your needs.
On the processor side
X11 still relies on the Epyc series for its processors, in line with Oracle's recent long-term commitment to AMD.
Is the X11 CPU better than the X10 one? According to the data sheets, ODA moves from the Epyc 9334 to the Epyc 9J15. This latest version may be specific to Oracle, as it doesn't appear on the AMD website. Looking at the speed, the Epyc 9334 is clocked from 2.7GHz to 3.9GHz, and the Epyc 9J15 is clocked from 2.95GHz to 4.4GHz. As a consequence, you should probably expect a 10% performance increase per core. Not a huge bump, but X10 was quite a big improvement over the X9-2 Xeon processors. Each processor has 32 cores, and there is still 1 processor in the X11-S and 2 in the X11-L. As the X11-HA is basically two X11-L without local disks but connected to a disk enclosure, each node also has 2 Epyc processors.
Having a better CPU not only means better performance, but also fewer processor licenses needed for the same workload. It's always something to keep in mind.
RAM and disks: same configuration as outgoing X10
Nothing new about RAM on X11, the same configurations are available: from 256GB on the X11-S, and from 512GB on the X11-L and on each node of the X11-HA. You can double or triple the RAM size on each server if needed.
On the X11-S and L models, data disks have the same size as in the X10 series: 6.8TB NVMe disks. The X11-S has the same limitation as the X10-S: only 2 disks and no possible expansion.
The X11-L also comes with 2 disks, but you can add pairs of disks up to 8 disks in total, meaning 54TB of RAW storage. Be aware that only 4 disk slots are available on the front panel. Therefore, starting from the third pair of disks, the disks are different: they are Add-In Cards (AIC). This means that you will need to open your server to add or replace these disks, with a downtime for your databases.
The X11-HA is no different compared to the X10-HA: there is still a High Performance (HP) version and a High Capacity (HC) version, the first one being composed only of SSDs, the second one of a mix of SSDs and HDDs. SSDs are 7.68TB each, and HDDs are 22TB each.
Network interfaces
Nothing new regarding network interfaces. You can have up to 3 of them (2 are optional), and for each you will choose between a quad-port 10GBase-T (copper) or a two-port 10/25GbE (SFP28). You should know that SFP28 won't connect to a 1Gbps fiber network. But using SFPs for a network limited to 1Gbps would not make sense anyway.
Software bundle
The latest software bundle for ODA is 19.25, so this is the one you will use on X11. This software bundle is also compatible with the X10, X9-2, X8-2 and X7-2 series. The bundle is the same for SE2 and EE editions.
What are the differences between the 3 models?
The X11-S is an entry-level model for a small number of small databases.
The X11-L is much more capable and can take disk expansions. A big infrastructure with hundreds of databases can easily fit on several X11-L.
The X11-HA is for RAC users because High Availability is included. The disk capacity is much higher than single node models, and HDDs are still an option. With X11-HA, big infrastructures can be consolidated with a very small number of HA ODAs.
Model | DB Edition | Nodes | U | RAM | RAM max | RAW TB | RAW TB max | Base price
ODA X11-S | EE and SE2 | 1 | 2 | 256GB | 768GB | 13.6 | 13.6 | 24'816$
ODA X11-L | EE and SE2 | 1 | 2 | 512GB | 1536GB | 13.6 | 54.4 | 40'241$
ODA X11-HA (HP) | EE and SE2 | 2 | 8/12 | 2x 512GB | 2x 1536GB | 46 | 368 | 112'381$
ODA X11-HA (HC) | EE and SE2 | 2 | 8/12 | 2x 512GB | 2x 1536GB | 390 | 792 | 112'381$
You can run SE2 on X11-HA, but it's much more an appliance dedicated to EE clients.
I’m not so sure that X11-HA still makes sense today compared to Exadata Cloud@Customer: study both options carefully if you need this kind of platform.
In the latest engineered systems price list (search for "exadata price list" and you will easily find it), you will see the X11 series alongside the X10 series. Prices are the same, so there is no reason to order the old ones.
Which one should you choose?
If your databases can comfortably fit on the storage of the S model, don't hesitate, as you will probably never need more.
The most interesting model is still the new X11-L. The L is quite affordable, has a great storage capacity, and is upgradable if you don't buy the full system at first.
If you still want/need RAC and its associated complexity, the HA may be for you but take a look at Exadata Cloud@Customer and compare the costs.
Don't forget that you will need at least 2 ODAs for Disaster Recovery purposes, using Data Guard (EE) or Dbvisit Standby (SE2). No one would recommend buying a single ODA. Mixing S and L is OK, but I would not recommend mixing L and HA ODAs, just because some operations are handled differently when using RAC.
I would still prefer buying 2x ODA X11-L compared to 1x ODA X11-HA. NVMe speed, no RAC and the simplicity of a single server is definitely better in my opinion. Extreme consolidation is not always the best solution.
Conclusion
The ODA X11 series is a slight refresh of the X10 series, but if you were previously using older generations (for example the X7-2, which reaches end of life this year), switching to X11 will make a significant difference. In 2025, ODA is still a good platform for database simplification and consolidation. And it's still very popular among our clients.
Useful links
SE2 licensing rules on ODA X10 (apply to X11)
Storage and ASM on ODA X10-L (apply to X11)
The article New Oracle Database Appliance X11 series for 2025 first appeared on dbi Blog.
Vision LLM Structured Output with Sparrow
Hidden Parameter "_cleanup_rollback_entries" value
Question on VARRAY
Fetching encryption key from an external storage
NVIDIA SANA Model Local Installation with GUI - Step-by-Step Tutorial
This video locally installs NVIDIA SANA which is a text-to-image framework that can efficiently generate images up to 4096 × 4096 resolution.
Code: