Feed aggregator

How to get Hash Index Join Before Table Access?

Tom Kyte - Mon, 2025-01-20 18:25
The following is a simplified version that exhibits the behavior in question. There are two tables T and F. 1% of the rows of table T are indexed by T_IX. Though F has the same number of rows as T_IX, F has 1% of rows "in common" with T_IX. The cardinalities are as follows. <code> T ~ 1M T_IX ~ 10K F ~ 10K </code> The query is a semi-join that selects the rows from T that are both indexed in T_IX and "in common" with F. <code> select id from "T" where exists ( select null from "F" where T.num = F.num ) / </code> This query correctly returns 100 (0.01%) of the 1M rows in T. <code> The optimizer's plan is either i) Nested Loops join with F and T_IX, then perform the Table Access on T. PRO: Table Access on T is after the join. CON: Nested Loops starts. ii) Full Scanning T_IX then performing the Table Access on T. This is then hash joined with F. PRO: Full Scan of T_IX and the Table Access on T is batched. CON: Accessed 9,900 more rows from T than we needed (99% inefficient). </code> How can I encourage the optimizer to get the best of both worlds? <b>That is, how can I encourage the optimizer to perform a hash <i>index</i> join between F and the index T_IX <i>first</i>, and then, after many (99%) of T's rowids have been eliminated via that join, perform a Table Access (preferably batched) on T?</b> Does such an access path even exist? Note: The example below adds the hints in order to reliably generate the execution plans for the purpose of this example and is slightly modified to fit the format of this forum (mainly columns of the explain plans are removed). This example shows the two explain plans and the difference in Table Access on T. The Nested Loops join does perform the Table Access of T after the join and only has to get 100 rows, but the Hash join does not. The Hash join performs the Table Access on T before the join and has to get 10,000 rows. <code> SQL>create table T ( id number 2 , num number 3 , pad char(2e3) default 'x' 4 ) 5 / Table created. SQL>insert into T ( id, num ) 2 with "D" as 3 ( select level id 4 from dual connect by level <= 1e3 5 ) 6 select rownum 7 , decode( mod( rownum, 1e2 ), 0, rownum ) 8 from "D", "D" 9 / 1000000 rows created. SQL>create index T_IX on T ( num ) 2 / Index created. SQL>create table F ( num number 2 ) 3 / Table created. SQL>insert into F ( num ) 2 with "D" as 3 ( select level id 4 from dual connect by level <= 1e2 5 ) 6 select rownum 7 from "D", "D" 8 / 10000 rows created. SQL>...
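
Not part of the original question, but for illustration: one rewrite that is sometimes tried to aim for that kind of plan is to resolve the semi-join against the index columns only (num plus rowid) and join back to T by rowid, so that the table access only touches the surviving rowids. Object names are those of the example above; treat this purely as a sketch under that assumption, not a confirmed way to obtain the desired plan:

select t.id
  from T t
 where t.rowid in ( select t2.rowid        -- satisfiable from T_IX alone (num + rowid)
                      from T t2, F
                     where t2.num = F.num );
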
Categories: DBA Blogs

Apple MLX Vision LLM Server with Ngrok, FastAPI and Sparrow

Andrejus Baranovski - Mon, 2025-01-20 02:01
I show how I run the Apple MLX backend on my local Mac Mini M4 Pro 64GB and access it from the Web through Ngrok, with an automatically provisioned HTTPS certificate.

 

Automate your Deployments in Azure with Terraform!

Yann Neuhaus - Thu, 2025-01-16 07:43

Terraform is a powerful open-source, declarative and platform-agnostic infrastructure as code (IaC) tool developed by HashiCorp. It facilitates the deployment and the overall management of infrastructure. In this hands-on blog I will show you how you can use Terraform to automate your cloud deployments in Azure.

Initial Setup:

For this blog I’m using an Ubuntu server as an automation server where I’m running Terraform. You can install Terraform on different operating systems; for instructions on how to install Terraform, check out this link from HashiCorp.

Starting with the hands on part I’m creating a new dedicated directory for my new Terraform project:

Within this new directory I’m creating the following files which will hold my configuration code:

  • main.tf
  • providers.tf
  • variables.tf
  • outputs.tf
Installing the Azure CLI:

For the authentication with Azure I’m using the Azure CLI command line tool. You can install the Azure CLI on Ubuntu with a single command, which downloads a script provided by Microsoft from the internet and executes it on your system:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

To get more information about how to install the Azure CLI on your system, check out this link from Microsoft.

Once the installation has completed successfully, you can verify it with the following command:

az --version

Then use the following command for connecting to Azure:

az login

This command will open a browser window where you can sign in to Azure:

After you successfully authenticated yourself to Azure, you can check your available subscriptions with the following command:

az account list

Initialize Terraform:

As the Azure CLI is now installed on the system and we are successfully authenticated to Azure, we can start with the configuration of Terraform and of the provider required for interacting with the Azure cloud platform.

Therefore I add the code block below to the providers.tf file, which tells Terraform to install and initialize the azurerm provider in the specific version 4.10.0, the latest at the time of writing:
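
The original post shows its code blocks as screenshots, which are not included in this excerpt. Here, and for the blocks that follow, a possible sketch based purely on the surrounding description is given; all resource names, regions and address ranges in these sketches are illustrative assumptions rather than the author's actual code. For the provider requirement described above:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "4.10.0"
    }
  }
}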

To configure the azurerm provider, I add the provider code block below additionally into the providers.tf file.
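
A sketch of the provider configuration, with a placeholder subscription ID:

provider "azurerm" {
  features {}

  # Placeholder only - replace with your own subscription ID (see "az account list" below)
  subscription_id = "00000000-0000-0000-0000-000000000000"
}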

You can find your subscription ID in the output from the “az account list” command above.

After inserting those code blocks into the providers.tf file, we can install the defined azurerm provider and initialize Terraform by running the below command in our project directory:

terraform init

Terraform Workspaces:

As Terraform is now successfully initialized and the required provider is installed, we can start with the development of our infrastructure code.

But before doing so I would like to touch on the concept of workspaces in Terraform. Workspaces enable you to use the same configuration code for multiple environments through separate state files. You can think of a workspace as a separate deployment environment and of your Terraform code as an independent plan or image of your infrastructure. As an example, imagine you added a new virtual machine to your Terraform code and deployed it in production. If you now want the same virtual machine for test purposes, you just have to switch into your test workspace and run the Terraform code again. You will have the exact same virtual machine within a few minutes!

To check the workspaces you have in your Terraform project, use the following command:

terraform workspace list

As you can see, we just have the default workspace in our new Terraform project. In this hands-on blog post I want to deploy my infrastructure for multiple environments, so I will create some new workspaces. Let’s assume we have a development, test and production stage for our infrastructure. I will therefore create the workspaces accordingly with the commands below:

terraform workspace new development

terraform workspace new test

terraform workspace new production

After executing these commands, we can check the available workspaces in our Terraform project again:

Note that Terraform marks your current workspace with the “*” symbol next to the particular workspace. We want to deploy our infrastructure for development first, so I will switch back into the development workspace with the following command:

terraform workspace select development

Create an Azure Resource Group:

As the workspaces are now successfully created, we can start with our configuration code.

First of all I go into the variables.tf file and add the variable code block below to that file:
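
A sketch of such an "env" variable; only the "DEV" value is confirmed by the plan output shown later, the other values are assumptions:

variable "env" {
  type        = map(string)
  description = "Environment short name per workspace, used as prefix/suffix in resource names"
  default = {
    development = "DEV"
    test        = "TEST"
    production  = "PROD"
  }
}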

I will use this “env” variable as a suffix or prefix in the names of the resources I deploy, to easily recognize which environment these resources belong to.

Next I will create a resource group in Azure. Therefore I add the code block below to the main.tf file:
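
A sketch of the resource group described in the next paragraph (the Azure region is illustrative, it is not mentioned in this excerpt):

resource "azurerm_resource_group" "rg" {
  name     = "RG_${var.env[terraform.workspace]}"
  location = "westeurope"  # illustrative region
}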

As you can see, I set the name of the resource group dynamically from the prefix “RG_” and the value for the current workspace in the “env” variable, which I defined before in the variables.tf file. “terraform.workspace” is a built-in value which refers to the current workspace.

To check which resources Terraform would create if we applied the current configuration code, we can run the following command:

terraform plan

We can see that terraform would create a new resource group with the name “RG_DEV”.

Create a Virtual Network and Subnets:

Next I will create a virtual network. Therefore I add the variable code block below to the variables.tf file. This variable defines a dedicated address space for each environment stage:
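
A sketch of such a "cidr" variable, with made-up address ranges:

variable "cidr" {
  type        = map(list(string))
  description = "Virtual network address space per workspace"
  default = {
    development = ["10.10.0.0/16"]
    test        = ["10.20.0.0/16"]
    production  = ["10.30.0.0/16"]
  }
}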

I now add the code block below to the main.tf file to create the virtual network:
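
A sketch of the virtual network resource, reusing the variable and resource names assumed above:

resource "azurerm_virtual_network" "vnet" {
  name                = "VNET_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = var.cidr[terraform.workspace]
}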

As you can see, I’m again referencing the “env” variable to dynamically set the suffix of the network name, as well as the new “cidr” variable to set the address space of the virtual network.

Next I will create some subnets within the virtual network. I want to create 4 subnets in total:

  • A front tier subnet
  • A middle tier subnet
  • A backend tier subnet
  • A bastion subnet for the administration

Therefore I add the variable below to my variables.tf file, which defines an address space for each environment stage and subnet:
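
A sketch of such a "subnet_cidr" variable, again with made-up address ranges:

variable "subnet_cidr" {
  type        = map(map(string))
  description = "Subnet address spaces per workspace"
  default = {
    development = {
      frontend = "10.10.1.0/24"
      midtier  = "10.10.2.0/24"
      backend  = "10.10.3.0/24"
      bastion  = "10.10.4.0/24"
    }
    test = {
      frontend = "10.20.1.0/24"
      midtier  = "10.20.2.0/24"
      backend  = "10.20.3.0/24"
      bastion  = "10.20.4.0/24"
    }
    production = {
      frontend = "10.30.1.0/24"
      midtier  = "10.30.2.0/24"
      backend  = "10.30.3.0/24"
      bastion  = "10.30.4.0/24"
    }
  }
}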

Next I will add a new resource block for each subnet to the main.tf file:
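
A sketch of the four subnet resources (names are assumptions; the bastion subnet name is fixed by Azure):

resource "azurerm_subnet" "frontend" {
  name                 = "SNET_FRONT_${var.env[terraform.workspace]}"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [var.subnet_cidr[terraform.workspace]["frontend"]]
}

resource "azurerm_subnet" "midtier" {
  name                 = "SNET_MID_${var.env[terraform.workspace]}"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [var.subnet_cidr[terraform.workspace]["midtier"]]
}

resource "azurerm_subnet" "backend" {
  name                 = "SNET_BACK_${var.env[terraform.workspace]}"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [var.subnet_cidr[terraform.workspace]["backend"]]

  # Enforce network security groups on private endpoints placed in this subnet
  private_endpoint_network_policies = "Enabled"
}

# Azure Bastion requires a subnet with exactly this name
resource "azurerm_subnet" "bastion" {
  name                 = "AzureBastionSubnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [var.subnet_cidr[terraform.workspace]["bastion"]]
}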

Note that I enabled the option “private_endpoint_network_policies” on the backend tier subnet. This option enforces the network security groups on the private endpoints in that particular subnet. Check out this link from Microsoft for more information about this option.

Create an Azure SQL Database:

Next I will create an Azure SQL Server. Therefore I add the variable below to my variables.tf file. This variable is supposed to hold the admin password of the Azure SQL Server. I set the sensitive option for this variable, which prevents the password from being exposed in the terminal output or in the Terraform logs:
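
A sketch of the sensitive password variable described above:

variable "sqlserver_password" {
  type        = string
  description = "Admin password for the Azure SQL Server"
  sensitive   = true
}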

I also did not set any value in the configuration files; instead I will set the variable value as an environment variable before applying the configuration.

Next I add the code block below to my main.tf file:
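
A sketch of the SQL Server resource matching the description below; the server naming convention is an assumption (Azure SQL Server names must be globally unique and lowercase):

resource "azurerm_mssql_server" "sql" {
  name                          = lower("sqlsrv-dbi-${var.env[terraform.workspace]}")
  resource_group_name           = azurerm_resource_group.rg.name
  location                      = azurerm_resource_group.rg.location
  version                       = "12.0"
  administrator_login           = "sqladmin"
  administrator_login_password  = var.sqlserver_password
  public_network_access_enabled = false
}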

As you can see I referenced the “sqlserver_password” variable to set the password for the “sqladmin” user. I also disabled the public network access to prevent database access over the public endpoint of the server. I will instead create a private endpoint later on.

Next I will create the Azure SQL Database. Therefore I add the variable below to my variables.tf file:
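
A sketch of what such a "database_settings" variable could look like, based on the requirements described in the next paragraph; the SKU names and exact values are assumptions:

variable "database_settings" {
  type = map(object({
    sku_name            = string
    backup_storage      = string
    pitr_retention_days = number
  }))
  default = {
    development = {
      sku_name            = "GP_Gen5_2"
      backup_storage      = "Zone"
      pitr_retention_days = 7
    }
    test = {
      sku_name            = "GP_Gen5_2"
      backup_storage      = "Zone"
      pitr_retention_days = 7
    }
    production = {
      sku_name            = "BC_Gen5_2"
      backup_storage      = "GeoZone"
      pitr_retention_days = 30
    }
  }
}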

The thought behind this variable is that we have different requirements for the different stages. The General Purpose SKU is sufficient for the non-productive databases, but for the productive one we want the Business Critical service tier. Likewise, we want 30 days of point-in-time recovery for our productive data while 7 days is sufficient for non-production, and we want to store our productive database backups on geo-zone-redundant storage while zone-redundant storage is sufficient for the non-productive databases.

Then I add the resource block below into my main.tf file:
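
A sketch of the database resource, driven by the "database_settings" variable assumed above:

resource "azurerm_mssql_database" "db" {
  name                 = "DB_${var.env[terraform.workspace]}"
  server_id            = azurerm_mssql_server.sql.id
  sku_name             = var.database_settings[terraform.workspace].sku_name
  storage_account_type = var.database_settings[terraform.workspace].backup_storage

  short_term_retention_policy {
    retention_days = var.database_settings[terraform.workspace].pitr_retention_days
  }
}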

As you can see I’m referencing my “database_settings” variable to set the configuration options dynamically.

Create a DNS Zone and a Private Endpoint:

For name resolution I will next create a private DNS zone. For that I add the resource block below to my main.tf file:
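
A sketch of the private DNS zone; for an Azure SQL private endpoint the zone is typically "privatelink.database.windows.net":

resource "azurerm_private_dns_zone" "dns" {
  name                = "privatelink.database.windows.net"
  resource_group_name = azurerm_resource_group.rg.name
}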

To now associate this private DNS zone with my virtual network, I will create a virtual network link. Therefore I add the resource block below to my main.tf file:
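
A sketch of the virtual network link (the link name is an assumption):

resource "azurerm_private_dns_zone_virtual_network_link" "dns_link" {
  name                  = "dns-link-${var.env[terraform.workspace]}"
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.dns.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}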

To be able to securely connect to my Azure SQL database I will now create a private endpoint in my backend subnet. Therefore I add the resource block below to my main.tf file:
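
A sketch of the private endpoint matching the description below (connection and group names are assumptions):

resource "azurerm_private_endpoint" "sql_endpoint" {
  name                = "${azurerm_mssql_server.sql.name}-endpoint"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = azurerm_subnet.backend.id

  private_service_connection {
    name                           = "${azurerm_mssql_server.sql.name}-connection"
    private_connection_resource_id = azurerm_mssql_server.sql.id
    subresource_names              = ["sqlServer"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "sql-dns-zone-group"
    private_dns_zone_ids = [azurerm_private_dns_zone.dns.id]
  }
}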

With this configuration code, I create a private endpoint named after the Azure SQL Server with the suffix “-endpoint”. Through the option “subnet_id” I place this endpoint in the backend subnet, with a private service connection to the Azure SQL Server. I also associate the endpoint with the private DNS zone I created just before, for name resolution.

Create an Azure Bastion:

Let’s now continue and create an Azure Bastion host for the administration of our environment. Therefore I first create a public IP address by adding the resource block below to my main.tf file:
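
A sketch of the public IP address (name is an assumption; Bastion requires a Standard SKU public IP):

resource "azurerm_public_ip" "bastion_ip" {
  name                = "PIP_BASTION_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
  sku                 = "Standard"
}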

Next I create the bastion host itself. For that I add the code block below to my main.tf file:
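
A sketch of the bastion host, placed in the AzureBastionSubnet assumed earlier:

resource "azurerm_bastion_host" "bastion" {
  name                = "BASTION_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                 = "bastion-ipconfig"
    subnet_id            = azurerm_subnet.bastion.id
    public_ip_address_id = azurerm_public_ip.bastion_ip.id
  }
}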

Create a Virtual Machine:

Now I will add a virtual machine to my middle tier subnet. Therefore I first need to create a network interface for that virtual machine. The resource block below will create the needed network interface:
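
A sketch of the network interface in the middle tier subnet:

resource "azurerm_network_interface" "vm_nic" {
  name                = "NIC_VM_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.midtier.id
    private_ip_address_allocation = "Dynamic"
  }
}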

As the virtual machine I intend to create needs an admin password just like the Azure SQL Server, I will create an additional password variable. Therefore I add the code block below to my variables.tf file:
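
A sketch of the second sensitive password variable:

variable "vm_password" {
  type        = string
  description = "Admin password for the virtual machine"
  sensitive   = true
}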

To create the virtual machine itself, I add the resource block below to my main.tf file:
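
A sketch of the virtual machine resource; a Windows Server image and the size are assumptions (RDP and SQL Server Management Studio are used on this machine later in the post):

resource "azurerm_windows_virtual_machine" "vm" {
  name                  = "VM-${var.env[terraform.workspace]}"
  resource_group_name   = azurerm_resource_group.rg.name
  location              = azurerm_resource_group.rg.location
  size                  = "Standard_B2ms"
  admin_username        = "vmadmin"
  admin_password        = var.vm_password
  network_interface_ids = [azurerm_network_interface.vm_nic.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-datacenter-azure-edition"
    version   = "latest"
  }
}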

Create Network Security Groups and Rules:

Next I want to secure my subnets. Therefore I create a network security group for each of my front tier, middle tier and backend tier subnets by adding the resource blocks below to my main.tf file:
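
A sketch of the three network security groups; the subnet associations (visible later when checking the portal) are included here as an assumption:

resource "azurerm_network_security_group" "nsg_front" {
  name                = "NSG_FRONT_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_network_security_group" "nsg_mid" {
  name                = "NSG_MID_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_network_security_group" "nsg_back" {
  name                = "NSG_BACK_${var.env[terraform.workspace]}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet_network_security_group_association" "front" {
  subnet_id                 = azurerm_subnet.frontend.id
  network_security_group_id = azurerm_network_security_group.nsg_front.id
}

resource "azurerm_subnet_network_security_group_association" "mid" {
  subnet_id                 = azurerm_subnet.midtier.id
  network_security_group_id = azurerm_network_security_group.nsg_mid.id
}

resource "azurerm_subnet_network_security_group_association" "back" {
  subnet_id                 = azurerm_subnet.backend.id
  network_security_group_id = azurerm_network_security_group.nsg_back.id
}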

Next I create specific rules for each network security group.

Starting with the front tier subnet, I want to block all inbound traffic except traffic over HTTPS. Therefore I add the two resource blocks below to my main.tf file:
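
A sketch of the two front tier rules; rule names and priorities are assumptions:

resource "azurerm_network_security_rule" "front_allow_https" {
  name                        = "Allow_HTTPS_Inbound"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_front.name
}

resource "azurerm_network_security_rule" "front_deny_all" {
  name                        = "Deny_All_Inbound"
  priority                    = 4096
  direction                   = "Inbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_front.name
}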

Continuing with the middle tier subnet, I want to block all inbound traffic but allow HTTP traffic only from the front tier subnet and RDP traffic only from the bastion subnet. Therefore I add the three resource blocks below to my main.tf file:
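
A sketch of the three middle tier rules; names, priorities and the standard HTTP/RDP ports are assumptions based on the description:

resource "azurerm_network_security_rule" "mid_allow_http_from_front" {
  name                        = "Allow_HTTP_From_FrontTier"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "80"
  source_address_prefix       = var.subnet_cidr[terraform.workspace]["frontend"]
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_mid.name
}

resource "azurerm_network_security_rule" "mid_allow_rdp_from_bastion" {
  name                        = "Allow_RDP_From_Bastion"
  priority                    = 110
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  source_address_prefix       = var.subnet_cidr[terraform.workspace]["bastion"]
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_mid.name
}

resource "azurerm_network_security_rule" "mid_deny_all" {
  name                        = "Deny_All_Inbound"
  priority                    = 4096
  direction                   = "Inbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_mid.name
}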

Last but not least, I want to block all inbound traffic to my backend tier subnet except traffic to the SQL Server port from the middle tier subnet. In addition, I want to explicitly block internet access from this subnet.

You may be wondering why I explicitly block internet access from this subnet although it has no public IP address or NAT gateway. That’s because Microsoft provides access to the internet through a default outbound IP address in case no explicit outbound path is defined. This is a feature that will be deprecated on 30 September 2025. To get more information about this feature, check out this link from Microsoft.

To create the rules for the backend tier subnet I add the three resource blocks below to my main.tf file:
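
A sketch of the three backend tier rules; the SQL Server port 1433 and the rule names and priorities are assumptions:

resource "azurerm_network_security_rule" "back_allow_sql_from_mid" {
  name                        = "Allow_SQL_From_MidTier"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "1433"
  source_address_prefix       = var.subnet_cidr[terraform.workspace]["midtier"]
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_back.name
}

resource "azurerm_network_security_rule" "back_deny_all_inbound" {
  name                        = "Deny_All_Inbound"
  priority                    = 4096
  direction                   = "Inbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_back.name
}

resource "azurerm_network_security_rule" "back_deny_internet_outbound" {
  name                        = "Deny_Internet_Outbound"
  priority                    = 4096
  direction                   = "Outbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "Internet"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg_back.name
}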

Define Output Variables:

I will stop creating resources for this blog post at this point and finally show you how you can define outputs. For example, let’s assume we want the name of the Azure SQL Server and the IP address of the virtual machine extracted after the deployment. Therefore I add the two output variables below to the outputs.tf file:
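
A sketch of the two outputs described above:

output "sql_server_name" {
  value = azurerm_mssql_server.sql.name
}

output "vm_private_ip" {
  value = azurerm_network_interface.vm_nic.private_ip_address
}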

Outputs are especially useful when you need to pass information from the deployment up to a higher context, for example when you are working with modules in Terraform and want to pass information from a child module to a parent module. In our case the outputs will just be printed to the command line after the deployment.

Apply the Configuration Code:

As I am now done with the definition of the configuration code for this blog post, I will plan and apply my configuration for each stage. Before doing so, I first need to set a value for my password variables. On Ubuntu this can be done with the following commands:

export TF_VAR_sqlserver_password="your password"

export TF_VAR_vm_password="your password"

After I’ve set the variables, I run the terraform plan command and we can see that terraform would create 29 resources:

This looks good to me, so I run the terraform apply command to deploy my infrastructure:

terraform apply

After a few minutes of patience, Terraform applied the configuration code successfully:

Check the Infrastructure in the Azure Portal:

When I sign in to the Azure portal, I can see my development resource group with all the resources inside:

I want my test and production resources as well, so I switch the Terraform workspace to test and then to production and run the terraform apply command again in both workspaces.

After a few more minutes of patience, we can see in the Azure portal that we now have all resources for each environment stage:

Let’s now compare the settings of the productive database with those of the development database: we can see that the SKU for the productive one is Business Critical while the SKU for the non-productive one is General Purpose:

The Backup storage has also been set according to the “database_settings” variable:

We can see the same for the point in time recovery option:

We can see that our subnets are also all in place with the corresponding address space and network security group associated:

Let’s check the private endpoint of the Azure SQL Server. We can see that we have a private IP address within our backend subnet which is linked to the Azure SQL Server:

Let’s connect to the virtual machine and try a name resolution. You can see that we were able to successfully resolve the FQDN:

After installing SQL Server Management Studio on the virtual machine, we can also connect to the Azure SQL Server through the FQDN of the private endpoint:

Delete the Deployed Resources:

To avoid getting a high bill for something I don’t use, I will now delete all the resources I created with Terraform. This is very simple and can be done by running the terraform destroy command in each workspace:

terraform destroy

I hope you got some interesting examples and ideas about Terraform and Azure! Feel free to share your questions and feelings about Terraform and Azure with me in the comment section below.

The article Automate your Deployments in Azure with Terraform! appeared first on dbi Blog.

Creating your private cloud using OpenStack – (1) – Introduction

Yann Neuhaus - Thu, 2025-01-16 04:54

While public clouds have been a trend for several years now, some companies are also looking into self-hosted solutions to build a private cloud. Some do this because of costs, others because they don’t want to be dependent on one or multiple public cloud providers, and others because they want to keep their data local. There are several solutions for this and, depending on the requirements, they might or might not be an option. Some of the more popular ones are:

The other major player, which is not in the list above, is OpenStack, which already started back in 2010. OpenStack is not a single product, but rather a set of products combined to provide a computing platform to deploy your workloads on top of either virtual machines, or containers, or a mix of both. There are plenty of sub-projects which bring in additional functionality; check here for a list. The project itself is hosted and supported by the OpenInfra Foundation, which should give sufficient trust that it will stay a pure open source project (maybe have a look at the OpenInfra supporting organizations as well, to get an idea of how widely it is adopted and supported).

The main issue with OpenStack is that it is kind of hard to get started with. There are so many services you might want to use that you will probably get overwhelmed at the beginning of your journey. To help you out of this a bit, we’ll create a minimal, quick and dirty OpenStack setup on virtual machines with just the core services:

We’ll do that step by step, because we believe that you should know the components which finally make up the OpenStack platform, or any other stack you’re planning to deploy. There is also DevStack, which is a set of scripts for the same purpose, but as it is scripted you’ll probably not gain the same knowledge as by doing it manually. In addition there is OpenStack-Helm, which deploys OpenStack on top of Kubernetes, but this as well is out of scope for this series of blog posts. Canonical offers MicroStack, which can also be used to set up a test environment quickly.

Automation is great and necessary, but it also comes with a potential downside: The more you automate, the more people you’ll potentially have who don’t know what is happening in the background. This is usually fine as long as the people with the background knowledge stay in the company, but if they leave you might have an issue.

As there are quite some steps to follow, this will not be single blog post, but split into parts:

  • Introduction (this blog post)
  • Preparing the controller and the compute node
  • Setting up and configuring Keystone
  • Setting up and configuring Glance and the Placement service
  • Setting up and configuring Nova
  • Setting up and configuring Neutron
  • Setting up and configuring Horizon, the Openstack dashboard

In the simplest configuration, the OpenStack platform consists of two nodes: a controller node and at least one compute node. Both of them will require two network interfaces, one for the so-called management network (as the name implies, this is for the management of the stack and communication with the internet), and the other one for the so-called provider network (this is the internal network that, e.g., the virtualized machines will use to communicate with each other).

When it comes to your choice of the Linux distribution you want to deploy OpenStack on, this is merely a matter of taste. OpenStack can be deployed on many distributions; the official documentation comes with instructions for Red Hat based distributions (which usually includes Alma Linux, Rocky Linux and Oracle Linux), SUSE based distributions (which includes openSUSE Leap), and Ubuntu (which should also work on Debian). For the scope of this blog series we’ll go with a minimal installation of Rocky Linux 9, just because I haven’t used it for some time.

OpenStack itself is released on a six-month release cycle and we’ll go with 2024.2 (Dalmatian), which will be supported until the beginning of 2026. As always, you should definitely go with the latest supported release so you have the most time to test and plan for future upgrades.

To give you an idea of what we’re going to start with, here is a graphical overview:

Of course this is very simplified, but it is enough to know for the beginning:

  • We have two nodes, one controller node and one compute node.
  • Both nodes have two network interfaces. The first one is configured using a 192.168.122.0/24 subnet and connected to the internet. The second one is not configured.
  • Both nodes are installed with a Rocky Linux 9 (9.5 as of today) minimal installation

We’ll add all the bits and pieces to this graphic while we install and configure the complete stack, don’t worry.

That’s it for the introduction. In the next post we’ll prepare the two nodes so we can continue to install and configure the OpenStack services on top of them.

The article Creating your private cloud using OpenStack – (1) – Introduction appeared first on dbi Blog.

Audit weirdness

Tom Kyte - Wed, 2025-01-15 12:06
I activated audit on some users just for unsuccessful connection attempts ( my purpose was finding who/what locks some users ). This is something I had already done many times, always fine. However on a 19 version I noted something I could not find an explanation for: it appears that an additional audit popped out of nowhere, a select audit on sys owned tables: HIST_HEAD$ and HISTGRM$. I had to change the audit_trail parameter, so no chance that they are some kind of inheritance from a previous audit ( and I truncated aud$ in order to have a clean start ). However I would never ever dream of touching sys owned tables, my first commandment is "you shall not touch sys owned tables" ( unless under Oracle support supervision, of course, and with the only exception of aud$ ). I perused the docs and Metalink but I was unable to find any relevant info on this. On the old 10 and 11 versions I never saw this. Is this a new kind of feature of version 19? I even tried to disable this audit, no luck: from inside the pluggable db it complains because the operation is not allowed, from the root it gives me another error ... I hope that my poor English is clear enough. Have a nice day. Mauro Papandrea
Categories: DBA Blogs

REGEXP_COUNT and REGEXP_LIKE and the search for a whitespace

Tom Kyte - Wed, 2025-01-15 12:06
Hello, As far as I understand it, Oracle processes regular expressions according to the POSIX standard, but also supports expressions that originate from Perl. Currently I had some misleading results when searching for a space. Theoretically, this should be found by the Perl-like expression \s. As I understand it, this is also noted in the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/regexp.html Table 10-5). However, this does not seem to work in my example. Is this a bug - or is there a reason for this (for me unexpected) result? Should I forget about the Perl expressions and use only the POSIX expressions instead? Intention: Looking for ORA-01555, followed by a colon, space or new line. <b>Unexpected result (expression wasn't found in string)</b> <code>SELECT 1 AS EXPECTED_RESULT, REGEXP_COUNT ('ORA-01555 caused by SQL statement below', 'ORA-01555[:|\s|\n]') AS REGEXPCNT FROM DUAL;</code> <b>Expected result if using :space: instead of \s</b> <code>SELECT 1 AS EXPECTED_RESULT, REGEXP_COUNT ('ORA-01555 caused by SQL statement below', 'ORA-01555[:|[:space:]|\n]') AS REGEXPCNT FROM DUAL;</code> Best regards, Marian
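
Not part of the original question, but as an illustration of the likely cause: inside a POSIX bracket expression the backslash is not special, so [:|\s|\n] matches the literal characters ':', '|', '\', 's' and 'n' rather than whitespace. The Perl-style \s is interpreted outside of bracket expressions, so an alternation group is one hedged way to express the same intention, assuming the \s extension behaves as documented:

SELECT REGEXP_COUNT ('ORA-01555 caused by SQL statement below',
                     'ORA-01555(:|\s)') AS REGEXPCNT   -- \s also covers the newline case
FROM DUAL;
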
Categories: DBA Blogs

Introducing YaK 2.0: The future of effortless PaaS deployments across Clouds and On-Premises

Yann Neuhaus - Wed, 2025-01-15 01:13

Hello, dear tech enthusiasts and cloud aficionados!

We’ve got some news that’s about to make your life (or your deployments, at least) a whole lot easier. Meet YaK 2.0, the latest game-changer in the world of automated multi-cloud PaaS deployment. After months of development, testing, troubleshooting, a fair share of meetings and way too much coffee, YaK is officially launching today.

Automated multi-cloud PaaS deployment. What’s the deal with YaK 2.0?

Because we believe IT professionals should spend their time on complex, value-added tasks and not on repetitive setups, we decided to develop YaK.
YaK is a framework that allows anyone to deploy any type of component on any platform, while ensuring quality and cost efficiency and reducing deployment time.

YaK 2.0 is your new best friend when it comes to deploying infrastructure that’s not just efficient but also identical across every platform you’re working with – be it multi-cloud or on-premises. Originating from the need to deploy multi-technology infrastructures quickly and effortlessly, YaK ensures your setup is consistent, whether you’re working with AWS, Azure, Oracle Cloud, or your own on-prem servers.

In simpler terms, YaK makes sure your deployment process is consistent and reliable, no matter where. Whether you’re scaling in the cloud or handling things in-house, YaK’s got your back.

Why you should have a look at YaK 2.0?

Here’s why we think YaK is going to become your favorite pet:

  • Flexibility: Deploy across AWS, Azure, OCI, or your own servers—YaK adapts to your infrastructure, making every platform feel like home.
  • Automation: Eliminate repetitive setups with automated deployments, saving you time and headaches.
  • Cost efficiency & speed: YaK cuts time-to-market, streamlining deployments for fast, standardized rollouts that are both cost-effective and secure.
  • Freedom from vendor lock-in: YaK is vendor-neutral, letting you deploy on your terms, across any environment.
  • Swiss software backed up by a consulting company (dbi services) with extensive expertise in deployments.

With this release, we’re excited to announce a major upgrade:

  • Sleek new user interface: YaK 2.0 now comes with a user-friendly interface, making it easier than ever to manage your deployments. Say hello to intuitive navigation.
  • Components: We’ve got components on our roadmap (available with an annual subscription), and we’ll be announcing them shortly: Oracle Database, PostgreSQL, MongoDB, and Kubernetes are already on the list and will be released soon.

Many more will follow… Stay tuned!

How does it work?

YaK Core is the open-source part and is the heart of our product, featuring Ansible playbooks and a custom plugin that provides a single inventory for all platforms, making your server deployments seamless across clouds like AWS, Azure, and OCI.
If you want to see for yourself, our GitLab project is available here!


YaK Components are the value-added part of the product and bring you expert-designed modules for deploying databases and application servers, with an annual subscription to dbi services.

Join the YaK pack

Explore the power of automated multi-cloud PaaS deployment with YaK 2.0 and experience a new level of efficiency and flexibility. We can’t wait for you to try it out and see just how much it can streamline your deployment process. Whether you’re a startup with big dreams or an established enterprise looking to optimize, YaK is here to make your life easier.

Our YaK deserved its own web page, check it out for more information, to contact us or to try it out (free demo environments will be available soon): yak4all.io

Wanna ride the YaK? Check out our user documentation to get started!
We promise it’ll be the smoothest ride you’ve had in a while.

We’re not just launching a product; we’re building a community. We’d love for you to chime in, share your experiences, and help us make YaK even better. Follow us on LinkedIn, join our community on GitLab, and let’s create something amazing together.

Feel free to reach out to us for more details or for a live presentation: info@dbi-services.com

Thanks for being part of this exciting journey. We can’t wait to see what you build with YaK.

The YaK Team

P.S. If you’re wondering about the name, well, yaks are known for being hardy, reliable, and able to thrive in any environment. Plus, they look pretty cool, don’t you think?

The article Introducing YaK 2.0: The future of effortless PaaS deployments across Clouds and On-Premises appeared first on dbi Blog.

New Oracle Database Appliance X11 series for 2025

Yann Neuhaus - Tue, 2025-01-14 15:22
Introduction

Oracle Database Appliance X10 is not so old, but X11 is already out, available to order.

Let’s find out what’s new for this 2025 series.

What is an Oracle Database Appliance?

ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it’s an x86-64 server with a dedicated software distribution including Linux, Oracle Grid Infrastructure (GI) with Automatic Storage Management and Real Application Clusters, Oracle database software, a Command Line Interface (CLI), a Browser User Interface (BUI) and a virtualization layer. The goal is to simplify the database lifecycle and maximize performance. Its market position is somewhere between OCI (the Oracle public cloud) and Exadata (the highest-level engineered system – a kind of big and rather expensive ODA). For most clients, ODA brings exactly the simplification and performance they need. For me, ODA has always been one of my favorite solutions, and undoubtedly a solution to consider. X11 doesn’t change the rules regarding my recommendations.

To address a large range of clients, ODA is available in 3 models: S, L and HA.

For Enterprise Edition (EE) users, as well as for Standard Edition 2 (SE2) users, ODA has a strong advantage over its competitors: capacity on demand licensing. With EE you can start with 1x EE processor license (2 enabled cores). With SE2 you can start with 1x SE2 processor license (8 enabled cores). You can later scale up by enabling additional cores according to your needs.

On the processor side

X11 still relies on the Epyc series for its processors, in line with Oracle’s recent long-term commitment to AMD.

Is the X11 CPU better than the X10 one? According to the data sheets, ODA moves from the Epyc 9334 to the Epyc 9J15. This latest version may be specific to Oracle as it doesn’t appear on the AMD website. Looking at the speed, the Epyc 9334 is clocked from 2.7GHz to 3.9GHz, and the Epyc 9J15 is clocked from 2.95GHz to 4.4GHz. As a consequence, you should probably expect a 10% performance increase per core. Not a huge bump, but X10 was quite a big improvement over the X9-2 Xeon processors. Each processor has 32 cores, and there is still 1 processor in the X11-S and 2 in the X11-L. As the X11-HA is basically two X11-L without local disks but connected to a disk enclosure, each node also has 2 Epyc processors.

Having a better CPU means better performance, but also fewer processor licenses needed for the same workload. It’s always something to keep in mind.

RAM and disks: same configuration as outgoing X10

Nothing new about RAM on X11, the same configurations are available, from 256GB on X11-S, and from 512GB on X11-L and each node of the X11-HA. You can double or triple the RAM size if needed on each server.

On X11-S and L models, data disks have the same size as X10 series: 6.8TB NVMe disks. X11-S has the same limitation as X10-S, only 2 disks and no possible expansion.

X11-L also comes with 2 disks, but you can add pairs of disks up to 8 disks, meaning 54TB of RAW storage. Be aware that only 4 disk slots are available on the front panel. Therefore, starting from the third pair of disks, disks are different: they are Add-In-Cards (AIC). It means that you will need to open your server to add or replace these disks, with a downtime for your databases.

X11-HA is not different compared to X10-HA, there is still a High Performance (HP) version and a High Capacity (HC) version, the first one being only composed of SSDs, the second one being composed of a mix of SSDs and HDDs. SSDs are 7.68TB each, and HDDs are 22TB each.

Network interfaces

Nothing new regarding network interfaces. You can have up to 3 of them (2 are optional), and for each you will choose between a quad-port 10GBase-T (copper) or a two-port 10/25GbE (SFP28). You should know that SFP28 won’t connect to a 1Gbps fiber network. But using SFPs for a network limited to 1Gbps would not make sense anyway.

Software bundle

Latest software bundle for ODA is 19.25, so you will use this latest one on X11. This software bundle is also compatible with X10, X9-2, X8-2 and X7-2 series. This bundle is the same for SE2 and EE editions.

What are the differences between the 3 models?

The X11-S is an entry level model for a small number of small databases.

The X11-L is much more capable and can take disk expansions. A big infrastructure with hundreds of databases can easily fit on several X11-L.

The X11-HA is for RAC users because High Availability is included. The disk capacity is much higher than single node models, and HDDs are still an option. With X11-HA, big infrastructures can be consolidated with a very small number of HA ODAs.

Model | DB Edition | Nodes | U | RAM | RAM max | RAW TB | RAW TB max | Base price
ODA X11-S | EE and SE2 | 1 | 2 | 256GB | 768GB | 13.6 | 13.6 | 24'816$
ODA X11-L | EE and SE2 | 1 | 2 | 512GB | 1536GB | 13.6 | 54.4 | 40'241$
ODA X11-HA (HP) | EE and SE2 | 2 | 8/12 | 2x 512GB | 2x 1536GB | 46 | 368 | 112'381$
ODA X11-HA (HC) | EE and SE2 | 2 | 8/12 | 2x 512GB | 2x 1536GB | 390 | 792 | 112'381$

You can run SE2 on X11-HA, but it’s much more an appliance dedicated to EE clients.
I’m not so sure that X11-HA still makes sense today compared to Exadata Cloud@Customer: study both options carefully if you need this kind of platform.

Is X11 more expensive than X10?

In the latest engineered systems price list (search exadata price list and you will easily find it), you will see X11 series alongside X10 series. Prices are the same, so there is no reason to order the old ones.

Which one should you choose?

If your databases can comfortably fit on the storage of the S model, don’t hesitate as you will probably never need more.

The most interesting model is still the new X11-L. The L is quite affordable, has a great storage capacity, and is upgradable if you don’t buy the full system at first.

If you still want/need RAC and its associated complexity, the HA may be for you but take a look at Exadata Cloud@Customer and compare the costs.

Don’t forget that you will need at least 2 ODAs for Disaster Recovery purposes, using Data Guard (EE) or Dbvisit Standby (SE2). No one would recommend buying a single ODA. Mixing S and L is OK, but I would not recommend mixing L and HA ODAs, just because some operations are handled differently when using RAC.

I would still prefer buying 2x ODA X11-L compared to 1x ODA X11-HA. NVMe speed, no RAC and the simplicity of a single server is definitely better in my opinion. Extreme consolidation is not always the best solution.

Conclusion

The ODA X11 series is a slight refresh of the X10 series, but if you were previously using older generations (for example the X7-2, which reaches end of life this year), switching to X11 will make a significant difference. In 2025, ODA is still a good platform for database simplification and consolidation. And it’s still very popular among our clients.

Useful links

X11-S/L datasheet

X11-HA datasheet

SE2 licensing rules on ODA X10 (apply to X11)

Storage and ASM on ODA X10-L (apply to X11)

The article New Oracle Database Appliance X11 series for 2025 appeared first on dbi Blog.

Vision LLM Structured Output with Sparrow

Andrejus Baranovski - Tue, 2025-01-14 06:06
I show how Sparrow UI Shell works with both image and PDF docs to process and extract structured data with Vision LLM (Qwen2) in the MLX backend. 

 

Hidden Parameter "_cleanup_rollback_entries" value

Tom Kyte - Mon, 2025-01-13 23:59
Hello, Tom. We have been facing a slow rollback after killing a job with a huge transaction, and we discovered several ways to speed up the rollback process. One possibility is altering the hidden parameter "_cleanup_rollback_entries" from the default value (100) to 400. However, I am still in doubt about the origin of the 400 value. I saw a couple of documents that also said to increase the value to 400, but with no explanation of why it must be 400. Therefore, please answer the questions below: 1) Why is the recommended value 400, and where does this value come from? 2) If a transaction larger than the one mentioned is killed, how can I ensure that the value (400) is still effective for speeding up the rollback process? Thanks in advance
Categories: DBA Blogs

Question on VARRAY

Tom Kyte - Mon, 2025-01-13 23:59
Hi, <code>CREATE TYPE phone_typ AS OBJECT ( country_code VARCHAR2(2), area_code VARCHAR2(3), ph_number VARCHAR2(7)); / CREATE TYPE phone_varray_typ AS VARRAY(5) OF phone_typ; / CREATE TABLE dept_phone_list ( dept_no NUMBER(5), phone_list phone_varray_typ); INSERT INTO dept_phone_list VALUES ( 100, phone_varray_typ( phone_typ ('01', '650', '5550123'), phone_typ ('01', '650', '5550148'), phone_typ ('01', '650', '5550192'))); / INSERT INTO dept_phone_list VALUES ( 200, phone_varray_typ( phone_typ ('02', '750', '5550122'), phone_typ ('02', '750', '5550141'), phone_typ ('02', '750', '5550195'))); /</code> I can execute the query below <code>select * from table(select phone_list from dept_phone_list where dept_no=100)</code> I can't execute it as below. Is there a way to do this? <code>select * from table(select phone_list from dept_phone_list) ORA-01427: single-row subquery returns more than one row</code> Thanks, Girish
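
Not part of the original question, but for illustration: the classic way to expand a collection column for every row is to correlate the TABLE() call with the row that owns the collection instead of feeding a multi-row subquery into TABLE(). A sketch against the tables defined above:

select d.dept_no, p.*
  from dept_phone_list d,
       table(d.phone_list) p;
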
Categories: DBA Blogs

Fetching encryption key from an external storage

Tom Kyte - Mon, 2025-01-13 23:59
We would like to encrypt data at rest in an Oracle database column (in one specific database table only) using an encryption key held externally in a vault. Does Oracle provide standard interfaces to make API calls? The encryption key should not persist in the database in any form.
Categories: DBA Blogs

NVIDIA SANA Model Local Installation with GUI - Step-by-Step Tutorial

Pakistan's First Oracle Blog - Mon, 2025-01-13 17:48

This video shows how to install NVIDIA SANA locally. SANA is a text-to-image framework that can efficiently generate images at up to 4096 × 4096 resolution.


Code:

git clone https://github.com/NVlabs/Sana.git && cd Sana

./environment_setup.sh sana

pip install huggingface_hub

huggingface-cli login  # get a read token from huggingface.co and also accept access to the Google Gemma model on Hugging Face

# official online demo
DEMO_PORT=15432 \
python3 app/app_sana.py \
    --share \
    --config=configs/sana_config/1024ms/Sana_1600M_img1024.yaml \
    --model_path=hf://Efficient-Large-Model/Sana_1600M_1024px/checkpoints/Sana_1600M_1024px.pth \
    --image_size=1024
   
Access demo at http://localhost:15432
Categories: DBA Blogs
