Feed aggregator
User Defined Extensions in SQLDeveloper Classic – something you can’t do in VSCode (yet)
I can tell you from personal experience that, when you reach a certain point in your life, you start looking for synonyms to use in place of “old”.
If you’re a venerable yet still useful Oracle IDE, for example, you may prefer the term “Classic”.
One thing SQLDeveloper Classic isn’t is obsolete. It still allows customisations that are not currently available in its shiny new successor – the SQLDeveloper extension for VSCode.
Fortunately, there’s no reason you can’t run both versions at the same time – unless your corporate IT has been overzealous and has either packaged VSCode in an MSI that prohibits the installation of extensions, or put a policy in place that prevents extensions from running because “security”.
Either way, SQLDeveloper Classic is likely to be around for a while.
One particular area where Classic still has the edge over its shiny new successor is user-defined extensions.
In this case – finding out the partition key and method of a table without having to wade through the DDL for that object…
The following query should give us what we’re after – details of the partitioning and sub-partitioning methods used for a table, together with a list of the partition and (if applicable) sub-partition key columns :
with part_cols as
(
select
owner,
name,
listagg(column_name, ', ') within group ( order by column_position) as partition_key_cols
from all_part_key_columns
group by owner, name
),
subpart_cols as
(
select
owner,
name,
listagg(column_name, ', ') within group ( order by column_position) as subpartition_key_cols
from all_subpart_key_columns
group by owner, name
)
select
tab.owner,
tab.table_name,
tab.partitioning_type,
part.partition_key_cols,
tab.subpartitioning_type,
sp.subpartition_key_cols
from all_part_tables tab
inner join part_cols part
on part.owner = tab.owner
and part.name = tab.table_name
left outer join subpart_cols sp
on sp.owner = tab.owner
and sp.name = tab.table_name
where tab.owner = 'SH'
and table_name = 'SALES'
order by 1,2
/
That’s quite a lot of code to type in – let alone remember – every time we want to check this metadata, so let’s just add an extra tab to the Table view in SQLDeveloper.
Using this query, I’ve created an xml file called table_partitioning.xml to add a tab called “Partition Keys” to the SQLDeveloper Tables view :
<items>
<item type="editor" node="TableNode" vertical="true">
<title><![CDATA[Partition Keys]]></title>
<query>
<sql>
<![CDATA[
with part_cols as
(
select
owner,
name,
listagg(column_name, ', ') within group ( order by column_position) as partition_key_cols
from all_part_key_columns
group by owner, name
),
subpart_cols as
(
select
owner,
name,
listagg(column_name, ', ') within group ( order by column_position) as subpartition_key_cols
from all_subpart_key_columns
group by owner, name
)
select
tab.owner,
tab.table_name,
tab.partitioning_type,
part.partition_key_cols,
tab.subpartitioning_type,
sp.subpartition_key_cols
from all_part_tables tab
inner join part_cols part
on part.owner = tab.owner
and part.name = tab.table_name
left outer join subpart_cols sp
on sp.owner = tab.owner
and sp.name = tab.table_name
where tab.owner = :OBJECT_OWNER
and table_name = :OBJECT_NAME
order by 1,2
]]>
</sql>
</query>
</item>
</items>
Note that we’re using the SQLDeveloper-supplied (and case-sensitive) variables :OBJECT_OWNER and :OBJECT_NAME so that the data returned is for the table that is in context when we open the tab.
If you are familiar with the process of adding User Defined Extensions to SQLDeveloper and want to get your hands on this one, just head over to the Github Repo where I’ve uploaded the relevant file.
You can also find instructions for adding the tab to SQLDeveloper as a user defined extension there.
They are…
- In SQLDeveloper, select the Tools menu then Preferences.
- Search for User Defined Extensions.
- Click the Add Row button, then click in the Type field and select Editor from the drop-down list.
- In the Location field, enter the full path to the xml file containing the extension you want to add.
- Hit OK.
- Restart SQLDeveloper.
When you select an object of the type for which this extension is defined (Tables in this example), you will see that the new tab has been added :

The new tab will work like any other :

The documentation for Extensions has been re-organised in recent years, but here are some links you may find useful :
As you’d expect, Jeff Smith has published a few articles on this topic over the years. Of particular interest are :
- An Introduction to SQLDeveloper Extensions
- Using XML Extensions in SQLDeveloper to Extend SYNONYM Support
- How To Add Custom Actions To Your User Reports
The Oracle-Samples GitHub Repo contains lots of example code and some decent instructions.
I’ve also covered this topic once or twice over the years and there are a couple of posts that you may find helpful :
- SQLDeveloper XML Extensions and auto-navigation includes code for a Child Tables tab, an updated version of which is also in the Git Repo.
- User-Defined Context Menus in SQLDeveloper
Pagination Cost – 2
This note is a follow-on to a note I published a couple of years ago, and was prompted by a question on the MOS community forum (needs an account) about the performance impact of using bind variables instead of literal values in a clause of the form: offset 20 rows fetch next 20 rows only
The issue on MOS may have been to do with the complexity of the view that was being queried, so I thought I’d take a look at what happened when I introduced bind variables to the simple tests from the previous article. Here’s the (cloned) script with the necessary modification:
rem
rem Script: fetch_first_offset_3.sql
rem Author: Jonathan Lewis
rem Dated: May 2025
rem
create table t1
as
select
*
from
all_objects
where rownum <= 50000
order by
dbms_random.value
/
create index t1_i1 on t1(object_name);
alter session set statistics_level = all;
set serveroutput off
column owner format a32
column object_type format a12
column object_name format a32
spool fetch_first_offset_3.lst
prompt ===================================
prompt SQL with literals (non-zero offset)
prompt ===================================
select
owner, object_type, object_name
from
t1
order by
object_name
offset
10 rows
fetch next
20 rows only
/
select * from table(dbms_xplan.display_cursor(format=>'+cost allstats last peeked_binds'));
variable offset_size number
variable fetch_size number
begin
:offset_size := 10; :fetch_size := 20;
end;
/
prompt ==============
prompt SQL with binds
prompt ==============
alter session set events '10053 trace name context forever';
select
owner, object_type, object_name
from
t1
order by
object_name
offset
:offset_size rows
fetch next
:fetch_size rows only
/
alter session set events '10053 trace name context off';
select * from table(dbms_xplan.display_cursor(format=>'+cost allstats last peeked_binds'));
I’ve created a simple data set by copying 50,000 rows from the view all_objects and creating an index on the object_name column then, using two different strategies, I’ve selected the 21st to 30th rows in order of object_name. The first strategy uses literal values in the offset and fetch first/next clauses to skip 10 rows then fetch 20 rows; the second strategy creates a couple of bind variables to specify the offset and fetch sizes.
Here’s the execution plan (pulled from memory, with the rowsource execution statistics enabled) for the example using literal values:
SQL_ID d7tm0uhcmpwc4, child number 0
-------------------------------------
select owner, object_type, object_name from t1 order by object_name
offset 10 rows fetch next 20 rows only
Plan hash value: 3254925009
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 33 (100)| 20 |00:00:00.01 | 35 | 333 |
|* 1 | VIEW | | 1 | 30 | 33 (0)| 20 |00:00:00.01 | 35 | 333 |
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 30 | 33 (0)| 30 |00:00:00.01 | 35 | 333 |
| 3 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 50000 | 33 (0)| 30 |00:00:00.01 | 35 | 333 |
| 4 | INDEX FULL SCAN | T1_I1 | 1 | 30 | 3 (0)| 30 |00:00:00.01 | 5 | 28 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=30 AND
"from$_subquery$_002"."rowlimit_$$_rownumber">10))
2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME")<=30)
As you can see, the optimizer has used (started) an index full scan to access the rows in order of object_name, but the A-Rows column tells you that it has passed just 30 rowids (the 10 to be skipped plus the 20 to be fetched) up to its parent (table access) operation, and the table access operation has passed the required columns up to its parent (window nosort stopkey) which can conveniently discard the first 10 rows that arrive and pass the remaining 20 rows up and on to the client without actually doing any sorting.
You can also see in the Predicate Information that the window operation has used the row_number() function to limit itself to the first 30 (i.e. 10 + 20) rows, passing them up to its parent where the “30 rows” predicate is repeated with a further predicate that eliminates the first 10 of those rows, leaving only the 20 rows requested.
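In other words the literal version behaves as if it had been written with an explicit row_number() – the following is only a rough sketch of the transformation (using a simple rn alias rather than the generated rowlimit_$$_rownumber name):
select  owner, object_type, object_name
from    (
        select
                owner, object_type, object_name,
                row_number() over (order by object_name) rn
        from    t1
        )
where   rn <= 30        -- offset (10) + fetch (20)
and     rn >  10        -- discard the offset rows
order by
        object_name
/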
So what does the plan look like when we switch to bind variables:
SQL_ID 5f85rkjc8bv8a, child number 0
-------------------------------------
select owner, object_type, object_name from t1 order by object_name
offset :offset_size rows fetch next :fetch_size rows only
Plan hash value: 1024497473
--------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 933 (100)| 20 |00:00:00.03 | 993 | 990 | | | |
|* 1 | VIEW | | 1 | 50000 | 933 (1)| 20 |00:00:00.03 | 993 | 990 | | | |
|* 2 | WINDOW SORT PUSHED RANK| | 1 | 50000 | 933 (1)| 30 |00:00:00.03 | 993 | 990 | 11264 | 11264 |10240 (0)|
|* 3 | FILTER | | 1 | | | 50000 |00:00:00.02 | 993 | 990 | | | |
| 4 | TABLE ACCESS FULL | T1 | 1 | 50000 | 275 (1)| 50000 |00:00:00.01 | 993 | 990 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber" <= GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:OFFSET_SIZE))),0)+:FETCH_SIZE AND
"from$_subquery$_002"."rowlimit_$$_rownumber" > :OFFSET_SIZE))
2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME") <= GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:OFFSET_SIZE))),0)+:FETCH_SIZE)
3 - filter(:OFFSET_SIZE < GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:OFFSET_SIZE))),0)+:FETCH_SIZE)
This looks like bad news – we haven’t taken advantage of an index to avoid visiting and sorting all the rows in the table: operation 4 shows us a table scan passing 50,000 rows through a filter up to the window sort at operation 2, which discards the 49,970 rows we definitely don’t want before passing the remaining 30 rows to the view operation that discards the first 10 that we needed to skip. Why don’t we see the far more efficient index scan?
You may have noticed a couple of oddities in the Predicate Information.
- Wherever you see the :offset_size bind variable the optimizer has wrapped it in to_number(to_char()) – why?! My first thought about this was that the double conversion made it impossible for the optimizer to peek at the value and use it to get a better estimate of cost, but that’s (probably) not why the index full scan disappeared.
- The offset and fetch first are both supposed to be numeric (according to the tram-tracks in the manual) so it seems a little strange that Oracle treats just one of them to a double conversion.
- What is that filter() in operation 3 actually trying to achieve? If you tidy up the messy bits it’s just checking two bind variables to make sure that the offset is less than the offset plus fetch size. This is just an example of “conditional SQL”. In this case it’s following the pattern for “columnX between :bind1 and :bind2” – allowing Oracle to short-circuit the sub-plan if the value of bind2 is less than that of bind1. (It wasn’t needed for the example where we used literals because Oracle could do the arithmetic at parse time and see that 10 was – and always would be – less than 30.)
- What are the checks actually saying about the optimizer’s (or developer’s) expectation for the way you might use the feature? The generated SQL actually allows for negative, non-integer values here. Negative offsets are replaced by zero, negative fetch sizes result in the query short-circuiting and returning no data (in fact any fetch size strictly less than 1 will return no rows). See the sketch after this list.
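As a quick illustration of that last point, here’s a sketch (re-using the bind variables declared earlier in the test script) of what happens at the extremes:
begin
        :offset_size := -10; :fetch_size := 20;
end;
/
-- negative offset: greatest(floor(...),0) turns it into zero, so the first 20 rows come back
select  owner, object_type, object_name
from    t1
order by
        object_name
offset  :offset_size rows
fetch next
        :fetch_size rows only
/
begin
        :offset_size := 10; :fetch_size := 0;
end;
/
-- fetch size less than 1: the "conditional SQL" filter evaluates to false, so no rows come back
select  owner, object_type, object_name
from    t1
order by
        object_name
offset  :offset_size rows
fetch next
        :fetch_size rows only
/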
Hoping to find further clues about the poor choice of plan, I took a look at the “UNPARSED QUERY” from the CBO (10053) trace, and cross-checked against the result from using the dbms_utility.expand_sql_text() procedure; the results were (logically, though not cosmetically) the same. Here, with a little extra cosmetic tidying, is the SQL the optimizer actually works with:
select
a1.owner owner,
a1.object_type object_type,
a1.object_name object_name
from (
select
a2.owner owner,
a2.object_type object_type,
a2.object_name object_name,
a2.object_name rowlimit_$_0,
row_number() over (order by a2.object_name) rowlimit_$$_rownumber
from
test_user.t1 a2
where
:b1 < greatest(floor(to_number(to_char(:b2))),0)+:b3
) a1
where
a1.rowlimit_$$_rownumber <= greatest(floor(to_number(to_char(:b4))),0) + :b5
and a1.rowlimit_$$_rownumber > :b6
order by
a1.rowlimit_$_0
;
It’s fascinating that the optimizer manages to expand the original two bind variables to six bind variables (lots of duplication) and then collapse them back to two named bind variables for the purposes of reporting the Predicate Information. For reference:
- b1 = b3 = b5 = fetch_size
- b2 = b4 = b6 = offset_size
Line 15 of that text (the inner where clause: :b1 < greatest(floor(to_number(to_char(:b2))),0)+:b3) is clearly the source of the “conditional SQL” filter predicate at operation 3 of the previous execution plan, so I thought I’d try running this query (pre-defining all 6 bind variables correctly) to see if I could get the index-driven plan by modifying that line.
My first attempt was simply to remove the (highly suspect) to_number(to_char()) – but that didn’t help. Then I thought I’d make it really simple by getting rid of the greatest(floor()) functions – and that didn’t help either. Finally I decided to change what was now :b4 + :b5 to a single bind variable :b7 with the right value – and that’s when I got the plan I wanted:
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 20 |00:00:00.01 | 35 |
|* 1 | VIEW | | 1 | 30 | 20 |00:00:00.01 | 35 |
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 30 | 30 |00:00:00.01 | 35 |
|* 3 | FILTER | | 1 | | 30 |00:00:00.01 | 35 |
| 4 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 50000 | 30 |00:00:00.01 | 35 |
| 5 | INDEX FULL SCAN | T1_I1 | 1 | 30 | 30 |00:00:00.01 | 5 |
--------------------------------------------------------------------------------------------------
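Putting those three steps together, the modified statement looked something like the following (a sketch reconstructed from the description above, with :b7 supplied with the value of the offset plus the fetch size):
select
        a1.owner owner,
        a1.object_type object_type,
        a1.object_name object_name
from (
        select
                a2.owner owner,
                a2.object_type object_type,
                a2.object_name object_name,
                a2.object_name rowlimit_$_0,
                row_number() over (order by a2.object_name) rowlimit_$$_rownumber
        from
                test_user.t1 a2
        where
                :b1 < :b2 + :b3                 -- function wrappers stripped from the conditional filter
        ) a1
where
        a1.rowlimit_$$_rownumber <= :b7         -- single bind replacing greatest(floor(...(:b4)...),0) + :b5
and     a1.rowlimit_$$_rownumber > :b6
order by
        a1.rowlimit_$_0
;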
Of course this doesn’t help answer the question – how do I make the query faster – it just highlights where in the current transformation the performance problem appears. Maybe it’s a pointer to some Oracle developer that there’s some internal code that could be reviewed – possibly for a special (but potentially common) pattern. Perhaps there’s a point of interception where a fairly small, isolated piece of code could be modified to give the optimizer the simpler expression during optimisation.
As for addressing the problem of finding a “client-oriented” mechanism, I found two solutions for my model. First add the (incomplete, but currently adequate) hint /*+ index(t1) */ to the SQL to get:
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 50311 (100)| 20 |00:00:00.01 | 25 |
|* 1 | VIEW | | 1 | 50000 | 50311 (1)| 20 |00:00:00.01 | 25 |
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 50000 | 50311 (1)| 20 |00:00:00.01 | 25 |
|* 3 | FILTER | | 1 | | | 20 |00:00:00.01 | 25 |
| 4 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 50000 | 50311 (1)| 20 |00:00:00.01 | 25 |
| 5 | INDEX FULL SCAN | T1_I1 | 1 | 50000 | 339 (1)| 20 |00:00:00.01 | 5 |
---------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0
)+:B2 AND "from$_subquery$_002"."rowlimit_$$_rownumber">:B1))
2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME")<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:
B2)
3 - filter(:B1<GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:B2)
As you can see we now do the index full scan, but it stops after only 20 rowids have been passed up the plan. This isn’t a good solution, of course, since (a) it’s specific to my model and (b) the estimates still show the optimizer working on the basis of handling and forwarding 50,000 rows (E-rows).
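For reference, the hinted version is just the original bind-variable statement with the hint placed after the select keyword – a sketch (the hint names only the table, not a specific index, which is one reason it’s “incomplete”):
select  /*+ index(t1) */
        owner, object_type, object_name
from    t1
order by
        object_name
offset  :offset_size rows
fetch next
        :fetch_size rows only
/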
The alternative was to tell the optimizer that since we’re doing pagination queries we’re only expecting to fetch a little data each time we execute the query – let’s add the hint /*+ first_rows(30) */ which gives us:
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 33 (100)| 20 |00:00:00.01 | 25 |
|* 1 | VIEW | | 1 | 30 | 33 (0)| 20 |00:00:00.01 | 25 |
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 30 | 33 (0)| 20 |00:00:00.01 | 25 |
|* 3 | FILTER | | 1 | | | 20 |00:00:00.01 | 25 |
| 4 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 50000 | 33 (0)| 20 |00:00:00.01 | 25 |
| 5 | INDEX FULL SCAN | T1_I1 | 1 | 30 | 3 (0)| 20 |00:00:00.01 | 5 |
---------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0
)+:B2 AND "from$_subquery$_002"."rowlimit_$$_rownumber">:B1))
2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME")<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:
B2)
3 - filter(:B1<GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:B2)
This is likely to be a much better strategy than “micro-management” hinting; and it may even be appropriate to set the optimizer_mode at the session level with a logon trigger: first_rows_10 or first_rows_100 could well be a generally acceptable choice if most of the queries tend to be about reporting the first few rows of a large result set. A key point to note is that both E-Rows and Cost are reasonably representative of the work done, while the corresponding figures when we hinted the use of the index were wildly inaccurate.
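By way of illustration, here’s a minimal sketch of the logon-trigger option (the trigger name and the user check are purely hypothetical, and you’d want to restrict it to the relevant account rather than apply it to every session):
create or replace trigger paginated_app_logon
after logon on database
begin
        -- switch the pagination-heavy application account to a first_rows optimizer model
        if sys_context('userenv', 'session_user') = 'PAGINATED_APP' then
                execute immediate 'alter session set optimizer_mode = first_rows_100';
        end if;
end;
/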
MacIntyre, Memory Eternal
There are a handful of living thinkers who have made me re-think fundamental presuppositions that I held consciously (or not) for some time in my early life. Each, in his own way, a genius - but in particular a genius in re-shaping the conceptualization of an intellectual space for me. Until yesterday they were, in no particular order, Noam Chomsky, David Bentley Hart, John Milbank, Michael Hudson, Alain de Benoist and Alasdair MacIntyre. We recently lost Rene Girard. Now MacIntyre is no longer with us. His precise analytics, pulling insights from thinkers ranging from Aristotle to Marx, was rarely matched in the contemporary world. The hammer blow that After Virtue was to so many of my assumptions and beliefs is hard to describe - my entire view of the modern project, especially around ethics, was undone. But it was also his wisdom about the human animal and what really mattered in terms of being a human being that set him apart.
SQLDay 2025 – Wrocław – Sessions
After a packed workshop day, the SQLDay conference officially kicked off on Tuesday with a series of sessions covering cloud, DevOps, Microsoft Fabric, AI, and more. Here is a short overview of the sessions I attended on the first day of the main conference.
Morning Kick-Off: Sponsors and Opening

The day started with a short introduction and a presentation of the sponsors. A good opportunity to acknowledge the partners who made this event possible.
Session 1: Composable AI and Its Impact on Enterprise Architecture
This session (by Felix Mutzl) provided a strategic view of how AI is becoming a core part of enterprise architecture.
Session 2: Migrate Your On-Premises SQL Server Databases to Microsoft Azure

A session (by Edwin M Sarmiento) that addressed one of the most common challenges for many DBAs and IT departments: how to migrate your SQL Server workloads to Azure. The speaker shared a well-structured approach, highlighting the key elements to consider before launching a migration project:
- Team involvement: Ensure all stakeholders are aligned.
- Planning: Migration isn’t just about moving data; dependencies must be mapped.
- Cost: Evaluate Azure pricing models and estimate consumption.
- Testing: Validate each stage in a non-production environment.
- Monitoring: Post-migration monitoring is essential for stability.
Session 3: Fabric Monitoring Made Simple: Built-In Tools and Custom Solutions
This session (by Just Blindbaek) looked at how Microsoft Fabric is gaining traction quickly, and with it comes the need for robust monitoring. It explored native tools like Monitoring Hub, the Admin Monitoring workspace, and Workspace Monitoring. In addition, the speaker introduced FUAM (Fabric Unified Admin Monitoring), an open-source solution supported by Microsoft that complements the built-in options.
Session 4: Database DevOps…CJ/CD: Continuous Journey or Continuous Disaster?

A hands-on session (by Tonie Huizer) about introducing DevOps practices in a legacy team that originally used SVN and had no automation. The speaker shared lessons learned from introducing:
- Sprint-based development cycles
- Git branching strategies
- Build and release pipelines
- Manual vs Pull Request releases
- Versioned databases and IDPs
It was a realistic look at the challenges and practical steps involved when modernizing a database development process.
Session 5: (Developer) Productivity, Data Intelligence, and Building an AI Application
This session (from Felix Mutzl) shifted the focus from general AI to productivity-enhancing solutions. Built on Databricks, the use case demonstrated how to combine AI models with structured data to deliver real-time insights to knowledge workers. The practical Databricks examples were especially helpful to visualize the architecture behind these kinds of applications.
Session 6: Azure SQL Managed Instance Demo Party

The final session of the day was given by Dani Ljepava and Sasa Popovic and was more interactive and focused on showcasing the latest Azure SQL Managed Instance features. Demos covered:
- Performance and scaling improvements
- Compatibility for hybrid scenarios
- Built-in support for high availability and disaster recovery
The session served as a great update on where Azure SQL MI is heading and what tools are now available for operational DBAs and cloud architects.
Thank you, Amine Haloui.
The article SQLDay 2025 – Wrocław – Sessions appeared first on dbi Blog.
SQLDay 2025 – Wrocław – Workshops
I had the chance to attend SQLDay 2025 in Wrocław, one of the largest Microsoft Data Platform conferences in Central Europe. The event gathers a wide range of professionals, from database administrators to data engineers and Power BI developers. The first day was fully dedicated to pre-conference workshops. The general sessions are scheduled for the following two days.

In this first post, I’ll focus on Monday’s workshops.
Day 1 – Workshop Sessions
The workshop day at SQLDay is always a strong start. It gives attendees the opportunity to focus on a specific topic for a full day. This year, several tracks were available in parallel, covering various aspects of the Microsoft data stack: from Power BI and SQL Server to Azure and Microsoft Fabric.

Here are the sessions that were available:
Advanced DAX
This session was clearly targeted at experienced Power BI users. Alberto Ferrari delivered an in-depth look into evaluation context, expanded tables, and advanced usage of CALCULATE. One focus area was the correct use of ALLEXCEPT and how it interacts with complex relationships.
Execution Plans in Depth
For SQL Server professionals interested in performance tuning, this workshop provided a detailed walkthrough of execution plans. Hugo Kornelis covered a large number of operators, explained how they work internally, and showed how to analyze problematic queries. The content was dense but well-structured.
Becoming an Azure SQL DBA
This workshop was led by members of the Azure SQL product team. It focused on the evolution of the DBA role in cloud environments. The agenda included topics such as high availability in Azure SQL, backup and restore, cost optimization, and integration with Microsoft Fabric. It was designed to help attendees understand the shared responsibility model and how traditional DBA tasks are shifting in cloud scenarios.
Enterprise Databots
This workshop explored how to build intelligent DataBots using Azure and Databricks. The session combined theoretical content with practical labs. The goal was to implement chatbots capable of interacting with SQL data and leveraging AI models. Participants had the opportunity to create bots from scratch.
Analytics Engineering with dbt
This session was focused on dbt (data build tool) and its role in ELT pipelines. It was well-suited for data analysts and engineers looking to standardize and scale their workflows.
Build a Real-time Intelligence Solution in One Day
This workshop showed how to implement real-time analytics solutions using Microsoft Fabric. It covered Real-Time Hub, Eventstream, Data Activator, and Copilot.
From Power BI Developer to Fabric Engineer
This workshop addressed Power BI developers looking to go beyond the limitations of Power Query and Premium refresh schedules. The session focused on transforming reports into scalable Fabric-based solutions using Lakehouse, Notebooks, Dataflows, and semantic models. A good starting point for anyone looking to shift from report building to full data engineering within the Microsoft ecosystem.
Thank you, Amine Haloui.
The article SQLDay 2025 – Wrocław – Workshops appeared first on dbi Blog.