Feed aggregator

Install NVIDIA Ingest Locally and Use it with Thousands of Documents

Pakistan's First Oracle Blog - Sat, 2025-01-04 01:24

 This video shares step-by-step instructions to install NVIDIA Ingest locally and use it with PDF, Word, and PowerPoint documents.


Code:



Prerequisites:
==============

-- Install Docker
-- Get an NGC API key from https://ngc.nvidia.com/
-- Get Early Access from https://developer.nvidia.com/nemo-microservices-early-access/join

Phase 1: Configure NV-INGEST Server:
====================================

Step 1:

git clone https://github.com/nvidia/nv-ingest && cd nv-ingest

Step 2:

docker login nvcr.io

Username: $oauthtoken
Password: <Your NGC API Key>

Step 3:

Make sure NVIDIA is set as your default container runtime before running the docker compose command:
sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
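
After updating the runtime configuration, restart the Docker daemon so the change takes effect (assuming a systemd-based host):

sudo systemctl restart docker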

Step 4:

docker compose up


Phase 2: Configure NV-INGEST Client:
====================================

Step 1:


conda env create --name nv-ingest-dev --file ./conda/environments/nv_ingest_environment.yml
conda activate nv-ingest-dev

cd client
pip install .

Step 2:

nv-ingest-cli \
  --doc ./data/multimodal_test.pdf \
  --output_directory ./processed_docs \
  --task='extract:{"document_type": "pdf", "extract_method": "pdfium", "extract_tables": "true", "extract_images": "true"}' \
  --client_host=localhost \
  --client_port=7670

 
Where to find output?
======================

After the ingestion steps above have completed, you should be able to find text and image subfolders inside your processed_docs folder. Each will contain JSON-formatted extracted content and metadata.

  ls -R processed_docs
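
To quickly inspect what was extracted, a small Python sketch like the one below can help (the sub-folder names and JSON layout are assumptions and depend on the nv-ingest version, so adjust it to what you actually find under processed_docs):

import json
from pathlib import Path

# Print a one-line summary for every JSON file produced by the ingestion run.
for json_file in Path("processed_docs").rglob("*.json"):
    with open(json_file) as f:
        data = json.load(f)
    count = len(data) if hasattr(data, "__len__") else "n/a"
    print(f"{json_file}: {type(data).__name__} with {count} top-level items")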
Categories: DBA Blogs

ORA-46373 - Unified Audit Policies

Pete Finnigan - Thu, 2025-01-02 17:15
I have a requirement to pre-create unified audit policies and then add the ACTIONS, PRIVILEGES, ROLES etc after they have been created based on some stored audit rules for a customer. We tried to create a policy with no actions....[Read More]

Posted by Pete On 02/01/25 At 02:51 PM

Categories: Security Blogs

Documentum – Login through OTDS without oTExternalID3

Yann Neuhaus - Wed, 2025-01-01 09:00

As you might know, Documentum “deprecated” (in reality, disabled completely) the different Authentication Plugins that were bundled with a Documentum Server. That means that with recent versions of Documentum, you cannot log in to your LDAP-managed account anymore without having configured an OTDS and integrated it with your Documentum Server. After you have installed the OTDS and configured it to work with Documentum, you might be faced with an annoying behavior that makes it impossible to log in. This is because, by default, it only supports one specific configuration for the user_login_name (i.e. oTExternalID3). There is a workaround, but as far as I know it’s not documented, so I’m writing this blog to share that information.

When logging in to a Documentum Server, using the “connect” iAPI command, the Repository will verify if the user_login_name exists. If yes, it will send the Authentication request to the JMS, which will contact the OTDS with the details provided. The OTDS will perform the authentication with whatever Identity Provider you configured inside it and return the result to the JMS, which will then confirm the details to the Repository to either allow or deny the login. In this case, it doesn’t matter if the user_source of the dm_user is configured with “LDAP” or “OTDS”. Both will behave in the same way and the request will be sent to the JMS and then the OTDS. That’s the theory, but there are some bugs / caveats that I might cover in another blog.

I. OTDS Synchronization with default configuration

To do some testing or if you are setting-up a freshly installed Documentum Repository (i.e. no previous LDAP integrations), you might want to keep things simple and therefore you would most probably end-up using the default configuration.

The default User Mapping configuration for an OTDS Resource, for Documentum, might be something like:

    Resource Attribute            >> OTDS Attribute          >> Format
    __NAME__                      >> cn                      >> %s
    AccountDisabled               >> ds-pwp-account-disabled >> %s
    client_capability             >>                         >> 0
    create_default_cabinet        >>                         >> F
    user_address                  >> mail                    >> %s
    user_global_unique_id         >> oTObjectGUID            >> %s
    user_login_name               >> oTExternalID3           >> %s
    user_name                     >> cn                      >> %s
    user_privileges               >>                         >> 0
    user_rename_enabled           >>                         >> F
    user_rename_unlock_locked_obj >>                         >> T
    user_type                     >>                         >> dm_user
    user_xprivileges              >>                         >> 0

Please note that the default value for “user_login_name” is “oTExternalID3”. In addition to the attributes mapped from the AD / LDAP, OTDS defines some internal attributes that you can use, and this is one of them. For example, if a cn/sAMAccountName has a value of “MYUSERID”, then:

  • oTExternalID1 == MYUSERID
  • oTExternalID2 == MYUSERID@OTDS-PARTITION-NAME
  • oTExternalID3 == MYUSERID@DOMAIN-NAME.COM
  • oTExternalID4 == DOMAIN\MYUSERID

Therefore, in this case, with the default configuration, you would need to use “MYUSERID@DOMAIN-NAME.COM” to be able to log in to Documentum. Nothing else would work as your dm_user would be synchronized/created/modified to have a user_login_name value of “MYUSERID@DOMAIN-NAME.COM”. As a side note, the “%s” in the Format column means to keep the formatting/case from the source attribute. In most AD / LDAP setups, the cn/sAMAccountName would be in uppercase, so you would only be able to log in with the uppercase details. There is a parameter that you can set in the server.ini to have a case-insensitive Repository, and another one in the JMS, so you might want to take a look at those.

Here, I’m setting an AD password in an environment variable and then fetching a dm_user details to show you the current content, before triggering a login attempt (using the “connect” iAPI command):

[dmadmin@cs-0 logs]$ read -s -p "  --> Please enter the AD Password: " ad_passwd
  --> Please enter the AD Password:
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> retrieve,c,dm_user where upper(user_login_name) like 'MYUSERID%'
> get,c,l,user_name
> get,c,l,user_login_name
> EOC

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2020. OpenText Corporation
        All rights reserved.
        Client Library Release 20.2.0000.0082

Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info:  "Session 011234568006fe39 started for user dmadmin."

Connected to OpenText Documentum Server running Release 20.2.00013.0135  Linux64.Oracle
Session id is s0
API> ...
1112345680001d00
API> ...
MYUSERID
API> ...
MYUSERID@DOMAIN-NAME.COM
API> Bye
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,T
> connect,REPO_NAME,MYUSERID@DOMAIN-NAME.COM,dm_otds_password=${ad_passwd}
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,F
> EOC

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2020. OpenText Corporation
        All rights reserved.
        Client Library Release 20.2.0000.0082

Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info:  "Session 011234568006fe40 started for user dmadmin."

Connected to OpenText Documentum Server running Release 20.2.00013.0135  Linux64.Oracle
Session id is s0
API> ...
q0
API> ...
s1
API> ...
q0
API> Bye
[dmadmin@cs-0 logs]$

As you can see above, the result of the “connect” command is “s1”, which means the session is opened and Documentum was able to verify through the OTDS that the login is correct. On the JMS, there is an “otdsauth.log” file that gives you this kind of information (it might give a bit more depending on the Documentum Server version used):

[dmadmin@cs-0 logs]$ cat otdsauth.log
...
2025-01-01 13:37:26,417 UTC DEBUG [root] (default task-6) In com.documentum.cs.otds.OTDSAuthenticationServlet
2025-01-01 13:37:26,780 UTC DEBUG [root] (default task-6) userId: MYUSERID@DOMAIN-NAME.COM
2025-01-01 13:37:26,782 UTC DEBUG [root] (default task-6) Password Auth Success: MYUSERID@DOMAIN-NAME.COM
[dmadmin@cs-0 logs]$

The Repository logs will also show the trace_authentication details and the OTDS will also have a successful authentication attempt in its logs. So, all is well in a perfect world, right?

II. OTDS Synchronization with updated configuration

When working with an existing Repository that was initially set up with LDAP Sync and Auth, you might have a “simple” configuration that defined the user_login_name as the cn/sAMAccountName attribute from the Active Directory. In this case, you probably don’t want to change anything after the integration of the OTDS… After all, the OTDS is supposed to simplify the configuration, not complicate it. Therefore, you would set up the OTDS to integrate (Synchronized Partition or Non-Synchronized one) with your AD / LDAP and then create a Resource that would replicate and match the exact details of your existing users. Even on a freshly installed Repository without a previous LDAP integration, you might choose to log in with “MYUSERID” (or “myuserid”) instead of “MYUSERID@DOMAIN-NAME.COM”. The OTDS allows you to configure that, so users can be synchronized to Documentum however you want.

To achieve that, you would need to change the User Mapping configuration a bit, to keep your previous login information and avoid messing with the existing dm_user details. For example, you might want to change the client_capability, user_login_name, user_name and some other things. Here is an example of a configuration that synchronizes the users with the cn/sAMAccountName from your AD / LDAP, in lowercase; please note the changes marked with an asterisk (*):

    Resource Attribute            >> OTDS Attribute          >> Format
    __NAME__                      >> cn                      >> %l (*)
    AccountDisabled               >> ds-pwp-account-disabled >> %s
    client_capability             >>                         >> 2 (*)
    create_default_cabinet        >>                         >> F
    user_address                  >> mail                    >> %s
    user_global_unique_id         >> oTObjectGUID            >> %s
    user_login_name               >> cn (*)                  >> %l (*)
    user_name                     >> displayName (*)         >> %s
    user_privileges               >>                         >> 0
    user_rename_enabled           >>                         >> T (*)
    user_rename_unlock_locked_obj >>                         >> T
    user_type                     >>                         >> dm_user
    user_xprivileges              >>                         >> 32 (*)

The documentation mentions in some places that you should have the same value for both __NAME__ and user_name, but I’m not sure that’s really required, as I have some customers with different values and it works anyway. It’s pretty common for customers to have the same value for cn and sAMAccountName and to store the display name into, well, the displayName attribute… On the Documentum side, some customers will use cn as the user_name, but some others will use displayName instead. The user_name is, after all, a kind of display name, so I don’t really understand why OTDS would require both __NAME__ and user_name to be the same. It should instead rely on the user_login_name, no?

After consolidating the OTDS Resource, you should be able to see the correct user_login_name as it was before (with the LDAP Sync job). What’s the purpose of this blog then? Well, the OTDS allows you to change the mapping as you see fit, so that you can replicate exactly what you used to have with an LDAP Sync. But you cannot log in anymore…

After the modification of the OTDS Resource User Mapping and its consolidation, here I’m trying to login again (with “myuserid” instead of “MYUSERID@DOMAIN-NAME.COM”) to show the difference in behavior:

[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> retrieve,c,dm_user where upper(user_login_name) like 'MYUSERID%'
> get,c,l,user_name
> get,c,l,user_login_name
> EOC

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2020. OpenText Corporation
        All rights reserved.
        Client Library Release 20.2.0000.0082

Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info:  "Session 011234568006fe48 started for user dmadmin."

Connected to OpenText Documentum Server running Release 20.2.00013.0135  Linux64.Oracle
Session id is s0
API> ...
1112345680001d00
API> ...
LastName (Ext) FirstName
API> ...
myuserid
API> Bye
[dmadmin@cs-0 logs]$
[dmadmin@cs-0 logs]$ iapi REPO_NAME -Udmadmin -Pxxx << EOC
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,T
> connect,REPO_NAME,myuserid,dm_otds_password=${ad_passwd}
> apply,c,NULL,SET_OPTIONS,OPTION,S,trace_authentication,VALUE,B,F
> EOC

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2020. OpenText Corporation
        All rights reserved.
        Client Library Release 20.2.0000.0082

Connecting to Server using docbase REPO_NAME
[DM_SESSION_I_SESSION_START]info:  "Session 011234568006fe4f started for user dmadmin."

Connected to OpenText Documentum Server running Release 20.2.00013.0135  Linux64.Oracle
Session id is s0
API> ...
q0
API> ...
[DM_SESSION_E_AUTH_FAIL]error:  "Authentication failed for user myuserid with docbase REPO_NAME."


API> ...
q1
API> Bye
[dmadmin@cs-0 logs]$

This time the authentication fails. If you look at the Repository logs, you can see the user is detected properly, and the Repository starts the authentication with the OTDS (1st line below). But when the result comes back (2nd and 3rd lines below), it says that it failed:

2025-01-01T13:46:16.446426      188808[188808]  011234568006fe50        [AUTH]  Start-AuthenticateUserByOTDSPassword:UserLoginName(myuserid)
2025-01-01T13:46:16.815111      188808[188808]  011234568006fe50        [AUTH]  otds_password_authentication = false:
2025-01-01T13:46:16.815159      188808[188808]  011234568006fe50        [AUTH]  End-AuthenticateUserByOTDSPassword: 0
2025-01-01T13:46:17.174676      188808[188808]  011234568006fe50        [AUTH]  Final Auth Result=F, LOGON_NAME=myuserid, ...

The JMS otdsauth.log file has similar content: it starts the OTDS communication (1st line below), but the result returned (2nd line below) is not the Documentum user_login_name. Instead, it’s the value of oTExternalID3, and the JMS then says that it failed (3rd line below):

2025-01-01 13:46:16,671 UTC DEBUG [root] (default task-6) In com.documentum.cs.otds.OTDSAuthenticationServlet
2025-01-01 13:46:16,813 UTC DEBUG [root] (default task-6) userId: MYUSERID@DOMAIN-NAME.COM
2025-01-01 13:46:16,814 UTC DEBUG [root] (default task-6) Password Auth Failed: myuserid

On the OTDS side, there is no problem: the authentication was successful when the request was received (in the directory-access.log):

2025-01-01 13:46:16.777|INFO  ||0|0|Authentication Service|Success Access|27,Initial authentication successful|172.0.0.10|""|OTDS-PARTITION-NAME|"MYUSERID@DOMAIN-NAME.COM"|"Authentication success: MYUSERID@DOMAIN-NAME.COM using authentication handler OTDS-PARTITION-NAME for resource __OTDS_AS__"

If you look at the exact timestamps of the messages, you can see the exact flow of how things went. In short, the OTDS says that it’s OK and it sends back some information to the JMS. But because the information returned is oTExternalID3, there is a mismatch with the value of the user_login_name, and the JMS/Repository then concludes that the authentication failed, which isn’t true…

Therefore, using any user_login_name value other than oTExternalID3 isn’t a problem from a synchronization point of view, but you still cannot login anyway.

III. Workaround

As mentioned in the introduction of this blog, there is a workaround, which is to set the parameter “synced_user_login_name=sAMAccountName” in the otdsauth.properties file that configures how the JMS talks to the OTDS. I looked at all the OTDS and Documentum documentation, for several versions, as well as KBs, but I couldn’t find this workaround mentioned anywhere. Maybe I’m the one that doesn’t know how to search (don’t blame the search on the OT Support website :D). The one and only reference to this parameter is in the Documentum Server Admin & Config doc, but it tells you that it’s optional and only for OTDS token-based authentication. Here, we are doing a password-based auth, we don’t have any OTDS oAuth Client ID/Secret, so this section shouldn’t be required at all. You don’t need the other parameters from this section, but you DO need “synced_user_login_name” if you would like to log in with the cn/sAMAccountName/oTExternalID1/oTSAMAccountName attribute.
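
For illustration, the relevant entry would simply look like this in otdsauth.properties (the exact location of the file and the other properties around it depend on your JMS deployment, so treat this as a sketch):

synced_user_login_name=sAMAccountName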

However, there is an additional catch… The parameter was apparently only introduced in 20.3. For any older Documentum Server, you will need to check with OT if they have a fix available. I know there is one for 20.2, but it’s only for Windows (c.f. here). Now, you know that you can also use this parameter for that purpose.

The article “Documentum – Login through OTDS without oTExternalID3” first appeared on the dbi Blog.

3 million PageViews, 303 thousand VideoViews

Hemant K Chitale - Wed, 2025-01-01 01:01

  This blog, begun in December 2006, has now hit a cumulative count of 3 million PageViews.


This chart shows counts from the year 2011:



My YouTube Channel, begun in January 2014, has hit a cumulative count of 303 thousand views:



A Big Thank You to all the Viewers and Subscribers.


Categories: DBA Blogs

Documentum – Silent Install – OTDS

Yann Neuhaus - Mon, 2024-12-30 03:53

A bit more than 6 years ago (already), I wrote a series of blogs about the silent install of some standard Documentum components. It included things from the binaries, the Docbroker (Connection Broker), the Repositories (both Primary and Remote), D2 and finally xPlore (binaries, Dsearch and IndexAgent). As of today, the principle is still the same for these components. Of course, the exact list of parameters might have changed a bit for some of them, like new parameters being added, but in essence, it’s still accurate.

I also worked with the silent install of several other components such as the Documentum BPM / xCP, the Life Sciences Suite (LSTMF / LSQM / LSRD / LSSSV or the complete LSS bundle) or the IDS (Interactive Delivery Services), but I didn’t write blogs about it. If you are interested, don’t hesitate to ask and I will see if I can write something about it. In this one, I will share my view on the silent install process of the OTDS (OpenText Directory Services), as it’s the new standard for user management in OpenText products.

I. Properties file

So, let’s start right away with the preparation of the properties file:

[tomcat@otds_01 ~]$ vi /tmp/otds_silent.properties
[tomcat@otds_01 ~]$ cat /tmp/otds_silent.properties
[Setup]
Id=OTDS
Version=24.4.0.4503
Patch=0
Basedir=/opt/workspace
Configfile=/opt/workspace/setup.xml
Action=Install
Log=/opt/workspace/otds-installer.log
Instance=1
Feature=All

[Property]
INST_GROUP=tomcat
INST_USER=tomcat
INSTALL_DIR=/app/app_data/otds
TOMCAT_DIR=/app/tomcat
PRIMARY_FQDN=otds-0.domain.com
ISREPLICA_TOPOLOGY=0
IMPORT_DATA=0
OTDS_PASS=otdsAdm1nP4ss+w0rd
ENCRYPTION_KEY=
MIGRATION_OPENDJ_URL=
MIGRATION_OPENDJ_PASSWORD=otdsAdm1nP4ss+w0rd
JDBC_CONNECTION_STRING=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=db1_hostname.domain.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=otds_svc.domain.com)))
JDBC_USERNAME=OTDS
JDBC_PASSWORD=jdbcP4ss+w0rd
[tomcat@otds_01 ~]$
II. Setup section

First, there is a “Setup” section in the silent install properties file, with values that aren’t really related to the OTDS installation itself but more about the current environment / binaries to be used. In any case, here is a short description of the “Setup” section properties:

  • Id: The component to be installed, should match the value of “Id” from the “setup.xml” file
  • Version: Matches the value of “Version” from the “setup.xml” file
  • Patch: Matches the value of “Patch” from the “setup.xml” file
  • Basedir: The path of the base folder where the OTDS installation package (e.g. OTDS-2440-LNX.tar) has been extracted to. This path needs to contain the folders “lib”, “OTDS” and “tools”
  • Configfile: The path of the “setup.xml” file that contains the details of what needs to be done
  • Action: Either “Install” to install a new OTDS where there is none, “Upgrade” to upgrade an existing version to a newer one, or “Modify” to uninstall the local OTDS (it needs to use the same version that was installed; you cannot uninstall using newer binaries)
  • Log: The path of the log file that will be generated by the execution of the installer
  • Instance: Standard OT installer details, not really used by OTDS and mostly for the Windows installer part apparently.
  • Feature: Standard OT installer details, not really used by OTDS and mostly for the Windows installer part apparently. OTDS doesn’t support anything else than “All”

I’m not sure why we would need to specify both the Basedir AND the Configfile, as the “setup.xml” will be present inside the Basedir if you took the OTDS installation package. Maybe it’s just a convenience for OpenText, to be able to use the same installer/binaries with multiple “setup.xml” files, but anyway… The same goes for the “Id”, “Version” and “Patch” parameters: all these details can be found inside the “setup.xml” file, for which we already define the Configfile parameter. Therefore, the OTDS installer could just fetch these details by itself from the setup.xml file. I guess it’s not smart enough to do that, so you need to do it yourself, otherwise the installer will complain about it.
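
As a reference, the preparation could look like this (a sketch only: the archive name comes from the example above and the extraction layout may differ depending on the package you downloaded, so verify that Basedir ends up containing the “lib”, “OTDS” and “tools” folders):

[tomcat@otds_01 ~]$ mkdir -p /opt/workspace
[tomcat@otds_01 ~]$ tar -xf OTDS-2440-LNX.tar -C /opt/workspace
[tomcat@otds_01 ~]$ ls -d /opt/workspace/lib /opt/workspace/OTDS /opt/workspace/tools /opt/workspace/setup.xml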

III. Property section

Then, there is a “Property” section in the silent install properties file, which really controls the OTDS installation details. Here is a short description of the “Property” section properties:

  • INST_GROUP: Name of the OS Group that will be used for group ownership of OTDS files (e.g.: primary group of the user running the Tomcat process)
  • INST_USER: Name of the OS User that will be used for user ownership of OTDS files (e.g.: user running the Tomcat process)
  • INSTALL_DIR: The path of the base folder where OTDS will put its data. That will include the OTDS config folder (with “otds.properties”) and the different application files (WEB-INF, META-INF and other jsp/js/html files). When installing OTDS like that, there won’t be any WAR files deployed on the webapps folder of Tomcat. The WAR content will be exploded/extracted and available in this INSTALL_DIR location and Tomcat will run these web applications by loading their “Context” inside the $CATALINA_HOME/conf/Catalina/localhost/otds*.xml files
  • TOMCAT_DIR: The path of the Tomcat install base, which is usually $CATALINA_HOME
  • PRIMARY_FQDN: The Fully Qualified Domain Name of the local host. Even if it says Primary, in case of replicated setup, you can set the local host only, as the HA setup is handled by the DB itself in recent versions of OTDS (without OpenDJ)
  • ISREPLICA_TOPOLOGY: A value of “0” indicates the installation to be done is a Primary and a value of “1” is for a replica
  • IMPORT_DATA: A value of “0” indicates to NOT import any data (from a previous OTDS version on OpenDJ (and not a Database)) and a value of “1” will ask you further questions about the OpenDJ URL and password to be used
  • OTDS_PASS: Password to be defined for the OTDS admin account. Only asked if it’s for a Primary initial installation (i.e. no previous local install, not a replica and no previous import)
  • ENCRYPTION_KEY: Specify the Data Encryption Key that was used for this environment. Only asked if there is no local install and if it’s a replica or a primary that requires an import. The value to be used here (e.g. in case of replica install) can be found on the Primary installation as the property “directory.bootstrap.CryptSecret” of the OTDS config file ($INSTALL_DIR/config/otds.properties)
  • MIGRATION_OPENDJ_URL: OpenDJ URL to be used to connect to previous version of OTDS in case import is required on a primary installation
  • MIGRATION_OPENDJ_PASSWORD: OpenDJ password to be used to connect to previous version of OTDS in case import is required on a primary installation
  • JDBC_CONNECTION_STRING: Database JDBC Connection String to be used to connect to the OTDS Database
  • JDBC_USERNAME: Database username to be used to connect to the OTDS Database
  • JDBC_PASSWORD: Database password to be used to connect to the OTDS Database

Most parameters are only needed for a fresh local installation and further upgrades will re-use initial properties.

IV. Execution

Once the properties file is ready, you can install the OTDS in silent mode using the following command:

[tomcat@otds_01 ~]$ /opt/workspace/setup -qbi -rf /tmp/otds_silent.properties

OpenText Directory Services 24.4.0


------------------------------------------------------------------------------
  OpenText Directory Services
------------------------------------------------------------------------------
Installing OpenText Directory Services Component
Please wait .
Installation of OpenText Directory Services Component OK

Installation completed. Results:

OpenText Directory Services Component OK

Installation finished.

[tomcat@otds_01 ~]$

You now know how to do a silent install of OTDS, but that’s only the tip of the iceberg. The next step is to configure it, as you will need to create the user partitions (synchronized or not), password policies, resources, access roles, etc. If you are already familiar with the OTDS, then you can automate the configuration pieces using the REST API, but that won’t be covered in this blog.

The article “Documentum – Silent Install – OTDS” first appeared on the dbi Blog.

Install Kokoro TTS Model Locally

Pakistan's First Oracle Blog - Mon, 2024-12-30 01:50

 This video shows how to install Kokoro locally, which is a frontier TTS model for its size of 82 million parameters. It can be run almost anywhere.





!git clone https://huggingface.co/hexgrad/Kokoro-82M
%cd Kokoro-82M

!apt-get -qq -y install espeak-ng > /dev/null 2>&1
!pip install -q phonemizer torch transformers scipy munch

from models import build_model
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
MODEL = build_model('kokoro-v0_19.pth', device)

VOICE_NAME = [
    'af',
    'af_bella', 'af_sarah', 'am_adam', 'am_michael',
    'bf_emma', 'bf_isabella', 'bm_george', 'bm_lewis',][0]

VOICEPACK = torch.load(f'voices/{VOICE_NAME}.pt', weights_only=True).to(device)
print(f'Loaded voice: {VOICE_NAME}')

from kokoro import generate
text = "How could I know? It's an unanswerable question. Like asking an unborn child if they'll lead a good life. They haven't even been born."
audio, out_ps = generate(MODEL, text, VOICEPACK, lang=VOICE_NAME[0])

from IPython.display import display, Audio
display(Audio(data=audio, rate=24000, autoplay=True))
print(out_ps)
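
To keep the generated audio outside the notebook, it can also be written to a WAV file with SciPy (already installed above); this assumes audio is the NumPy array returned by generate() and uses the same 24 kHz rate as the playback cell:

# Save the generated audio to disk at 24 kHz.
from scipy.io.wavfile import write as write_wav
write_wav('kokoro_output.wav', 24000, audio)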
Categories: DBA Blogs

DataPump with CheckSum in 21c and above

Hemant K Chitale - Sat, 2024-12-28 06:54

 Oracle introduced a CheckSum parameter in 21c.   Here is a demo in 23.6.  


First I run the datapump export without the CheckSum:


[oracle@localhost ~]$ expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT

Export: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:31:26 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Starting "HEMANT"."SYS_EXPORT_SCHEMA_01":  hemant/********@freepdb1 schemas=HEMANT dumpfile=HEMANT
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA/LOGREP
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
. . exported "HEMANT"."MY_LOBS"                            7.9 MB   75929 rows
Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is:
  /opt/oracle/admin/FREE/dpdump/2917EBB8705B3129E0630100007F3D2B/HEMANT.dmp
Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Dec 28 12:32:06 2024 elapsed 0 00:00:38

[oracle@localhost ~]$


Now this is an export WITH the CheckSum (after verifying that COMPATIBLE is 20.0 or higher) by specifying CHECKSUM=YES :


[oracle@localhost ~]$ sqlplus hemant/hemant@freepdb1

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sat Dec 28 12:35:20 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Last Successful login time: Sat Dec 28 2024 12:31:26 +00:00

Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      23.6.0
noncdb_compatible                    boolean     FALSE
SQL> quit
Disconnected from Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10
[oracle@localhost ~]$ expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT_With_CheckSum checksum=YES

Export: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:35:55 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Starting "HEMANT"."SYS_EXPORT_SCHEMA_01":  hemant/********@freepdb1 schemas=HEMANT dumpfile=HEMANT_With_CheckSum checksum=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA/LOGREP
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
. . exported "HEMANT"."MY_LOBS"                            7.9 MB   75929 rows
Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is:
  /opt/oracle/admin/FREE/dpdump/2917EBB8705B3129E0630100007F3D2B/HEMANT_With_CheckSum.dmp
Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Dec 28 12:36:33 2024 elapsed 0 00:00:36

[oracle@localhost ~]$


After I transfer the dumpfile to another server, I verify the CheckSum with VERIFY_ONLY=YES:


[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sat Dec 28 12:40:54 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10

SQL> create directory impdp_check as '/tmp';

Directory created.

SQL> quit
Disconnected from Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10
[oracle@localhost ~]$ impdp directory=impdp_check dumpfile=HEMANT_With_CheckSum.dmp verify_only=YES

Import: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:42:28 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Verifying dump file checksums
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
dump file set is complete
verified checksum for dump file "/tmp/HEMANT_With_CheckSum.dmp"
dump file set is consistent
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Sat Dec 28 12:42:39 2024 elapsed 0 00:00:05

[oracle@localhost ~]$

[oracle@localhost ~]$ cd /tmp
[oracle@localhost tmp]$ ls -l import.log
-rw-r--r--. 1 oracle oinstall 600 Dec 28 12:42 import.log
[oracle@localhost tmp]$ cat import.log
;;;
Import: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:42:28 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Verifying dump file checksums
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
dump file set is complete
verified checksum for dump file "/tmp/HEMANT_With_CheckSum.dmp"
dump file set is consistent
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Sat Dec 28 12:42:39 2024 elapsed 0 00:00:05
[oracle@localhost tmp]$



The VERIFY_ONLY parameter verifies the file without actually importing the dump file.

The default CheckSum algorithm is SHA256.
You can override this by setting CHECKSUM_ALGORITHM to one of CRC32, SHA256, SHA384, or SHA512.
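
For example, a variation of the export command used above (not run in this demo) that requests SHA512 checksums would look like this:

expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT_SHA512 checksum=YES checksum_algorithm=SHA512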

Categories: DBA Blogs

The GROUP BY column_position enhancement in 23ai

Hemant K Chitale - Sat, 2024-12-28 02:11

 Oracle 23ai allows specifying a Column Position (or Alias) in the GROUP BY clause.
For backward compatibility, the new "group_by_position_enabled" parameter defaults to FALSE.


SQL> show parameter group_by_position_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
group_by_position_enabled            boolean     FALSE
SQL> alter session set group_by_position_enabled=TRUE;

Session altered.

SQL> 
SQL> select p.prod_name, t.day_name, sum(s.amount_sold)
  2  from sh.products p, sh.times t, sh.sales s
  3  where p.prod_id=s.prod_id
  4  and t.time_id=s.time_id
  5  and t.calendar_month_name = 'January'
  6  group by 1, 2
  7  order by 1, 2
  8  /

PROD_NAME                                          DAY_NAME  SUM(S.AMOUNT_SOLD)
-------------------------------------------------- --------- ------------------
11" Youth Field Master Glove                       Friday               4635.73
11" Youth Field Master Glove                       Monday               2903.62
11" Youth Field Master Glove                       Saturday             3636.85
11" Youth Field Master Glove                       Sunday               6602.18
11" Youth Field Master Glove                       Thursday             5696.37
11" Youth Field Master Glove                       Tuesday              2843.81
11" Youth Field Master Glove                       Wednesday            6072.04
11.5" Youth Triple Stripe Series Glove             Friday               6695.84
11.5" Youth Triple Stripe Series Glove             Monday               5436.28
11.5" Youth Triple Stripe Series Glove             Saturday              5653.8
11.5" Youth Triple Stripe Series Glove             Sunday              10909.86
...
...
multiple rows returned 
...
...
PROD_NAME                                          DAY_NAME  SUM(S.AMOUNT_SOLD)
-------------------------------------------------- --------- ------------------
Wicket Keeper Gloves                               Thursday             1550.94
Wicket Keeper Gloves                               Tuesday              3049.62
Wicket Keeper Gloves                               Wednesday            2583.16
Wide Brim Hat                                      Friday                189.28
Wide Brim Hat                                      Monday               1656.35
Wide Brim Hat                                      Saturday             1689.48
Wide Brim Hat                                      Sunday                 560.7
Wide Brim Hat                                      Thursday             1088.44
Wide Brim Hat                                      Tuesday              2855.67
Wide Brim Hat                                      Wednesday             250.19

461 rows selected.

SQL>


This helps developers who already use the Column Position in the ORDER BY clause and want to be consistent when writing the GROUP BY clause.
The HAVING clause also supports Column Aliases.
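
For example, a small sketch against the same SH sample schema (not from the demo above, and relying on the session settings shown earlier) that groups by a column alias and filters on an aggregate alias in the HAVING clause:

select p.prod_name as product, sum(s.amount_sold) as total_sold
from sh.products p, sh.sales s
where p.prod_id = s.prod_id
group by product
having total_sold > 10000
order by product;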


Categories: DBA Blogs

M-Files Ment Integration configuration

Yann Neuhaus - Fri, 2024-12-27 03:20

In this blog we will be looking at the configuration of M-Files Ment Integration into an M-Files vault. We will discuss the configuration, the benefits and learn how to work with it.
This is a follow-up to the webinar I hosted on 10 December and is aimed at technical teams responsible for configuration, implementation and end-user training.

Overview

The latest version of the M-Files Ment integration supports three distinct use cases:

  • Start document generation out of the M-Files Desktop (installation of the M-Files Ment vault integration is required)
  • Content of M-Files can be inserted in the document generated by M-Files Ment
  • The generated document in M-Files Ment can be directly stored in the M-Files vault

As previously indicated by M-Files, the Ment ecosystem will transition from AWS to Azure in 2025. Furthermore, additional developments and use cases are anticipated in the future. To remain apprised of forthcoming announcements, we recommend following dbi services and M-Files.

Prerequisites

To enable the integration of M-Files Ment in your M-Files vault, the following prerequisites are required.

  • M-Files Desktop version 22.5 or later
  • M-Files Server version 22.5 or later
  • Microsoft Entra ID with OAuth must be configured in the M-Files vault
  • M-Files Rest API (with the classic M-Files Web) must be configured in the M-Files vault
  • M-Files Rest API must be routable from the Ment IP address for M-Files Ment to retrieve data from the Vault
  • The user accounts must be present in the M-Files vault and the Ment instance with the same email address
  • A unique domain name is required for each integrated vault (default for M-Files Cloud vaults)
  • For M-Files on-premises environments, the URL of the vault must be whitelisted in M-Files Ment
How to configure the Entra ID (on-premises environment)

Prior to commencing the configuration process, it is advisable to create a local M-Files account as a recovery account. In the event that the federated authentication settings are not correctly configured, access to the M-Files vault may be lost.

The first thing you need to do is create and configure an application in the Azure Portal. Go to the section titled ‘Microsoft Entra ID’ and select ‘App registration’. You will then need to register a new application. Select the supported account type from the list provided below.

  • Accounts in this organisational directory only
  • Accounts in any organisational directory
  • Accounts in any organisational directory and personal Microsoft accounts
  • Personal Microsoft accounts only

Once you have clicked register, the overview page of the application will appear. It is important to note down both the application (client) ID and the directory (tenant) ID.
Go to the authentication section and add a new web platform. Enter the address in the following format for the redirect URI.

https://<VAULT_ADDRESS>/authentication/<CONFIGURATION_NAME>/read
<VAULT_ADDRESS> = address of your vault
<CONFIGURATION_NAME> = this is the name you use for your M-Files configuration

Please be advised that the following URIs must now be added to your web application.

  • https://<VAULT_ADDRESS>/authentication/<CONFIGURATION_NAME>/mobile
  • https://<VAULT_ADDRESS>/vnext
  • https://<VAULT_ADDRESS>/vnext/webuiintegrator.html
  • https://<VAULT_ADDRESS>/vnext/msteamsoauth.html
  • https://<VAULT_ADDRESS>
  • https://<VAULT_ADDRESS>/webuiintegrator.html
  • https://<VAULT_ADDRESS>/msteamsoauth.html
  • http://localhost
Example of Entra ID web application configuration

How to configure the Entra ID application in M-Files Ment

Please configure your Entra ID application to enable authentication on M-Files Ment. You will need to add the redirect URL, which is https://ment.m-files.com/auth/login?sso=mfilesAzure. Alternatively, you can replace the ‘ment.m-files.com’ domain name with the domain you use in your environment.

You must configure the primary vault connection in M-Files Ment. Log in as a Ment administrator and enter the M-Files vault URL in the configuration tab to issue a token for access. Next, confirm the authentication to the vault. The email address of the M-Files Ment login must match the one used when issuing the new token in Entra ID. Details are shown in the picture below.

Example of the SSO configuration in M-Files Ment

How to benefit from the M-Files integration

In this chapter I will explain the different benefits of an M-Files Ment integration in an M-Files vault.

When you create a new template in M-Files Ment, you can define a M-Files vault as the data source and use the data you already have in your M-Files vault. Simply choose the vault for your template from the drop-down menu, provided that the vault is configured in Ment to be used.

Let’s get started with template generation. Add your text blocks in the editor section, then select the text you want to replace with information from M-Files. In our example, we’re using the M-Files vault dbi – Ment as our data source to choose a name from the vault.

Configure a test in M-Files Ment to use an M-Files vault

The next step is to define the question and the M-Files Ment vault metadata that you wish to utilise. In this instance, we are using the employee field to select the name of the employee that we wish to add to the document.

Additionally, the metadata for a template can be configured to store it in the M-Files vault. This predefined metadata will then be used for the documents stored in the vault.
The configuration of this predefined metadata is straightforward. To begin, open the template in M-Files Ment and select the icon for the metadata configuration (see the illustration below).

Instruction how to find the metadata configuration button for a template

The example below illustrates the available configuration options based on the M-Files vault configuration. You can define the class, including the class properties and related questions, as well as the workflow to be used, including the workflow state.

Example of the metadata configuration for a template in M-Files Ment
Concluding words

M-Files Ment is the ideal solution for generating documents based on predefined phrases, no coding required! The user can simply go through the questionnaire and, hey presto, the documents are ready to go! What’s more, they can be stored in M-Files, where they can enjoy all the benefits of our fantastic feature set, including workflows, version control, automatic permissions, sharing with colleagues or clients, and much more. Get in touch with us today and we’ll discuss your specific use case and prepare a demo.

If you are currently utilising M-Files Ment and require assistance with configuration, please do not hesitate to contact us. We are pleased to offer our support and assistance.

The article “M-Files Ment Integration configuration” first appeared on the dbi Blog.

Merry Christmas

Greg Pavlik - Wed, 2024-12-25 13:52

 С Рождеством Христовым! Славим Его! — Merry Christmas! We glorify Him!


 Some new books to read for the Feast.




Something to warm the body and delight the soul.


Stateless MLX Inference with FastAPI in Sparrow

Andrejus Baranovski - Mon, 2024-12-23 08:11
I show how to run inference with MLX in stateless mode, where the loaded model is released after the inference completes. This is useful when inference requests are less frequent, and it helps to reclaim resources reserved by MLX.
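
To make the idea concrete, here is a minimal sketch of stateless inference with FastAPI and mlx-lm (this is not Sparrow's actual code; the model name is just an example and error handling is omitted):

import gc
from fastapi import FastAPI
from mlx_lm import load, generate

app = FastAPI()

@app.post("/inference")
def run_inference(prompt: str):
    # Load the model for this request only (stateless mode).
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
    try:
        return {"result": generate(model, tokenizer, prompt=prompt, max_tokens=256)}
    finally:
        # Drop the references so the memory reserved by MLX can be reclaimed.
        del model, tokenizer
        gc.collect()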

 

MagicQuill Installation on Windows, Linux, Mac for AI Image Editing for Free

Pakistan's First Oracle Blog - Fri, 2024-12-20 19:22

 This video is an easy step-by-step tutorial to install MagicQuill locally on Linux, Windows, Mac.


Code:

conda create -n ai python=3.10 -y && conda activate ai

git clone --recursive https://github.com/magic-quill/MagicQuill.git && cd MagicQuill

wget -O models.zip "https://hkustconnect-my.sharepoint.com/:u:/g/personal/zliucz_connect_ust_hk/EWlGF0WfawJIrJ1Hn85_-3gB0MtwImAnYeWXuleVQcukMg?e=Gcjugg&download=1"

unzip models.zip

pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118

pip install gradio_magicquill-0.0.1-py3-none-any.whl

cp -f pyproject.toml MagicQuill/LLaVA/
pip install -e MagicQuill/LLaVA/

pip install -r requirements.txt

python gradio_run.py
Categories: DBA Blogs

FreeBSD basics – 8 – Running Linux Containers

Yann Neuhaus - Fri, 2024-12-20 00:22

When people talk about containers in FreeBSD, they usually talk about Jails. We’ve already seen how this works in general here and here. An additional option you have is to install Podman on FreeBSD. Podman is well known in the Linux world and can be used to build and run containers. With the release of FreeBSD 14.2, the FreeBSD project is now also publishing OCI-compatible container images of FreeBSD.

We again start with a fresh installation of FreeBSD 14.2; nothing was added or modified. Podman, as with most additional software, can either be installed as a binary package or from the ports collection. If you want to install from the ports collection, you can find Podman here:

root@freebsd14-latest:~ $ find /usr/ports/ -name "*podman*" 
/usr/ports/sysutils/podman
/usr/ports/sysutils/podman/files/podman_service.in
/usr/ports/sysutils/podman/files/podman.in
/usr/ports/sysutils/podman-suite

As it is faster to install from binary packages, we’ll do it like this for the scope of this post:

root@freebsd14-latest:~ $ pkg update
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+https://pkg.FreeBSD.org/FreeBSD:14:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
Installing pkg-1.21.3...
Extracting pkg-1.21.3: 100%
Updating FreeBSD repository catalogue...
Fetching meta.conf: 100%    178 B   0.2kB/s    00:01    
Fetching data.pkg: 100%    7 MiB   7.5MB/s    00:01    
Processing entries: 100%
FreeBSD repository update completed. 35568 packages processed.
All repositories are up to date.
root@freebsd14-latest:~ $ pkg upgrade
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Updating database digests format: 100%
Checking for upgrades (1 candidates): 100%
Processing candidates (1 candidates): 100%
Checking integrity... done (0 conflicting)
Your packages are up to date.
root@freebsd14-latest:~ $ pkg install podman
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 33 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        conmon: 2.1.12_3
        containernetworking-plugins: 0.5_9
        containers-common: 0.59.1
        gettext-runtime: 0.22.5
...
--
The FreeBSD port of the Podman container engine is experimental and should be
used for evaluation and testing purposes only.

$ sudo podman run --rm docker.io/dougrabson/hello

Podman can restart containers after a host is rebooted. To enable this, use:

$ sudo sysrc podman_enable=YES

and start the container with a restart policy:

$ sudo podman run -d --restart=always myimage

It is possible to run many Linux container images using FreeBSD's Linux emulation:

$ sudo sysrc linux_enable=YES
$ sudo service linux start
$ sudo podman run --rm --os=linux alpine cat /etc/os-release | head -1
NAME="Alpine Linux"

The package gives you quite a bit of information at the end of the installation process. First of all, this is not yet considered production-ready and should be used for testing only. Second, you will need to enable Podman and the Linux compatibility layer:

root@freebsd14-latest:~ $ sysrc podman_enable=YES
podman_enable:  -> YES
root@freebsd14-latest:~ $ sysrc linux_enable=YES
linux_enable: NO -> YES
root@freebsd14-latest:~ $ service linux start

If you want the containers to restart automatically, you should also mount the fdescfs file system:

root@freebsd14-latest:~ $ mount -t fdescfs fdesc /dev/fd
root@freebsd14-latest:~ $ df -h | grep fd
fdescfs               1.0K      0B    1.0K     0%    /compat/linux/dev/fd
fdescfs               1.0K      0B    1.0K     0%    /dev/fd
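
To make this mount persistent across reboots, the corresponding /etc/fstab entry would look like this (a sketch, matching the mount above):

fdesc   /dev/fd         fdescfs         rw      0       0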

If you try to search for container images using Podman, you’ll notice that there is no result:

root@freebsd14-latest:~ $ podman search rockylinux

The reason is that no container registries are defined, but this can easily be fixed by adding a registry to “/usr/local/etc/containers/registries.conf”:

root@freebsd14-latest:~ $ tail -1 /usr/local/etc/containers/registries.conf
unqualified-search-registries = ['docker.io']

Once this is configured, you can search the registry for container images:

root@freebsd14-latest:~ $ podman search rockylinux
NAME                                   DESCRIPTION
docker.io/rockylinux/rockylinux        
docker.io/library/rockylinux           The official build of Rocky Linux.
docker.io/rockylinux/rocky-toolbox     Toolbox image for Rocky Linux - https://gith...
docker.io/rockylinux/rockylinux-shim   RockyLinux shim-review images
docker.io/amd64/rockylinux             The official build of Rocky Linux.
docker.io/litmusimage/rockylinux       
docker.io/arm64v8/rockylinux           The official build of Rocky Linux.
docker.io/ericpaulsen/rockylinux       
docker.io/romcheck/rockylinux          
docker.io/robertdebock/rockylinux      Container to test Ansible roles in, includin...
docker.io/lightnear/rockylinux         
docker.io/accupara/rockylinux          
docker.io/detravi/rockylinux           
docker.io/ppc64le/rockylinux           The official build of Rocky Linux.
docker.io/aursu/rockylinux             
docker.io/s390x/rockylinux             The official build of Rocky Linux.
docker.io/uacontainers/rockylinux      Up-to-date Rocky Linux Docker images with th...
docker.io/buluma/rockylinux            
docker.io/gammabytehosting/rockylinux  Rocky Linux minimal image.
docker.io/whatwewant/rockylinux        
docker.io/kubestation/rockylinux       
docker.io/4geniac/rockylinux           
docker.io/jeksterslab/rockylinux       
docker.io/ihanick/rockylinux           
docker.io/douglarek/rockylinux         

Running a container is exactly the same as you know it from Linux:

root@freebsd14-latest:~ $ podman run -it --rm --platform linux/amd64 rockylinux/rockylinux:9 /bin/bash
Resolving "rockylinux/rockylinux" using unqualified-search registries (/usr/local/etc/containers/registries.conf)
Trying to pull docker.io/rockylinux/rockylinux:9...
Getting image source signatures
Copying blob 3442e16c7069 done   | 
Copying config bb8a97547d done   | 
Writing manifest to image destination
WARN[0009] Failed to load cached network config: network podman not found in CNI cache, falling back to loading network podman from disk 
WARN[0009] Failed to load cached network config: network podman not found in CNI cache, falling back to loading network podman from disk 
Error: plugin type="bridge" failed (add): cni plugin bridge failed: The pf kernel module must be loaded to support ipMasq networks

… and this fails because we forgot to configure the container networking. There are actually instructions on how to do this in the output of the Podman package installation:

Message from containernetworking-plugins-0.5_9:

--
Container networking relies on NAT to allow container network packets
out to the host's network. This requires a PF firewall to perform the
translation. A simple example is included - to use it:

# cp /usr/local/etc/containers/pf.conf.sample /etc/pf.conf
...
Edit /etc/pf.conf and set v4egress_if, v6egress_if to your network interface(s)
...
# sysrc pf_enable=YES
# service pf start

The sample PF configuration includes support for port redirections. These are
implemented as redirect rules in anchors nested under cni-rdr.

Support for redirecting connections from the container host to services running
inside a container is included for FreeBSD 13.3 and later. To enable this, first
load the pf kernel module, by adding pf_load="YES" to /boot/loader.conf and
enable PF support for these redirections using sysctl:

# kldload pf
# sysctl net.pf.filter_local=1
# service pf restart
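
To keep these pf-related settings across reboots, they would typically be persisted as well (a sketch based on the package message above):

root@freebsd14-latest:~ $ sysrc pf_enable=YES
root@freebsd14-latest:~ $ echo 'pf_load="YES"' >> /boot/loader.conf
root@freebsd14-latest:~ $ echo 'net.pf.filter_local=1' >> /etc/sysctl.conf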

Once this is done, all is fine:

root@freebsd14-latest:~ $ podman run -it --rm --platform linux/amd64 rockylinux/rockylinux:9 /bin/bash
[root@71ef409c1d40 /]$ cat /etc/os-release 
NAME="Rocky Linux"
VERSION="9.5 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.5 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
VENDOR_NAME="RESF"
VENDOR_URL="https://resf.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2032-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.5"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"

Quite impressive and easy to set up. You might wonder if running Linux containers on FreeBSD actually is a thing. Personally, I think that this is really great, because it gives you another option in the FreeBSD world. Containers are everywhere today, and having more choice of where to run them is great.

The article “FreeBSD basics – 8 – Running Linux Containers” first appeared on the dbi Blog.

Interval Search: Part 4. Dynamic Range Segmentation – interval quantization

XTended Oracle SQL - Thu, 2024-12-19 22:40

Forums, mailing lists, and StackOverflow are all great resources for Oracle performance discussions, but I’ve long thought it would be useful to have a dedicated online chat/group specifically for Oracle performance specialists. A place to share news, articles, and discuss performance issues. To test the waters, I’ve created a group: https://t.me/ora_perf. If you’re interested, feel free to join! Let’s build a central hub for Oracle performance discussions.

Before diving into the main topic, let me address a frequent question I’ve received regarding the earlier parts of this series:
You’re focusing on the rare case of date-only indexes (begin_date, end_date), but most real-world scenarios involve composite indexes with an ID field, like (id, begin_date, end_date).

Yes, it’s true that in practice composite indexes with an ID field are more common, and exactly such scenarios were the reason for this series. However, I intentionally started with a simplified case to focus on the date-filtering mechanics. All the issues, observations, conclusions, and solutions discussed so far are equally applicable to composite indexes.

For example, many production databases have identifiers that reference tens or even hundreds of thousands of intervals. The addition of an ID-based access predicate may reduce the scanned volume for a single query, but the underlying date range filtering issues remain. These inefficiencies often go unnoticed because people don’t realize their simple queries are doing tens of LIOs when they could be doing just 3-5, with response times reduced from 100 microseconds to 2 microseconds.

Even if your queries always use an equality predicate on the ID field, you’ll still encounter challenges with huge queries with joins, such as:

select *
from IDs
    join tab_with_history h
       on IDs.id = h.id
       and :dt between h.beg_date and h.end_date

Here, lookups for each ID against the composite index can become inefficient at scale compared to retrieving a pre-filtered slice for the target date.

To clarify, everything discussed in this series applies to composite indexes as well. The solutions can easily be extended to include ID fields by modifying just a few lines of code, roughly as sketched below.
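As a rough illustration of that extension (this is only a sketch: drs_idx_tab, its columns and the part_15_min partition are made-up names, not the actual objects created by the domain index in this series), the index storage table simply gains an id column, and every duration-based branch of the generated query gains an equality predicate on it:

-- Hypothetical sketch: drs_idx_tab and part_15_min are illustrative names
select rid
  from drs_idx_tab partition (part_15_min)
 where id = :id                                  -- extra access predicate
   and :cmpval between beg_date and end_date
   and end_date < :cmpval + interval'15'minute;

The same id = :id predicate goes into the branches for the other duration-based partitions. With that out of the way, let's move on to the main topic.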

Dynamic Range Segmentation – Interval Quantization

In the earlier parts, you may have noticed a skew in my test data, with many intervals of 30 days generated for every hour. This naturally leads to the idea of reducing scan volume by splitting long intervals into smaller sub-intervals.

What is Interval Quantization?

Interval quantization is a known solution for this problem, but it often comes with drawbacks. Traditional quantization requires selecting a single fixed unit (e.g., 1 minute), which may not suit all scenarios. Using a small unit to cover all cases can lead to an explosion in the number of rows.
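To put a rough number on that explosion (simple arithmetic only, using the 30-day intervals from my test data as an example):

-- Rows needed to represent a single 30-day interval at different quantization units
select 30*24*60 as rows_at_1_minute  -- 43,200 rows per interval
      ,30*24    as rows_at_1_hour    --    720 rows per interval
      ,30       as rows_at_1_day     --     30 rows per interval
from dual;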

However, since Dynamic Range Segmentation (DRS) already handles short intervals efficiently, we can focus on quantizing only long intervals. For this example, we’ll:

  • Leave intervals of up to 1 hour as-is, partitioning them into two categories: up to 15 minutes and up to 1 hour.
  • Split longer intervals into sub-intervals of 1 day.
Creating a SQL Macro for Interval Splitting

To simplify the splitting of long intervals, we’ll write a SQL Macro:

create or replace function split_interval_by_days(beg_date date, end_date date)
  return varchar2 sql_macro
is
begin
  return q'{
     select/*+ no_decorrelate */
        case 
          when n = 1 
             then beg_date 
          else trunc(beg_date)+n-1
        end as sub_beg_date
       ,case
          when n<=trunc(end_date)-trunc(beg_date)
            then trunc(beg_date)+n -1/24/60/60
          else end_date
        end as sub_end_date
     from (select/*+ no_merge */ level n
           from dual
           connect by level<=trunc(end_date)-trunc(beg_date)+1
          )
  }';
end;
/

Source on github: https://github.com/xtender/xt_scripts/blob/master/blog/1.interval_search/drs.v2/split_interval_by_days.sql

This macro returns sub-intervals for any given range:

SQL> select * from split_interval_by_days(sysdate-3, sysdate);

SUB_BEG_DATE        SUB_END_DATE
------------------- -------------------
2024-12-17 02:30:34 2024-12-17 23:59:59
2024-12-18 00:00:00 2024-12-18 23:59:59
2024-12-19 00:00:00 2024-12-19 23:59:59
2024-12-20 00:00:00 2024-12-20 02:30:34
Updating the Domain Index
1. Partitioning Changes in ODCIIndexCreate_pr

We’ll modify the partitioning structure:

        partition by range(DURATION_MINUTES)
        (
            partition part_15_min   values less than (15)
           ,partition part_1_hour   values less than (60)
           ,partition part_1_day    values less than (1440)  --24*60
        )
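
As a quick sanity check of how rows will be routed (a sketch that assumes DURATION_MINUTES is derived as (end_date - beg_date) * 24 * 60; this is an assumption for illustration, not code taken from the index package):

-- Assumption: duration_minutes = (end_date - beg_date) * 24 * 60
select beg_date
      ,end_date
      ,round((end_date - beg_date)*24*60) as duration_minutes
      ,case
         when (end_date - beg_date)*24*60 < 15 then 'part_15_min'
         when (end_date - beg_date)*24*60 < 60 then 'part_1_hour'
         else                                       'part_1_day'
       end as target_partition
from (select to_date('2024-12-17 10:00','YYYY-MM-DD HH24:MI') as beg_date
            ,to_date('2024-12-17 10:05','YYYY-MM-DD HH24:MI') as end_date
      from dual
      union all
      select to_date('2024-12-17 00:00','YYYY-MM-DD HH24:MI')
            ,to_date('2024-12-17 23:59','YYYY-MM-DD HH24:MI')
      from dual);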
2. Initial Data Population

We’ll use the SQL Macro to populate the index table with split intervals:

        -- Now populate the table.
        stmt2 := q'[INSERT INTO {index_tab_name} ( beg_date, end_date, rid )
            SELECT SUB_BEG_DATE as beg_date 
                  ,SUB_END_DATE as end_date 
                  ,P.rowid
            FROM "{owner}"."{tab_name}" P
            , split_interval_by_days(
                to_date(substr(P.{col_name}, 1,19),'YYYY-MM-DD HH24:MI:SS')
               ,to_date(substr(P.{col_name},21,19),'YYYY-MM-DD HH24:MI:SS')
               )
            ]';
3. Updating ODCIIndexInsert_pr
  procedure ODCIIndexInsert_pr(
        ia       sys.ODCIIndexInfo,
        rid      VARCHAR2,
        newval   VARCHAR2,
        env      sys.ODCIEnv
    )
  IS
  BEGIN
        -- Insert into auxiliary table
        execute immediate 
           'INSERT INTO '|| get_index_tab_name(ia)||' (rid, beg_date, end_date)'
         ||'select 
             :rid, sub_beg_date, sub_end_date
            from split_interval_by_days(:beg_date, :end_date)'
           using rid,get_beg_date(newval),get_end_date(newval);
  END;
4. Query Generation in ODCIIndexStart_Pr

Update the SQL statement to account for the new partitions:

      stmt := q'{
        select rid from {tab_name} partition (part_15_min) p1
        where :cmpval between beg_date and end_date
          and end_date < :cmpval+interval'15'minute
        union all
        select rid from {tab_name} partition (part_1_hour) p2
        where :cmpval between beg_date and end_date
          and end_date < :cmpval+1/24
        union all
        select rid from {tab_name} partition (part_1_day ) p3
        where :cmpval between beg_date and end_date
          and end_date < :cmpval+1
        }';

Full code: https://github.com/xtender/xt_scripts/blob/master/blog/1.interval_search/drs.v2/2.idx_range_date_pkg.v2.pck

Results
SQL> select count(*) from test_table where DATE_IN_RANGE(virt_date_range,date'2012-02-01')=1;

  COUNT(*)
----------
       943
SQL> @last

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
SQL_ID  17wncu9ftfzf6, child number 0
-------------------------------------
select count(*) from test_table where
DATE_IN_RANGE(virt_date_range,date'2012-02-01')=1

Plan hash value: 2131856123

-------------------------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Starts | E-Rows | Cost  | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |      1 |        |  9218 |      1 |00:00:00.01 |      15 |
|   1 |  SORT AGGREGATE  |                  |      1 |      1 |       |      1 |00:00:00.01 |      15 |
|*  2 |   DOMAIN INDEX   | TEST_RANGE_INDEX |      1 |        |       |    943 |00:00:00.01 |      15 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("XTENDER"."DATE_IN_RANGE"("VIRT_DATE_RANGE",TO_DATE(' 2012-02-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))=1)

So, by applying quantization with Dynamic Range Segmentation, we reduced the number of logical reads from 30 (in the simpler version) to 15—a 2x improvement.

Conclusion

In this example, we used partitions for 15 minutes, 1 hour, and 1 day for simplicity. In practice, optimal values will depend on the actual data. While the number of rows in the index increases, the fixed maximum interval length ensures consistently efficient results.

All series:

  1. Interval Search: Part 1. Optimizing Date Range Queries
  2. Interval Search: Part 2. Dynamic Range Segmentation – Simplified
  3. Interval Search: Part 3. Dynamic Range Segmentation – Custom Domain Index
  4. Interval Search: Part 4. Dynamic Range Segmentation – interval quantization
Categories: Development

Question about Merge and UDT

Tom Kyte - Thu, 2024-12-19 10:24
I'm using a UDT that holds a dataset I can query as a table, and I want to use that UDT in a MERGE statement. When using it, I put the UDT inside the "ON (" part as follows: "... ON (SELECT * FROM TABLE(UDT)) ...". How can I then use that UDT inside the INSERT branch, putting only the variable in the VALUES clause? For example:
<code>
MERGE INTO TABLE1 T1
USING (SELECT * FROM TABLE(UDT)) T2
   ON (T1.X = T2.Y)
 WHEN NOT MATCHED THEN
   INSERT VALUES UDT;   -- here I want to use only the UDT, nothing else
</code>
How can I write something like this? TABLE1 (T1) and the UDT (T2) have the same structure.
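One shape that is often used for this kind of MERGE (just a sketch; table1, the columns col1..col3 and the bind :udt_rows are made-up names) is to reference the columns exposed by TABLE() through the USING alias rather than the collection variable itself:
<code>
-- Hypothetical sketch: table1, col1..col3 and :udt_rows are illustrative names
MERGE INTO table1 t1
USING (SELECT * FROM TABLE(:udt_rows)) t2
   ON (t1.col1 = t2.col1)
 WHEN NOT MATCHED THEN
   INSERT (col1, col2, col3)
   VALUES (t2.col1, t2.col2, t2.col3);
</code>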
Categories: DBA Blogs

Fetch limited rows for update

Tom Kyte - Thu, 2024-12-19 10:24
Hi Tom, first of all, thanks for all your support. I wish a blessed Christmas time for you, your team and your families. I'm looking for the best practice for a cursor that fetches just one (or a few) row(s) using my own ordering and locks them, without locking the whole result set or all rows matching the condition.
<code>
select *
  from MyTable
 where < MyCondition >
 order by < MyOrderby >
 fetch first 1 row only
   for update skip locked;
</code>
Unfortunately, the combination of fetch-first-only and FOR UPDATE raises ORA-02014 :(
- without fetch-first-only it will lock the whole result set
- a rownum condition instead of fetch-first-only will not honor my ORDER BY
- an inner subselect with fetch-first-only will not skip the locked rows, and ends up fetching zero rows if the inner SQL only returns rows that are already locked
<code>
select *
  from MyTable
 where Rowid in (select Rowid
                   from MyTable
                  where < MyCondition >
                  order by < MyOrderby >
                  fetch first 1 row only)
   for update skip locked;
</code>
What's your best practice or advice? Thanks, Norman
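One pattern that is commonly used for this kind of work-queue processing (shown here only as a sketch; my_table, status and priority are made-up names) is to open a cursor with FOR UPDATE SKIP LOCKED in the desired order and stop fetching after the first row. With SKIP LOCKED, rows are generally locked as they are fetched, but how many rows actually get locked depends on the execution plan (a blocking sort for the ORDER BY can change this), so it needs testing on your version:
<code>
declare
  cursor c is
    select t.rowid as rid
      from my_table t
     where t.status = 'NEW'          -- <MyCondition>
     order by t.priority, t.id       -- <MyOrderby>
       for update skip locked;
  l_rid rowid;
begin
  open c;
  fetch c into l_rid;                -- take just the first available row
  if c%found then
    update my_table
       set status = 'IN_PROGRESS'    -- process the locked row here
     where rowid = l_rid;
  end if;
  close c;
  commit;
end;
/
</code>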
Categories: DBA Blogs

Recovery with SQL*PLUS in embedded korn shell script

Tom Kyte - Thu, 2024-12-19 10:24
We have a production DB that is used for reporting, which we are attempting to clone with NetApp SnapMirror technology. The current clone is done the old way: placing the tablespaces of the PROD DB in backup mode, copying the files with the OS 'cp' tool to the designated clone DB directories, and then running the necessary DB-layer clone steps. This works fine, but it now takes an extra long time to complete and impacts business functions.

With NetApp SnapMirror technology all steps work up to the DB-layer clone steps. There is an issue with the very last step of recovering the DB after the SnapMirror copy has been performed. The step used is 'alter database recover automatic using backup controlfile until cancel', as that seems to be the only option available according to Oracle Doc ID: DATAFILE NEEDS MORE RECOVERY TO BE CONSISTENT ORA-1194 ORA-1547 ORA-1110 (Doc ID 1528788.1) - Scenario 2: Backup Controlfile is Used for recovery.

Doing this recovery interactively via SQL*Plus requires entering the redo log file path when prompted; the DB then recovers successfully. See below for the output.

<code>
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL ;
ORA-00279: change 75786588505 generated at 12/06/2024 13:41:35 needed for thread 1
ORA-00289: suggestion : /arch_poct/archive/OFPOC/arch_1186748276_1_18.arc
ORA-00280: change 75786588505 for thread 1 is in sequence #18

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
'/redo_poct/oradata/OFPOC/redo06.log'
Log applied.
Media recovery complete.
SQL> alter database open resetlogs ;
Database altered.
</code>

But this clone process has to be done on a daily basis and must be automated. When scripted, there is no way to make SQL*Plus read '/redo_poct/oradata/OFPOC/redo06.log' from a file on its standard input. Below is the script:

<code>
#! /usr/bin/ksh
sqlplus "/ as sysdba" < /projects/of/poc/spt/log/setup_sql_redo_20241217.log <<EOF
set echo on
alter database recover automatic using backup controlfile until cancel;
EOF
</code>

Where:

<code>
cat /projects/of/poc/spt/log/setup_sql_redo_20241217.log
'/redo_poct/oradata/OFPOC/redo06.log'
</code>

Output:

<code>
/projects/gen/ksh/working> ./TestRecovery.sh

SQL*Plus: Release 11.2.0.2.0 Production on Tue Dec 17 17:29:00 2024
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> alter database recover automatic using backup controlfile until cancel
*
ERROR at line 1:
ORA-00279: change 75787505640 generated at 12/16/2024 11:21:36 needed for thread 1
ORA-00289: suggestion : /arch_poct/archive/OFPOC/arch_1186748276_1_157.arc
ORA-00280: change 75787505640 for thread 1 is in sequence #157
ORA-00278: log file '/arch_poct/archive/OFPOC/arch_1186748276_1_157.arc' no longer needed for this recovery
ORA-00308: cannot open archived log '/arch_poct/archive/OFPOC/arch_1186748276_1_15...
</code>
Categories: DBA Blogs

Oracle_Loader External Table using Fixed Width

Tom Kyte - Thu, 2024-12-19 10:24
I'm using external tables every day to load csv files, but I'm having great difficulty loading my first FIXED WIDTH file. I've tried several different access parameter variations and the CREATE TABLE command compiles without error, but when I select from the table I get the following error. In my several iterations I may get a different "KUP-01005: syntax error", but I can't seem to get past this:

Error message:

<code>select * from rzedelq;</code>

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "badfile": expecting one of: "all, column, convert_error, csv, date_format, enclosed, ignore_chars_after_eor, (, ltrim, lrtrim, ldrtrim, missing, notrim, nullif, number_format, numeric_characters, optionally, rtrim, remove_quotes, reject, terminated, truncate_columns"
KUP-01007: at line 2 column 13
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages take appropriate action.

<code>
CREATE TABLE rzedelq
(
   rzedelq_type         VARCHAR2(1),
   rzedelq_dob          DATE,
   rzedelq_last_name    VARCHAR2(30),
   rzedelq_first_name   VARCHAR2(30),
   rzedelq_balance      NUMBER(6)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY ext_data
   ACCESS PARAMETERS
   (
      RECORDS FIXED 76 FIELDS
      BADFILE 'rzedelq.bad'
      LOGFILE 'rzedelq.log'
      MISSING FIELD VALUES ARE NULL
      (
         rzedelq_record_type   (01:01) CHAR(1),
         rzedelq_dob           (02:09) CHAR(8) DATE 'yyyymmdd' NULLIF rzedelq_dob='00000000',
         rzedelq_last_name     (10:39) CHAR(30),
         rzedelq_first_name    (40:69) CHAR(30),
         rzedelq_balance       (70:75) NUMBER(6)
      )
   )
   LOCATION ('rzedelq.dat')
)
REJECT LIMIT UNLIMITED
;
</code>

Sample data file:

<code>
--'rzedelq.dat'--
119811218ANDERSEN AIMEE 366910
219121006COWGER AMANDA 030900
119030707GESLIN ANDREA 150910
319041125HATFIELD CARRIE 055900
119150913MERRELL CONNIE 018920
419761024MESSINGER JASON 010960
119170708PRIMROSE JOHN 030920
519980721REEVES KAILYNN 018930
119690511SAFARIK ROBERT 021980
</code>

Any ideas? Any help?
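Judging from the KUP-01005 message (it stumbles over "badfile" right after FIELDS), a likely cause is clause ordering: BADFILE and LOGFILE belong to the record-format part of the access parameters, before FIELDS, while MISSING FIELD VALUES ARE NULL stays with the FIELDS clause. A sketch of the reordered access parameters, keeping the field list from the question as-is:
<code>
ACCESS PARAMETERS
(
   RECORDS FIXED 76
   BADFILE 'rzedelq.bad'
   LOGFILE 'rzedelq.log'
   FIELDS MISSING FIELD VALUES ARE NULL
   (
      rzedelq_record_type   (01:01) CHAR(1),
      rzedelq_dob           (02:09) CHAR(8) DATE 'yyyymmdd' NULLIF rzedelq_dob='00000000',
      rzedelq_last_name     (10:39) CHAR(30),
      rzedelq_first_name    (40:69) CHAR(30),
      rzedelq_balance       (70:75) NUMBER(6)
   )
)
</code>
Note also that the first field is called rzedelq_record_type while the table column is rzedelq_type; if they are meant to map to each other, the names have to match. The field-level date mask may also need the external-table form (CHAR(8) date_format DATE mask 'yyyymmdd'), but that is separate from the ordering error reported here.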
Categories: DBA Blogs
