Yann Neuhaus


Migration with Multitenant: how easy it is!

Fri, 2023-02-03 09:01
Introduction

On Oracle Database, the Multitenant feature will be turning 9 in July, but it’s not yet widely deployed: most databases are still running on the non-CDB architecture. With 19c, Multitenant is now mature and free as long as you stick to a maximum of 3 PDBs per CDB. If the benefits of Multitenant are not obvious at first glance, it’s a game changer when it comes to future migrations.

Migration without Multitenant

For decades, migrations have mostly been done by choosing between these 2 methods:

  • in-place upgrade of the database files by connecting them to a new DB home and running the catalog upgrade
  • export data from source DB to a dumpfile and import this dumpfile into a newly created database already running the target version

The first option is fast, as long as you stay on the same server, but after years of using this method you still keep the old database structure with its legacy configurations, which is far from ideal. I remember, several years ago, working on an Exadata and not being able to use modern features because the database files came from an old 8i or 9i.

The second migration option is the most popular and pretty clean, but rather slow because it’s based on DDL and DML statements. And you first need to identify the schemas and dependencies you’d like to migrate. It’s easier to do a full export/import operation, but it generates a lot of errors you must analyze and resolve. This is because dictionary metadata and object metadata are mixed, and importing dictionary metadata into a new database is not possible as the dictionary already exists. But DBAs got used to this.
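
As a reminder of what that looks like in practice, a full Data Pump export/import boils down to two commands; this is only a minimal, hedged sketch (credentials, directory and file names are placeholders):

# on the source database:
expdp system/*** full=y directory=DATA_PUMP_DIR dumpfile=full_src.dmp logfile=full_src_exp.log

# copy the dump file to the target server, then on the target database:
impdp system/*** full=y directory=DATA_PUMP_DIR dumpfile=full_src.dmp logfile=full_src_imp.log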

A third option is a combination of both: exporting object metadata only and copying the selected datafiles to the new database. It’s called Transportable Tablespaces, but it also needs a schema and dependency analysis, and it only works if every segment resides in its expected tablespace.
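
For reference, a Transportable Tablespaces move roughly follows the pattern below; this is only a hedged sketch with placeholder names (the datafile copy and any endianness conversion are left out or simplified):

-- on the source database: check self-containment and freeze the tablespace;
exec dbms_tts.transport_set_check('APP_DATA', true)
select * from transport_set_violations;
alter tablespace APP_DATA read only;

-- export the tablespace metadata only;
$ expdp system/*** transport_tablespaces=APP_DATA directory=DATA_PUMP_DIR dumpfile=tts_app.dmp

-- copy the dump file and the APP_DATA datafiles to the target server, then import the metadata;
$ impdp system/*** transport_datafiles='/u02/oradata/TGT/app_data01.dbf' directory=DATA_PUMP_DIR dumpfile=tts_app.dmp

-- reopen the tablespace read write once done;
alter tablespace APP_DATA read write;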

Purpose of a container in Multitenant

A container database (CDB) is not as important as a non-container database (non-CDB) used to be. It only matters as long as pluggable databases (PDBs) are running inside it. But as your PDBs are easily movable, at some point you will move them, and the old container becomes useless. One of the reasons for moving these PDBs is migrating them to a newer version.

This is why your container shouldn’t have a name related to the applications it runs. The PDB has its own name, and the associated service will follow the PDB when you move it to another container.

Imagine you are preparing the 23c migration of the 3 PDBs in your 19c container C1917TST. You will create a new C2301TST container with the same settings as C1917TST, move the PDBs to this new container, and your migration is done. You can then delete the old container. Let’s test this.

Testing a migration from 19c to 21c

23c is the next long-term release, but it’s not yet available. So the tests will be done with the current innovation release: 21c.

Migrating a PDB to a newer version can be as simple as unplugging the database and plugging it into a new container. In this example, the source CDB is 19c, and the target will be 21c.

I’m pretty sure it will work fine, but as I’m using Multitenant, I can duplicate my PDB and safely migrate the copy instead. If something goes wrong, I still have my original PDB, with its original name, in the source CDB.

. oraenv <<< CDB19C1
sqlplus / as sysdba
create pluggable database JRQDUP from JRQO4D;
alter pluggable database JRQDUP open;
...
alter pluggable database JRQDUP save state;
alter pluggable database JRQDUP close immediate;

Now let’s unplug this PDB and remove it from the source CDB:

alter pluggable database JRQDUP unplug into '/home/oracle/jrqdup.xml';
drop pluggable database JRQDUP keep datafiles;
exit

Let’s plug it into the new container. The MOVE option is used to move the datafiles to the correct subfolder:

. oraenv <<< CDB21C4
create pluggable database JRQDUP using '/home/oracle/jrqdup.xml' MOVE;
alter pluggable database JRQDUP open;
...

Opening this PDB is not immediate, so let’s have a look at what’s happening from another sqlplus session:

show pdbs
    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 JRQDUP			  MIGRATE    YES

The PDB is first opened in restricted (MIGRATE) mode to upgrade its metadata.
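
While the PDB sits in MIGRATE mode, the standard PDB_PLUG_IN_VIOLATIONS view is a convenient place to check that nothing is blocking the plug-in; for example:

select name, type, status, message
from   pdb_plug_in_violations
where  name = 'JRQDUP'
order  by time;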

Once finished, my PDB is automatically opened in normal mode:

show pdbs
    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 JRQDUP			  READ WRITE NO

And that’s it. Upgrade of the PDB is automatic when opening the database.

The next step would be stopping and dropping the source PDB, and renaming the new one if you want to keep its original name.
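
As a hedged sketch (adapt the names, and only once you are happy with the migrated copy), that last step could look like the following, keeping in mind that a PDB must be open in restricted mode to be renamed:

-- in the source 19c CDB: get rid of the original PDB (KEEP DATAFILES leaves the files on disk);
alter pluggable database JRQO4D close immediate;
drop pluggable database JRQO4D keep datafiles;

-- in the target 21c CDB: rename the migrated copy back to the original name;
alter pluggable database JRQDUP close immediate;
alter pluggable database JRQDUP open restricted;
alter session set container = JRQDUP;
alter pluggable database rename global_name to JRQO4D;
alter session set container = CDB$ROOT;
alter pluggable database JRQO4D close immediate;
alter pluggable database JRQO4D open;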

Conclusion

With Multitenant, it’s very easy to migrate to a newer version. Just unplug the PDB you want to migrate from the old container, plug it into the new one, and Oracle will do the migration job.


ChatGPT vs DevOps

Wed, 2023-02-01 06:45

Like a lot of curious people, I have experimented with ChatGPT in order to figure out whether it could be helpful in my work as a DevOps consultant, or even replace me (why not?). If you are also interested in seeing how ChatGPT compares with an Oracle DBA, you can check the very interesting blog of my colleague who did that assessment.

First I have to say that I was pleasantly surprised by the accuracy of the responses to basic questions regarding DevOps technologies like Ansible, Kubernetes, Rook Ceph, Docker, GitLab… ChatGPT keeps the context of a chat, so I usually start by asking general questions such as Do you know Kubernetes?, where I get a general but accurate response, and from there I ask for more details about a component, a script or a command. Each time the response is quite accurate, and you can challenge ChatGPT when you test a command and it doesn’t work: he is able to recognize he was wrong!

First contact

My first feeling is that ChatGPT is a great interactive encyclopedia from which you can quickly get information by asking questions, and the intelligence I see is that it understands the questions quite well, provides accurate answers and keeps the context of the conversation in order to elaborate, give details and even correct itself. Of course you have to guide the responses by asking the right questions, but for learning and getting started I think it’s great, as you can quickly gather information about a technology. In that sense it was faster than doing some research with a traditional search engine. However, the drawback is that you get answers from a single source of truth and you lose the diversity of answers you get from various blogs and websites when you do a usual search on the Internet. So you will have to test those answers, cross-check them and challenge ChatGPT for a better answer, as it will also learn from you (don’t forget that if it’s free, you are the product!).

I’ve previously blogged about Kubernetes SSO using Azure AD and I was curious to know whether ChatGPT could have helped me find the same solution, another one, or just found one quicker. Let’s have a look at what ChatGPT has in store about it!

ChatGPT vs Kubernetes SSO using Azure AD

What are the steps to implement SSO in Kubernetes with Azure?
  1. Create an Azure Active Directory (AAD) tenant and register your application in the tenant.
  2. Create a Service Principal for the registered application and assign necessary permissions.
  3. Configure AAD as the identity provider in Kubernetes using the AAD integration in Kubernetes.
  4. Use the Service Principal credentials to authenticate to the Kubernetes cluster.
  5. Configure Role-based Access Control (RBAC) in Kubernetes to grant access to users and groups in AAD.
  6. Test SSO by logging in to the Kubernetes cluster using AAD credentials.

Note:

  • The above steps are high level steps and there can be more details and steps involved depending on your use case.
  • Also, you might need some Azure Kubernetes Service (AKS) related configurations as well.

The first response is already a good summary and the last note informs us that those are only high-level steps. However, the mention of AKS is not relevant here, but he couldn’t have known, as I didn’t specify on-prem Kubernetes or Azure Kubernetes in my question.

Could you detail how to configure SSO in Kubernetes?
  1. Create an Azure Active Directory (AAD) tenant if you do not have one already.
  2. Register your application in the tenant by going to the Azure portal, selecting Azure Active Directory, and then selecting App registrations.
  3. Create a Service Principal for the registered application and assign necessary permissions by going to the Azure portal, selecting Azure Active Directory, and then selecting App registrations.
  4. Configure AAD as the identity provider in Kubernetes by using the AAD integration in Kubernetes. This can be done by modifying the kube-apiserver manifest to include the --oidc-issuer-url, --oidc-client-id and --oidc-ca-file flags and pointing them to the AAD tenant.
  5. Use the Service Principal credentials to authenticate to the Kubernetes cluster by passing them as environment variables to the kubelet.
  6. Configure Role-based Access Control (RBAC) in Kubernetes to grant access to users and groups in AAD. This can be done by creating roles and role bindings in Kubernetes that map to users and groups in AAD.
  7. Test SSO by logging in to the Kubernetes cluster using AAD credentials by running kubectl commands with the --token option and providing a valid AAD token.

Step 4 for configuring the kube-apiserver is added and ChatGPT gives a bit more detail in the other steps in comparison with my first question. This is all correct and quite good.

Could you show an example of kube-apiserver manifest?

Here’s an example of a kube-apiserver manifest that includes the configuration for SSO using Azure Active Directory:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  labels:
    component: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.19.0
    command:
    - kube-apiserver
    - --oidc-issuer-url=https://sts.windows.net/<AAD tenant ID>/
    - --oidc-client-id=<client ID>
    - --oidc-ca-file=/etc/kubernetes/pki/ca.crt
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --authorization-mode=RBAC
    - --etcd-servers=https://etcd-0.etcd:2379,https://etcd-1.etcd:2379,https://etcd-2.etcd:2379
    - --advertise-address=1.2.3.4
    - --bind-address=0.0.0.0
    - --secure-port=6443
    volumeMounts:
    - name: ca-certs
      mountPath: /etc/kubernetes/pki
  volumes:
  - name: ca-certs
    configMap:
      name: kubernetes-pki

You can see that the kube-apiserver container is configured with the --oidc-issuer-url, --oidc-client-id, and --oidc-ca-file flags, which point to the AAD tenant and client ID, and the certificate authority file for the AAD tenant, respectively. The --oidc-username-claim and --oidc-groups-claim flags are used to map AAD user’s claims to the corresponding Kubernetes user and group.

Please note that this is just an example and your actual kube-apiserver manifest may differ depending on your specific environment.

Also, you need to pass the --token option with a valid AAD token to kubectl command, in order to authenticate to the cluster

Not bad at all! ChatGPT generates an example of a kube-apiserver.yaml file and explains each parameter related to our SSO discussion. Let’s push a bit further:

Could you give also an example of .kube/config file to use with this kube-apiserver?

Here’s an example of a .kube/config file that can be used to authenticate to a Kubernetes cluster with SSO using Azure Active Directory:

apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority-data: <base64 encoded CA data>
    insecure-skip-tls-verify: true
users:
- name: azure-user
  user:
    auth-provider:
      name: azure
      config:
        tenant-id: <tenant id>
        client-id: <client id>
        client-secret: <client secret>
        idp-issuer-url: https://sts.windows.net/<tenant id>/
        idp-certificate-authority-data: <base64 encoded CA data>
        idp-audience: https://kubernetes.default.svc
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: azure-user
current-context: my-context

The above .kube/config file defines a cluster named my-cluster and an user named azure-user. The auth-provider field for the azure-user is set to azure and the config field contains the necessary configuration for authenticating to AAD. The configuration includes the tenant ID, client ID, client secret, and the URL of the AAD tenant. It also includes the CA certificate data for the AAD tenant and the expected audience for the token.

You also need to set the current context as my-context in order to use this configuration file.

Please note that this is just an example and your actual .kube/config file may differ depending on your specific environment.

Not bad either! I like the note of warning at the end, as well as the additional information about the context, which I could dig into deeper by asking a specific question about it.

For those of you who have read my blog, you will notice that the .kube/config file is using the parameters of the deprecated method. ChatGPT is working on data up to 2021, so he may suggest that well-known solution first. As I know this topic well now, I can try to guide him towards the newest method and see what comes out.

Modify .kube/config configuration by using oidc-login plugin of kubectl

Here’s an example of a modified .kube/config file that can be used to authenticate to a Kubernetes cluster with SSO using Azure Active Directory and the oidc-login plugin of kubectl:

apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority-data: <base64 encoded CA data>
    insecure-skip-tls-verify: true
users:
- name: azure-user
  user:
    exec:
      command: oidc-login
      args:
      - --issuer-url=https://sts.windows.net/<tenant id>/
      - --client-id=<client id>
      - --username-claim=email
      - --groups-claim=groups
      - --ca-file=path/to/ca.crt
      - --extra-scopes=openid,email,groups
      - --extra-audiences=https://kubernetes.default.svc
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: azure-user
current-context: my-context

The above .kube/config file defines a cluster named my-cluster and an user named azure-user. The exec field for the azure-user is set to oidc-login and the args field contains the necessary configuration for authenticating to AAD with the plugin. The configuration includes the issuer URL, client ID, username claim, groups claim, and ca file. Additionally, you can set extra scopes and audiences as well.

When you run kubectl commands with this config, the plugin will automatically launch an OIDC flow and obtain an access token for the user.

Please note that this is just an example and your actual .kube/config file may differ depending on your specific environment and the plugin version you use.

So ChatGPT knows this solution as well and is able to provide the required parameters. To reach that point I needed some knowledge myself in order to ask the question properly, but we can see that we quickly get the basis on which to get started.

ChatGPT Limits

If you’ve read my blogs (Part 1 and Part 2) about SSO, you know that we were looking for a solution to connect the oidc plugin over HTTPS to Azure, which supports only this protocol. Let’s see if ChatGPT could have found a solution quicker than we did:

Adjust oidc-login plugin azure-user to use https

Here’s an example of how you can modify the azure-user configuration in the .kube/config file to use HTTPS with the oidc-login plugin:

apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority-data: <base64 encoded CA data>
users:
- name: azure-user
  user:
    exec:
      command: oidc-login
      args:
      - --issuer-url=https://sts.windows.net/<tenant id>/
      - --client-id=<client id>
      - --username-claim=email
      - --groups-claim=groups
      - --ca-file=path/to/ca.crt
      - --extra-scopes=openid,email,groups
      - --extra-audiences=https://kubernetes.default.svc
      - --https
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: azure-user
current-context: my-context

The above .kube/config file defines a cluster named my-cluster and an user named azure-user. The exec field for the azure-user is set to oidc-login and the args field contains the necessary configuration for authenticating to AAD with the plugin. In the args field --https is added to make the plugin use https for redirects and token exchange

When you run kubectl commands with this config, the plugin will automatically launch an OIDC flow and obtain an access token for the user over HTTPS.

Please note that this is just an example and your actual .kube/config file may differ depending on your specific environment and the plugin version you use.

Well, that’s interesting, because the --https parameter just doesn’t exist, so let’s challenge that neuronal beast:

--https doesn’t work

I apologize for any confusion, you are correct that the --https flag is not a valid option for the oidc-login plugin. It seems that this flag was included in my previous response by mistake.

To use HTTPS with the oidc-login plugin, you can configure it to use a reverse proxy that supports HTTPS and set the issuer-url and redirect-url accordingly. Additionally, you can ensure that the ca-file option points to a valid CA certificate.

It is also good to configure the endpoints in your Azure Active Directory tenant to use the https protocol.

I apologize for any confusion caused by my previous response, please let me know if you have any other questions.

Very polite indeed; I guess this is where ChatGPT is learning from his mistakes. He then points us in the direction of a reverse proxy, but not to the parameters I was looking for (--local-server-cert and --local-server-key).

I continued to ask for other solutions and other parameters to see if he knew them, but he was running in a loop, suggesting other parameters that don’t exist. So I finally gave him the solution I was looking for:

What about the --local-server-cert and --local-server-key parameters?

I apologize for my previous errors, you are correct that oidc-login plugin supports https by using --local-server-cert and --local-server-key parameters. --local-server-cert allows you to specify the location of the certificate file that should be used to establish an HTTPS connection with the OIDC provider. --local-server-key allows you to specify the location of the private key file that should be used to establish an HTTPS connection with the OIDC provider.

OK! So in this case he was confirming I was correct (call me BenGPT). As you can see, after a certain depth beyond the basic core topic, you reach the limit of ChatGPT… for now at least.
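
For completeness, and assuming the kubelogin (oidc-login) plugin used in my blog series, a standalone call exercising those two parameters might look like the sketch below; the issuer URL, client ID and certificate paths are placeholders, and the flags should be checked against your plugin version:

# hedged sketch, to be adapted to your environment and plugin version
kubectl oidc-login get-token \
  --oidc-issuer-url=https://sts.windows.net/<tenant id>/ \
  --oidc-client-id=<client id> \
  --local-server-cert=/path/to/server.crt \
  --local-server-key=/path/to/server.key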

Conclusion

I come to the same conclusion as my colleague regarding the Oracle DBA: ChatGPT will not replace a DevOps consultant yet, as we reach its limits when the topic becomes too complex or when we need a creative solution to an issue. In the first part of my SSO blog, I needed to troubleshoot a secret parameter given with or without surrounding quotes (" "), and I think this could not have been found by ChatGPT at this stage.

However, it has been announced that ChatGPT will be updated and become much more powerful, so we can expect it to greatly improve and push back its limits. I will continue to use it, as I am still amazed by its capacity to summarize information about a topic, and it can greatly help me get started on any topic I need to learn as a DevOps consultant. ChatGPT is just the beginning of a new era with AI, and I see it taking more and more importance in our daily work, not as a replacement but as a valuable assistant.


Data Staging with OMrun – Migration (part I)

Tue, 2023-01-31 04:05

OMrun is a powerful tool for data staging that allows you to easily load a wide range of file formats, like CSV and XML files, into databases. OMrun automatically detects the data structure and converts it to the appropriate format for your database, and it generates reusable SQL queries for every staged file. This can save you a lot of time and effort, especially when working with large datasets.

Test scenario of data staging

The various data loading processes are orchestrated with the OMrun test scenario. Based on the selected file type, the database table is generated and the corresponding data is loaded. The SQL query is automatically created and saved during the staging process. The stored queries are reusable to gather data from the staged tables within other test objects.

Process log and generated query

In addition to its powerful data staging features, OMrun also offers a variety of other benefits. For example it can be used to perform data validation and cleaning, as well as data transformation and aggregation. This means that you can use OMrun to ensure that your data is clean and accurate before it is loaded into your database.

Overall, OMrun is a powerful tool for data management that offers a variety of features. Whether you’re working with large datasets or a variety of different data sources, OMrun can help you easily bring all of your data together in one place and ensure that it is clean and accurate.

Read more about data migration with OMrun in Part 2 coming soon.

https://www.dbi-services.com/products/omrun/

https://en.wikipedia.org/wiki/Staging_(data)


A generic jdbc tester (part II)

Mon, 2023-01-30 03:00

This is part II of the article. See Part I.

A tale of 2 Oracle drivers

Oracle provides 2 driver implementations: the pure java one used above, also named the Thin drivers, with the following URL syntax:

jdbc:oracle:thin:@//db_machine:listener_port/service_name

but also the so-called OCI-based or Thick Type 2 drivers that rely on SQL*Net and the connect strings defined in tnsnames.ora, with the following syntax:

jdbc:oracle:oci:@connect_string

Since they use native OCI libraries, they are said to be faster than their pure java counterpart. To have those libraries, an InstantClient is needed at a minimum. Here, I’m using an existing InstantClient v21.7 for Linux installed in /u01/dctm/repo01/oracle/instantclient_21_7. It comes with its own jdbc drivers, which are incompatible with the ones downloaded before (but can do both Thin and OCI), so let’s switch to the bundled ones. As they depend on the native OCI libraries, those must be made accessible through LD_LIBRARY_PATH.

Here is the definition of the connect string mypdb1 we will use; it has the same settings as the ones explicitly used with the Thin drivers:

mypdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.21)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )

Let’s adapt the environment and syntax, and try:

$ export LD_LIBRARY_PATH=/u01/dctm/repo01/oracle/instantclient_21_7
$ java -cp /u01/dctm/repo01/oracle/instantclient_21_7/ojdbc8.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver repo01 repo01 jdbc:oracle:oci:@mypdb1 "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = repo01
arg 2 = repo01
arg 3 = jdbc:oracle:oci:@mypdb1
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:oci:@mypdb1] as user [repo01]:
Successful connection to database

Executing SQL query [select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual]:

Retrieving the result set:
now in Oracle db: 22/01/2023 15:51:23

JDBC connection successfully tested, quitting …

As illustrated, since those drivers are based on OCI, they go through the SQL*Net stack to read the connectivity configuration behind the connect string mypdb1, namely from the tnsnames.ora file; hence this information does not need to be provided any more, which reflects in the simpler JDBC URL. Whenever an Oracle client is installed along with the JDBC programs, it is preferable to use the OCI drivers, to avoid replicating connection information instead of accessing it from one single place, although when working with RDBMS from different vendors, it is understandable to standardize on the Thin drivers for generality and portability. Anyway, there are sufficient differences in the URL syntax across RDBMS that one more does not change much. For example, the Oracle Thin drivers also allow specifying the URL through the full, inlined tnsnames.ora syntax, e.g.:

$ export LD_LIBRARY_PATH=/u01/dctm/repo01/oracle/instantclient_21_7
$ java -cp /u01/dctm/repo01/oracle/instantclient_21_7/ojdbc8.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver repo01 repo01 jdbc:oracle:thin:@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1)))" "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = repo01
arg 2 = repo01
arg 3 = jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1)))
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1)))] as user [repo01]:
Successful connection to database
...
JDBC connection successfully tested, quitting …

and so do the OCI drivers:

$ export LD_LIBRARY_PATH=/u01/dctm/repo01/oracle/instantclient_21_7
$ java -cp /u01/dctm/repo01/oracle/instantclient_21_7/ojdbc8.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver repo01 repo01 jdbc:oracle:oci:@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1)))" "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = repo01
arg 2 = repo01
arg 3 = jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1)))
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1)))] as user [repo01]:
Successful connection to database
...
JDBC connection successfully tested, quitting ...
Using a SID with Oracle JDBC

Oracle JDBC drivers allow specifying a service name, as shown so far, but also the deprecated, old-style SID. The URL syntax changes slightly. Here is a summary of all the Oracle JDBC URL syntaxes:

# with a service name;
# Thin:
jdbc:oracle:thin:@//192.168.0.21:1521/pdb1
# OCI:
jdbc:oracle:oci:@mypdb1

# with a sid;
# Thin:
jdbc:oracle:thin:@192.168.0.21:1521:orcl
# OCI:
jdbc:oracle:oci:@mysid

with the following mysid‘s definition in tnsnames.ora:

mysid =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = orcl)
    )
  )

where orcl is the SID of the container database. As we are accessing the container database now, and the repo01 account does not exist there, we must switch to a different account; let’s use system as it is defined everywhere (however, it is still possible to access a pluggable database through its SID and use the repo01 account; for this, set the listener parameter USE_SID_AS_SERVICE_listener=on; I did not try it though, and I suspect that the service syntax will have to be used but with a SID instead of a service name).

Let’s adapt the environment and syntax, and try first with the OCI drivers:

$ export LD_LIBRARY_PATH=/u01/dctm/repo01/oracle/instantclient_21_7
$ java -cp /u01/dctm/repo01/oracle/instantclient_21_7/ojdbc8.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver system manager jdbc:oracle:oci:@mysid "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = system
arg 2 = manager
arg 3 = jdbc:oracle:oci:@mysid
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:oci:@mysid] as user [system]:
Successful connection to database
...
JDBC connection successfully tested, quitting ...

Using the full inline definition behind the mysid connect string:

$ export LD_LIBRARY_PATH=/u01/dctm/repo01/oracle/instantclient_21_7
$ java -cp /u01/dctm/repo01/oracle/instantclient_21_7/ojdbc8.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver system manager jdbc:oracle:oci:@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SID = orcl)))" "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = system
arg 2 = manager
arg 3 = jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SID = orcl)))
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SID = orcl)))] as user [system]:
Successful connection to database
...
JDBC connection successfully tested, quitting ...

Let’s try now this SID with the Thin driver:

$ java -cp ojdbc11.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver system manager jdbc:oracle:thin:@192.168.0.21:1521:orcl "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = system
arg 2 = manager
arg 3 = jdbc:oracle:thin:@192.168.0.21:1521:orcl
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:thin:@192.168.0.21:1521:orcl] as user [system]:
Successful connection to database
...
JDBC connection successfully tested, quitting ...

Using the inline connect string definition with the Thin drivers:

$ java -cp ojdbc11.jar:.:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver system manager jdbc:oracle:thin:@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SID = orcl)))" "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual" 
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = system
arg 2 = manager
arg 3 = jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SID = orcl)))
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=192.168.0.21)(PORT=1521))(CONNECT_DATA=(SID = orcl)))] as user [system]:
Successful connection to database
...
JDBC connection successfully tested, quitting ...

As shown above, the utility jdbc_tester_generic is URL- and driver-agnostic; just provide the right parameters and they will be passed unchanged to the drivers.

Other RDBMS

We tested jdbc_tester_generic with PostgreSQL and Oracle, and chances are that it is really generic indeed, as it could handle Oracle’s multiple specificities. Nonetheless, let’s try it with 7 more RDBMS, e.g. MariaDB/MySQL, SQLite, Firebird, Microsoft SQL Server, HSQLDB, MongoDB and Excel, and see how it behaves. If you don’t have those installed somewhere, concise instructions to do so are presented in several articles published on the dbi-services blog site https://www.dbi-services.com/blog/installing-the-odbc-driver-manager-with-*, e.g. https://www.dbi-services.com/blog/installing-the-odbc-driver-manager-with-sqlite-on-linux/ for SQLite.

After the installation of the software and the creation of the databases, their readiness can be verified via ODBC as explained in the articles. Hereafter, we assume that those databases are up and running, with valid credentials where applicable, and we will try to access them via JDBC this time.

MariaDB

First, let’s install the JDBC drivers for MariaDB:

$ wget https://dbschema.com/jdbc-drivers/MariaDbJdbcDriver.zip
$ unzip MariaDbJdbcDriver.zip

The server’s default port is 3306 but let’s check it:

$ netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
...
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 112172/mariadbd

# or:
$ grep -i 'port =' /etc/mysql/my.cnf
port = 3306

# or
$ mysql
show variables like 'port';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port          | 3306  |
+---------------+-------+
1 row in set (0.006 sec)

Let’s invoke the tester now:

$ java -cp mariadb-java-client-3.0.6.jar:. jdbc_tester_generic org.mariadb.jdbc.Driver debian debian jdbc:mariadb://localhost:3306/sampledb "select concat('now in mysql db: ', DATE_FORMAT(SYSDATE(), '%d/%m/%Y %H:%i:%s')) as now;"
The command-line parameters are:
arg 0 = org.mariadb.jdbc.Driver
arg 1 = debian
arg 2 = debian
arg 3 = jdbc:mariadb://localhost:3306/sampledb
arg 4 = select concat('now in mysql db: ', DATE_FORMAT(SYSDATE(), '%d/%m/%Y %H:%i:%s')) as now;

Loading driver [org.mariadb.jdbc.Driver]:
JDBC Driver org.mariadb.jdbc.Driver successfully loaded

Connecting to url [jdbc:mariadb://localhost:3306/sampledb] as user [debian]:
Successful connection to database

Executing SQL query [select concat('now in mysql db: ', DATE_FORMAT(SYSDATE(), '%d/%m/%Y %H:%i:%s')) as now;]:

Retrieving the result set:
now in mysql db: 23/01/2023 21:13:00

JDBC connection successfully tested, quitting ...

No surprise with MariaDB.

SQLite

First, let’s install the JDBC drivers for SQLite:

$ wget https://repo1.maven.org/maven2/org/xerial/sqlite-jdbc/3.40.0.0/sqlite-jdbc-3.40.0.0.jar

Let’s invoke the tester:

$ java -cp sqlite-jdbc-3.40.0.0.jar:. jdbc_tester_generic java.sql.Driver debian debian jdbc:sqlite:/path/to/database/sampledb "select 'now in sqlite db: ' || STRFTIME('%d/%m/%Y, %H:%M:%S', datetime()) as now;"
The command-line parameters are:
arg 0 = java.sql.Driver
arg 1 = debian
arg 2 = debian
arg 3 = jdbc:sqlite:/path/to/database/sampledb
arg 4 = select 'now in sqlite db: ' || STRFTIME('%d/%m/%Y, %H:%M:%S', datetime()) as now;

Loading driver [java.sql.Driver]:
JDBC Driver java.sql.Driver successfully loaded

Connecting to url [jdbc:sqlite:/path/to/database/sampledb] as user [debian]:
Successful connection to database

Executing SQL query [select 'now in sqlite db: ' || STRFTIME('%d/%m/%Y, %H:%M:%S', datetime()) as now;]:

Retrieving the result set:
now in sqlite db: 23/01/2023, 22:02:13

JDBC connection successfully tested, quitting ...

No surprise with SQLite either.

See Part III.

L’article A generic jdbc tester (part II) est apparu en premier sur dbi Blog.

A quick repository creation utility (part V)

Sat, 2023-01-28 05:52

This is part V of the article, see Part I here, Part II here, Part III here and Part IV here.

Timings

As said before, absolute timings vary a lot because they depend on many factors, starting with the hardware in use. Thus, the given values are just examples of what can be achieved in a simple setup such as a laptop, and they are nonetheless quite impressive. The relevant information here is the comparison of the creation times with the instantiation times when 1, 2, … 5 and 10 docbases are created at the same time.

It is well understood that Oracle’s slower timings are not caused by Oracle itself but by the more complex procedure to duplicate the docbase’s schema, vs. nothing to duplicate in PostgreSQL since the database is included with the content server and customized in place. Maybe similar results to PostgreSQL’s could be obtained with a local Oracle Express Edition RDBMS, to be tried sometime if the stingy 2 GB RAM and 12 GB data limits are acceptable (note that Oracle XE is likely not certified for Documentum, probably because of those limits), although there is no way in Oracle to rename a schema short of exporting/importing it, which we already do here, only mostly remotely.

The fastest test bed we had access to was composed of:

  • Hardware: DELL XPS15 2019, i9 9980HK 8 cores, 2TB SSD, 64GB RAM
  • Virtualization: Proxmox Virtual Environment 7.2-3

The following 2 linux container hosts were created:

  • oracle-8, 4 cores, 10 GiB RAM, Oracle Linux Server 8.6 and Oracle RDBMS v21.3.0.0.0
  • cs2, 8 cores, 20 GiB RAM, Ubuntu 20.04.5 LTS for the Content Server v22.4

In both machines, the amount of memory was large enough to prevent the 1 GiB large swap to be used with the ensuing dramatic slowness.

The table below gives the average and average maximum times to first create a seed repository and then instantiate it 1, 2, … 5 and 10 concurrent times. Averages were based on 3 to 5 executions, depending on the variability of the timings (the more variable the timings, the more executions were performed).

As an example, the last row of the table (10 concurrent instantiations) means that we instantiated 10 docbases with Oracle RDBMS as the database and it took on average 14m43s per docbase, whereas the longest time was 15m7s on average. We can therefore tell that on average it took 15m7s to create that set of 10 docbases on the same machine. Had we created them the normal, sequential way, it would have taken on average 10 * 21m20s, i.e. almost 3.5 hours or 14 times more. That’s a lot of long coffee breaks.

Scenario                               Oracle (avg / avg max)    PostgreSQL (avg / avg max)
creation (1)                           21m20s / 21m56s           18m22s / 18m25s
1 concurrent (2) instantiation         3m7s / 3m17s              0m57s / 1m
2 concurrent (2) instantiations        4m25s / 4m29s             1m4s / 1m5s
3 concurrent (2) instantiations        4m38s / 4m42s             1m11s / 1m16s
4 concurrent (2,3) instantiations      5m59s / 6m5s              1m18s / 1m26s
5 concurrent (2,3) instantiations      7m13s / 7m31s             1m30s / 1m40s
10 concurrent (2,3) instantiations     14m43s / 15m7s            2m49s / 3m11s

Timings for docbases’ creation and instantiation; (1), (2) and (3) refer to the Notes below.

When creating a single docbase by instantiating it, it took on average 3m7s. The concurrency took a toll on performance, but of course this could be alleviated somewhat by adding more resources to the containers.

With PostgreSQL RDBMS as the database, the gain is even better: it took as little as 57s on average to instantiate one single docbase, and 2m49s per docbase to create 10 docbases on the same machine, with overall 3m11s until the last one was ready. Had we created them the normal, sequential way, it would have taken 10 * 18m22s, i.e. almost 3 hours or more than 60 times more! The instantiation procedure is between 3 and 5 times faster with PostgreSQL than with Oracle, but both are so fast that it does not make any practical difference, at least for a few docbases.

As expected, the measured times are quite variable, so only the ratios between instantiation and creation times are really comparable. Still, at the very least, the absolute timings prove that it is possible to have excellent performance on some out-of-the-box, run-of-the-mill hardware, such as developers’ laptops.

Notes

(1) The command was:

$ time ./create_docbase.sh DCTM0 2>&1 | tee /tmp/dctm0.log

with the settings for the stem DCTM0 defined in global_properties.

(2) The command for n concurrent instantiations was, for instance:

$ for i in {1..n}; do
   time ./instantiate_docbase.sh DCTM0 ORA_DCTM${i} 2>&1 | tee /tmp/ORA_DCTM${i}.log &
done

with the settings for the stems *DCTM1 to *DCTM10 defined in global_properties.

(3) After an increase of the number of cores from 4 to 20 and of the memory from 10 GiB to 20 GiB in oracle-8, and of the number of cores from 8 to 20 and of the memory from 20 GiB to 30 GiB in cs2. Also, make sure there is enough disk space in both oracle-8 and cs2, e.g. for 10 repositories, 50 GiB for the Oracle data files and 50 GiB for the documentum files.

Management commands

Now that creating docbases has become such an expeditious and mundane task (docbases are a commodity at this point), chances are that they’ll start piling up on the same machine, and therefore we need a way to manage them easily. At a minimum, such functions must be able to:

  • switch among the installed docbases
  • show the currently selected docbase and its instantiation parameters
  • start/stop/query the status of each of a docbase’s components
  • restart a docbase, a docbroker or a method server
  • navigate to folders of interest in a docbase’s directory tree
  • view/tail a docbase’s logs of interest
  • connect to the currently selected docbase and to its underlying database using native tools
  • cleanly remove an instantiated docbase or all of them at once

Like swr(), glimpsed previously, several functions and bash aliases have been created to fulfill the above needs.

Here they are, defined in each created or instantiated repository’s environment file ${dctm_root}/${repo}/${repo}.env:

# function shr, for SHow Repository;
# displays the currently selected or given repository's info;
# Usage:
# shr[repository_name]
shr()

# alias whr for WHich Repository to show the currently selected repository;
alias whr=shr

# function sur, for StartUp Repository, to start a repository's all processes;
# it depends on function swr above;
# Usage:
# sur [repository_name]
# when invoked without parameters, the currently selected repository environment is started up;
# when invoked with a parameter, it becomes the current selected repository and its environment is started up;
sur()

# function sdr, for ShutDown Repository, to stop a repository's all processes;
# it depends on function swr above;
# Usage:
# sdr [repository_name]
# when invoked without parameters, the currently selected repository environment is shut down;
# when invoked with a parameter, it becomes the currently selected repository and its environment is shut down;
sdr()

# function str, for STatus Repository, to show the status of a repository;
# it depends on functions swr;
# Usage:
# rst [repository_name]
# when invoked without parameters, the currently selected repository environment is queried;
# when invoked with a parameter, it becomes the currently selected repository and its status is queried;
rst()

# bounced, for bounce Docbase, stop/start the repository;
alias bounced

# bounceb, for bounce Broker, stop/start the docbroker by minimizing the lost connectivity to the repository;
alias bounceb

# bouncems, for bounce Method Server;
alias bouncems

# cd to directories of interest;
alias croot
alias cdroot
alias cdctm
alias cdba
alias clog

# view files/logs of interest;
# less server.ini
alias lconf

# less repository’s log;
alias llog

# less tomcat’s catalina.out;
alias lcat

# less the method server log;
alias lms

# tails logs of interest; also doable from within the less utility with the F command;
# tail the repository’s log;
alias tlog

# tails tomcat’s catalina.out;
alias tcat

# tails the method server log;
alias tms

# interactive utilities;
# for the repositories;
alias iiapi
alias iidql

# for PostgreSQL
alias iisql
alias ipsql

# for Oracle;
alias isqlp

# function rmr, for Remove Repository, to remove a repository by wiping off its working directory;
# it depends on functions swr and uses sdr defined above;
# Usage:
# rmr [repository_name]
# when invoked without parameters, the currently selected repository environment is removed;
# when invoked with a parameter, it becomes the current selected repository and is removed;
rmr()

Several global functions are also defined in ~/.profile:

# alias lsr for LS Repository;
# list the existing, instanciated repositories;
alias lsr

# function swr, for SWitch Repository;
# tool to switch between instanciated repositories on the same machine and ${dctm_root};
# current working directory is moved to the selected repository's ${DOCUMENTUM};
# Usage:
# swr [repository_name]
# when invoked without parameters, the currently sourced repository's environment is displayed, without first refreshing it;
#
swr()

# function sura, for Start Up Repository All;
# start all the existing, instantiated repositories;
# Usage:
#    sura
#
sura()

# function sdra, for Shut Down Repository All;
# shut down all existing, instantiated repositories;
# Usage:
#    sdra
#
sdra()

# function shra, for Show Repository All;
# show all the existing, instantiated repositories;
# Usage:
#    shra
#
shra()

# function stra, for STatus Repository All;
# show the status of all existing, instantiated repositories;
# Usage:
#    stra
#
stra()

# function rmra, for RM Repository All;
# remove all existing, instantiated repositories;
# Usage:
# rmra
#
rmra()
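
As an illustration of how thin some of these helpers can be, here is a hedged sketch of what the lsr alias and the sura() function might look like, based only on their descriptions above (the actual implementations are more thorough, e.g. with error handling):

# hedged sketch, not the actual implementation;
# lsr: list the instantiated repositories, i.e. the directories under ${dctm_root};
alias lsr='ls -1 ${dctm_root}'

# sura: start up all the instantiated repositories;
sura() {
   local repo
   for repo in $(lsr); do
      swr ${repo}      # select the repository, i.e. source its environment;
      sur              # start all its processes;
   done
}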

swr() [repo] is the first function to invoke when starting a new shell where no repository is selected yet. It will switch to the docbase repo. If unsure about repo, a list of the existing instantiated repositories can be obtained through the alias lsr, which simply lists the directories in ${dctm_root}, e.g.:

$ lsr
repo02
repo03
repo04
repo05
repo06
repo07
repo08
repo09
repo10
repo12

$ swr repo12
Switching to repository repo12 ...
ACTIVE_DOCBASE=repo12
ACTIVE_ROOT=/u01/dctm/repo12
CLASSPATH=/u01/dctm/repo12/documentum/product/22.4/dctm-server.jar:/u01/dctm/repo12/documentum/dctm.jar:/u01/dctm/repo12/documentum/dfc/bcprov-jdk15on.jar:/u01/dctm/repo12/documentum/config:/u01/dctm/repo12/documentum/product/22.4/bin:
DM_HOME=/u01/dctm/repo12/documentum/product/22.4
DM_JMS_HOME=/u01/dctm/repo12/documentum/tomcat9.0.65
DOCUMENTUM=/u01/dctm/repo12/documentum
DOCUMENTUM_SHARED=/u01/dctm/repo12/documentum
JAVA_HOME=/u01/dctm/repo12/documentum/java64/JAVA_LINK
LD_LIBRARY_PATH=/u01/dctm/repo12/documentum/product/22.4/bin:/u01/dctm/repo12/documentum/dfc:/u01/dctm/repo12/documentum/fulltext/dsearch:/u01/dctm/repo12/postgresql/lib:
PATH=/u01/dctm/repo12/documentum/java64/JAVA_LINK/bin:/u01/dctm/repo12/documentum/product/22.4/bin:/u01/dctm/repo12/documentum/dba:/u01/dctm/repo12/postgresql/bin:/u01/dctm/repo12/documentum/tomcat9.0.65/bin:/u01/dctm/repo12/java/amazon-corretto-11.0.17.8.1-linux-x64/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
POSTGRESQL_HOME=/u01/dctm/repo12/postgresql
PS1=\[\033[0;32m\]\u@\h:\[\033[36m\][\w][repo12]\[\033[0m\] $ 
dfcpath=/u01/dctm/repo12/documentum/dfc
Repository repo12's environment is:
Active docbase name is                  : repo12
Active docbase id is                    : 100013
Active docbase service name is          : repo12
Active docbase service port is          : 50120
Active docbase host is                  : cs2
Active docbase version                  : 22.4.0000.0120
Active docbase root directory is        : /u01/dctm/repo12
Active installer owner is               : dmadmin
Active installer password is            : dmadmin
Active docbase docbroker name           : docbroker010
Active docbase docbroker port           : 1511
Active docbase http server base port    : 9800
Active docbase http server memory       : "-Xms512m -Xmx1024m"
JAVA_HOME                               : /u01/dctm/repo12/documentum/java64/JAVA_LINK
JAVA_VERSION                            :
openjdk version "11.0.17" 2022-10-18 LTS
OpenJDK Runtime Environment Corretto-11.0.17.8.1 (build 11.0.17+8-LTS)
OpenJDK 64-Bit Server VM Corretto-11.0.17.8.1 (build 11.0.17+8-LTS, mixed mode)
dctm_root                               : /u01/dctm
Scripts'dir is                          : /mnt/shared/blog-seed-docbase
CLASSPATH                               : /u01/dctm/repo12/documentum/product/22.4/dctm-server.jar:/u01/dctm/repo12/documentum/dctm.jar:/u01/dctm/repo12/documentum/dfc/bcprov-jdk15on.jar:/u01/dctm/repo12/documentum/config:/u01/dctm/repo12/documentum/product/22.4/bin:
DM_HOME                                 : /u01/dctm/repo12/documentum/product/22.4
DOCUMENTUM                              : /u01/dctm/repo12/documentum
DM_JMS_HOME                             : /u01/dctm/repo12/documentum/tomcat9.0.65
PATH                                    : /u01/dctm/repo12/documentum/java64/JAVA_LINK/bin:/u01/dctm/repo12/documentum/product/22.4/bin:/u01/dctm/repo12/documentum/dba:/u01/dctm/repo12/postgresql/bin:/u01/dctm/repo12/documentum/tomcat9.0.65/bin:/u01/dctm/repo12/java/amazon-corretto-11.0.17.8.1-linux-x64/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LD_LIBRARY_PATH                         : /u01/dctm/repo12/documentum/product/22.4/bin:/u01/dctm/repo12/documentum/dfc:/u01/dctm/repo12/documentum/fulltext/dsearch:/u01/dctm/repo12/postgresql/lib:
Active database owner is                : repo12
Active database password                : repo12
Active database connection string       : repo12
Active database server port             : 5422
DATABASE_TYPE                           : Postgresql

dmadmin@cs2:[/u01/dctm/repo12][repo12] $ 

It may help to compare the functions defined in ~/.profile with static methods in OO programming, whereas the functions and aliases defined in ${dctm_root}/${repo}/${repo}.env are similar to instance methods: the former apply to any repository and the latter to the current one, unless a different one is explicitly specified. Note that if one of ~/.bash_profile or ~/.bash_login is present, ~/.profile is ignored; Ubuntu tends to use ~/.profile whereas Red Hat derivatives use ~/.bash_profile, so adapt this as needed in your linux variant.

The repository-related functions take optional docbase names, not stems, because the latter don’t apply to existing docbases and are exclusively used in the create_docbase.sh and instantiate_docbase.sh scripts.

These functions are mostly useful when several repositories have been instantiated on the same machine and one needs to switch quickly among them (see ~/.profile’s swr() above) to perform certain tasks, such as showing the current one (shr()), starting it up (sur()), shutting it down (sdr()), checking the status of its services (str()), restarting services (bounced(), bounceb() and bouncems()), checking the logs and configuration (lconf, [lt]log, [lt]cat, [lt]ms), navigating directories of interest (croot, cdroot, cdctm, cdba, clog), launching utilities (iiapi, iidql, iisql, ipsql, isqlp), or wiping it off altogether (rmr()). In effect, removing an instantiated repository should not be any more complicated than instantiating a new one. rmr() also takes care of the Oracle schema if one is used; to do so, it first transfers the script rm_user.sh to the Oracle host and then invokes it. This script removes the database user ${ACTIVE_DATABASE_OWNER} and drops its tablespaces and datafiles in a safe manner, or so. For PostgreSQL, no such script is needed since the embedded database is removed when the ${ACTIVE_ROOT} directory tree is deleted.
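
As an idea of what rm_user.sh does on the Oracle side, dropping a schema and its dedicated tablespaces typically comes down to statements like the following; this is only a hedged sketch with placeholder names, not the actual script:

-- hedged sketch, run as a DBA account on the Oracle host; names are placeholders;
drop user repo_owner cascade;
drop tablespace repo_owner_data  including contents and datafiles;
drop tablespace repo_owner_index including contents and datafiles;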

The alias bounceb that stops/starts a docbroker is quite clever and deserves some explanation. When a docbroker is stopped, the docbase that targets it becomes unreachable and stays so until its next projection period, even up to several seconds after the docbroker is back on-line. In order to minimize that unavailability delay, an iapi session is first started while listening to a temporary named pipe, connects to the docbase and is sent to the background; then, the docbroker is restarted, a reinit command is written into the pipe and gets processed by iapi so the docbase projects immediately to the freshly started docbroker. Finally, the quit command is sent to iapi through the named pipe to exit the tool, and the pipe is removed. This works because iapi keeps its connection to the docbase even when the docbroker is stopped. In effect, the docbroker is only used to allow clients to establish a session with the docbases that project onto it; once connected, they don’t need it any more. In that respect, the docbroker has a function similar to an Oracle listener.
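
Based on the description above, a simplified, hedged sketch of the bounceb logic could look like this; the credential variables are hypothetical and the docbroker start/stop script names should be checked against your installation:

# hedged sketch of the bounceb logic, not the actual alias;
pipe=/tmp/bounceb_$$
mkfifo ${pipe}

# keep an iapi session connected to the repository, reading its commands from the pipe;
iapi ${ACTIVE_DOCBASE} -U${dm_user} -P${dm_password} < ${pipe} &
exec 3> ${pipe}          # keep the pipe's write end open;

# restart the docbroker (standard scripts from ${DOCUMENTUM}/dba, adapt the names if needed);
dm_stop_Docbroker
dm_launch_Docbroker

# force an immediate re-projection to the freshly started docbroker, then leave iapi;
echo "reinit,c" >&3
echo "quit" >&3
exec 3>&-
rm -f ${pipe}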

Conclusion

While the original idea was simply to speed up the creation of docbases, some of its deliverables can be used as stand-alone utilities, e.g. the script create_docbase.sh, the management aliases and functions.

Those scripts can be used as the basis or greatest common divisor of a more ambitious tool to streamline the installation and operation of Documentum software, e.g. xPlore, DA, etc., on any environment such as VMs, linux or OCI containers.

However useful, the scripts’ main purpose is only to give a practical example of the approach. They were written with a clear context in mind: an individual with full control of their infrastructure (e.g. a powerful enough laptop with virtualization software such as Proxmox), which is not always the case, especially with commercial software. Thus, be prepared to do some adaptations to suit different, likely more rigid contexts, and fix a few bugs at the same time too.

Many enhancements are possible, e.g. use OCI containers, convert any docbase installation into a seed one, etc.; there is enough work to keep one busy during those long, freezing winter nights. Incidentally, it is not impossible that some of them will be covered in future articles, so watch this space.


A quick repository creation utility (part IV)

Sat, 2023-01-28 05:51

This is part IV of the article; see Part I here, Part II here, and Part III here.

Instantiating a new docbase

See dbi services.

Roughly speaking, the instantiation process uncompresses a seed, or model, docbase from its compressed tar ball and runs the Migration Utility against it to adapt its name and id. Those actions are performed by the script instantiate_docbase.sh. Its usage is:

$ instantiate_docbase.sh seed_repo new_repo

where seed_repo and new_repo are stems pointing to the settings of the respective docbase defined in the global_parameters file. The easy-to-remember syntax is the same as that of the cp command, for a similar concept.

Instantiation is always performed locally so if it needs to be done on a machine different from the one where the model docbase was created, the new machine needs first to be enrolled (see script pre-requisites.sh) and the script invoked from there.

The procedure looks in the ${DOCUMENTUM} tree and changes all the references to the seed docbase so they match the target docbase, including the settings such as port numbers, docbroker names, host names, http server memory sizing, sym links, etc, and then tackles the metadata’s big part. For this to be successful, make sure the seed docbase’s definition in the global_parameter file has not changed and still matches its installation in the compressed tar ball.
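
Conceptually, the renaming part of that pass is not much more than a recursive search and replace; a heavily simplified, hedged sketch (hypothetical variable names, and the real script handles many more cases, e.g. renamed files, sym links and binary files) could be:

# hedged sketch of the renaming pass;
seed=repo01; new=repo02
grep -rl --include='*.ini' --include='*.properties' --include='*.cfg' "${seed}" ${DOCUMENTUM} | \
while read f; do
   sed -i "s/\b${seed}\b/${new}/g" "${f}"
done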

For an Oracle RDBMS, a new schema is created with the required tablespaces and grants, the dump file that was produced by the exp utility is copied onto the database’s host, slightly edited (references to the seed’s schema are changed to the new instance’s schema) and imported into the new schema. Next, as the creation of some views fails with “IMP-00008: unrecognized statement in the export file:” errors, the view creation statements (almost 1000 of them) are extracted from the dump file (although this is a binary file, the SQL statements it contains can be extracted as text and edited), corrected and re-executed separately, as shown here:

# extract the view creation statements from export file and create the views;
strings ${seed_dmp_file} | sed -r 's/^[ \t$]*(\(*)(SELECT)(.+)/\1\2\3/I' | gawk '{if (sub(/^CREATE VIEW /, "CREATE or REPLACE VIEW ", $0)) {print; getline; pos = 2000; if (length($0) > pos) {while (substr($0, --pos, 1) != ","); print substr($0, 1, pos); print substr($0, pos + 1)} else print; print "/"}}END{print "quit"}' >> create_views.sql

# count the views to be created;
grep "CREATE or REPLACE VIEW" create_views.sql | wc -l
# 977 in CS 22.2;
# 983 in CS 22.4;

# create the views;
sqlplus ${ACTIVE_DATABASE_OWNER}/${ACTIVE_DATABASE_PASSWORD}@${db_remote_connect_string} @create_views.sql

The view text needs to be reformatted to work around the weird legacy format of the dump file.

For good measure, all the new schema’s compilable objects are recompiled and the schema’s statistics computed. A verification of the schema’s object validity is performed:

set echo on
set pagesize 10000
set linesize 100
col object_name format a30
col object_type format a30
select object_name, object_type from user_objects where status = 'INVALID';
select object_name, object_type, status from user_objects;
quit
# output:
no rows selected
OBJECT_NAME                    OBJECT_TYPE                    STATUS
------------------------------ ------------------------------ -------
D_1F00271080000195             INDEX                          VALID
D_1F00271080000241             INDEX                          VALID
DMI_CHANGE_RECORD_S            TABLE                          VALID
D_1F002710800001A8             INDEX                          VALID
DMI_DD_ATTR_INFO_R             TABLE                          VALID
...
DM_INDEXES                     VIEW                           VALID
DM_RESYNC_DD_ATTR_INFO         VIEW                           VALID
DM_RESYNC_DD_TYPE_INFO         VIEW                           VALID
DMI_DD_ATTR_INFO_DDEN          VIEW                           VALID

1931 rows selected.
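
The recompilation and statistics gathering mentioned above boil down to two PL/SQL calls; here is a minimal sketch using the same connection variables as before (not necessarily the script's exact code):

# recompile the schema's invalid objects and refresh its optimizer statistics;
sqlplus -s ${ACTIVE_DATABASE_OWNER}/${ACTIVE_DATABASE_PASSWORD}@${db_remote_connect_string} <<eoq
exec dbms_utility.compile_schema(schema => user, compile_all => false)
exec dbms_stats.gather_schema_stats(ownname => user, cascade => true)
quit
eoq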

For a PostgreSQL RDBMS, things are much less complicated. The server port is adapted and the server started. Next, the names of the schema, database and user are adapted and the ownership of each of the schema’s objects is displayed as a verification. Example of output:

  datname  
-----------
 postgres
 template1
 template0
 repo02
(4 rows)

                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 dmadmin   | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 repo02    | No inheritance                                             | {}

You are now connected to database "repo02" as user "dmadmin".
ALTER SCHEMA
You are now connected to database "postgres" as user "dmadmin".
ALTER DATABASE
NOTICE:  MD5 password cleared because of role rename
ALTER ROLE
ALTER ROLE
ALTER ROLE

You are now connected to database "repo03" as user "repo03".
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 dmadmin   | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 repo03    | No inheritance                                             | {}

 current_user 
--------------
 repo03
(1 row)

                                                version                                                
-------------------------------------------------------------------------------------------------------
 PostgreSQL 15.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit
(1 row)

  datname  
-----------
 postgres
 template1
 template0
 repo03
(4 rows)

 schemaname |              objectname               | len | objectowner | objecttype 
------------+---------------------------------------+-----+-------------+------------
 repo03     | adm_turbo_size                        |  14 | repo03      | TABLE
 repo03     | creation_date_message_id_ui           |  27 | repo03      | INDEX
 repo03     | d_1f00271280000001                    |  18 | repo03      | INDEX
 repo03     | d_1f00271280000002                    |  18 | repo03      | INDEX
...
 repo03     | dm_federation_log                     |  17 | repo03      | TABLE
 repo03     | dm_federation_r                       |  15 | repo03      | TABLE
 repo03     | dm_federation_rp                      |  16 | repo03      | VIEW
 repo03     | dm_federation_rv                      |  16 | repo03      | VIEW
 repo03     | dm_federation_s                       |  15 | repo03      | TABLE
...
 repo03     | status_no_idx                         |  13 | repo03      | INDEX
(1932 rows)

The above output shows the repo03 account and its roles, a database named repo03 and docbase’s objects owned by user repo03. Also, repo03 was instantiated from repo02 as the seed.
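
A minimal sketch of that rename sequence follows; the placeholder variables (NEW_DB_PORT, SEED_DOCBASE, NEW_DOCBASE, NEW_DB_PASSWORD) are illustrative, not the script's actual ones:

# rename the schema while connected to the seed-named database;
psql -p ${NEW_DB_PORT} -d ${SEED_DOCBASE} -c "alter schema ${SEED_DOCBASE} rename to ${NEW_DOCBASE};"
# rename the database and the role from the postgres maintenance database, then reset the password
# (renaming a role clears its MD5 password, hence the NOTICE in the output above);
psql -p ${NEW_DB_PORT} -d postgres <<eoq
alter database ${SEED_DOCBASE} rename to ${NEW_DOCBASE};
alter role ${SEED_DOCBASE} rename to ${NEW_DOCBASE};
alter role ${NEW_DOCBASE} password '${NEW_DB_PASSWORD}';
eoq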

As for Oracle, the connectivity is adapted here too, but instead of SQL*Net’s tnsnames.ora, it is the installation owner’s ~/.odbc.ini file that is updated to match the target docbase’s settings, along with the reference to it in the server.ini file. That concludes the RDBMS part.

The new instance’s password is also saved using documentum’s dm_encrypt_password utility, and tested using dmdbtest.
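
A hedged sketch of those two commands follows; the dm_encrypt_password flags and the dmdbtest parameter values are given from memory and as illustrations only, so check them against the documentation and the actual settings:

# save the new schema's password the Documentum way, then verify the connectivity;
${DM_HOME}/bin/dm_encrypt_password -docbase ${NEW_DOCBASE} -rdbms -encrypt ${NEW_DB_PASSWORD}
${DM_HOME}/bin/dmdbtest -Dpostgres -S${NEW_DOCBASE} -U${NEW_DOCBASE} -P${NEW_DB_PASSWORD}

At this point, the Migration Utility step can be started.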

The Migration Utility

The Migration Utility is located in ${DOCUMENTUM}/product/22.2/install/external_apps/MigrationUtil. This tool takes its instructions from the xml configuration file ${DM_HOME}/install/external_apps/MigrationUtil/config.xml. The following relevant attributes’ values are edited to match the target docbase’s settings:

  • the type of RDBMS, the listener’s (for Oracle) or the server’s port (for PostgreSQL)
  • the database host name
  • the install owner’s password
  • the docbase’s name and database’s password
  • the old and new docbase ids
  • the new docbase server name (this is not a machine name but the server object’s one in dm_server_config)
  • the docbase’s machine name
  • the installation owner’s name

Here is an example of the config.xml file generated while instantiating repo02 into bookstore:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
   <comment>Database connection details</comment>
   <entry key="dbms">postgres</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
   <entry key="tgt_database_server">localhost</entry> <!-- Database Server host or IP -->
   <entry key="port_number">6000</entry> <!-- Database port number -->
   <entry key="InstallOwnerPassword">dmadmin</entry>
   <entry key="isRCS">no</entry> <!-- set it to yes, when running the utility on secondary CS -->
   <entry key="TomcatPath"></entry> <!-- Optional. set it to appropriate tomcat path if DM_JMS_HOME environment variable is not set -->
   <!-- <comment>List of docbases in the machine</comment> -->
   <entry key="DocbaseName.1">repo02</entry>
   <entry key="DocbaseName.2">Docbase2</entry>
   <entry key="DocbaseName.3"></entry>
   <!-- <comment>docbase owner password</comment> -->
   <entry key="DocbasePassword.1">bookstore</entry>
   <entry key="DocbasePassword.2">Docbase owner password2</entry>
   <entry key="DocbasePassword.3"></entry>
   <entry key="ChangeDocbaseID">yes</entry> <!-- To change docbase ID or not -->
   <entry key="Docbase_name">repo02</entry> <!-- has to match with DocbaseName.1 -->
   <entry key="NewDocbaseID">20000</entry> <!-- New docbase ID -->
   <entry key="ChangeServerName">yes</entry>
   <entry key="NewServerName.1">bookstore</entry>
   <entry key="NewServerName.2"> </entry>
   <entry key="ChangeDocbaseName">yes</entry>
   <entry key="NewDocbaseName.1">bookstore</entry>
   <entry key="NewDocbaseName.2"> </entry>
   <entry key="ChangeHostName">no</entry>
   <entry key="HostName">Old Host name </entry>
   <entry key="NewHostName">New Host name </entry>
   <entry key="ChangeInstallOwner">no</entry>
   <entry key="InstallOwner">Old Install Owner </entry>
   <entry key="NewInstallOwner"> New Install Owner </entry>
   <entry key="NewInstallOwnerPassword">New Install Owner password </entry>
   <entry key="DockerSeamlessUpgrade">no</entry>
   <entry key="PrimaryHost">Primary Host name </entry>
   <entry key="SecondaryHost">Secondary Host name </entry>
   <entry key="PrimaryServerConfig">Primary Server name </entry>
   <entry key="SecondaryServerConfig">Secondary Server name </entry>
   <entry key="DocbaseService">Docbase Service name </entry>
</properties>

Since the Migration Utility is a java application, it uses JDBC to connect to the database; thus, prior to launching it, a JDBC connection to the database is attempted in order to check the connectivity parameters. To this effect, a simple but generic utility has been specially written for this project, jdbc_tester_generic.java (see article A Generic JDBC Tester), located in ${scripts_dir}. Its usage is:

$ java -cp jdbc_driver_jar:path_to_the_tester jdbc_tester_generic jdbc_driver user_name password jdbc_url sql_statement

# Example for a PostgreSQL database:
$ export CLASSPATH=${dctm_software}/${postgresql_jdbc_package}:$CLASSPATH
$ java -cp $CLASSPATH:${scripts_dir} jdbc_tester_generic org.postgresql.Driver dmadmin xxx jdbc:postgresql://localhost:6000/bookstore "select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;"
The command-line parameters are:
arg 0 = org.postgresql.Driver
arg 1 = dmadmin
arg 2 = xxx
arg 3 = jdbc:postgresql://localhost:6000/bookstore
arg 4 = select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;
Loading driver [org.postgresql.Driver]:
JDBC Driver org.postgresql.Driver successfully loaded
Connecting to url [jdbc:postgresql://localhost:6000/bookstore] as user [dmadmin]:
Successful connection to database
Executing SQL query [select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;]:
Retrieving the result set:
now in postgresql db: 08/01/2023 19:00:53
JDBC connection successfully tested, quitting ...

# Example for an Oracle database;
$ ln -s ${ORACLE_HOME}/ojdbc8.jar ${ORACLE_HOME}/ojdbc.jar
$ export CLASSPATH=${ORACLE_HOME}/ojdbc.jar:$CLASSPATH
# using Oracle thin:
$ java -cp $CLASSPATH:${scripts_dir} jdbc_tester_generic oracle.jdbc.driver.OracleDriver seed seed jdbc:oracle:thin:@//db:1521/pdb1 "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = seed
arg 2 = seed
arg 3 = jdbc:oracle:thin:@//db:1521/pdb1
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual
Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded
Connecting to url [jdbc:oracle:thin:@//db:1521/pdb1] as user [seed]:
Successful connection to database
Executing SQL query [select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual]:
Retrieving the result set:
now in Oracle db: 08/01/2023 18:51:51
JDBC connection successfully tested, quitting …

# using Oracle thick, OCI-based drivers and the SQL*Net alias:
$ java -cp $CLASSPATH:${scripts_dir} jdbc_tester_generic oracle.jdbc.driver.OracleDriver seed seed jdbc:oracle:oci:@pdb1 "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"

The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = seed
arg 2 = seed
arg 3 = jdbc:oracle:oci:@pdb1
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual
Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded
Connecting to url [jdbc:oracle:oci:@pdb1] as user [seed]:
Successful connection to database
Executing SQL query [select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual]:
Retrieving the result set:
now in Oracle db: 08/01/2023 18:52:37
JDBC connection successfully tested, quitting ...

This utility works for any RDBMS as long as its JDBC drivers are provided. Note how much simpler the syntax for Oracle JDBC Thick is compared to Oracle Thin’s or PostgreSQL’s, as its required parameters are actually read from the tnsnames.ora using the service name as an index, which allows reusing an existing configuration instead of extracting each required piece of information from it. Unfortunately, but understandably since it is Oracle-specific, it is not used by the Migration Utility in favor of the more universal, pure Java approach that works with any RDBMS.

Finally, the Migration Utility is launched and the requested changes are performed. Each one has its own log file so it is easy to check them for any error. As an example, here is the log for the change docbase id step:

Reading config.xml from path: config.xml
Reading server.ini parameters
Retrieving server.ini path for docbase: repo02
Found path: /u01/dctm/bookstore/documentum/dba/config/repo02/server.ini
Set the following properties:
Docbase Name:repo02
Docbase ID:10002
New Docbase ID:20000
DBMS: postgres
DatabaseName: bookstore
SchemaOwner: bookstore
ServerName: localhost
PortNumber: 6000
DatabaseOwner: bookstore
-------- PostgreSQL JDBC Connection Testing ------
jdbc:postgresql://localhost:6000/bookstore
Connected to database
Utility is going to modify Objects with new docbase ID
Fri Jan 06 13:52:47 UTC 2023
-----------------------------------------------------------
Processing tables containing r_object_id column
-----------------------------------------------------------
-------- PostgreSQL JDBC Connection Testing ------
jdbc:postgresql://localhost:6000/bookstore
Connected to database
-----------------------------------------------------------
Update the object IDs of the Table: dm_replication_events with new docbase ID:4e20
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------
Processing ID columns in each documentum table
Column Name: job_id
Update the ObjectId columns of the Table: with new docbase ID
-----------------------------------------------------------
Update the object IDs of the Table: dm_state_extension_s with new docbase ID:4e20
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------
Processing ID columns in each documentum table
Column Name: r_object_id
Update the ObjectId columns of the Table: with new docbase ID
...

where ID 4e20 is the hexadecimal representation of the bookstore’s docbase id 20000. We can deduce that all the columns containing ids of all the tables of the bookstore schema are scanned for the old value as a substring and replaced with the new value as a substring. It is understandable that these changes can put quite a heavy load on the database and are quite touchy since any failure can corrupt its integrity. Hopefully, they are done in an efficient manner. However, in our case with an out of the box seed docbase, there are very few objects to change so this is not a concern and the utility completes very quickly.
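
Purely as an illustration of that deduction (this is not the utility's actual code), the substitution on a single id column could look like this in PostgreSQL, with 002712 and 004e20 being the zero-padded hexadecimal forms of docbase ids 10002 and 20000:

# illustrative only: rewrite the docbase id portion of r_object_id in one table;
psql -p 6000 -d bookstore -c "
update dm_sysobject_s
set    r_object_id = overlay(r_object_id placing '004e20' from 3 for 6)
where  substring(r_object_id from 3 for 6) = '002712';"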

Although the resulting docbase is perfectly usable, the migration utility leaves lots of things unchanged, e.g. the dm_location’s root directories on disk (they still contain the seed docbase’s hexadecimal id), the name of the index tablespace (if Oracle is used), the ACLs’ owner (i.e. the ACLs’ domain is still the seed’s database owner; this does not jeopardize the database but is not justified in an empty docbase; actually, by keeping them unchanged, potential applications that refer to acl_domain/object_name pairs are protected), etc. Thus, the documentum services are started and a final corrective pass is executed to normalize all those remaining traces of the prior incarnation.

A few jobs, such as the dm_ContentWarning, dm_StateOfDocbase and dm_ConsistencyChecker, are forced to be executed as soon as possible so some more post-instantiation checks can be done. The db-crawler script presented in the article db-crawler, a database search utility for Documentum is also launched in the background at the end against the Oracle schema to check if there are still remains of the former seed docbase. Likewise, for PostgreSQL schemas, the pg_dump utility is launched and the resulting text file is grepped for any occurrence of the former seed docbase which is basically what the db-crawler does, except much quicker.

Like with create_docbase.sh, and for the same reason, multiple concurrent instantiations may be launched simultaneously, as follows:

# Example:
for i in {1..10}; do
   time ./instantiate_docbase.sh SEED "DCTM${i}" &
done
where SEED and DCTM1_* to DCTM10_* are the respective docbases' stems "pointing" to the settings defined in the global_parameters file.

# Another example:
./instantiate_docbase.sh DCTM2 bookstore &
./instantiate_docbase.sh SEED sales &
./instantiate_docbase.sh PUBLICATIONS research_papers &

See Part V here


A quick repository creation utility (part III)

Sat, 2023-01-28 05:49

This is part III of the article, see Part I here and Part II here.

Creating the seed docbase

See dbi services for its source. Hereafter, only some salient steps are presented; refer to the script for the details.

As its name implies, the script create_docbase.sh is used to create a docbase optionally to be used later as a seed. Its usage is:

$ ./create_docbase.sh [stem]

where the optional stem in global_parameters defaults to DCTM0 and points to the docbase’s required settings. It is executed as the installation owner, usually dmadmin.

After the global_parameters file is sourced with the given stem, the installation directory is created and all the needed software packages (the JDK, the Oracle Instant Client and SQL*Plus if the Oracle RDBMS is chosen, the PostgreSQL source tarball global_parameters.${postgresql_package} otherwise) are downloaded into the directory global_parameters.${dctm_software}, but only if they are not there yet. Once downloaded, the packages are expanded and the software installed and configured as advised in the OpenText documentation. For example, the anonymous algorithm for secure communication with the content server is reinstated in the JDK from its default disabled state.
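
As an illustration of that JDK tweak, and assuming the JDK 11 layout and that the anon and NULL entries appear verbatim in the jdk.tls.disabledAlgorithms property (check the actual java.security file first), the re-enabling could be done like this:

# remove the "anon" and "NULL" entries from the JDK's disabled TLS algorithms so the
# anonymous cipher suites used for the content server's secure mode become available again;
sed -i 's/, anon, NULL//' ${JAVA_HOME}/conf/security/java.security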

If the selected RDBMS is PostgreSQL, its installation is completely customized. In effect, the platform’s standard PostgreSQL package is normally installed system-wide by root with pieces scattered in several locations, e.g. /var/lib and /usr/lib. As we want to embed this RDBMS with the repository and under the dmadmin ownership, this is not convenient; we want all the pieces in one directory tree root. In order to make this possible, we need to compile the source code specifying the installation directory ${ACTIVE_DOCBASE_ROOT}/postgresql. The binary and data files will all be installed under this directory, which will later be included in the compressed tar ball when the script completes executing. As the compilation takes several minutes and even though it only needs to be done once, a pre-compiled tarball global_parameters.${postgresql_custom_package} is available so this step can be skipped, unless global_parameters.${postgresql_compile} is set to “yes“.
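
A minimal sketch of that self-contained build follows; the exact version and configure options are assumptions, refer to the script for the real ones:

# build PostgreSQL confined to the docbase's own tree so it travels with the tar ball;
cd ${dctm_software}/postgresql-15.1
./configure --prefix=${ACTIVE_DOCBASE_ROOT}/postgresql
make -j $(nproc) && make install
# initialize the cluster inside the same tree;
${ACTIVE_DOCBASE_ROOT}/postgresql/bin/initdb -D ${ACTIVE_DOCBASE_ROOT}/postgresql/data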

After the PostgreSQL binaries have been extracted or compiled, the ODBC connectivity is configured. Surprisingly, Documentum’s configuration program imperatively needs a connect string named postgres and defined in /etc/odbc.ini, as if it were hard-coded. Apparently, it reads this file explicitly instead of relying on the ODBC API, which looks for any specified connect string first in ~/.odbc.ini and lastly in /etc/odbc.ini. Also, it requires the running PostgreSQL server to be listening on the default port 5432 instead of any port defined in the ini file. Strangely enough, and this is visible when using the configuration program interactively, ~/.odbc.ini is correctly used and validated but only until the program effectively starts creating the docbase, at which point it switches to /etc/odbc.ini and the weird behavior. The documentation mentions changes to be done in ODBC.INI but does not say where it is (it’s in /etc but ~/.odbc.ini should preferably be used). Also, it incorrectly says that the connect string should be [MyPostgres] whereas it could be anything, although only [postgres] works at this stage. All this suggests that the ODBC part is not well mastered by the developers of the installation program, or that it was ported without much attention from some other O/S. Fortunately, these idiosyncrasies are abandoned after the docbase is created and all the parameters set in the installation owner’s ~/.odbc.ini are honored, e.g. custom connect string and database server port.

Here is how /etc/odbc.ini must look for the docbase creation to start:

[postgres]
Description = PostgreSQL connection to postgres
Driver = PostgreSQL Unicode
Database = postgres
Servername = ${db_server_host_alias}
UserName = ${dctm_owner}
# hard-coded in the server configuration tool;
Port = 5432

and here is the final ~/.odbc.ini file defined after the docbase creation, including some of the recommended database settings:

[${db_connect_string}]
Description = PostgreSQL connection to ${ACTIVE_DOCBASE}
Driver = PostgreSQL Unicode
Database = ${ACTIVE_DOCBASE}
Servername = ${db_server_host_alias}
UserName = ${dctm_owner}
Port = ${ACTIVE_DB_SERVER_PORT}
Protocol = $(echo ${postgresql_custom_package} | sed -E 's/.+-([0-9]+).*$/\1/')
ReadOnly = No
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
UpdateableCursors = Yes

The shell variables get expanded when the file is saved.

Another requirement with the PostgreSQL RDBMS is that the installer demands that a directory named db_${ACTIVE_DOCBASE}_dat.dat for the datafile exist beforehand; if it is not found, the installer fails.
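
Creating it beforehand is a one-liner; the parent directory below is an assumption and stands for whatever database data location is supplied to the configuration program:

# pre-create the datafile directory the configuration program expects;
mkdir -p ${db_data_dir}/db_${ACTIVE_DOCBASE}_dat.dat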

When using the Oracle RDBMS, a global_parameters.${dctm_root}/docbase/oracle/instantclient_21_7/network/admin/tnsnames.ora file gets created with the following content:

${db_connect_string} =
   (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)
                             (HOST = ${db_server_host_alias})(PORT = ${db_listener_port}
                             )
                  )
                  (CONNECT_DATA =
                     (SERVER = DEDICATED)
                     (SERVICE_NAME = ${db_service_name})
                  )
   )

Next, the data and index tablespaces are created and finally the schema account with the required grants. Here is a summary of the database requirements for Oracle:

select TABLESPACE_NAME, ALLOCATION_TYPE from dba_tablespaces order by 1;
TABLESPACE_NAME          ALLOCATION_TYPE
------------------------------ ---------------------------
DCTM1_DATA            SYSTEM
DCTM1_INDEX           SYSTEM
...
SEED_DATA             SYSTEM
SEED_INDEX            SYSTEM

SQL> select * from dba_sys_privs where grantee = upper('dctm1');
GRANTEE                        PRIVILEGE                      ADMIN_OPT COMMON
------------------------------ ------------------------------ --------- ---------
DCTM1                          CREATE VIEW                    NO        NO
DCTM1                          CREATE ANY VIEW                NO        NO
DCTM1                          CREATE SEQUENCE                NO        NO
DCTM1                          CREATE PROCEDURE               NO        NO
DCTM1                          CREATE TABLE                   NO        NO

SQL> select * from dba_role_privs where grantee = upper('dctm1');
GRANTEE                        GRANTED_ROLE                   ADMIN_OPT DEFAULT_R COMMON
------------------------------ ------------------------------ --------- --------- ---------
DCTM1                          CONNECT                        NO        YES       NO
DCTM1                          RESOURCE                       NO        YES       NO
DCTM1                          SELECT_CATALOG_ROLE            NO        YES       NO

SQL> select TABLESPACE_NAME, STATUS, ALLOCATION_TYPE, SEGMENT_SPACE_MANAGEMENT from user_tablespaces;
TABLESPACE_NAME                STATUS                      ALLOCATION_TYPE             SEGMENT_SPACE_MANA
------------------------------ --------------------------- --------------------------- ------------------
DCTM1_DATA                     ONLINE                      SYSTEM                      AUTO
DCTM1_INDEX                    ONLINE                      SYSTEM                      AUTO
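
For reference, those database preparation steps amount to something like the following sketch; tablespace sizes and OMF usage are assumptions, and the statements mirror the grants and roles listed above rather than the script's exact code:

# create the docbase's tablespaces and schema account with the required grants;
sqlplus ${db_sys_account}/${db_sys_password}@${db_remote_connect_string} as sysdba <<eoq
create tablespace ${ACTIVE_DOCBASE}_data datafile size 500M autoextend on;
create tablespace ${ACTIVE_DOCBASE}_index datafile size 200M autoextend on;
create user ${ACTIVE_DOCBASE} identified by ${ACTIVE_DOCBASE} default tablespace ${ACTIVE_DOCBASE}_data;
alter user ${ACTIVE_DOCBASE} quota unlimited on ${ACTIVE_DOCBASE}_data quota unlimited on ${ACTIVE_DOCBASE}_index;
grant connect, resource, select_catalog_role to ${ACTIVE_DOCBASE};
grant create table, create view, create any view, create sequence, create procedure to ${ACTIVE_DOCBASE};
quit
eoq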

After the database steps have been completed, the documentum’s installer program serverSetup.bin is launched with the following generated on-the-fly response file:

INSTALLER_UI=silent
KEEP_TEMP_FILE=true
common.installOwner.password=dmadmin
SERVER.SECURE.ROOT_PASSWORD=root

# documentum binaries;
SERVER.DOCUMENTUM=${ACTIVE_ROOT}/documentum
APPSERVER.SERVER_HTTP_PORT=${ACTIVE_SERVER_HTTP_PORT}
APPSERVER.SECURE.PASSWORD=tomcat

# java;
PATH_TO_JAVA=$(ls -d ${ACTIVE_ROOT}/java/amazon-corretto-11*-linux-x64)

This step takes several minutes to complete but it does not matter; as said previously, the real optimization occurs in the instantiation step, not in the installation of the binaries or the creation of the model docbase; those steps are done only once (unless it is done outside the context of this project).

Method servers are dedicated to each docbase and receive a distinct base port number defined as global_parameters.${stem_SERVER_HTTP_PORT}. Later during the instantiation, their memory sizing will be configured as set in global_parameters.${stem_SERVER_HTTP_MEMORY}.

It has been noticed that the method server creates several cache directories (e.g. felix-cache and activemq-data) in the directory it is started from; to move them out of the way and prevent cluttering, the command cd ${DM_JMS_HOME}/temp is later inserted in startMethodServer.sh before tomcat’s startup.sh script is invoked.
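
One possible way to apply that change (the exact location of startMethodServer.sh and the surrounding layout are environment-dependent assumptions):

# prepend a cd to the JMS temp directory so felix-cache and activemq-data land there
# instead of whatever directory the method server happens to be started from;
sed -i '/startup.sh/i cd ${DM_JMS_HOME}/temp' ${DM_JMS_HOME}/bin/startMethodServer.sh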

After the binaries have been installed, a connectivity check using documentum’s dmdbtest is performed to confirm that the database is reachable via SQL*Net for Oracle or via ODBC for PostgreSQL. This test differs from the ones previously done directly at the database level in that it is performed by documentum. Its output resembles the following one:

if [[ "${ACTIVE_RDBMS}" == "oracle" ]]; then
   ${DOCUMENTUM}/product/*/bin/dmdbtest -Dxx -S${db_connect_string} -U${ACTIVE_DOCBASE} -P${ACTIVE_DOCBASE}
else
   ${DOCUMENTUM}/product/*/bin/dmdbtest -Dpostgres -Spostgres -U${dctm_owner} -Pxxx
fi
Database successfully opened.
Test table successfully created.
Test view successfully created.
Test index successfully created.
Insert into table successfully done.
Index successfully dropped.
View successfully dropped.
Database case sensitivity test successfully past.
Table successfully dropped.

Note that the above test needs an extra step when using Oracle RDBMS because although the sqlplus executable is installed in Instant Client’s ${ORACLE_HOME} and ${ORACLE_HOME} is in the ${PATH}, the installer expects it in the non-existing ${ORACLE_HOME}/bin. So, that directory must be created beforehand and sqlplus symlinked there. Evidently, sqlplus’ expected location is hard-coded and the ${ORACLE_HOME} environment variable is either not passed to the java programs invoked by the installer script, or ignored when looking for it. This quirk can be half-forgiven as it is a documented requirement.
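
The workaround itself is a simple one-liner; with the Instant Client, sqlplus sits directly under ${ORACLE_HOME}:

# give the installer the sqlplus it expects under ${ORACLE_HOME}/bin;
mkdir -p ${ORACLE_HOME}/bin
ln -sf ${ORACLE_HOME}/sqlplus ${ORACLE_HOME}/bin/sqlplus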

Since several docbases can potentially be instantiated and run on the same machine, an environment file named docbase_name.env for each of them is prepared to allow switching easily between them. This file notably defines the ${DOCUMENTUM} and ${PATH} variables, invokes documentum’s own dm_set_server_env.sh, and should be sourced prior to working with a given docbase. To simplify this, the bash function swr() (which stands for SWitch Repository) has been defined in the installation owner’s ~/.profile. Other useful management functions such as sur() and sdr() have also been defined in docbase_name.env, see down below.
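
As an illustration, swr() can be as simple as the sketch below; the environment file's location follows this article's naming conventions and may differ from the actual scripts:

# switch the current shell to the given docbase's environment;
swr() {
   # usage: swr docbase_name;
   local repo=$1
   if [[ -f ${dctm_root}/${repo}/${repo}.env ]]; then
      . ${dctm_root}/${repo}/${repo}.env
   else
      echo "no environment file found for docbase ${repo}"
   fi
}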

Next, another response file is prepared to install the docbroker. Docbrokers too are dedicated to each docbase and defined in the global_parameters file with parameters stem_DOCBROKER_NAME and stem_DOCBROKER_PORT. Make sure there is no port conflict when creating/instantiating new docbases. Here is an example of this response file:

KEEP_TEMP_FILE=true
PATH_TO_JAVA=$(ls -d ${ACTIVE_ROOT}/java/amazon-corretto-11*-linux-x64)
INSTALLER_UI=silent
common.aek.algorithm=AES_128_CBC

# docbroker;
SERVER.CONFIGURATOR.BROKER=TRUE
SERVER.DOCBROKER_ACTION=CREATE
SERVER.DOCBROKER_PORT=${ACTIVE_DOCBROKER_PORT}
SERVER.DOCBROKER_NAME=${ACTIVE_DOCBROKER_NAME}
SERVER.PROJECTED_DOCBROKER_HOST=$(hostname)
SERVER.PROJECTED_DOCBROKER_PORT=${ACTIVE_DOCBROKER_PORT}
SERVER.DOCBROKER_CONNECT_MODE=native
SERVER.USE_CERTIFICATES=false

Make sure the docbroker name does not contain any non-alphanumeric characters as the installer is quite restrictive in this regard. For example, the name docbroker_01 is rejected because it contains an underscore.

Documentum’s dm_launch_server_config_program.sh is then invoked with that response file and the docbroker gets created and started along with the method server.

Next, another response file is prepared for the repository. That one is quite large, so please refer to the code here on dbi services’ github.

After the configuration program is invoked with the above response file and completes, the documentum processes are shut down and, if using Oracle, the resulting seed’s database schema will be extracted using the traditional exp utility. Although the exp and imp utilities have been superseded by the data pump for a while now, they do their job well enough for repositories’ schemas. Of course, the more modern data pump can also be used instead under certain conditions, but it may need some preliminary work on the database’s side to accommodate it. The same applies if using the Instant Client’s exp and imp utilities when these tools’ version differs from the database’s. For example, one of our test databases is a v12.1.0.1.0 one but there are no downloadable exp and imp tools with that version for the Instant Client as they started being included only in v12.2.0.x (cf. https://download.oracle.com/otn/linux/instantclient/122010/instantclient-basic-linux.x64-12.2.0.1.0.zip and https://download.oracle.com/otn/linux/instantclient/122010/instantclient-tools-linux.x64-12.2.0.1.0.zip). And since those versions differ from the database’s, a PL/SQL package must be executed so they can be used. All this is quite complicated, so to simplify the data extraction and import tasks, we will use the exp and imp utilities bundled with the RDBMS; those will always work as-is so no additional download nor configuration is necessary. However, those tools need to be invoked from within a remote session, hence the settings global_parameters.${db_server_host_*}, unless those steps are delegated to the DBAs. As already discussed, Oracle RDBMS is a proprietary, closed-source and quite complex software. Starting with the licensing, it requires some careful planning. Sometimes, databases are even shared between several applications. Therefore, it is mostly managed by specialized people in an organization and installed on dedicated machines. For those reasons, the Oracle part takes up a large chunk of the procedure dedicated to the RDBMS and, for faster and more agile configuration, the leaner but sufficient PostgreSQL RDBMS is preferred. As previously written, the PostgreSQL source is compiled so it can be embedded with the docbase that uses it, and the whole binaries and database get copied into the tar ball, which is a considerable simplification at a measly cost of 276 MB.

After the compressed tar ball is produced, it is renamed to ${ACTIVE_DOCBASE}_${ACTIVE_RDBMS}.tgz and moved to the ${scripts_dir} if one with the same name does not exist there yet (otherwise, the move is not performed), ready to be instantiated as needed.

A useful by-product of this project is the generic script create_docbase.sh, which creates any docbase whose parameters are set in global_parameters, although such a docbase won’t benefit from the optimized provisioning time. Once created, this docbase can be used stand-alone, or taken a snapshot of and used as a seed for future instantiations, or both. It can even be uncompressed in several places on the same machine or on different machines as distinct clones, provided resource conflicts are resolved and the usual well-known post-cloning adjustments are applied. The script can create several docbases to be used as models for different purposes after further customizations; edit the script as required.

It is possible to create (or instantiate, see later) as many docbases as needed on the machine global_parameters.${dctm_machine} by specifying their stems on the command-line. So, if several docbases need to be created, even concurrently, define them all in global_parameters and invoke the creation script as follows:

# Example:
$ for i in {1..10}; do
   ./create_docbase.sh "DCTM${i}" &
done
where DCTM1_* to DCTM10_* are the respective docbases' stems "pointing" to the settings defined in the global_parameters file.

# Another example with unrelated stems:
./create_docbase.sh bookstore &
./create_docbase.sh product_catalog &
./create_docbase.sh research_papers &

Concurrent docbase creation is another useful by-product of the project and is possible because each created docbase is normally stand-alone (i.e. no dependency on other services such as the docbroker or the method server) and has a distinct ${DOCUMENTUM} directory under global_parameters.${dctm_root}/docbase_name.

Granted, when using PostgreSQL, since the docbase creation demands a server listening on port 5432, there may be a short period of time when a server gets commonly used by several instances of dm_launch_server_config_program.sh. Also, multiple database server instances get started on port 5432, with only the first one succeeding. But this should not be an issue since the databases and their locations are distinct. This does not happen at instantiation time as the PostgreSQL server is always started with the custom port.

The additional cost is 2 Gb of disk space since the documentum binaries are not shared between the repositories (i.e. each repository has its own ${DOCUMENTUM} and its own copy of the binaries), but disk space is so cheap these days that this is not prohibitive. When using Oracle, the above concurrency behavior does not apply.

As mentioned before, create_docbase.sh sets up a sourceable script to define a working docbase’s environment and several useful functions, global_parameters.${dctm_root}/docbase/docbase.env. In particular, it defines:

${ACTIVE_DOCBASE}, ${ACTIVE_ROOT}, ${JAVA_HOME}, ${DOCUMENTUM} and ${PATH}, and sources documentum’s own dm_set_server_env.sh. ${ACTIVE_DOCBASE} is the name of the current repository and ${ACTIVE_ROOT} the root directory that contains that docbase’s ${JAVA_HOME}, ${DOCUMENTUM} and the postgres root directory if that RDBMS is used. ${ACTIVE_ROOT} is also equal to global_parameters.${dctm_root}/docbase.

In addition to those environment variables, several management functions are defined too; see further down below.

For more comfort, the environment file also changes the shell’s prompt to something nicer, e.g.:

dmadmin@cs2:[/u01/dctm/repo03][repo03] $

The prompt shows the currently logged user and the machine it is logged on, the current working directory, and the current active docbase, which is useful to prevent manipulating the wrong docbase when several of them coexist in the machine.

See Part IV here


A quick repository creation utility (part I)

Sat, 2023-01-28 05:43

Some things never change: gravity, human foolishness, and the length of content servers’ installation time. Indeed, have you ever noticed how long it takes to create a new Documentum repository? I’m not talking of the time spent interactively behind a keyboard to type commands or answer prompts from the installation and configuration tools, which is just tedious, but of the time actually spent by those tools to deliver the new repository once all the settings have been provided. Whether it happens in a VM or a container, interactive or automated, the final approach is the same slow one. Most of it is spent inside the database creating tables (414 tables), views (974 views) and indexes (536 indexes, including the implicit ones for non-nullity constraints), for a total of 1928 user objects in Content Server v22.2, and populating them; plus, initializing the new repository with system objects, which ultimately also affects the database. Comparatively, not so much time is spent copying and configuring files around, even though there are about 13’000 of them in CS v22.2. Clearly, if the database part could be sped up, it would make the whole process much less time-consuming. A lot of time, on the order of several minutes in total, is also spent waiting for processes to start or stop, e.g. the method server; removing these inefficient delays could also substantially reduce the overall installation time.

So, here is a challenge: is there a way to speed up the creation of a repository by optimizing some of its steps? As you can guess, there is, partly thanks to a tool Documentum has made available since Content Server v7.3, the Migration Utility.

In this article, I propose to use an out of the box docbase as a seed repository, copy it along with the Documentum binaries anywhere it is deemed necessary, on the same host or on a different one, and launch Documentum’s Migration Utility to change its name, docbase id, host name and/or server name as needed. The procedure, let’s call it henceforth docbase instantiation, can be repeated as many times as needed, even in parallel (which is normally not recommended by the vendor), whenever new, fresh, distinct repositories are necessary, either temporarily or permanently.

Besides the technical challenge, the use cases are aplenty: quickly instantiate a docbase to rehearse a migration, apply an update or patch procedure, deploy a development docbase, test a Documentum feature inside a fresh repository, test an application’s deployment, roll back some heavy customizations, unit testing or application testing, etc., basically anything that pertains to development and testing. To be fair, nothing precludes applying this procedure in production once one has gained sufficient confidence with the docbases churned out this way.

To whet your appetite, let’s just say that on my 8-year old, puffing and panting but sturdy laptop, instantiating a new repository out of the seed one with a new name and id takes less than 6 minutes, considerably less than the 25 minutes of a standard, even silent, creation on the same machine. On a much more recent 4-year old machine, with a lighter virtualization configuration (linux containers vs. VMs), it took 2.5 minutes, down from 15 minutes. On the same infrastructure but using a private and local PostgreSQL database instead of a remote Oracle one, this time drops down to less than 1 minute. Did I get your attention yet? If so, read on!

Of course, the absolute timings don’t mean much as they are subject to the infrastructure in use but the ratio does: at least a remarkable 4-fold on old hardware, and a more than 15 times speed increase with a local PostgreSQL database on more recent hardware/infrastructure. Even on faster hardware, where the speed gain is less appealing, the automated instantiating procedure is still convenient thanks to its simplicity.

The seed docbase approach

So far, we used to clone repositories using long and daunting procedures (see e.g. Knowledge Base articles KB8966391, KB8715468, KB0528105, and KB0499567 at OpenText). While renaming a docbase is explained in OTX note KB7713200, before the Migration Utility there was no official in-place way to change a docbase id. Such a change must be applied to all the docbase’s objects’ metadata and to all the references to object ids in configuration files (e.g. in server.ini) on disk. The risk is to forget or misinterpret some embedded docbase id in some strings in the database or in disk files; thus, only the vendor could really propose an exhaustive and reliable way to do it. Without the Migration Utility, the only safe way at our disposal to do that is to first create an empty repository with the new name and id, and import in there the original repository’s documents, quite a complicated and boring process. Consequently, it is easier to just copy a whole docbase somewhere else and use it from there. Such clones are nonetheless quite practical as long as they don’t attempt to project to the same docbroker(s) (i.e. they don’t have any common docbroker) and their docbrokers are not listed in the same client’s dfc.properties file. A DFC client wishing to connect to any of the clones would either use a tailored dfc.properties file with the clone docbroker’s host as the sole occurrence of the clone docbase (plus, optionally, any other docbroker hosts of non-conflicting repositories) as discussed in the article Connecting to a Repository via a Dynamically Edited dfc.properties File (part I), or use the enhancement presented in the article Connecting to Repositories with the Same Name and/or ID, which requires some code editing and is therefore not possible in closed-source applications.

Experience shows that while stand-alone clones are acceptable at first, sooner or later they must be accessed together by the same client (e.g. some third-party service such as a pdf rendition service), and that’s where the name and id restrictions kick in. Failure to remember their common origin yields strange, hard to diagnose error messages, e.g. non-existing objects with ids seemingly coming out of thin air.

To remove all these limitations, the repository’s name and id have to be changed, which is the Migration Utility’s main purpose. Once done, as many clone repositories as needed can run on any machine, the same machine or different ones, and project to common docbrokers, or dedicated ones listed in the same client’s dfc.properties file. This enhancement was missed for so long that it is a real relief to finally have it, so kudos to OpenText.

Besides the docbase’s name and id, the Migration Utility also allows to change the host name, the owner name, server name and/or password, and all these from one single place. The most time-consuming part is the id change because each and every repository object has an id (or more) containing the 5 hexadecimal digits of the docbase id which must be corrected. However, in a freshly created docbase containing less than 1’800 objects, this step is completed quickly at the database level. The Migration Utility itself can be used against any repository, not necessarily an empty one but of course the more objects it contains (e.g. in a production repository), the longer it can potentially take to complete, with a considerable stress on the database if the update operation is transactional. Here, as we are starting with an empty docbase, the migration cannot go any faster and, in case of a failure, the instantiation procedure using the seed can be restarted confidently as many times as needed after performing some clean up to roll back the changes done so far.

In order to prevent distracting issues, the copied repository must run under the same O/S version as the original one, which won’t be a limitation if the host is the same as the seed’s (e.g. identical machine or a precisely configured O/S such as in containers); the RDBMS’s version can be more relaxed as Documentum only uses very basic functionalities from it (tables, views and indexes, and foreign keys, not much more, although Oracle’s index-organized tables were also used on a few occasions), but it must be the same RDBMS software as the seed docbase’s (i.e. either Oracle or PostgreSQL in both the seed and the copy; the other RDBMSs supported by Documentum, DB2 and SQLServer, were not tested). In this project, we worked in the following environments:

  • O/S

Oracle Linux 8.4 in a VirtualBox VM; also tested with Ubuntu Linux 22.0.4 (Jammy Jellyfish) in a linux container managed by Proxmox.

  • RDBMS

Oracle RDBMS v12.1.0.1.0 (12c) on a remote VM, the same database for both the seed and clone docbases but distinct schemas of course.

Also tested with Oracle RDBMS v21.3.0.0.0 in a linux container running Oracle Linux 8.6, and with PostgreSQL v15.1, the most recent as of this writing.

  • Database connectivity

Oracle InstantClient 21.7.0.0 and ODBC drivers v42.5.1 for PostgreSQL.

For the Migration Utility: Oracle JDBC drivers included with the InstantClient and JDBC v42.5.1 for PostgreSQL.

  • JDK

AWS Corretto JDK v11.0.14.

  • Content Server

Content Server v22.2 and v22.4.

Some of the above products’ versions are not explicitly mentioned in the system requirement guides, which does not mean they don’t work, only that they were not tested, which is understandable given the large number of possible products’ and versions’ combinations. In this project, we seized the opportunity to test the latest available version of each component, and to downgrade in case an issue was encountered, a way to stay on the bleeding edge of the platform.

Since a docbase consists of the Documentum binaries, configuration files, content files (collectively grouped under the “Documentum files”) and a database schema, each of these parts of a seed repository must be addressed. Once the seed docbase along with its docbroker and method server have been created and shut down, its logs and caches will be deleted, and the ${DOCUMENTUM} tree and the content files will be moved into a tar ball. When using an Oracle database, the docbase’s schema will be exported and the dump file added to the archive. When using a PostgreSQL database, the whole tree, including binaries and data, will be added to the archive. The compressed archive can then be stored in a convenient location (e.g. a network drive, a git or nexus repository, etc.) ready to be used by anyone, mostly developers or administrators. After that, the seed docbase can be removed from the host and from the database (if using Oracle) as it is no longer needed. Should it ever be needed again, it can be restored from the tar ball or instantiated out of itself since, unsurprisingly, the instantiation process is idempotent.

On the receiving end, the instantiating procedure will explode the tarball to another root directory, possibly on another machine, after having created an O/S account for the content server files’ owner. Next, if an Oracle database is used, it will create a db account for the new docbase and import the dumped data into the db schema. If a PostgreSQL database is used, no data import is necessary since the whole database, binaries included, has been extracted from the tarball; only some quick renaming of the database, schema and user is performed. Next, a few adjustments will be applied in the restored configuration files to match their new location and schema, and finally the Migration Utility will be run. A few final corrections will be applied and, optionally, a check of the db schema and of the files under ${DOCUMENTUM} too.

This approach implies that the Documentum binaries are considered part of the repository, just like its documents, which is a common situation in dedicated VMs and containers. One pro of this approach is that it takes care of the installation of the binaries as well: they are simply extracted from the tarball, a relief knowing how long the serverSetup.bin program can take to execute. When using PostgreSQL as the RDBMS, the database becomes part of the seed package, binaries and data, and is treated as if it were a component of the repository. Maybe one day, OpenText will propose an optional installation of a PostgreSQL database alongside the content server, just like it currently does with tomcat and jboss as the method servers; being open source, there are no legal deterrents in doing so.

A single parameter file, global_parameters, defines all the parameters for the creation of the seed docbase and its future instantiations. Actually, there is nothing special in a seed docbase, it is just a role; any docbase created by the create_docbase.sh script can later be used as a seed. If only the creation of a docbase is wanted, just by setting suitable parameters in that file, an automated installation of the binaries and a repository creation is possible the traditional – but slow – way. Even without the benefits of a full instantiation, this is already quite a useful by-product of the project.

The complete procedure consists of the following general steps:

1. Edit the global_parameters file and provide all the required settings for the machine that will host the repositories;

2. Enroll a machine for the seed docbase’s installation, see pre-requisites.sh;

3. Edit the global_parameters file and provide all the required settings for the repositories to be created, the seed and any instantiation of it to come;

4. Create a repository, archive its ${DOCUMENTUM} directory along with its extracted database schema or full database if using PostgreSQL, see script create_docbase.sh; this is done only once and the docbase will be the model – or seed – for all future instantiations;

5. For each docbase to instantiate, execute the script instantiate_docbase.sh with as parameters the docbase to use as the seed and the docbase to instantiate from it. The detailed parameters will be taken from global_parameters. If the docbases must be created on different hosts, enroll those hosts first using pre-requisites.sh;

Let’s now see all the above scripts. In order to save space, only code snippets are included in the current article. See dbi services github for the complete scripts.

The prerequisites

The script is accessible here: dbi services’ github.

Prior to running the scripts, the following conditions must be met:

1. An existing directory location to hold the global_parameters file and the scripts; at the start of the scripts’ execution, the current working directory will be changed there. This directory can be a mounted volume or it can be copied onto the machine where the seed docbase and/or the instantiations will be created; it will be referred to as global_parameters.${scripts_dir}. The notation global_parameters.${parameter_name} means the value of parameter_name as defined in the global_parameters file;

2. The global_parameters file edited to suit the needs (see next paragraph); that’s where all the parameters below are taken from;

3. An installation volume global_parameters.${binaries_root}. If it does not exist, it is created under /. If a mounted volume must be used, be sure to mount it first so the installation directory gets created there;

4. Download into global_parameters.${dctm_software} the Content Server from OpenText, e.g. documentum_server_22.4_linux64_oracle.tar and documentum_server_22.4_linux64_postgres.tar. The other packages get downloaded anonymously by create_docbase.sh as needed but the content server needs a logged in session at OpenText (and a contractual account).

If the PostgreSQL RDBMS is used, nothing is required at the database level since the scripts have a complete control over this locally installed software.

However, if the selected RDBMS is Oracle, the requirements are clearly more complicated. An existing database is required that is accessible through global_parameters.${db_server_host_ip_address} plus its service name (global_parameters.${db_service_name}) and listener port (global_parameters.${db_listener_port}); the InstantClient’s local tnsnames.ora will be created and filled in with that information. Also, an account is required on the database host, typically oracle (cf. global_parameters.${db_server_host_account}/global_parameters.${db_server_host_password}), to remotely launch the dump and load utilities, and transfer the dump file from/to the database host.

Moreover, the sys account’s credentials global_parameters.${db_sys_account}/global_parameters.${db_sys_password} are needed to manipulate the repositories’ schemas while connected using global_parameters.${db_remote_connect_string}.

Note that the server host account and the sys account are unnecessary if the steps to perform under those accounts are delegated to the DBAs, which is a must in some large organizations where personnel’s roles are very specialized. Therefore, some coordination with them is needed, which may make the whole process not as straightforward and fast as intended. The schemas’ dumps after the seed docbases are created, and their subsequent loads when instantiating new docbases, can also be done by the DBAs. The same applies to the schemas if they are created beforehand. Adapt the scripts as needed to the actual situation.

After the above conditions are satisfied where applicable, the script prerequisites.sh can be invoked as root to enroll the machine with the settings from global_parameters. It will perform the following actions:

1. Set the FQDN of the machine (global_parameters.${dctm_machine} and global_parameters.${dctm_domain}), and append it to /etc/hosts;

2. Set IP aliases for the content server machine (cs) and the database (global_parameters.${db_server_host_alias} for global_parameters.${db_server_host_ip_address});

3. Install sshpass; this program allows a non-interactive ssh connection without being prompted, by taking the password from an environment variable or the command-line and passing it to ssh. It is used in order to fully automate the scripts' execution, and only when the RDBMS is a supposedly remote Oracle database. If PostgreSQL is used, sshpass is not necessary since those databases are always local;

4. Install a few other utilities such as unzip, gawk and curl used by the scripts. Additionally, tcl and expect are also installed although they may not be necessary since the Documentum programs serverSetup.bin and dm_launch_server_config_program.sh are invoked non-interactively, i.e. during silent installations;

5. Create an installation owner account on the current machine (defined as global_parameters.${dctm_machine}), usually dmadmin, defined as global_parameters.${dctm_owner}. In order to simplify the scripting and to execute the Documentum root tasks, dmadmin is made a sudoer with no password required (a minimal sketch of that entry follows this list). This is not an issue as most of the time dmadmin runs on a fully dedicated VM or container and cannot hurt anybody except itself, which is already possible with no additional privilege. Anyway, the privilege can be revoked later once the procedure is completed;

6. Create the global_parameters.${binaries_root} sub-directory where the repositories will be created on the enrolled machine. Make sure the parent volume is large enough to contain the docbases to create locally;
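
For completeness, here is a minimal sketch of the passwordless sudo entry mentioned above for the installation owner:

# let the installation owner run root tasks without being prompted for a password;
echo "${dctm_owner} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/${dctm_owner}
chmod 440 /etc/sudoers.d/${dctm_owner}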

Again, points 1 to 5 depend on the organization; sometimes, the machines (e.g. VMs) are delivered already configured; sometimes the root account is a no-go or only usable under some strict conditions. Therefore, those steps are only applicable in private machines under complete control of an individual. Adapt the procedure as needed.

There are also a few commented out kernel settings related to limits as recommended by Documentum in the OpenText™ Documentum™ System Upgrade and Migration Guide for CS22.2. They may apply in VMs or non-virtualized environments. In our Proxmox linux container environment, some of them may only be set at the host’s level. Activate them as needed, although in a light developer’s environment those limits will probably never be hit.

See Part II here


VirtualBox on Mac Host: High CPU utilization for idle VM

Fri, 2023-01-27 15:44

Many people in the IT industry use VirtualBox as a Type-2 hypervisor to test software in virtual machines. I’m using it with Vagrant on my MacBook Pro 2019 (Intel, with Ventura 13.2 when writing this blog) to set up test cases for Oracle Databases, or use a Windows VM for tests of Windows programs. Since migrating to VirtualBox 7, it feels as if programs like VBoxHeadless (when starting the VM through Vagrant) or VirtualBoxVM (when starting my Windows VM) take lots of CPU resources on my Mac host even though the VM itself is idle.

Analysis

The issue specifically happens when running queries against the DB, which produce high load. After the load has finished, I still see my host CPU busy for minutes, but the virtual CPUs in the VM are idle.

To get more information about which threads actually consume CPU time on macOS, the tool spindump can be quite useful. Let’s start with an idle VM after booting it through Vagrant:

$ vagrant up
getting Proxy Configuration from Host...
Bringing machine 'oracle-19c6-vagrant' up with 'virtualbox' provider...
==> oracle-19c6-vagrant: Clearing any previously set forwarded ports...
==> oracle-19c6-vagrant: Clearing any previously set network interfaces...
==> oracle-19c6-vagrant: Preparing network interfaces based on configuration...
    oracle-19c6-vagrant: Adapter 1: nat
==> oracle-19c6-vagrant: Forwarding ports...
    oracle-19c6-vagrant: 1521 (guest) => 1521 (host) (adapter 1)
    oracle-19c6-vagrant: 22 (guest) => 2222 (host) (adapter 1)
==> oracle-19c6-vagrant: Running 'pre-boot' VM customizations...
==> oracle-19c6-vagrant: Booting VM...
==> oracle-19c6-vagrant: Waiting for machine to boot. This may take a few minutes...
    oracle-19c6-vagrant: SSH address: 127.0.0.1:2222
    oracle-19c6-vagrant: SSH username: vagrant
    oracle-19c6-vagrant: SSH auth method: private key
==> oracle-19c6-vagrant: Machine booted and ready!
==> oracle-19c6-vagrant: Configuring proxy environment variables...
==> oracle-19c6-vagrant: Configuring proxy for Yum...
[oracle-19c6-vagrant] GuestAdditions 7.0.6 running --- OK.
==> oracle-19c6-vagrant: Checking for guest additions in VM...
==> oracle-19c6-vagrant: Setting hostname...
==> oracle-19c6-vagrant: Mounting shared folders...
    oracle-19c6-vagrant: /vagrant => /Users/cbl/Vagrant/oracle/vagrant-boxes/OracleDatabase/19.x
==> oracle-19c6-vagrant: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> oracle-19c6-vagrant: flag to force provisioning. Provisioners marked to run always will still run.
$ 

So far everything looks good. The program VBoxHeadless does not take much CPU resources on my Mac. I can use spindump to see what my process is doing. It samples the program VBoxHeadless for 10 seconds:

$ sudo spindump -file VirtualBoxVM-spindump-idle.txt -onlytarget VBoxHeadless
$ grep "cpu time" VirtualBoxVM-spindump-idle.txt
...
  Thread 0x32d19    Thread name "EMT-0"    1001 samples (1-1001)    priority 31 (base 31)    cpu time 0.188s (398.4M cycles, 107.5M instructions, 3.71c/i)
  Thread 0x32d1a    Thread name "EMT-1"    1001 samples (1-1001)    priority 31 (base 31)    cpu time 0.674s (1302.9M cycles, 312.2M instructions, 4.17c/i)
...

The threads “EMT-0” and “EMT-1” are my virtual CPUs in the VM.

After starting an Oracle DB in my VM, the VM is idle:

But on my Mac I do see between 30% and 40% utilization of a Core:

My spindump shows the following:

$ grep "cpu time" VirtualBoxVM-spindump-DB-idle-40percent.txt
...
  Thread 0x32d19    Thread name "EMT-0"    1001 samples (1-1001)    priority 31 (base 31)    cpu time 2.878s (6.1G cycles, 1727.6M instructions, 3.51c/i)
  Thread 0x32d1a    Thread name "EMT-1"    1001 samples (1-1001)    priority 31 (base 31)    cpu time 0.870s (1795.8M cycles, 538.1M instructions, 3.34c/i)
...

To see more information I sampled the process VBoxHeadless in Activity Monitor:

Unfortunately that does not help much. Running dtruss on the process VBoxHeadless produces many “invalid user access” errors:

$ ps -ef | grep VBoxHeadless | grep -v grep
272972148 34870  1838   0  9:52PM ??        33:19.36 /Applications/VirtualBox.app/Contents/MacOS/VBoxHeadless --comment oracle-19c6-vagrant --startvm 084f3493-f531-4492-b8dc-09e3bf232948 --vrde config
$ sudo dtruss -d -e -f -p 34870
Password:
dtrace: system integrity protection is on, some features will not be available
...
dtrace: error on enabled probe ID 1713 (ID 173: syscall::read:return): invalid user access in action #5 at DIF offset 0
dtrace: error on enabled probe ID 1712 (ID 961: syscall::write_nocancel:return): invalid user access in action #5 at DIF offset 0
dtrace: error on enabled probe ID 1712 (ID 961: syscall::write_nocancel:return): invalid user access in action #5 at DIF offset 0
dtrace: error on enabled probe ID 1712 (ID 961: syscall::write_nocancel:return): invalid user access in action #5 at DIF offset 0
dtrace: error on enabled probe ID 1744 (ID 353: syscall::select:return): invalid user access in action #5 at DIF offset 0
...

As the first line indicates: “system integrity protection is on” and hence I do not get the data I need. It would require a reboot of the Mac into recovery mode to change that and disable SIP for dtrace.

Unfortunately the VBox.log also does not have more info.

After producing load on my VM (both cores 100% utilized) and letting the VM become idle again, I still see the process VBoxHeadless utilizing 2 cores on the host. I.e. my VM is idle:

But my host CPU still utilizes 2 Cores for almost 1 minute:

After googling for this behavior I found some remarks about enabling the High Precision Event Timer, which improved the situation for other users. So I stopped my VM and changed it to use the High Precision Event Timer:

$ VBoxManage showvminfo "oracle-19c6-vagrant" | grep -i hpet
HPET:                        disabled
$ VBoxManage modifyvm "oracle-19c6-vagrant" --hpet on
$ VBoxManage showvminfo "oracle-19c6-vagrant" | grep -i hpet
HPET:                        enabled

After booting the VM and running my load test again, the Host CPUs became idle (i.e. back to 30-40% for VBoxHeadless) after around 10 secs. So the process recovers much faster compared to the default setting:

The VirtualBox High Precision Event Timer setting is documented as follows:

 --hpet=on | off

    Enables or disables a High Precision Event Timer (HPET) that can replace a legacy system timer. This feature is disabled by default. Note HPET is supported on Windows versions starting with Vista. 

Enabling HPET for my Windows VM also had a very positive effect. The VM runs more stably and produces less load on the host as well.

REMARK: With hpet=on I do see the host process VBoxHeadless for an idle VM being around 10% more utilized compared to hpet=off. It does not matter in my case, but I could reduce this for an Oracle DB server by e.g. following the steps here.
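
If several VMs are concerned, the same setting can be applied in a small loop; a quick sketch (the VMs must be powered off for modifyvm to succeed):

# list all registered VMs and enable HPET on each of them;
VBoxManage list vms | awk -F'"' '{print $2}' | while read -r vm; do
   VBoxManage modifyvm "$vm" --hpet on
done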

Summary

Running VirtualBox VMs on a Mac host may benefit from enabling the High Precision Event Timer. The VMs recover faster from high load and run more stably. Unfortunately, tracing high CPU-load issues of VirtualBox processes is difficult because of the enabled System Integrity Protection (SIP).

L’article VirtualBox on Mac Host: High CPU utilization for idle VM est apparu en premier sur dbi Blog.

A generic jdbc tester (part III)

Fri, 2023-01-27 11:01

This is part III of the article. See Part I and Part II.

Firebird

First, let’s install the JDBC drivers for Firebird:

$ wget https://github.com/FirebirdSQL/jaybird/releases/download/v5.0.0/jaybird-5.0.0.java11.zip
$ unzip -x jaybird-5.0.0.java11.zip jaybird-5.0.0.java11.jar

Our Firebird database was installed embedded, i.e. locally without a server (this is functionally similar to SQLite). In this configuration, the JDBC drivers need the JNA classes, so download them too:

$ wget https://repo1.maven.org/maven2/net/java/dev/jna/jna-platform-jpms/5.13.0/jna-5.13.0.jar

Let’s invoke the tester:

$ java -cp jna-5.13.0.jar:jaybird-5.0.0.java11.jar:. jdbc_tester_generic org.firebirdsql.jdbc.FBDriver SYSDBA SYSDBA "jdbc:firebird:embedded:/path/to/database/sampledb.fb" "select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb\$database;"
The command-line parameters are:
arg 0 = org.firebirdsql.jdbc.FBDriver
arg 1 = SYSDBA
arg 2 = SYSDBA
arg 3 = jdbc:firebird:embedded:/path/to/database/sampledb.fb
arg 4 = select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;

Loading driver [org.firebirdsql.jdbc.FBDriver]:
JDBC Driver org.firebirdsql.jdbc.FBDriver successfully loaded

Connecting to url [jdbc:firebird:embedded:/media/sf_customers/dbi/odbc4gawk/sampledb.fb] as user [SYSDBA]:
Successful connection to database

Executing SQL query [select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;]:

Retrieving the result set:
now in firebird db: 2023-01-23 22:43:45.1910

JDBC connection successfully tested, quitting ...

No surprise with Firebird.

Admittedly, the above is not your typical configuration but the documentation lists lots of syntax variants to connect via JDBC when Firebird is configured with a server process. E.g.:

$ java -cp jaybird-5.0.0.java11.jar:. jdbc_tester_generic org.firebirdsql.jdbc.FBDriver SYSDBA SYSDBA "jdbc:firebird:/media/sf_customers/dbi/odbc4gawk/sampledb.fb?serverName=localhost&portNumber=3050" "select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;"

$ java -cp jna-5.13.0.jar:jaybird-5.0.0.java11.jar:. jdbc_tester_generic org.firebirdsql.jdbc.FBDriver SYSDBA SYSDBA "jdbc:firebird:native:inet4://locahost//path/to/database/sampledb.fb?serverName=localhost&portNumber=3050" "select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;"

$ java -cp jaybird-5.0.0.java11.jar:. jdbc_tester_generic org.firebirdsql.jdbc.FBDriver SYSDBA SYSDBA "jdbc:firebird:?serverName=localhost&portNumber=3050&databaseName=/path/to/database/sampledb.fb" "select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;"

$ java -cp jaybird-5.0.0.java11.jar:. jdbc_tester_generic org.firebirdsql.jdbc.FBDriver SYSDBA SYSDBA "jdbc:firebird:?databaseName=//localhost///path/to/database/sampledb.fb" "select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;"

$ java -cp jaybird-5.0.0.java11.jar:. jdbc_tester_generic org.firebirdsql.jdbc.FBDriver SYSDBA SYSDBA "jdbc:firebirdsql://localhost:3050/path/to/database/sampledb.fb" "select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;"

$ java -cp jaybird-5.0.0.java11.jar:. jdbc_tester_generic org.firebirdsql.jdbc.FBDriver SYSDBA SYSDBA "jdbc:firebirdsql://localhost/path/to/database/sampledb.fb" "select 'now in firebird db: ' || cast('NOW' as timestamp) from rdb$database;"

etc…

By the way, rdb$database, really? Could Firebird be a close relative of DEC’s RDB, the former RDBMS purchased by Oracle when DEC was on the brink of bankruptcy?

Microsoft SQL Server

The JDBC drivers for SQL Server can be downloaded by following this link https://go.microsoft.com/fwlink/?linkid=2222207 as there is apparently no direct link to the jar. Once the archive has been downloaded, the jar mssql-jdbc-11.2.3.jre17.jar must be extracted.
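
For example, if the downloaded archive is a zip (the archive name below is hypothetical, adapt it to what the link actually delivers):

$ unzip -j sqljdbc_11.2.3.0_enu.zip "*mssql-jdbc-11.2.3.jre17.jar"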

Let’s now invoke the tester:

$ java -cp mssql-jdbc-11.2.3.jre17.jar:. jdbc_tester_generic com.microsoft.sqlserver.jdbc.SQLServerDriver SA mypassord "jdbc:sqlserver://localhost:1433;databaseName=sampledb;encrypt=false;" "select concat('now in sqlsrver db: ', format(getdate(), 'dd/MM/yyyy HH:mm:ss')) as now"
The command-line parameters are:
arg 0 = com.microsoft.sqlserver.jdbc.SQLServerDriver
arg 1 = SA
arg 2 = mypassord
arg 3 = jdbc:sqlserver://localhost:1433;databaseName=sampledb;encrypt=false;
arg 4 = select concat('now in sqlsrver db: ', format(getdate(), 'dd/MM/yyyy HH:mm:ss')) as now

Loading driver [com.microsoft.sqlserver.jdbc.SQLServerDriver]:
JDBC Driver com.microsoft.sqlserver.jdbc.SQLServerDriver successfully loaded

Connecting to url [jdbc:sqlserver://localhost:1433;databaseName=sampledb;encrypt=false;] as user [SA]:
Successful connection to database

Executing SQL query [select concat('now in sqlsrver db: ', format(getdate(), 'dd/MM/yyyy HH:mm:ss')) as now]:

Retrieving the result set:
now in sqlsrver db: 23/01/2023 23:33:28

JDBC connection successfully tested, quitting ...

No surprise with SQL Server.

HSQLDB

As HSQLDB is an RDBMS written in Java, it already comes with its JDBC drivers, so no need to download them. We will just copy them from their installation directory into the current directory. The jar name is hsqldb-jdk8.jar.
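
For example (the installation path below is hypothetical, adapt it to your HSQLDB installation):

$ cp /opt/hsqldb/lib/hsqldb-jdk8.jar .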

Let’s invoke the tester:

$ java -cp hsqldb-jdk8.jar:. jdbc_tester_generic org.hsqldb.jdbcDriver sa sa jdbc:hsqldb:localhost/runtime,user=sa "select concat('now in hsqldb: ', TO_CHAR(current_timestamp, 'YYYY-MM-DD HH:MM:SS')) as now from INFORMATION_SCHEMA.SYSTEM_USERS;"
The command-line parameters are:
arg 0 = org.hsqldb.jdbcDriver
arg 1 = sa
arg 2 = sa
arg 3 = jdbc:hsqldb:localhost/runtime,user=sa
arg 4 = select concat('now in hsqldb: ', TO_CHAR(current_timestamp, 'YYYY-MM-DD HH:MM:SS')) as now from INFORMATION_SCHEMA.SYSTEM_USERS;

Loading driver [org.hsqldb.jdbcDriver]:
JDBC Driver org.hsqldb.jdbcDriver successfully loaded

Connecting to url [jdbc:hsqldb:localhost/runtime,user=sa] as user [sa]:
Successful connection to database

Executing SQL query [select concat('now in hsqldb: ', TO_CHAR(current_timestamp, 'YYYY-MM-DD HH:MM:SS')) as now from INFORMATION_SCHEMA.SYSTEM_USERS;]:

Retrieving the result set:
now in hsqldb: 2023-01-23 11:01:31

JDBC connection successfully tested, quitting …

No surprise with HSQLDB either.

We’re done with the relational databases and so far the tester has behaved like a charm. Let’s now tackle non-relational data sources.

MongoDB

MongoDB is a NoSQL database and this is reflected in its JDBC drivers: they don’t support the SQL-related commands sent by the Java SQL libraries, such as sqlGetResultSchema(), and therefore fail as soon as the tester attempts a connection, e.g.:

...
Loading driver [com.mongodb.jdbc.MongoDriver]:
JDBC Driver com.mongodb.jdbc.MongoDriver successfully loaded

Connecting to url [jdbc:mongodb://localhost:27017/sampledb] as user [sa]:
Exception occurred connecting to database: Connection failed.
java.sql.SQLException: Connection failed.
at com.mongodb.jdbc.MongoDriver.connect(MongoDriver.java:175)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
at jdbc_tester_generic.main(jdbc_tester_generic.java:43)
Caused by: java.util.concurrent.ExecutionException: com.mongodb.MongoCommandException: Command failed with error 59 (CommandNotFound): 'no such command: 'sqlGetResultSchema'' on server localhost:27017. The full response is {"ok": 0.0, "errmsg": "no such command: 'sqlGetResultSchema'", "code": 59, "codeName": "CommandNotFound"}
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at com.mongodb.jdbc.MongoConnection.testConnection(MongoConnection.java:510)
at com.mongodb.jdbc.MongoDriver.connect(MongoDriver.java:171)
... 3 more
Caused by: com.mongodb.MongoCommandException: Command failed with error 59 (CommandNotFound): 'no such command: 'sqlGetResultSchema'' on server localhost:27017. The full response is {"ok": 0.0, "errmsg": "no such command: 'sqlGetResultSchema'", "code": 59, "codeName": "CommandNotFound"}
at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)
…

Fortunately, there are commercial versions that map SQL statements to MongoDB’s NoSQL syntax. One of these is UnityJDBC. It is free to try for 30 days without limits and after that, it is limited to a maximum of 100 results (per query I guess !). It can also work with ServiceNow and Splunk. For our purpose, it is sufficient.

It can be downloaded from here: https://unityjdbc.com/download.php?type=mongodb.

To install it, launch the downloaded jar file:

$ java -jar UnityJDBC_Trial_Install.jar

Download also the logging API slf4j-api-2.0.6.jar and a logger such as slf4j-simple-2.0.6.jar or slf4j-jdk14-2.0.6.jar:

$ wget https://repo1.maven.org/maven2/org/slf4j/slf4j-api/2.0.6/slf4j-api-2.0.6.jar

$ wget https://repo1.maven.org/maven2/org/slf4j/slf4j-simple/2.0.6/slf4j-simple-2.0.6.jar

$ wget https://repo1.maven.org/maven2/org/slf4j/slf4j-jdk14/2.0.6/slf4j-jdk14-2.0.6.jar

Those are not required but if provided, they get rid of the warning:

WARNING: SLF4J not found on the classpath. Logging is disabled for the 'org.mongodb.driver' component

We will use an account sa/sa in database sampledb for the tests below but you are free to use whatever account in whatever database you have, of course. If there are no accounts that can be used, the sa one can be created thusly (example in sampledb, adapt as needed):

$ mongosh
test> show dbs
admin       41 kB
config    73.7 kB
local     81.9 kB
sampledb   516 kB
test      81.9 kB

# select our sampledb;
use sampledb

# list existing users;
db.getUsers()

# create sa in sampledb if needed;
db.createUser(
   {
     user: "sa",
     pwd: "sa",
     roles: [  ]
   }
)

# list existing users;
sampledb>  db.getUsers()
{
  users: [
    {
      _id: 'sampledb.sa',
      userId: UUID("ed042a6a-cd89-47de-a3e7-77589e015746"),
      user: 'sa',
      db: 'sampledb',
      roles: [],
      mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]
    }
  ],
  ok: 1
}

Those drivers support 2 syntaxes:

URL format:
jdbc:mongo://<serverName>/<databaseName>

MongoDB URL format:
jdbc:mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
OR:
jdbc:mongodb+srv://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]

Execute the tester with both syntaxes:

$ java -cp UnityJDBC/mongodb_unityjdbc_full.jar:slf4j-api-2.0.6.jar:slf4j-simple-2.0.6.jar:. jdbc_tester_generic mongodb.jdbc.MongoDriver sa sa jdbc:mongodb://localhost:27017/sampledb "select 'now in mongodb: ' || CURRENT_TIMESTAMP"
The command-line parameters are:
arg 0 = mongodb.jdbc.MongoDriver
arg 1 = sa
arg 2 = sa
arg 3 = jdbc:mongodb://localhost:27017/sampledb
arg 4 = select 'now in mongodb: ' || CURRENT_TIMESTAMP

Loading driver [mongodb.jdbc.MongoDriver]:
JDBC Driver mongodb.jdbc.MongoDriver successfully loaded

Connecting to url [jdbc:mongodb://localhost:27017/sampledb] as user [sa]:
[main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
[cluster-rtt-ClusterId{value='63d051525cbedc5285b70614', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:139}] to localhost:27017
[cluster-ClusterId{value='63d051525cbedc5285b70614', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:140}] to localhost:27017
[cluster-ClusterId{value='63d051525cbedc5285b70614', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=28536232}
Successful connection to database

Executing SQL query [select 'now in mongodb: ' || CURRENT_TIMESTAMP]:

Retrieving the result set:
now in mongodb: Tue Jan 24 22:00:28 CET 2023

JDBC connection successfully tested, quitting …

The log above was output by the slf4j-simple logger.

Let’s try also the second syntax but with an alternate logger this time:

$ java -cp UnityJDBC/mongodb_unityjdbc_full.jar:slf4j-simple-2.0.6.jar:slf4j-jdk14-2.0.6.jar:. jdbc_tester_generic mongodb.jdbc.MongoDriver unused unused jdbc:mongodb://sa:sa@localhost:27017/sampledb "select 'now in mongodb: ' || CURRENT_TIMESTAMP"
The command-line parameters are:
arg 0 = mongodb.jdbc.MongoDriver
arg 1 = unused
arg 2 = unused
arg 3 = jdbc:mongodb://sa:sa@localhost:27017/sampledb
arg 4 = select 'now in mongodb: ' || CURRENT_TIMESTAMP

Loading driver [mongodb.jdbc.MongoDriver]:
JDBC Driver mongodb.jdbc.MongoDriver successfully loaded

Connecting to url [jdbc:mongodb://sa:sa@localhost:27017/sampledb] as user [unused]:
Jan 24, 2023 10:46:23 PM com.mongodb.diagnostics.logging.SLF4JLogger info
INFO: Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
Jan 24, 2023 10:46:23 PM com.mongodb.diagnostics.logging.SLF4JLogger info
INFO: Opened connection [connectionId{localValue:1, serverValue:142}] to localhost:27017
Jan 24, 2023 10:46:23 PM com.mongodb.diagnostics.logging.SLF4JLogger info
INFO: Opened connection [connectionId{localValue:2, serverValue:141}] to localhost:27017
Jan 24, 2023 10:46:23 PM com.mongodb.diagnostics.logging.SLF4JLogger info
INFO: Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=23671572}
Successful connection to database

Executing SQL query [select 'now in mongodb: ' || CURRENT_TIMESTAMP]:

Retrieving the result set:
now in mongodb: Tue Jan 24 22:02:02 CET 2023

JDBC connection successfully tested, quitting ...

The log above was output in traditional JDK format.

The jdbc tester works fine with MongoDB too when JDBC drivers with SQL support are provided.

Excel

Like MongoDB, Excel spreadsheets are NoSQL, or more exactly neither SQL nor NoSQL, because they’re not even databases. However, a correspondence with a relational database can be established very simply: some drivers map the directory containing spreadsheet files to a server, spreadsheet files to databases, and sheets with their lines and columns to tables. The drivers from HXTT below don’t use the notion of a server for files but want the full path name in one piece as the database. Other drivers want just the dirname of the file and a SELECT against file_name.sheet_name. The JDBC drivers translate the SQL statements into operations applied to the spreadsheet to perform selections, projections, joins, etc… Let’s see if the tester can work with one of these unconventional drivers.

Several commercial JDBC drivers for Excel are available with a 30-day trial time. I picked the ones here http://www.hxtt.com/excel.html for no particular reason. Let’s download them:

$ wget http://www.hxtt.com/excel.zip
$ unzip excel.zip lib/Excel_JDBC30.jar

and try the tester:

$ java -cp lib/Excel_JDBC30.jar:. jdbc_tester_generic com.hxtt.sql.excel.ExcelDriver unused unused "jdbc:excel:////home/debian/jdbc_tester_generic/SampleWorkbook.xls" "select distinct top 1 concat('now in SampleWorkbook.xls: ', now()) from countries;"
The command-line parameters are:
arg 0 = com.hxtt.sql.excel.ExcelDriver
arg 1 = unused
arg 2 = unused
arg 3 = jdbc:excel:////home/debian/jdbc_tester_generic/SampleWorkbook.xls
arg 4 = select distinct top 1 concat('now in SampleWorkbook.xls: ', now()) from countries;

Loading driver [com.hxtt.sql.excel.ExcelDriver]:
JDBC Driver com.hxtt.sql.excel.ExcelDriver successfully loaded

Connecting to url [jdbc:excel:////home/debian/jdbc_tester_generic/SampleWorkbook.xls] as user [unused]:
Successful connection to database

Executing SQL query [select distinct top 1 concat('now in SampleWorkbook.xls: ', now()) from countries;]:

Retrieving the result set:
now in SampleWorkbook.xls: 2023-01-25 01:17:08.003

JDBC connection successfully tested, quitting ...

Those drivers look extremely capable. Unfortunately, they only display one column at a time; it may be because it is a trial version but I could not find any mention of this limitation in the somewhat messy documentation. Anyway, jdbc_tester_generic works well with these JDBC drivers for Excel too.

And that concludes the tests with non-relational databases.

Conclusion

As usual, what was supposed to be a quick hack and a by-product ended up being a lot more than that, with 9 data sources, RDBMS or NoSQL, all successfully tested. By factoring out the vendor-specific parameters, which vary a lot between vendors, jdbc_tester_generic has proved itself to be truly universal. I hope it will be useful to you as well.

L’article A generic jdbc tester (part III) est apparu en premier sur dbi Blog.

A quick repository creation utility (part II)

Fri, 2023-01-27 11:01

This is part II of the article, see Part I here

The global_parameters file

Here is an example of this file: dbi services.

Besides the location of the software packages and installation directory, this mandatory sourceable bash file contains all the settings required to create a seed docbase and any instantiation of it. Those settings have the following syntax:

stem_parameter-name=value

e.g.:
DCTM0_DOCBASE=seed
DCTM0_DOCBASE_ID=1
DCTM0_SERVICE_NAME=${DCTM0_DOCBASE}
DCTM0_SERVICE_PORT=50000
DCTM0_RDBMS=oracle
DCTM0_DATABASE_OWNER=${DCTM0_DOCBASE}
DCTM0_DATABASE_PASSWORD=${DCTM0_DOCBASE}
[[ ${DCTM0_RDBMS} == "postgres" ]] && DCTM0_DB_SERVER_PORT=5432
DCTM0_ROOT=${dctm_root}/${DCTM0_DOCBASE}
DCTM0_HOST=cs1
DCTM0_INSTALL_OWNER=${dctm_owner}
DCTM0_INSTALL_PASSWORD=${DCTM0_INSTALL_OWNER}
DCTM0_DOCBROKER_NAME=docbroker
DCTM0_DOCBROKER_PORT=1489
DCTM0_SERVER_HTTP_PORT=9080
DCTM0_SERVER_HTTP_MEMORY="-Xms512m -Xmx512m"

Here, a docbase named seed is defined with an id of 1, a TCP/IP port of 50000, etc. Its stem is DCTM0. Although it may look overly complicated, this prefix trick allows defining several docbases at once in global_parameters without parameter name clashes since the prefix, or stem, makes them unique. This way, a history of the docbases can also be kept in the same file. However, it is mostly useful in that it allows creating or instantiating several docbases at the same time, see later for more about this. For the curious ones, the stem idea comes from the Rexx programming language where stems are used to implement multi-dimensional arrays, trees, lists, records, etc., an original take on data structure syntax.

Of course, these indirections must be resolved to access the values; this is done in the function global_parameters.set_role() by using bash’s dereferencing syntax, e.g.:

local tmp=${stem}_DOCBASE_ID; tmp=${!tmp}
echo "${title} docbase id is : ${tmp}"

On line 1, the name of the variable of interest is computed, e.g. DCTM10_DOCBASE_ID for ${stem} set to DCTM10; next, this variable name is dereferenced (or expanded in bash’s parlance) into ${tmp} using the ${!var} notation.
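
As a tiny self-contained illustration of this indirect expansion (the values below are made up):

stem=DCTM10
DCTM10_DOCBASE_ID=100011
tmp=${stem}_DOCBASE_ID           # tmp holds the variable *name* DCTM10_DOCBASE_ID
echo "docbase id is : ${!tmp}"   # ${!tmp} expands to the *value*, i.e. 100011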

At any one time, we only need to know the docbase to be created or the seed docbase and the docbase to be instantiated into. Their respective settings will be stored in the variables ${ACTIVE_*} and ${SEED_*} after expansion.

global_parameters takes up to 2 parameters when invoked:

Usage
. global_parameters [seed_repo [new_repo]]

where:
seed_repo is the stem of the repository to be created or used as a seed;
it defaults to DCTM0 and is used in create_docbase.sh and instantiate_docbase.sh.
for creation, it points to the specs of the docbase to be created and that can be used later as a seed;
for instantiation, seed_repo is the stem of the repository to be taken as the seed during the instantiation and new_repo is the one of the docbase to instantiate.
all repositories must have their specs defined in this global_parameters file.

global_parameters is usually invoked by the scripts create_docbase.sh and instantiate_docbase.sh with the same command-line parameters they receive. There is no real need to invoke it directly since a tailored environment for each repository can be set after its instantiation using the more adequate function swr() (standing for SWitch Repository) defined in dmadmin’s ~/.profile by create_docbase.sh, more on this later. This also prevents polluting the environment with the settings of all the docbases defined in there. However, being a bash file, it also allows defining several useful functions:

function show_repo() stem [title]
function show_seed()
function show_active()
function set_role() (SEED | ACTIVE) stem

show_repo() stem [title] displays the parameters of the repository as dereferenced from the given stem, with title as an explanatory string, which may be empty. E.g.:

$ show_repo DCTM10 "stem DCTM10 defines:"
stem DCTM10 defines: docbase name is : repo10
stem DCTM10 defines: docbase id is : 100011
stem DCTM10 defines: docbase service name is : repo10
stem DCTM10 defines: docbase service port is : 50100
stem DCTM10 defines: docbase host is : cs2
stem DCTM10 defines: docbase root directory is : /u01/dctm/repo10
stem DCTM10 defines: docbase owner is : dmadmin
stem DCTM10 defines: installer password is : dmadmin
stem DCTM10 defines: docbase docbroker name : docbrokerrepo010
stem DCTM10 defines: docbase docbroker port : 1507
stem DCTM10 defines: RDBMS is : postgres
stem DCTM10 defines: database owner is : repo10
stem DCTM10 defines: database password is : repo10
stem DCTM10 defines: database server port is : 5500
stem DCTM10 defines: docbase http server port : 9760
stem DCTM10 defines: docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

$ show_repo DCTM1
docbase name is : repo01
docbase id is : 10001
docbase service name is : repo01
docbase service port is : 50010
docbase host is : cs2
docbase root directory is : /u01/dctm/repo01
docbase owner is : dmadmin
installer password is : dmadmin
docbase docbroker name : docbroker
docbase docbroker port : 1489
RDBMS is : oracle
database owner is : repo01
database password is : repo01
database server port is : N/A
docbase http server port : 9180
docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

show_repo() can be manually invoked to verify the repository settings before they are used by create_docbase.sh and instantiate_docbase.sh.

In addition to the defined stems, the roles SEED and ACTIVE are considered stems too but are set dynamically by set_role(). Thus:

$ set_role SEED DCTM1
SEED docbase name is : repo01
SEED docbase id is : 10001
SEED docbase service name is : repo01
SEED docbase service port is : 50010
SEED docbase host is : cs2
SEED docbase root directory is : /u01/dctm/repo01
SEED docbase owner is : dmadmin
SEED installer password is : dmadmin
SEED docbase docbroker name : docbroker
SEED docbase docbroker port : 1489
SEED RDBMS is : oracle
SEED database owner is : repo01
SEED database password is : repo01
SEED database server port is : N/A
SEED docbase http server port : 9180
SEED docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

$ show_repo SEED Seed
Seed docbase name is : repo01
Seed docbase id is : 10001
Seed docbase service name is : repo01
Seed docbase service port is : 50010
Seed docbase host is : cs2
Seed docbase root directory is : /u01/dctm/repo01
Seed docbase owner is : dmadmin
Seed installer password is : dmadmin
Seed docbase docbroker name : docbroker
Seed docbase docbroker port : 1489
Seed RDBMS is : oracle
Seed database owner is : repo01
Seed database password is : repo01
Seed database server port is : N/A
Seed docbase http server port : 9180
Seed docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

$ set_role ACTIVE DCTM10
ACTIVE docbase name is : repo10
ACTIVE docbase id is : 100011
ACTIVE docbase service name is : repo10
ACTIVE docbase service port is : 50100
ACTIVE docbase host is : cs2
ACTIVE docbase root directory is : /u01/dctm/repo10
ACTIVE docbase owner is : dmadmin
ACTIVE installer password is : dmadmin
ACTIVE docbase docbroker name : docbrokerrepo010
ACTIVE docbase docbroker port : 1507
ACTIVE RDBMS is : postgres
ACTIVE database owner is : repo10
ACTIVE database password is : repo10
ACTIVE database server port is : 5500
ACTIVE docbase http server port : 9760
ACTIVE docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

$ show_repo ACTIVE Active
Active docbase name is : repo10
Active docbase id is : 100011
Active docbase service name is : repo10
Active docbase service port is : 50100
Active docbase host is : cs2
Active docbase root directory is : /u01/dctm/repo10
Active docbase owner is : dmadmin
Active installer password is : dmadmin
Active docbase docbroker name : docbrokerrepo010
Active docbase docbroker port : 1507
Active RDBMS is : postgres
Active database owner is : repo10
Active database password is : repo10
Active database server port is : 5500
Active docbase http server port : 9760
Active docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

# same effect with:
$ . global_parameters DCTM1 DCTM10
SEED docbase name is : repo01
SEED docbase id is : 10001
SEED docbase service name is : repo01
SEED docbase service port is : 50010
SEED docbase host is : cs2
SEED docbase root directory is : /u01/dctm/repo01
SEED docbase owner is : dmadmin
SEED installer password is : dmadmin
SEED docbase docbroker name : docbroker
SEED docbase docbroker port : 1489
SEED RDBMS is : oracle
SEED database owner is : repo01
SEED database password is : repo01
SEED database server port is : N/A
SEED docbase http server port : 9180
SEED docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

ACTIVE docbase name is : repo10
ACTIVE docbase id is : 100011
ACTIVE docbase service name is : repo10
ACTIVE docbase service port is : 50100
ACTIVE docbase host is : cs2
ACTIVE docbase root directory is : /u01/dctm/repo10
ACTIVE docbase owner is : dmadmin
ACTIVE installer password is : dmadmin
ACTIVE docbase docbroker name : docbrokerrepo010
ACTIVE docbase docbroker port : 1507
ACTIVE RDBMS is : postgres
ACTIVE database owner is : repo10
ACTIVE database password is : repo10
ACTIVE database server port is : 5500
ACTIVE docbase http server port : 9760
ACTIVE docbase http server memory : -Xms512m -Xmx1024m
Scripts'dir is /mnt/shared/blog-seed-docbase

set_role() is invoked by global_parameters with the following logic:

  • when global_parameters is invoked with one or no parameter, the repository that the stem (or DCTM0 if no stem was given) resolves to receives the ACTIVE role;
  • when 2 stems are passed as parameters, the docbase the first stem resolves to receives the SEED role, and the one the second stem resolves to receives the ACTIVE role;

Obviously, the SEED role is only used in the script instantiate_docbase.sh and refers to the model docbase. The ACTIVE role is used in both scripts to refer to the docbase to be created or instantiated.
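
For illustration purposes, here is a minimal sketch of how such a role assignment could be implemented with the same indirection technique; this is not the actual code in global_parameters:

# sketch only: copy every ${stem}_* parameter into the corresponding ${role}_* variable;
function set_role() {
   local role=$1 stem=$2 v
   for v in $(compgen -A variable "${stem}_"); do
      declare -g "${role}${v#${stem}}=${!v}"   # e.g. DCTM1_DOCBASE --> SEED_DOCBASE
   done
   show_repo "${role}" "${role}"
}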

Several settings relate to the supported RDBMS, Oracle or PostgreSQL:

if [[ "${ACTIVE_RDBMS}" == "oracle" ]]; then
   cs_package=documentum_server_22.4_linux64_oracle.tar
   oracle_download_url=https://download.oracle.com/otn_software/linux/instantclient/217000
   oracle_ic_basic=instantclient-basic-linux.x64-21.7.0.0.0dbru.zip
   oracle_ic_sqlplus=instantclient-sqlplus-linux.x64-21.7.0.0.0dbru.zip
   db_server_host_account=oracle
   db_server_host_password=${db_server_host_account}
   db_server_host_alias=db
   db_server_host_ip_address=192.168.0.21
   db_listener_port=1521
   # although distinct concepts, the connect string and the service name must be equal to satisfy the migration utility;
   db_connect_string=pdb1
   db_service_name=pdb1
   db_sys_account=sys
   db_sys_password=${db_sys_account}
   db_datafile_root=/u02/oradata/ORCL/${db_service_name}
   db_remote_connect_string=orcl
elif [[ "${ACTIVE_RDBMS}" == "postgres" ]]; then
   cs_package=documentum_server_22.4_linux64_postgres.tar
   postgresql_download_url=https://ftp.postgresql.org/pub/source
   jdbc_postgresql_download_url=https://jdbc.postgresql.org/download
   postgresql_custom_package=postgresql-15.1.tar.gz
   # set this to yes or no if a compilation is needed to produce the custom package;
   postgresql_compile=no
   postgresql_jdbc_package=postgresql-42.5.1.jar
   db_server_host_alias=localhost
   db_connect_string=${ACTIVE_DOCBASE}
else
   echo "Invalid or Unsupported RDBMS [${ACTIVE_RDBMS}]"
  rc=1
fi

The packages to download and their URLs are specified there. For the Oracle RDBMS, we will use the Instant Client and SQL*Plus. The Instant Client includes the JDBC drivers required by the Migration Utility. For the PostgreSQL RDBMS, we download the source package and the latest JDBC drivers from the global_parameters.${*postgresql_download_url} locations.

For Oracle, we need to connect to the host to perform some tasks, e.g. schema export and import, and cleanup; thus, we need the host’s IP address and an account on it. For PostgreSQL, this is not needed as the database is always local (embedded). For Oracle, we also need the sys credentials to manage accounts in the existing database. If this delegates too much power, especially if the database is shared among several unrelated applications, the database accounts need to be created in advance and the scripts adapted accordingly. When using the Oracle RDBMS, a database exclusively dedicated to the instantiated repositories and shared among them is the ideal situation. In this context, full access to the sys account is justified, but this may depend on your organization. Evidently, open-source, free software offers much more flexibility in this regard.

Finally, we declare the name of the connect string to use to connect to the database; for Oracle, it will be the SQL*Net alias that will be defined in the tnsnames.ora file; for PostgreSQL, it will be the name of the INI section that will be defined in dmadmin‘s ~/.odbc.ini file since the database is accessed through the ODBC API. More on these parameters in the paragraph dedicated to the database.
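
To fix the ideas, here is a hypothetical illustration of the two targets those connect strings point to; the values are examples only and do not come from the actual scripts:

# Oracle: a tnsnames.ora alias matching db_connect_string=pdb1;
pdb1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521))
                      (CONNECT_DATA = (SERVICE_NAME = pdb1)))

# PostgreSQL: a ~/.odbc.ini section matching db_connect_string=${ACTIVE_DOCBASE}, e.g. repo10;
[repo10]
Driver     = /u01/dctm/repo10/postgresql/lib/psqlodbcw.so
Servername = localhost
Database   = repo10
Port       = 5500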

A trendy alternative to this parameter=value file format could be json or yaml files, with an intermediate conversion to bash syntax using the parsing tools jq and yq. Another excellent alternative would consist in storing each docbase’s settings in its own file and providing the file(s) to source as command-line arguments. That would allow not using the stem trick if it is found too complicated. There are many ways to juggle between different sets of configurations, but the stem one is quite simple and easy to use as it is pure, executable bash and therefore does not need any intermediate translation; moreover, it centralizes all the docbase definitions in one unique file. Adapt as you see fit.

See part III here

L’article A quick repository creation utility (part II) est apparu en premier sur dbi Blog.

A generic jdbc tester (part I)

Fri, 2023-01-27 11:00

I was recently working on the re-configuration of an existing OpenText Documentum content server’s docbase (a docbase is like a database but for documents). Documentum docbases store the documents’ metadata in relational databases and their content on filesystems. Several RDBMS are supported, e.g. Oracle, SQL Server, and the open-source PostgreSQL. The tool I was using, the Migration Utility, is a java program that connects to the underlying database through the JDBC API to change object ids and other attributes. So, prior to launching that program, I had to make sure that the database could be reached with the provided connectivity parameters, typically a host name, a port number, and a database name or service name or connect string, plus an account’s credentials, and I needed a tool for that. Like any well-trained monkey I searched for it on-line first. Unsurprisingly, lots of such tools are available in source form but, quite strangely, I could not find one that supports more than one RDBMS flavor. In effect, the available utilities work for, say, Oracle or PostgreSQL, but not for both. Isn’t JDBC supposed to be a universal API to access any vendor’s relational database once its drivers are provided? Maybe I did not dig deep enough but after a while I decided to write one myself as it would likely take less time than going through hundreds of click baits and other questionable links. However, in order not to reinvent the wheel, I took inspiration from the existing work and only made some very minor modifications. So, here it is: a single, generic enough JDBC program that connects to any database whose JDBC drivers are provided. To prove it, it will be tested against 9 different data sources and if this is not conclusive, I don’t know what will be.

The source code can also be found in github here but since the program is really small, I pasted it down below.

The program

Here is the source code of jdbc_tester_generic.java.

// generic test utility for jdbc connections;
// cec@dbi-services;
// works with any jdbc driver since its name is passed as parameter;
// to compile:
//   javac jdbc_tester_generic.java
// to execute, specify the JDBC jar to use to -cp at run-time; see example below;
import java.sql.DriverManager;
import java.sql.*;

public class jdbc_tester_generic {
    public static void main(String[] args) throws ClassNotFoundException {
        System.out.println("The command-line parameters are:");
        for (int i = 0; i < args.length; i++) {
            System.out.println("arg " + i + " = " + args[i]);
        }

        if (args.length != 5) {
           System.out.println("Invalid number of arguments: 5 arguments with the following format are required: driver user_name password url sqlquery");
	       System.out.println("e.g.:");
	       System.out.println("   java -cp postgresql-42.5.1.jar:. jdbc_tester_generic org.postgresql.Driver dmadmin xxx jdbc:postgresql://localhost:5432/mydb \"select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;\"");
	       System.out.println("   java -cp ojdbc.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver dmadmin dmadmin jdbc:oracle:thin:@//db:1521/pdb1 \"select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual\"");
           return;
        }

        String driver   = args[0];
        String user     = args[1];
        String password = args[2];
        String url      = args[3];
        String sqlStmt = args[4];

        System.out.println("\nLoading driver [" + driver + "]:");
	    try {
	       Class.forName(driver);
	    } catch (ClassNotFoundException e) {
	       System.out.println("JDBC Driver [" + driver + "] is not found. Please, append it to -cp or CLASSPATH");
	       e.printStackTrace();
	       System.out.println("Example of driver's syntax: org.postgresql.Driver or oracle.jdbc.driver.OracleDriver");
	       return;
	    }
	    System.out.println("JDBC Driver " + driver + " successfully loaded");

        try {
           System.out.println("\nConnecting to url [" + url + "] as user [" + user + "]:");
           Connection conn = DriverManager.getConnection(url, user, password);
	       if (conn != null) {
	          System.out.println("Successful connection to database");
	       }

           System.out.println("\nExecuting SQL query [" + sqlStmt + "]:");
           Statement statement = conn.createStatement();
           ResultSet resultSet = statement.executeQuery(sqlStmt);

           System.out.println("\nRetrieving the result set:");
           while (resultSet.next()) {
              System.out.println(resultSet.getString(1));
           }

           statement.close();
           conn.close();

           System.out.println("\nJDBC connection successfully tested, quitting ...");
        } catch (SQLException e) {
           System.out.println("Exception occurred connecting to database: " + e.getMessage());
	       e.printStackTrace();
        }
    }
}

As is visible in the import statements at the top, no vendor-specific libraries are referenced and only calls from the standard JDBC API are used. As is, it should therefore be universal.

To compile it, let’s make sure a JDK is installed:

$ java -version
openjdk version "11.0.17" 2022-10-18 LTS
OpenJDK Runtime Environment Corretto-11.0.17.8.1 (build 11.0.17+8-LTS)
OpenJDK 64-Bit Server VM Corretto-11.0.17.8.1 (build 11.0.17+8-LTS, mixed mode)
$ javac --version
javac 11.0.17

Use the command below to compile it:

$ javac jdbc_tester_generic.java
$ ls -l
...
-rw-r--r-- 1 dmadmin dmadmin 2975 Jan 22 14:36 jdbc_tester_generic.java
-rw-rw-r-- 1 dmadmin dmadmin 3463 Jan 22 14:38 jdbc_tester_generic.class

Now that it has been compiled into a .class file, let’s execute it.

Execution

jdbc_tester_generic expects 5 command-line parameters:

driver_name user_name password connection_url sql_query

and complains in a constructive manner if one of them is missing, e.g.:

$ java -cp . jdbc_tester_generic
The command-line parameters are:
Invalid number of arguments: 5 arguments with the following format are required: driver user_name password url sqlquery
e.g.
java -cp postgresql-42.5.1.jar:. jdbc_tester_generic org.postgresql.Driver dmadmin xxx jdbc:postgresql://localhost:5432/mydb "select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;"
java -cp ojdbc.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver dmadmin dmadmin jdbc:oracle:thin:@//db:1521/pdb1 "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"

The command-line parameters that make the utility truly universal are the JDBC driver’s name and the connection URL; once supplied, any RDBMS can be “pinged” if the given target-specific URL is correct. Let the tests begin now!

Testing with a PostgreSQL database

Let’s test it first with a PostgreSQL database. Let’s download its JDBC drivers:

$ wget https://jdbc.postgresql.org/download/postgresql-42.5.1.jar
$ ls -lrt
…
-rw-r--r-- 1 dmadmin dmadmin 2975 Jan 22 14:36 jdbc_tester_generic.java
-rw-rw-r-- 1 dmadmin dmadmin 3463 Jan 22 14:38 jdbc_tester_generic.class
-rw-r--r-- 1 dmadmin dmadmin 1046770 Jan 22 14:56 postgresql-42.5.1.jar

Let’s suppose there is already a postgres server running locally:

$ ps -ef | grep postgr
…
dmadmin 1674739 1 0 14:57 ? 00:00:00 /u01/dctm/repo02/postgresql/bin/postgres -D /u01/dctm/repo02/postgresql/data -k /tmp
…

On which port:

$ grep 'port = ' /u01/dctm/repo02/postgresql/data/postgresql.conf
port = 5422

Let’s suppose the PostgreSQL database repo02 exists with an user’s account dmadmin/xxx:

host: localhost
port: 5422
database name: repo02
credentials: dmadmin/xxx

The command to use for connecting is:

$ java -cp postgresql-42.5.1.jar:. jdbc_tester_generic org.postgresql.Driver dmadmin xxx jdbc:postgresql://localhost:5422/repo02 "select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;"
The command-line parameters are:
arg 0 = org.postgresql.Driver
arg 1 = dmadmin
arg 2 = xxx
arg 3 = jdbc:postgresql://localhost:5422/repo02
arg 4 = select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;

Loading driver [org.postgresql.Driver]:
JDBC Driver org.postgresql.Driver successfully loaded

Connecting to url [jdbc:postgresql://localhost:5422/repo02] as user [dmadmin]:
Successful connection to database

Executing SQL query [select 'now in postgresql db: ' || to_char(current_timestamp(0), 'DD/MM/YYYY HH24:MI:SS') as now;]:

Retrieving the result set:
now in postgresql db: 22/01/2023 14:58:08

JDBC connection successfully tested, quitting …

where org.postgresql.Driver is taken from the documentation. If too lazy or impatient, just do that:

$ jar vtf the drivers_jar_file | egrep 'Driver.*class'

One of the returned lines is the driver class.

Note the URL’s syntax: jdbc:postgresql://localhost:5422/repo02, no leading @ sign.

Testing with an Oracle database

Let’s now test it with an Oracle database. Let’s download the drivers:

$ wget https://download.oracle.com/otn-pub/otn_software/jdbc/218/ojdbc11.jar
$ ls -lrt
…
-rw-rw-r-- 1 dmadmin dmadmin 5181682 Dec 7 21:20 ojdbc11.jar
-rw-r--r-- 1 dmadmin dmadmin 2975 Jan 22 14:36 jdbc_tester_generic.java
-rw-rw-r-- 1 dmadmin dmadmin 3463 Jan 22 14:38 jdbc_tester_generic.class
-rw-r--r-- 1 dmadmin dmadmin 1046770 Jan 22 14:56 postgresql-42.5.1.jar

Let’s suppose there is an Oracle database server running remotely with the following connectivity data:

host: 192.168.0.21
port: 1521
service name: pdb1
credentials: repo01/repo01

The pdb1 service name we use points to a pluggable database with the same name inside a container database named orcl. The account repo01 is only defined in pdb1.

The command to use is:

$ java -cp ojdbc11.jar:. jdbc_tester_generic oracle.jdbc.driver.OracleDriver repo01 repo01 jdbc:oracle:thin:@//192.168.0.21:1521/pdb1 "select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual"
The command-line parameters are:
arg 0 = oracle.jdbc.driver.OracleDriver
arg 1 = repo01
arg 2 = repo01
arg 3 = jdbc:oracle:thin:@//192.168.0.21:1521/pdb1
arg 4 = select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual

Loading driver [oracle.jdbc.driver.OracleDriver]:
JDBC Driver oracle.jdbc.driver.OracleDriver successfully loaded

Connecting to url [jdbc:oracle:thin:@//192.168.0.21:1521/pdb1] as user [repo01]:
Successful connection to database

Executing SQL query [select 'now in Oracle db: ' || to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') as now from dual]:

Retrieving the result set:
now in Oracle db: 22/01/2023 15:31:37

JDBC connection successfully tested, quitting ...

Note the URL’s syntax: jdbc:oracle:thin:@//192.168.0.21:1521/pdb1, with a leading @ sign.

Continue to Part II.

L’article A generic jdbc tester (part I) est apparu en premier sur dbi Blog.

OCI : How to change VCN Name?

Fri, 2023-01-27 09:12

Starting my first experience with Oracle Cloud Infrastructure, I played with my lab and wanted to change my VCN name. I thought it would be easy with the Web Console, but realised this was not possible. I found out that the only way to do it was through the command line. So this was a great opportunity to start learning my first OCI commands.

List compartment to get needed IDs

Of course the first challenge was to get the appropriate compartment ID. And as I followed Oracle training recommendations to separate resource families into compartments, I had more than one existing.

To do so I ran the following command from the Cloud Shell console:

myocicloud@cloudshell:~ (eu-zurich-1)$ oci iam compartment list --compartment-id-in-subtree 1

List VCN

Having the appropriate compartment ID, I could list the existing VCNs:

myocicloud@cloudshell:~ (eu-zurich-1)$ oci network vcn list -c "ocid1.compartment.oc1..aaaaaaaasbt3rfrxmgjb2qoe2g3rmjihiulkm4owxqlp7t2fpcs7eiaxwxkq"
{
  "data": [
    {
      "byoipv6-cidr-blocks": null,
      "cidr-block": "10.0.0.0/16",
      "cidr-blocks": [
        "10.0.0.0/16"
      ],
      "compartment-id": "ocid1.compartment.oc1..aaaaaaaasbt3rfrxmgjb2qoe2g3rmjihiulkm4owxqlp7t2fpcs7eiaxwxkq",
      "default-dhcp-options-id": "ocid1.dhcpoptions.oc1.eu-zurich-1.aaaaaaaabytg3uxkb3am347vccifaxbj3ds6chasyxb2vzw5aj5275vtlvoq",
      "default-route-table-id": "ocid1.routetable.oc1.eu-zurich-1.aaaaaaaaj7a77k4iufdstacgcbuf3ih3ykjbfevdaz4ns6e5jjgdivgqaxeq",
      "default-security-list-id": "ocid1.securitylist.oc1.eu-zurich-1.aaaaaaaav4sroetsvyclgp5vdiynbq4wxsvp52akhflkcyoe4ab6cz4rozjq",
      "defined-tags": {
        "Oracle-Tags": {
          "CreatedBy": "myocicloud@email.com",
          "CreatedOn": "2023-01-24T12:59:27.772Z"
        }
      },
      "display-name": "VNC_xChange",
      "dns-label": "vncxchange",
      "freeform-tags": {
        "VCN": "VCN-2023-01-24T12:58:34"
      },
      "id": "ocid1.vcn.oc1.eu-zurich-1.amaaaaaavo3cm5ya7mpwerd6dbqexwpandduwk2cciifpmeorinuix4cdyxq",
      "ipv6-cidr-blocks": null,
      "ipv6-private-cidr-blocks": null,
      "lifecycle-state": "AVAILABLE",
      "time-created": "2023-01-24T12:59:27.883000+00:00",
      "vcn-domain-name": "vncxchange.oraclevcn.com"
    }
  ]
}

Rename VCN

To finally rename it:

myocicloud@cloudshell:~ (eu-zurich-1)$ oci network vcn update --vcn-id "ocid1.vcn.oc1.eu-zurich-1.amaaaaaavo3cm5ya7mpwerd6dbqexwpandduwk2cciifpmeorinuix4cdyxq" --display-name VCN_xChange
{
  "data": {
    "byoipv6-cidr-blocks": null,
    "cidr-block": "10.0.0.0/16",
    "cidr-blocks": [
      "10.0.0.0/16"
    ],
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaasbt3rfrxmgjb2qoe2g3rmjihiulkm4owxqlp7t2fpcs7eiaxwxkq",
    "default-dhcp-options-id": "ocid1.dhcpoptions.oc1.eu-zurich-1.aaaaaaaabytg3uxkb3am347vccifaxbj3ds6chasyxb2vzw5aj5275vtlvoq",
    "default-route-table-id": "ocid1.routetable.oc1.eu-zurich-1.aaaaaaaaj7a77k4iufdstacgcbuf3ih3ykjbfevdaz4ns6e5jjgdivgqaxeq",
    "default-security-list-id": "ocid1.securitylist.oc1.eu-zurich-1.aaaaaaaav4sroetsvyclgp5vdiynbq4wxsvp52akhflkcyoe4ab6cz4rozjq",
    "defined-tags": {
      "Oracle-Tags": {
        "CreatedBy": "myocicloud@email.com",
        "CreatedOn": "2023-01-24T12:59:27.772Z"
      }
    },
    "display-name": "VCN_xChange",
    "dns-label": "vncxchange",
    "freeform-tags": {
      "VCN": "VCN-2023-01-24T12:58:34"
    },
    "id": "ocid1.vcn.oc1.eu-zurich-1.amaaaaaavo3cm5ya7mpwerd6dbqexwpandduwk2cciifpmeorinuix4cdyxq",
    "ipv6-cidr-blocks": null,
    "ipv6-private-cidr-blocks": null,
    "lifecycle-state": "AVAILABLE",
    "time-created": "2023-01-24T12:59:27.883000+00:00",
    "vcn-domain-name": "vncxchange.oraclevcn.com"
  },
  "etag": "1dc33ee3"
}

Checking the result:

myocicloud@cloudshell:~ (eu-zurich-1)$ oci network vcn list -c "ocid1.compartment.oc1..aaaaaaaasbt3rfrxmgjb2qoe2g3rmjihiulkm4owxqlp7t2fpcs7eiaxwxkq" | grep display-name
      "display-name": "VCN_xChange",

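As a side note (not part of my original test), the same check could be done without grep by using the CLI’s built-in JMESPath filtering via the --query option, which should return only the display names, e.g.:

$ oci network vcn list -c "ocid1.compartment.oc1..aaaaaaaasbt3rfrxmgjb2qoe2g3rmjihiulkm4owxqlp7t2fpcs7eiaxwxkq" --query 'data[]."display-name"'
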
Conclusion

This is really my first experience with OCI and I might update this blog with a better way to do this in the future. But I just wanted to share my findings here.

I could find the same guidance in Oracle Doc ID 2918071.1.

Moreover, this only changes the display name of the VCN: all the components created initially with the VCN and attached to it, such as the Domain Name, Route Table, Security List and DNS Resolver, will still reflect the old name used at creation time.

L’article OCI : How to change VCN Name? est apparu en premier sur dbi Blog.

SEHA on ODA?

Fri, 2023-01-27 04:31

SEHA means Standard Edition High Availability. SEHA is fully integrated with Oracle Grid Infrastructure starting with Oracle Grid Infrastructure 19c, and more specifically since RU 19.7. SEHA then provides cluster-based failover for single-instance Standard Edition Oracle databases using Oracle Clusterware. It is important to mention that Oracle removed Oracle RAC from the SE2 edition with 19c.

Most important things to know about SEHA
  • SEHA allows running an SE database in a cluster environment
  • No RAC functionality (global cache, global enqueue, …)
  • SEHA has fewer functionalities than RAC
  • The 2 instances can therefore not be up at the same time
  • SEHA has to shut down one instance before starting the instance on the other node

Also know that a database in SE edition with a version lower than 19c and configured in RAC will have to be converted to a SEHA configuration as part of the upgrade to Oracle Database 19c.

A comparison between SEHA and RAC can be found in this oracle documentation : https://docs.oracle.com/en/database/oracle/oracle-database/19/cwadd/introduction-to-oracle-clusterware.html#GUID-C54E2511-24E2-452B-B4C7-0D5A93A751A9

As stated in Markus Michalewicz’s (Oracle Vice President of Product Management) blog about SEHA (https://blogs.oracle.com/maa/post/standard-edition-high-availability-released-see-whats-new), SEHA databases can be licensed using the “10-day failover rule”.

Please carefully read the prerequisites of SEHA, which you can find at this link: https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/guidelines-for-seha.html#GUID-55EEFB09-B708-4BA5-BA1A-23609FEC6A29

What about SEHA and ODA?

SEHA provides cluster-based failover for 19c SE databases. When there is an instance or node failure, the following scenario applies:

  • There is first an attempt to restart the instance on the local node
  • Then the database automatically fails over to the other node if needed
  • This means the database is restarted on the second node
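
Besides this automatic failover, the instance can also be moved manually between the nodes with srvctl; a hedged example, assuming a SEHA database registered as DBISEHA (as in the lab below) and the second node dbioda03:

# relocate stops the single instance on its current node and restarts it on the target node;
srvctl relocate database -db DBISEHA -node dbioda03
# check where the instance is now running;
srvctl status database -db DBISEHA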

Of course, SEHA is not possible on the light ODA models such as the ODA 2S, 2M and 2L. To be able to take advantage of this functionality, we need an ODA-2-HA model, which comes with 2 server nodes, shared storage and server interconnects.

My Lab environment

My lab is running an ODA X5-2-HA.

Node 0 hostname is dbioda02.

Node 1 hostname is dbioda03.

Create a SEHA database on the ODA-2-HA

A single-instance Oracle 19c SE2 database will be automatically configured in failover mode. The High Availability option can of course be enabled or disabled later. Both nodes need to host a dbhome in SE edition.

I have a SE dbhome:

[root@dbioda02 ~]# odacli describe-dbhome -i 5b0b3769-0c05-435e-b7c8-5d295c05934c
DB Home details
----------------------------------------------------------------
                     ID: 5b0b3769-0c05-435e-b7c8-5d295c05934c
                   Name: OraDB19000_home2
                Version: 19.15.0.0.220419
          Home Location: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2
                 Status: CONFIGURED
                Created: January 27, 2023 9:34:03 AM CET
       Database Edition: SE

I created a SE Single Instance database named DBISEHA (note the option -y SI):

[root@dbioda02 ~]# odacli create-database -u DBISEHA -n DBISEHA -dh 5b0b3769-0c05-435e-b7c8-5d295c05934c -y SI -g 0
Enter SYS and SYSTEM user password:
Retype SYS and SYSTEM user password:

Job details
----------------------------------------------------------------
                     ID:  f47e6f6a-6fcb-422a-8187-b9a2031eae50
            Description:  Database service creation with db name: DBISEHA
                 Status:  Created
                Created:  January 27, 2023 9:41:13 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

As we can see, High Availability, the so-called SEHA, has been automatically configured. Note the High Availability field:

[root@dbioda02 ~]# odacli describe-database -i bed8b9fb-214a-423e-b0e8-d8c084b45dd1
Database details
----------------------------------------------------------------
                     ID: bed8b9fb-214a-423e-b0e8-d8c084b45dd1
            Description: DBISEHA
                DB Name: DBISEHA
             DB Version: 19.15.0.0.220419
                DB Type: SI
                DB Role: PRIMARY
    DB Target Node Name: dbioda02
             DB Edition: SE
                   DBID: 1535732728
 Instance Only Database: false
                    CDB: false
               PDB Name:
    PDB Admin User Name:
      High Availability: true
                  Class: OLTP
                  Shape: odb1
                Storage: ASM
          DB Redundancy:
           CharacterSet: AL32UTF8
  National CharacterSet: AL16UTF16
               Language: AMERICAN
              Territory: AMERICA
                Home ID: 5b0b3769-0c05-435e-b7c8-5d295c05934c
        Console Enabled: false
  TDE Wallet Management:
            TDE Enabled: false
     Level 0 Backup Day: Sunday
     AutoBackup Enabled: false
                Created: January 27, 2023 9:41:13 AM CET
         DB Domain Name: dbi-lab.ch
    Associated Networks: Public-network
          CPU Pool Name:

We can disable and enable the SEHA functionality with the following options:

[root@dbioda02 ~]# odacli modify-database -h | grep seha
    --disable-seha,-no-sh      Disable SEHA for SE 19c and later SIDB (inverse
                               option: --enable-seha/-sh)
    --enable-seha,-sh          Enable SEHA for SE 19c and later SIDB (inverse
                               option: --disable-seha/-no-sh)

Let’s deactivate the SEHA functionality:

[root@dbioda02 ~]# odacli modify-database -in DBISEHA -no-sh
{
  "jobId" : "930d200a-cd29-4f7b-a6b3-5831b1b14ff2",
  "status" : "Created",
  "message" : "Modify database",
  "reports" : [ ],
  "createTimestamp" : "January 27, 2023 10:09:00 AM CET",
  "resourceList" : [ {
    "resourceId" : "bed8b9fb-214a-423e-b0e8-d8c084b45dd1",
    "resourceType" : "DB",
    "resourceNewType" : null,
    "jobId" : "930d200a-cd29-4f7b-a6b3-5831b1b14ff2",
    "updatedTime" : "January 27, 2023 10:09:00 AM CET"
  } ],
  "description" : "Modify database : DBISEHA",
  "updatedTime" : "January 27, 2023 10:09:00 AM CET",
  "jobType" : "",
  "warning" : "'--enable-seha/-sh' option is deprecated, use '--enable-ha/-ha' instead. '--disable-seha/-no-sh' option is deprecated, use '--disable-ha/-no-ha' instead."
}

[root@dbioda02 ~]# odacli describe-job -i 930d200a-cd29-4f7b-a6b3-5831b1b14ff2

Job details
----------------------------------------------------------------
                     ID:  930d200a-cd29-4f7b-a6b3-5831b1b14ff2
            Description:  Modify database : DBISEHA
                 Status:  Success
                Created:  January 27, 2023 10:09:00 AM CET
                Message:  Modify database

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Disable High availability for database {DBISEHA} January 27, 2023 10:09:02 AM CET    January 27, 2023 10:09:04 AM CET    Success

[root@dbioda02 ~]# odacli describe-database -i bed8b9fb-214a-423e-b0e8-d8c084b45dd1 | grep 'High Availability'
      High Availability: false

And reactivate it:

[root@dbioda02 ~]# odacli modify-database -in DBISEHA -sh
{
  "jobId" : "4ae27cab-84fa-4204-be8d-37519df75aac",
  "status" : "Created",
  "message" : "Modify database",
  "reports" : [ ],
  "createTimestamp" : "January 27, 2023 10:14:05 AM CET",
  "resourceList" : [ {
    "resourceId" : "bed8b9fb-214a-423e-b0e8-d8c084b45dd1",
    "resourceType" : "DB",
    "resourceNewType" : null,
    "jobId" : "4ae27cab-84fa-4204-be8d-37519df75aac",
    "updatedTime" : "January 27, 2023 10:14:05 AM CET"
  } ],
  "description" : "Modify database : DBISEHA",
  "updatedTime" : "January 27, 2023 10:14:05 AM CET",
  "jobType" : "",
  "warning" : "'--enable-seha/-sh' option is deprecated, use '--enable-ha/-ha' instead. '--disable-seha/-no-sh' option is deprecated, use '--disable-ha/-no-ha' instead."
}

[root@dbioda02 ~]# odacli describe-job -i 4ae27cab-84fa-4204-be8d-37519df75aac

Job details
----------------------------------------------------------------
                     ID:  4ae27cab-84fa-4204-be8d-37519df75aac
            Description:  Modify database : DBISEHA
                 Status:  Success
                Created:  January 27, 2023 10:14:05 AM CET
                Message:  Modify database

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Configure All Candidate Nodes            January 27, 2023 10:14:06 AM CET    January 27, 2023 10:14:09 AM CET    Success

[root@dbioda02 ~]# odacli describe-database -i bed8b9fb-214a-423e-b0e8-d8c084b45dd1 | grep 'High Availability'
      High Availability: true
[root@dbioda02 ~]#
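
Beside odacli, the SEHA configuration can also be checked at the Grid Infrastructure level with srvctl. This is only a sketch (run as the oracle user with the 19c environment set); in the output, the candidate server list shows on which nodes the clusterware is allowed to start the single instance:

# Check the clusterware configuration of the SEHA database (sketch)
srvctl config database -db DBISEHA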

Relocate a SEHA database

A Single Instance SE2 database can easily be relocated to the other node with the odacli modify-database command, using either the target host name (option --targethost/-th) or the node number (option --targetnode/-g).
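
As a quick sketch of both syntaxes (the node-number form is the one actually used in the run below, and dbioda03 is the host name of node 1 in this lab):

# Relocate by node number (used below)
odacli modify-database -in DBISEHA -g 1
# Equivalent relocation by target host name
odacli modify-database -in DBISEHA -th dbioda03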

As we can see, my DBISEHA database is currently running on my node 0:

[root@dbioda02 ~]# ps -ef | grep [p]mon
oracle     464     1  0 09:54 ?        00:00:00 ora_pmon_DBISEHA
grid     45768     1  0  2022 ?        00:05:06 apx_pmon_+APX1
grid     63447     1  0  2022 ?        00:04:47 asm_pmon_+ASM1
oracle   86674     1  0  2022 ?        00:05:22 ora_pmon_DBITST1

[root@dbioda03 ~]# ps -ef | grep [p]mon
grid     11548     1  0  2022 ?        00:01:54 asm_pmon_+ASM2
grid     13397     1  0  2022 ?        00:01:57 apx_pmon_+APX2
oracle   23156     1  0  2022 ?        00:02:13 ora_pmon_DBITST2

This is confirmed by the ODA metadata:

[root@dbioda02 ~]# odacli describe-database -i bed8b9fb-214a-423e-b0e8-d8c084b45dd1 | grep 'DB Target Node Name'
    DB Target Node Name: dbioda02

Let’s relocate the database to the other node:

[root@dbioda02 ~]# odacli modify-database -in DBISEHA -g 1
{
  "jobId" : "ab67a8e9-d5bf-40db-b83c-ecff118f5268",
  "status" : "Created",
  "message" : "Modify database",
  "reports" : [ ],
  "createTimestamp" : "January 27, 2023 10:31:03 AM CET",
  "resourceList" : [ {
    "resourceId" : "bed8b9fb-214a-423e-b0e8-d8c084b45dd1",
    "resourceType" : "DB",
    "resourceNewType" : null,
    "jobId" : "ab67a8e9-d5bf-40db-b83c-ecff118f5268",
    "updatedTime" : "January 27, 2023 10:31:03 AM CET"
  } ],
  "description" : "Modify database : DBISEHA",
  "updatedTime" : "January 27, 2023 10:31:03 AM CET",
  "jobType" : ""
}

[root@dbioda02 ~]# odacli describe-job -i ab67a8e9-d5bf-40db-b83c-ecff118f5268

Job details
----------------------------------------------------------------
                     ID:  ab67a8e9-d5bf-40db-b83c-ecff118f5268
            Description:  Modify database : DBISEHA
                 Status:  Success
                Created:  January 27, 2023 10:31:03 AM CET
                Message:  Modify database

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Relocate database {DBISEHA}              January 27, 2023 10:31:04 AM CET    January 27, 2023 10:32:22 AM CET    Success

We can see with odacli that the database has been relocated and that the current running node is now node 1, dbioda03:

[root@dbioda02 ~]# odacli describe-database -i bed8b9fb-214a-423e-b0e8-d8c084b45dd1 | grep 'DB Target Node Name'
    DB Target Node Name: dbioda03

And we can confirm it by checking that the pmon process of the database is now running on node 1:

[root@dbioda02 ~]# ps -ef | grep [p]mon
grid     45768     1  0  2022 ?        00:05:06 apx_pmon_+APX1
grid     63447     1  0  2022 ?        00:04:47 asm_pmon_+ASM1
oracle   86674     1  0  2022 ?        00:05:22 ora_pmon_DBITST1

[root@dbioda03 ~]# ps -ef | grep [p]mon
grid     11548     1  0  2022 ?        00:01:54 asm_pmon_+ASM2
grid     13397     1  0  2022 ?        00:01:57 apx_pmon_+APX2
oracle   23156     1  0  2022 ?        00:02:13 ora_pmon_DBITST2
oracle   38465     1  0 10:32 ?        00:00:00 ora_pmon_DBISEHA

We can also see from both alert logs that, with Oracle SEHA, the running instance is first stopped before the new one is started.

On node 0, where the instance was running, the shutdown completed at 10:31:28:

[oracle@dbioda02 trace]$ tail -2 alert_DBISEHA.log
2023-01-27T10:31:28.200215+01:00
Instance shutdown complete (OS id: 74061)

The instance on the new node 1 only started afterwards, at 10:31:53:

[oracle@dbioda03 trace]$ head -2 alert_DBISEHA.log
2023-01-27T10:31:53.131466+01:00
Starting ORACLE instance (normal) (OS id: 37772)

What if node 1 crashes now?

If we power off node 1, the DBISEHA database will automatically fail over to node 0.

Let’s power off node 1:

[root@dbioda03 ~]# date
Fri Jan 27 10:55:22 CET 2023

[root@dbioda03 ~]# systemctl poweroff
Connection to 10.X.X.233 closed by remote host.
Connection to 10.X.X.233 closed.

And check that the PMON process for the DBISEHA database has been started on node 0:

[root@dbioda02 ~]# ps -ef | grep [p]mon
oracle   18471     1  0 10:56 ?        00:00:00 ora_pmon_DBISEHA
grid     45768     1  0  2022 ?        00:05:07 apx_pmon_+APX1
grid     63447     1  0  2022 ?        00:04:47 asm_pmon_+ASM1
oracle   86674     1  0  2022 ?        00:05:22 ora_pmon_DBITST1

The database has been automatically started on node 0 about 30 seconds later.

Confirmed with the alert log file on node 0:

[oracle@dbioda02 trace]$ more alert_DBISEHA.log
...
...
...
2023-01-27T10:55:55.393501+01:00
Starting ORACLE instance (normal) (OS id: 18288)
...
...
...

Upgrade a RAC database to 19c SEHA database

As stated previously, Standard Edition databases with a version lower than 19c and having RAC enabled will have to be converted to a SEHA configuration as part of the upgrade to Oracle Database 19c.

Convert the Oracle RAC database to a Single Instance database with the following command (option -y SI):

[root@dbioda02 ~]# odacli modify-database -in DBIRACSE -y SI

Then upgrade the SI database to Oracle 19c with the SEHA functionality enabled:

[root@dbioda02 ~]# odacli upgrade-database -i 42612408-6ddf-4152-bce0-74ed41394866 --to 5b0b3769-0c05-435e-b7c8-5d295c05934c -sh

The enable/disable SEHA options are available for all 3 odacli commands: create-database, modify-database and upgrade-database.
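
As a hedged sketch, reusing the names and IDs from this post (the -sh flag on create-database is assumed from the statement above, and recent releases suggest the --enable-ha/-ha and --disable-ha/-no-ha variants instead, as shown in the deprecation warning earlier):

# Create an SE single-instance database with SEHA enabled (flag assumed)
odacli create-database -u DBISEHA -n DBISEHA -dh 5b0b3769-0c05-435e-b7c8-5d295c05934c -y SI -sh
# Enable or disable SEHA on an existing database
odacli modify-database -in DBISEHA -sh
odacli modify-database -in DBISEHA -no-sh
# Upgrade to 19c with SEHA enabled
odacli upgrade-database -i 42612408-6ddf-4152-bce0-74ed41394866 --to 5b0b3769-0c05-435e-b7c8-5d295c05934c -sh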

The article SEHA on ODA? appeared first on the dbi Blog.

Patching WebLogic with Stack Patch Bundle (SPB)

Fri, 2023-01-27 01:48

Oracle delivers patches for WebLogic every 3 months. These Critical Patch Updates (CPU) are often composed of multiple patches, which means even more steps to get them installed on a server:

  • Download
  • Copy on server
  • Installation via opatch

Since April 2021, Oracle has been providing what it calls a Stack Patch Bundle (SPB), which includes the Stack Patch Bundle Automation Tool (SPBAT). This makes the installation easier and safer.

This can be used to patch the following WebLogic installations:

  • Lite
  • Slim
  • Generic
  • Fusion Middleware Infrastructure (WebLogic part only).
Prepare

Download the patch for your OS and copy it onto the server. Unzip it and go into the directory where the spbat.sh script is located:

cd /home/oracle/CPU-2023-01/WLS_SPB_12.2.1.4.230112/tools/spbat/generic/SPBAT/
PreChecks

Prechecks can take a few minutes, but it is worth using them as they can detect potential patching issues.

./spbat.sh -phase precheck -oracle_home $ORACLE_HOME

The ORACLE_HOME variable points to the installation you want to update. In my case, it is a Fusion Middleware Infrastructure home.
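
In this environment, that would look like this (the home path is the one reported in the precheck output below):

# Point ORACLE_HOME to the Fusion Middleware Infrastructure home to be patched
export ORACLE_HOME=/u00/app/oracle/product/fmw_reports
./spbat.sh -phase precheck -oracle_home $ORACLE_HOME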

Precheck is composed of several steps:

  • requirements of the script itself (JDK version):
[2023-01-26_14-47-37] [SUCCESS] The Installed JDK version 1.8.0_361 is higher than or same as the recommended JDK version 1.8.0_361
  • OPatch requirements:
[2023-01-26_14-47-21] Minimum OPatch version required : 13.9.4.2.5
[2023-01-26_14-47-21] Environment has OPatch version : 13.9.4.2.8
[2023-01-26_14-47-21] [SUCCESS] Minimum OPatch version check
...
[2023-01-26_14-47-41] Middleware OPatch Version : 13.9.4.2.8
[2023-01-26_14-47-41] SPB OPatch version : 13.9.4.2.11
  • List of installed patches:
[2023-01-26_14-48-28] List of patches present in the Oracle Home: /u00/app/oracle/product/fmw_reports
29494060;Database Patch Set Update : 12.1.0.2.190716 (29494060)
27516009;
27293599;
25549931;
22754279;One-off
19795066;One-off
19632480;One-off
19154304;One-off
19030178;One-off
  • List of applicable patches including compatibility and conflict checks:
[2023-01-26_14-48-28] Patch compatibility check with the environment is in progress…
[2023-01-26_14-50-26] CheckForNoOpPatches has Completed on /u00/app/oracle/product/fmw_reports Home
[2023-01-26_14-51-00] PATCH 34856450 APPLY WILL BE SKIPPED AS IT IS NOT APPLICABLE FOR THIS ENVIRONMENT
[2023-01-26_14-51-02] PATCH 34883826 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-04] PATCH 34974766 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-04] PATCH 1221416 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-05] PATCH 33639718 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-08] PATCH 33093748 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-11] PATCH 34801809 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-14] PATCH 32720458 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-16] PATCH 34879707 IS #NOT APPLIED# IN THE ENVIRONMENT
[2023-01-26_14-51-16] Patch conflict check is in progress …
[2023-01-26_14-51-57] Patch conflict check has completed on /u00/app/oracle/product/fmw_reports Home

The expected output at the end looks like this:

PRECHECK SUMMARY:
No Of FAILURES: 0
No Of WARNINGS: 0

With this output, we are ready to run spbat in apply mode.

Apply

The command is very similar, with just the “phase” parameter being different:

./spbat.sh -phase apply -oracle_home $ORACLE_HOME

The first step of the script is to run the precheck phase. Once it completes without errors, spbat proceeds with the installation:

[2023-01-26_15-10-06] Application of patches is in progress …
[2023-01-26_15-22-39] SUCCESSFUL - OPatch napply has completed for wls Home
[2023-01-26_15-22-39] Opatch Napply Exit Status - 0
[2023-01-26_15-22-39] COMPLETED : Performing SPBAT Binary patching on wls Home
[2023-01-26_15-22-57] STARTED : Performing SPBAT binary audit on wls Home
[2023-01-26_15-23-29] NoOp patch#34856450# detected in Environment.Skipping Audit for the same
[2023-01-26_15-23-29] SUCCESSFUL - SPB PATCH 34883826 IS #APPLIED#
[2023-01-26_15-23-30] SUCCESSFUL - SPB PATCH 34974766 IS #APPLIED#
[2023-01-26_15-23-31] SUCCESSFUL - SPB PATCH 1221416 IS #APPLIED#
[2023-01-26_15-23-32] SUCCESSFUL - SPB PATCH 33639718 IS #APPLIED#
[2023-01-26_15-23-33] SUCCESSFUL - SPB PATCH 33093748 IS #APPLIED#
[2023-01-26_15-23-34] SUCCESSFUL - SPB PATCH 34801809 IS #APPLIED#
[2023-01-26_15-23-35] SUCCESSFUL - SPB PATCH 32720458 IS #APPLIED#
[2023-01-26_15-23-35] SUCCESSFUL - SPB PATCH 34879707 IS #APPLIED#

And finally, spbat will list all installed patches:

[2023-01-26_15-24-02] List of patches present in the Oracle Home: /u00/app/oracle/product/fmw_reports

34974766;WLS STACK PATCH BUNDLE 12.2.1.4.230112 (Patch 34974729)
34883826;WLS PATCH SET UPDATE 12.2.1.4.221210
34879707;FMW Thirdparty Bundle Patch 12.2.1.4.221209
34801809;RDA release 23.1-20230117 for OFM 12.2.1.4 SPB
33639718;33639718 - ADR FOR WEBLOGIC SERVER 12.2.1.4.0 JUL CPU 2022
33093748;One-off
32720458;JDBC 19.3.0.0 FOR CPUJAN2022 (WLS 12.2.1.4, WLS 14.1.1)
1221416;Coherence Cumulative Patch 12.2.1.4.16
29494060;Database Patch Set Update : 12.1.0.2.190716 (29494060)
27516009;
27293599;
25549931;
22754279;One-off
19795066;One-off
19632480;One-off
19154304;One-off
19030178;One-off

All this log output, and even more, is kept in the logs directory under the SPBAT directory:

# ls -ltr ./WLS_SPB_12.2.1.4.230112/tools/spbat/generic/SPBAT/logs/
total 0
drwxr-xr-x. 5 oracle workshop 105 Jan 26 14:47 vm06_spb_precheck_2023-01-26_14-46-45
drwxr-xr-x. 7 oracle workshop 135 Jan 26 15:22 vm06_spb_apply_2023-01-26_14-57-45

This is great for keeping a complete history of the state before and after spbat was run.

Conclusion

Do not forget to take a backup before proceeding with any software update.
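
For a standalone middleware home, a simple cold backup before running spbat could look like this (a sketch, assuming the servers are stopped and a /backup filesystem exists; the home path is the one used in this post):

# Cold backup of the middleware home before patching
tar czf /backup/fmw_reports_pre_spb.tgz -C /u00/app/oracle/product fmw_reports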

Even if the apply phase also runs the precheck, I prefer to run the precheck phase myself first, so that I can decide whether or not to proceed with the patch apply.

The article Patching WebLogic with Stack Patch Bundle (SPB) appeared first on the dbi Blog.

Deploying 11.2.0.4 on a recent ODA

Wed, 2023-01-25 03:16
Introduction

11gR2 should be retired now, and there is normally no need to deploy a new database using this version anymore. But in the real world, you may need to do that for various reasons. 11gR2 is no longer supported on the Oracle Database Appliance, but this is how you can set up a “brand new” Oracle Database 11gR2 environment on ODA. Obviously, I would only recommend doing this if there is no other option.

History

The first ODAs were designed for 11gR2, but support for this version disappeared nearly 2 years ago. There are still ODAs running 11gR2 databases, but these are mostly old ODAs deployed several years ago. Applying the latest ODA patch on top of 11gR2 databases is not a problem: your 11gR2 databases won’t be patched although every other component will. Therefore, running the most recent 19.17 patch with 11gR2 databases is OK. But deploying a new infrastructure with X9-2 ODAs running 11gR2 databases wouldn’t make sense and is not supported. Furthermore, the ODA development team now focuses on modern features, mainly tied to Oracle 19c. Don’t expect to run these new features on old 11gR2 databases. Most of the time it won’t work correctly, as Oracle doesn’t bother testing on 11gR2 anymore.

2 solutions for running 11gR2 databases on ODA

If you absolutely need 11gR2, you currently have 2 solutions.

The first one is deploying an old set of binaries from an old patch, for example 19.10. It’s the easiest way to put 11gR2 on your ODA:

cd /opt/dbi/
unzip p23494997_1910000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/odacli-dcs-19.10.0.0.0-210115-DB-11.2.0.4.zip 
odacli create-dbhome -v 11.2.0.4.210119
odacli list-dbhomes 

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
2a00d3bb-b042-4720-94a2-bef13bfaf5f5     OraDB19000_home1     19.15.0.0.220419                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 CONFIGURED
4f56e25e-e228-4cc3-b827-b15b66c67143     OraDB19000_home4     19.16.0.0.220719                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED
306d949e-6a61-4d7f-83e2-b023a9c47586     OraDB11204_home2     11.2.0.4.210119                          /u01/app/odaorahome/oracle/product/11.2.0.4/dbhome_2 CREATING

The second one, which we will focus on, is creating a new user-managed VM and manually deploying an 11gR2 DB home. I would definitely prefer this solution because I don’t want to use an unsupported version on my ODA: I want to keep everything clean and supported. Creating a user-managed VM needs some work, but this VM won’t have any link to my ODA system; everything will be running on its own inside the VM.

License considerations

You should be careful when using user-managed VMs with Oracle databases inside. If you run on Standard Edition, you normally have 1 license per socket on your system, and you can use as many cores as you want. When using Enterprise Edition, you will limit the available number of cores on your ODA with odacli update-cpucore, or you will use CPU Pools for DB Systems and run all your DB Systems in these CPU Pools. Configuring a VM CPU pool (which is different from a DB System’s CPU pool) for databases in user-managed VMs is not compliant with the Oracle licensing model. You’d better limit the total number of CPU cores of your ODA in this case. For example, if you need 6 cores for 19c DB Systems and 2 cores for your 11gR2, configure 8 cores on your ODA with odacli update-cpucore and then configure the 2 CPU pools accordingly, as sketched below.
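
As a sketch of that example (the -dbs flag for creating a DB System CPU pool is an assumption here; only the VM pool syntax is shown as-is later in this post):

# Limit the ODA to 8 enabled CPU cores in total
odacli update-cpucore -c 8
# 6 cores for the DB Systems (flag assumed), 2 cores for the user-managed VM
odacli create-cpupool -n cpupool4dbsystems -c 6 -dbs
odacli create-cpupool -n cpupool4olddbs -c 2 -vm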

Test environment

My test environment is based on a brand new ODA X9-2S deployed with 19.16 and using Standard Edition. I will use 19c DB Systems for most databases, and I will need a user-managed VM for 11gR2. The purpose of this user-managed VM is to decommission an old 11gR2 server and put all Oracle databases onto the ODA. The application linked to this 11gR2 database will also be decommissioned, but later. That’s why there is no plan to migrate this database to a newer version.

Setting up the user-managed 11gR2 VM

I will need a VM CPU pool for this “old” system:

odacli create-cpupool -n cpupool4olddbs -c 2 -vm
odacli list-cpupools
Name                  Type                Configured on Cores Associated resources     Created                   Updated
--------------------  ------------------  ------------- ----- -----------------------  ------------------------  ----------
cpupool4olddbs        VM                  srv-bd3        2    NONE        2022-11-29 14:50:28 CET   2022-11-29 14:50:28 CET
cpupool4dbsystems     DB_SYSTEM_SHARED    srv-bd3        8    NONE        2022-11-29 14:49:55 CET   2022-11-29 14:49:55 CET

My VM will need filesystems, so I first need to create 2 VM storages, one for DATA and one for the Recovery Area:

odacli create-vmstorage -n VMsDATA -dg DATA -s 300G
odacli create-vmstorage -n VMsRECO -dg RECO -s 100G

odacli list-vmstorages
Name                  Disk group       Volume name      Volume device                   Size        Mount Point                          Created                   Updated
--------------------  ---------------  ---------------  ------------------------------  ----------  -----------------------------------  ------------------------  ------------------------
VMsRECO               RECO             VMSRECO          /dev/asm/vmsreco-115            100.00 GB   /u05/app/sharedrepo/vmsreco          2022-11-30 09:34:38 CET   2022-11-30 09:34:38 CET
VMsDATA               DATA             VMSDATA          /dev/asm/vmsdata-211            300.00 GB   /u05/app/sharedrepo/vmsdata          2022-11-30 09:34:04 CET   2022-11-30 09:34:04 CET

Now I will create vdisks in these VM storages:

odacli create-vdisk -n prlinux11-data -vms VMsDATA -s 200G
odacli create-vdisk -n prlinux11-reco -vms VMsRECO -s 70G

odacli list-vdisks

Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
prlinux11-reco         VMsRECO               70.00 GB    NO          NO          2022-11-30 09:39:51 CET   2022-11-30 09:39:51 CET
prlinux11-data         VMsDATA               200.00 GB   NO          NO          2022-11-30 09:41:06 CET   2022-11-30 09:41:06 CET

Now, let’s create the VM. I will use an Oracle Linux distribution provided as an ISO file. Creating this VM will create a boot disk on the VM storage VMsDATA, and connect the 2 vdisks previously created. I will use my ODA’s IP address to map the VNC port.

odacli create-vm -n PRLINUX11 -m 32G -src /mnt/nas/V1009690-01.iso -vd prlinux11-data,prlinux11-reco -vc 2 -cp cpupool4olddbs -vn prnetwork -vms VMsDATA -s 50G -g "vnc,listen=10.146.107.31"

odacli describe-job -i 5d94d0b1-da6c-43d0-89ca-98f6e5a89cfa

Job details
----------------------------------------------------------------
                     ID:  5d94d0b1-da6c-43d0-89ca-98f6e5a89cfa
            Description:  VM PRLINUX11 creation
                 Status:  Success
                Created:  November 30, 2022 10:05:41 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate dependency resources            November 30, 2022 10:05:41 AM CET   November 30, 2022 10:05:41 AM CET   Success
Validate resource allocations            November 30, 2022 10:05:41 AM CET   November 30, 2022 10:05:41 AM CET   Success
Allocate resources                       November 30, 2022 10:05:41 AM CET   November 30, 2022 10:05:41 AM CET   Success
Provision new VM                         November 30, 2022 10:05:41 AM CET   November 30, 2022 10:05:43 AM CET   Success
Add VM to Clusterware                    November 30, 2022 10:05:43 AM CET   November 30, 2022 10:05:44 AM CET   Success
Save domain in ACFS                      November 30, 2022 10:05:44 AM CET   November 30, 2022 10:05:44 AM CET   Success
Create VM metadata                       November 30, 2022 10:05:44 AM CET   November 30, 2022 10:05:44 AM CET   Success
Persist metadata                         November 30, 2022 10:05:44 AM CET   November 30, 2022 10:05:44 AM CET   Success

odacli describe-vm -n PRLINUX11 | grep Display
             Display Port:  10.146.107.31:3

Creating a VM is fast, as nothing is really provisioned at this point.
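
Depending on your setup, the VM may still need to be started before the installer can boot from the ISO (a sketch):

# Start the freshly created VM
odacli start-vm -n PRLINUX11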

Now I can use a VNC client connected to 10.146.107.31:5903 and deploy a Linux distribution. I would recommend using the same OS as your ODA, meaning an Oracle Linux 7.9, but you can use an older one if needed. The Linux setup on this VM is quite typical; additional settings and packages will be deployed using the preinstall package provided by Oracle.

Once Linux is deployed, let’s check the disks and configure LVM according to OFA naming:

fdisk -l /dev/vdb | grep GB
Disk /dev/vdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
fdisk -l /dev/vdc | grep GB
Disk /dev/vdc: 75.2 GB, 75161927680 bytes, 146800640 sectors

pvcreate /dev/vdb
pvcreate /dev/vdc

vgcreate vg_oradata /dev/vdb
vgcreate vg_orareco /dev/vdc

lvcreate -L 30G -n lv_u01 vg_oradata
lvcreate -L 160G -n lv_data vg_oradata
lvcreate -L 60G -n lv_reco vg_orareco

mkdir /u01
mkdir -p /u02/app/oracle/oradata
mkdir -p /u03/app/oracle

mkfs.ext4 /dev/mapper/vg_oradata-lv_u01
mkfs.ext4 /dev/mapper/vg_oradata-lv_data
mkfs.ext4 /dev/mapper/vg_orareco-lv_reco

echo "/dev/mapper/vg_oradata-lv_u01 /u01 ext4 defaults 1 2" >> /etc/fstab
echo "/dev/mapper/vg_oradata-lv_data /u02/app/oracle/oradata ext4 defaults 1 2" >> /etc/fstab
echo "/dev/mapper/vg_orareco-lv_reco /u03/app/oracle ext4 defaults 1 2" >> /etc/fstab

mount -a

df -h
Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                         16G     0   16G   0% /dev
tmpfs                            16G     0   16G   0% /dev/shm
tmpfs                            16G  8.6M   16G   1% /run
tmpfs                            16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/ol-root              44G  1.4G   43G   4% /
/dev/vda1                      1014M  184M  831M  19% /boot
tmpfs                           3.2G     0  3.2G   0% /run/user/0
/dev/mapper/vg_oradata-lv_u01    30G   45M   28G   1% /u01
/dev/mapper/vg_oradata-lv_data  158G   61M  150G   1% /u02/app/oracle/oradata
/dev/mapper/vg_orareco-lv_reco   59G   53M   56G   1% /u03/app/oracle

Let’s copy the ISO file to the server for package setup. I will mount this ISO file, configure a local repository on it, and install the preinstall package. Other packages will also be needed for 11gR2:

mkdir /install
scp root@10.168.1.54:/mnt/nas/V1009690-01.iso /install/

 
mkdir /mnt/iso
mount -o loop /install/V1009690-01.iso /mnt/iso
rm -f /etc/yum.repos.d/*
vi /etc/yum.repos.d/local-oel7.repo
[OEL790]
name=Oracle Linux 7.9 x86_64
baseurl=file:///mnt/iso
gpgkey=file:///mnt/iso/RPM-GPG-KEY
gpgcheck=1
enabled=1

yum install oracle-database-preinstall-19c.x86_64
...

yum install compat-libstdc++-33-3.2.3-72.el7.x86_64
yum install gcc-4.8.5-44.0.3.el7.x86_64
yum install gcc-c++-4.8.5-44.0.3.el7.x86_64
yum install mlocate

Now I will do the 11gR2 setup using an image from the old system and the cloning method: I don’t want any changes regarding the binaries.

scp root@10.168.1.54:/mnt/nas/root_SRV-BD2_dbhome_11gR2.tgz /install/

chmod -R 755 /install/ 
chown -R oracle:oinstall /u01/
chown -R oracle:oinstall /u02/
chown -R oracle:oinstall /u03/

 
su - oracle
mkdir -p /u01/app/oracle/product/11.2.0.4/
mkdir -p /u01/app/oracle/network/admin
mkdir /u01/app/oracle/local
cd /u01/app/oracle/product/11.2.0.4/
tar xzf /install/root_SRV-BD2_dbhome_11gR2.tgz 
cd /u01/app/oracle/product/11.2.0.4/dbhome_1/clone/bin
/u01/app/oracle/product/11.2.0.4/dbhome_1/clone/bin/clone.pl ORACLE_BASE="/u01/app/oracle" ORACLE_HOME="/u01/app/oracle/product/11.2.0.4/dbhome_1" ORACLE_HOME_NAME=OraDB11gHome1 OSDBA_GROUP=dba OSOPER_GROUP=oper
...

exit

/u01/app/oraInventory/orainstRoot.sh
/u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh

I will disable SELinux and the firewall, as I don’t need them:

vi /etc/selinux/config
SELINUX=disabled
systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
systemctl stop firewalld
reboot

A relink of the binaries was needed:

su - oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
$ORACLE_HOME/bin/relink all

Now I need to configure a default listener:

su - oracle
$ORACLE_HOME/bin/netca -silent -responsefile /u01/app/oracle/product/11.2.0.4/dbhome_1/assistants/netca/netca.rsp

Parsing command line arguments:
    Parameter "silent" = true
    Parameter "responsefile" = /u01/app/oracle/product/11.2.0.4/dbhome_1/assistants/netca/netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Oracle Net Listener Startup:
    Running Listener Control:
      /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/lsnrctl start LISTENER
    Listener Control complete.
    Listener started successfully.
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0

My system is now ready for “new” 11gR2 databases.

Next steps

The system is now deployed and ready to host a first database. Creating a database is done by using dbca or by restoring a database from a backup. Using a backup is definitely the best idea to keep the database as close as possible to the source one. You may need to use db_file_name_convert to remap the old datafile structure to the new one. I would probably create a pfile from the source database, make the needed changes and start my new instance with this pfile, as sketched below.
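
A minimal sketch of that restore approach (the instance name DB11 and all paths are assumptions):

# On the source 11gR2 server, generate a pfile to copy over to the VM
sqlplus -s / as sysdba <<'EOF'
CREATE PFILE='/tmp/initDB11.ora' FROM SPFILE;
EOF
# In the copied pfile, remap the old file locations to the new filesystems, e.g.:
#   *.db_file_name_convert='/oradata/DB11/','/u02/app/oracle/oradata/DB11/'
#   *.log_file_name_convert='/oradata/DB11/','/u03/app/oracle/DB11/'
# Then start the new instance with this pfile and restore/recover from the backup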

Conclusion

Only consider this solution if no other one is possible. Remember that it comes without any support and without any performance guarantee. For sure, I wouldn’t recommend using this solution for a production database, and I definitely advise migrating all your old databases to 19c. Don’t forget that 11gR2 is more than 15 years old now, and you shouldn’t deploy it in 2023.

The article Deploying 11.2.0.4 on a recent ODA appeared first on the dbi Blog.

Upgrade OEM 13c from R4 to R5

Tue, 2023-01-24 12:42

Upgrading Enterprise Manager Cloud Control can be tricky. In this blog I am describing an upgrade I did for a client.

The source version is OEM 13c R4.

And the target version will be OEM 13c R5.

Before starting, I did the following tasks.

- Copy the key into the repository:

oracle@oemserverp01:/home/oracle/emc135/ [oms13cR4] emctl config emkey -copy_to_repos
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation.  All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".
oracle@oemserverp01:/u00/app/oracle/product/13.1.0.0/swlib/ [oms13cR4]

- Set the parameter “_allow_insert_with_update_check” to true:

SQL> alter system set "_allow_insert_with_update_check"=true scope=both;

System altered.

SQL>

You can now stop the OMS. Anyway, the upgrade tool will ask you to do this when needed:

oracle@oemserverp01:/home/oracle/ [oms13cR4] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation.  All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down
oracle@oemserverp01:/home/oracle/ [oms13cR4] emctl status oms
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation.  All rights reserved.
WebTier is Down
Oracle Management Server is Down
JVMD Engine is Down
BI Publisher Server is Down
oracle@oemserverp01:/home/oracle/ [oms13cR4]

The following zip files have been downloaded:

oracle@oemserverp01:/home/oracle/emc135/ [EMRE13C] ll
total 8363828
-rw-r--r--. 1 oracle oinstall 1897494675 Jan 23 09:22 em13500_linux64-2.zip
-rw-r--r--. 1 oracle oinstall 2130750885 Jan 23 09:26 em13500_linux64-3.zip
-rw-r--r--. 1 oracle oinstall 1454982045 Jan 23 09:29 em13500_linux64-4.zip
-rw-r--r--. 1 oracle oinstall 1494500710 Jan 23 09:31 em13500_linux64-5.zip
-rwxr-xr-x. 1 oracle oinstall 1586801533 Jan 23 09:19 em13500_linux64.bin
oracle@oemserverp01:/home/oracle/emc135/ [EMRE13C]

To launch the upgrade, run the executable em13500_linux64.bin (do chmod +x em13500_linux64.bin first):

oracle@oemserverp01:/home/oracle/emc135/ [rdbms19c] pwd
/home/oracle/emc135
oracle@oemserverp01:/home/oracle/emc135/ [rdbms19c] ./em13500_linux64.bin

Then I had a look at the file specified in the error and I deinstalled the agents on the corresponding servers:

oracle@oemserverp01:/u00/app/oracle/product/13.1.0.0/swlib/ [EMRE13C] cat  /tmp/OraInstall2023-01-23_09-37-33AM/extensibility_check/agent_side_compatibility_check_errors.txt
============================================================================================================================
Plug-in                                           Agent                                             Unsupported Version deployed
============================================================================================================================
Oracle Database                                   dbserver01.example.com:3872                  13.2.1.0.0
Oracle Database                                   dbserver02.example.com:3872                  13.2.2.0.0
oracle@oemserverp01:/u00/app/oracle/product/13.1.0.0/swlib/ [EMRE13C]

And then I continued the upgrade.

Execute the root.sh file as required:

[root@oemserverp01 ~]# /u00/app/oracle/product/13.5.0.0/omsr5/allroot.sh

Starting to execute allroot.sh .........

Starting to execute /u00/app/oracle/product/13.5.0.0/omsr5/root.sh ......
Check /u00/app/oracle/product/13.5.0.0/omsr5/install/root_oemserverp01.example.com_2023-01-23_12-06-44.log for the output of root script
Finished execution of  /u00/app/oracle/product/13.5.0.0/omsr5/root.sh ......


[root@oemserverp01 ~]#

After the end of the process, we can validate the new version of our OEM.

Conclusion

Hope this may help.

The article Upgrade OEM 13c from R4 to R5 appeared first on the dbi Blog.

How to submit a talk for a conference

Mon, 2023-01-23 08:34

I have already given some talks at conferences and I have also had the honor of selecting talks for a conference. During this talk selection I found some excellent examples of how to write a proposal, but unfortunately also some examples that could be improved. My own learnings from submitting proposals will also flow into this post, because I know it’s not always easy to find the right words.

Let’s have a look at the things that can be improved. Let’s assume you want to submit a talk about High Availability in PostgreSQL, maybe a comparison of different tools.

Title

First of all, you need a title. This one should spark the interest of the audience. Be creative!

“HA in PGSQL” is maybe not the best choice to spark someone’s interest. Of course, the summary of your talk is more important. But if the title does not catch me, I am not even really interested in the summary.

Summary

The summary is the heart of your proposal, the heart of your talk. Take some time to write it. It takes days to prepare your talk, so please take more than two minutes to write your summary.

One liner

“I will talk about high availability in PostgreSQL”

Let’s be honest. Do you know what this talk is about now? Is it about replication in general? Is it about one tool? Is it about several tools? Is it a comparison? Will there be examples?

With all due respect: this is not enough!

Agenda

Publishing the agenda in the summary may look self-explanatory to you. But don’t just put the agenda 1:1 into your proposal:

  1. Explaining PostgreSQL
  2. Explaining Method 1
  3. Explaining Method 2
  4. Conclusion

So now I have a better idea of what you want to talk about than with the one-liner. But isn’t it worth putting a bit more detail into your submission? A bit more passion?

After a small introduction of PostgreSQL and its possibilities to reach high availability, we will have a look at the pros and cons of different solutions (patroni, repmanager and autofailover) to achieve even better availability of your data.

If you really want to put your agenda into the summary already (I think that takes away a bit of the surprise), feel free to do so. But some more sentences giving a better understanding of the goal of this talk would be great.

Spelling

Yes, this is still a topic we have to talk about. For example, if you submit to a PostgreSQL conference and write “Postgre” or “PostgresSQL”, the talk selection committee will ask themselves if you’re familiar with the topic. The same is valid for “Orcl” or “Oracel”.

Always ask somebody to read your proposal, especially if you’re submitting in a language other than your mother tongue. It’s not necessary to write as well as a native speaker, but it should at least be understandable.

“I talk my speach about patroni and high available is good to having” is understandable, but you can be sure your talk won’t be chosen. The talk committee will ask themselves if your language skills are good enough to give a talk in another language.

And if you don’t have someone to read your summary, even MS Word can help you find typos.

About you

The “About you” section is important because it should spark the audience’s interest. It should also give the talk selection committee an idea of who you are.

So please think about a short but interesting “About You” entry.

Where are you from, how long have you been working in your area? Please don’t put your full CV into the “About you” section. But maybe a bit more than “Hi, my name is Julia” would be great as well.

Deadline

Last, but not least: maybe the most important part of your submission.

Make yourself familiar with the deadline for the Call for Papers (CfP). It’s usually published on the conference homepage. Furthermore, the CfP is announced on the mailing lists, e.g. for PostgreSQL (pgsql-announce@lists.postgresql.org). Feel free to subscribe to it on www.postgresql.org. There is also the possibility to follow your favourite conference on social media (Twitter / LinkedIn…).

In general, there are also reminders that the CfP will close soon. You don’t need to submit on the first day, but please submit before the CfP closes.

Usually, the organisation teams have a timeline as well as some time pressure during the organisation period. So please understand that the voting for the talks will start immediately after the CfP is closed, and that later submissions or direct messages to the talk selection committee cannot be considered.

First-time speakers

Don’t be shy. Try it, I am sure you have something interesting to talk about. Don’t be sad if your talk is not chosen; give it another try at another conference. There are always talk committees that will look for newcomers to give them a chance. We all started as first-time speakers. So be brave, we look forward to meeting you.

Conclusion

This blog reflects my personal impression of the talk submissions I have read during the last few years. If you find any similarities to your own summaries, I am sorry; I tried to invent my own examples.

Of course, it’s always your own decision how you want to propose a talk. Find your own style, it should reflect your personality. But always keep in mind: Do I think this proposal is interesting? Do I want to spend my time listening to that talk?

Have fun with your next or even first proposal!

The article How to submit a talk for a conference appeared first on the dbi Blog.

Configuring a SMTP relay on Oracle Linux Server v8

Mon, 2023-01-23 08:00

For some internal DMK (https://www.dbi-services.com/fr/produits/dmk-management-kit/) development I had to configure an SMTP relay on my VM. The purpose of the SMTP relay is to send emails locally from DMK without any authentication. These emails will then be transferred to the SMTP server created by one of my colleagues on our AWS cloud. This SMTP server of course uses authentication. To do so, I installed and configured postfix. Through this blog, I just want to share my experience with this installation, hoping it might help someone.

Installation of postfix

My lab is running Oracle Linux Server release 8.6.

[root@srv-ol8-ora ~]# cat /etc/oracle-release
Oracle Linux Server release 8.6

I have no postfix currently installed on my VM.

[root@srv-ol8-ora ~]# rpm -qa | grep postfix

I installed it.

[root@srv-ol8-ora ~]# dnf install -y postfix
Last metadata expiration check: 0:09:18 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Dependencies resolved.
==============================================================================================================================================================================================================================
 Package                                           Architecture                                     Version                                                 Repository                                                   Size
==============================================================================================================================================================================================================================
Installing:
 postfix                                           x86_64                                           2:3.5.8-4.el8                                           ol8_baseos_latest                                           1.5 M
Installing dependencies:
 libicu                                            x86_64                                           60.3-2.el8_1                                            ol8_baseos_latest                                           8.8 M

Transaction Summary
==============================================================================================================================================================================================================================
Install  2 Packages

Total download size: 10 M
Installed size: 36 M
Downloading Packages:
(1/2): postfix-3.5.8-4.el8.x86_64.rpm                                                                                                                                                          15 MB/s | 1.5 MB     00:00
(2/2): libicu-60.3-2.el8_1.x86_64.rpm                                                                                                                                                          33 MB/s | 8.8 MB     00:00
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                          38 MB/s |  10 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                      1/1
  Installing       : libicu-60.3-2.el8_1.x86_64                                                                                                                                                                           1/2
  Running scriptlet: libicu-60.3-2.el8_1.x86_64                                                                                                                                                                           1/2
  Running scriptlet: postfix-2:3.5.8-4.el8.x86_64                                                                                                                                                                         2/2
  Installing       : postfix-2:3.5.8-4.el8.x86_64                                                                                                                                                                         2/2
  Running scriptlet: postfix-2:3.5.8-4.el8.x86_64                                                                                                                                                                         2/2
  Verifying        : libicu-60.3-2.el8_1.x86_64                                                                                                                                                                           1/2
  Verifying        : postfix-2:3.5.8-4.el8.x86_64                                                                                                                                                                         2/2

Installed:
  libicu-60.3-2.el8_1.x86_64                                                                                   postfix-2:3.5.8-4.el8.x86_64

Complete!

The appropriate package has been installed.

[root@srv-ol8-ora ~]# rpm -qa | grep postfix
postfix-3.5.8-4.el8.x86_64

Firewall configuration

If a local firewall is running, it needs to be configured to allow SMTP traffic.

[root@srv-ol8-ora ~]# firewall-cmd --zone=public --add-service=smtp --permanent
success

[root@srv-ol8-ora ~]# firewall-cmd --reload
success

Or, if the firewall is not needed, which is the case on my side, simply deactivate it.

[root@srv-ol8-ora postfix]# systemctl stop firewalld

[root@srv-ol8-ora postfix]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Remove sendmail

If installed, sendmail should be removed, as per the Oracle documentation: https://docs.oracle.com/en/learn/oracle-linux-postfix/#introduction

On my lab VM, sendmail is not installed.

[root@srv-ol8-ora ~]# rpm -qa | grep sendmail
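
If it had been installed, removing it would simply be:

# Remove sendmail so it does not conflict with postfix (not needed on this lab VM)
dnf remove -y sendmail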

Set Postfix as the default Mail Transfer Agent
[root@srv-ol8-ora ~]# alternatives --set mta /usr/sbin/sendmail.postfix

Enable and start the Postfix service
[root@srv-ol8-ora ~]# systemctl enable --now postfix
Created symlink /etc/systemd/system/multi-user.target.wants/postfix.service → /usr/lib/systemd/system/postfix.service.

Install SASL

SASL (Simple Authentication and Security Layer) is mandatory for authentication in the Postfix SMTP client.

We need both of the following packages: cyrus-sasl and cyrus-sasl-plain. I checked which cyrus packages were already installed and which were available.

[root@srv-ol8-ora postfix]# dnf list cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain
Last metadata expiration check: 1:59:10 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Installed Packages
cyrus-sasl-lib.x86_64                                                                                     2.1.27-6.el8_5                                                                                    @ol8_baseos_latest
Available Packages
cyrus-sasl.i686                                                                                           2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl.src                                                                                            2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl.src                                                                                            2.1.27-6.el8_5                                                                                    ol8_appstream
cyrus-sasl.x86_64                                                                                         2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl-lib.i686                                                                                       2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl-plain.i686                                                                                     2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl-plain.x86_64                                                                                   2.1.27-6.el8_5                                       

I have installed both needed packages.

[root@srv-ol8-ora postfix]# dnf install cyrus-sasl cyrus-sasl-plain
Last metadata expiration check: 1:59:36 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Dependencies resolved.
==============================================================================================================================================================================================================================
 Package                                                 Architecture                                  Version                                                 Repository                                                Size
==============================================================================================================================================================================================================================
Installing:
 cyrus-sasl                                              x86_64                                        2.1.27-6.el8_5                                          ol8_baseos_latest                                         96 k
 cyrus-sasl-plain                                        x86_64                                        2.1.27-6.el8_5                                          ol8_baseos_latest                                         47 k

Transaction Summary
==============================================================================================================================================================================================================================
Install  2 Packages

Total download size: 144 k
Installed size: 194 k
Is this ok [y/N]: y
Downloading Packages:
(1/2): cyrus-sasl-plain-2.1.27-6.el8_5.x86_64.rpm                                                                                                                                             946 kB/s |  47 kB     00:00
(2/2): cyrus-sasl-2.1.27-6.el8_5.x86_64.rpm                                                                                                                                                   1.6 MB/s |  96 kB     00:00
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                         2.2 MB/s | 144 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                      1/1
  Installing       : cyrus-sasl-plain-2.1.27-6.el8_5.x86_64                                                                                                                                                               1/2
  Running scriptlet: cyrus-sasl-2.1.27-6.el8_5.x86_64                                                                                                                                                                     2/2
  Installing       : cyrus-sasl-2.1.27-6.el8_5.x86_64                                                                                                                                                                     2/2
  Running scriptlet: cyrus-sasl-2.1.27-6.el8_5.x86_64                                                                                                                                                                     2/2
  Verifying        : cyrus-sasl-2.1.27-6.el8_5.x86_64                                                                                                                                                                     1/2
  Verifying        : cyrus-sasl-plain-2.1.27-6.el8_5.x86_64                                                                                                                                                               2/2

Installed:
  cyrus-sasl-2.1.27-6.el8_5.x86_64                                                                           cyrus-sasl-plain-2.1.27-6.el8_5.x86_64

Complete!                                     

Checking…

[root@srv-ol8-ora postfix]# dnf list cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain
Last metadata expiration check: 1:59:46 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Installed Packages
cyrus-sasl.x86_64                                                                                         2.1.27-6.el8_5                                                                                    @ol8_baseos_latest
cyrus-sasl-lib.x86_64                                                                                     2.1.27-6.el8_5                                                                                    @ol8_baseos_latest
cyrus-sasl-plain.x86_64                                                                                   2.1.27-6.el8_5                                                                                    @ol8_baseos_latest
Available Packages
cyrus-sasl.i686                                                                                           2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl.src                                                                                            2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl.src                                                                                            2.1.27-6.el8_5                                                                                    ol8_appstream
cyrus-sasl-lib.i686                                                                                       2.1.27-6.el8_5                                                                                    ol8_baseos_latest
cyrus-sasl-plain.i686                                                                                     2.1.27-6.el8_5                                                                                    ol8_baseos_latest                                  

Configure postfix

I have added the following required parameters to the postfix configuration, including the SMTP relay server name and a reference to a file (sasl_passwd) storing the username and password.

[root@srv-ol8-ora ~]# postconf -e "relayhost = [email-smtp.eu-central-1.amazonaws.com]:587"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_auth_enable = yes"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_security_options = noanonymous"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_tls_security_options = noanonymous"
[root@srv-ol8-ora ~]# postconf -e "smtp_use_tls = yes"                                                                                  ol8_baseos_latest                                  

I have created the file storing the username and password for sending emails. The file is called sasl_passwd and is stored in /etc/postfix. The format is [<smtp_server_name>]:<port> <username>:<password>.

[root@srv-ol8-ora ~]# vi /etc/postfix/sasl_passwd
[root@srv-ol8-ora ~]# cat /etc/postfix/sasl_passwd
[email-smtp.eu-central-1.amazonaws.com]:587 A*****************H:B******************z

I have changed the file permissions to restrict access to root only.

[root@srv-ol8-ora ~]# chmod 600 /etc/postfix/sasl_passwd

I have run postmap on the file to create a database-like file that postfix can read.

[root@srv-ol8-ora ~]# postmap /etc/postfix/sasl_passwd
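
postmap creates a lookup file (sasl_passwd.db for the hash type) next to the source file. As an extra check, not part of the original steps, one can verify that the file exists and that the key resolves:

[root@srv-ol8-ora ~]# ls -l /etc/postfix/sasl_passwd*
[root@srv-ol8-ora ~]# postmap -q "[email-smtp.eu-central-1.amazonaws.com]:587" hash:/etc/postfix/sasl_passwd

The second command should print the username:password pair stored in the map.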

And finally I reloaded the postfix configuration.

[root@srv-ol8-ora ~]# postfix reload
postfix/postfix-script: refreshing the Postfix mail system

Test

In order to run a test, I have installed mailx.

[root@srv-ol8-ora ~]# dnf list mailx
Last metadata expiration check: 0:52:42 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Available Packages
mailx.src                                                                                              12.5-29.el8                                                                                           ol8_baseos_latest
mailx.x86_64                                                                                           12.5-29.el8                                                                                           ol8_baseos_latest
[root@srv-ol8-ora ~]# dnf install mailx
Last metadata expiration check: 0:52:56 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Dependencies resolved.
==============================================================================================================================================================================================================================
 Package                                          Architecture                                      Version                                                Repository                                                    Size
==============================================================================================================================================================================================================================
Installing:
 mailx                                            x86_64                                            12.5-29.el8                                            ol8_baseos_latest                                            257 k

Transaction Summary
==============================================================================================================================================================================================================================
Install  1 Package

Total download size: 257 k
Installed size: 491 k
Is this ok [y/N]: y
Downloading Packages:
mailx-12.5-29.el8.x86_64.rpm                                                                                                                                                                  5.1 MB/s | 257 kB     00:00
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                         4.7 MB/s | 257 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                      1/1
  Installing       : mailx-12.5-29.el8.x86_64                                                                                                                                                                             1/1
  Running scriptlet: mailx-12.5-29.el8.x86_64                                                                                                                                                                             1/1
  Verifying        : mailx-12.5-29.el8.x86_64                                                                                                                                                                             1/1

Installed:
  mailx-12.5-29.el8.x86_64

Complete!

I ran a test sending an email to my email address.

[root@srv-ol8-ora ~]# echo "This is a DMK test through SMTP relay." | mailx -s "DMK test" marc.wagner@dbi-services.com

I checked the log file.

[root@srv-ol8-ora ~]# tail -f /var/log/maillog
Jan 23 11:18:32 srv-ol8-ora postfix/pickup[47181]: 09472140C214: uid=0 from=
Jan 23 11:18:32 srv-ol8-ora postfix/cleanup[47187]: 09472140C214: message-id=
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 09472140C214: from=, size=512, nrcpt=1 (queue active)
Jan 23 11:18:32 srv-ol8-ora postfix/smtp[47184]: 09472140C214: to=, relay=email-smtp.eu-central-1.amazonaws.com[3.74.180.161]:587, delay=0.24, delays=0.01/0/0.14/0.1, dsn=5.0.0, status=bounced (host email-smtp.eu-central-1.amazonaws.com[3.74.180.161] said: 554 Message rejected: Email address is not verified. The following identities failed the check in region EU-CENTRAL-1: root@srv-ol8-ora.localdomain, root  (in reply to end of DATA command))
Jan 23 11:18:32 srv-ol8-ora postfix/cleanup[47187]: 44B7C140C22A: message-id=
Jan 23 11:18:32 srv-ol8-ora postfix/bounce[47186]: 09472140C214: sender non-delivery notification: 44B7C140C22A
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 44B7C140C22A: from=, size=3090, nrcpt=1 (queue active)
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 09472140C214: removed
Jan 23 11:18:32 srv-ol8-ora postfix/local[47188]: 44B7C140C22A: to=, relay=local, delay=0.01, delays=0/0/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 44B7C140C22A: removed

The problem comes from the fact that the sender email address, root@srv-ol8-ora.localdomain, is not verified on the AWS SMTP server. Using the -r option of the mailx command would not help, as it does not apply here with SMTP. And in any case, I would have the same problem with DMK sending an email.

This is why I had to change the sender in the postfix configuration.

Changing sender in the postfix configuration

To do so, I have created a new configuration file, sender_email, in /etc/postfix, mapping the local sender addresses to the verified one. This file has been referenced in the postfix configuration through the sender_canonical_maps parameter.

[root@srv-ol8-ora postfix]# vi sender_email

[root@srv-ol8-ora postfix]# cat sender_email
root@srv-ol8-ora.localdomain marc.wagner@dbi-services.com
oracle@srv-ol8-ora.localdomain marc.wagner@dbi-services.com

[root@srv-ol8-ora postfix]# postconf -e "sender_canonical_maps = hash:/etc/postfix/sender_email"

[root@srv-ol8-ora postfix]# postmap /etc/postfix/sender_email

[root@srv-ol8-ora postfix]# postfix reload
postfix/postfix-script: refreshing the Postfix mail system
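
Optionally, the rewriting can be verified with a lookup against the new map before sending anything (my own check, not from the original procedure):

[root@srv-ol8-ora postfix]# postmap -q "root@srv-ol8-ora.localdomain" hash:/etc/postfix/sender_email
marc.wagner@dbi-services.com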

New test

I ran a mailx command again.

[root@srv-ol8-ora ~]# echo "This is a DMK test through SMTP relay." | mailx -s "DMK test" marc.wagner@dbi-services.com

And checked the log file again.

[root@srv-ol8-ora ~]# tail -f /var/log/maillog
Jan 23 12:02:45 srv-ol8-ora postfix/pickup[57484]: 46AC1140C215: uid=0 from=
Jan 23 12:02:45 srv-ol8-ora postfix/cleanup[57682]: 46AC1140C215: message-id=
Jan 23 12:02:45 srv-ol8-ora postfix/qmgr[57485]: 46AC1140C215: from=, size=508, nrcpt=1 (queue active)
Jan 23 12:02:45 srv-ol8-ora postfix/smtp[57684]: 46AC1140C215: to=, relay=email-smtp.eu-central-1.amazonaws.com[52.28.191.33]:587, delay=0.45, delays=0.03/0.06/0.13/0.23, dsn=2.0.0, status=sent (250 Ok 01070185de4b762a-1c453360-ba6f-49f2-bc7e-508941e8cc7e-000000)
Jan 23 12:02:45 srv-ol8-ora postfix/qmgr[57485]: 46AC1140C215: removed

As we can see in the log file, the email sending is now clean. I could confirm this as well by receiving the email in my inbox. I could also test and validate the same from the DMK software.

The article Configuring a SMTP relay on Oracle Linux Server v8 appeared first on dbi Blog.

SQL Server: Heap page deallocation and IndexOptimize

Mon, 2023-01-23 00:00
Introduction

HEAP tables are a type of data structure that, unlike clustered index tables, does not impose any specific ordering on its rows, which makes inserting and deleting data faster.

They are often used for temporary “staging” tables.

This is all fine. But if your table is never dropped and is reused over time for data loading, you need to be aware of how SQL Server manages HEAP pages.

I was recently asked by a customer why a HEAP table would consume significantly more disk space than the same table with a clustered index. Do you have any idea why?

HEAP page deallocation

Well, SQL Server does not deallocate pages when doing a DELETE on a heap. When rows are deleted, the emptied pages are not released; as a result, the space remains allocated and the table keeps consuming disk space.
This may seem surprising but it is documented in Microsoft Docs.
See: Space that a table uses is not completely released after you use a DELETE statement to delete data from the table in SQL Server

After you use a DELETE statement in Microsoft SQL Server to delete data from a table, you may notice that the space that the table uses is not completely released.

There’s also a great article by Paul Randal on the topic:
SQLskills SQL101: Why does my heap have a bunch of empty pages?

The table I was asked about is a heap table with a size of approximately 130GB. This table experiences a high volume of data modification, with almost the entire table being deleted and inserted again on a daily basis.
In this specific scenario, the best workaround would be to perform a TRUNCATE statement on the table prior to data loading, as this deallocates all the pages at once and makes the space available again inside the database.
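
As an illustration, here is a minimal sketch of that daily pattern; the table names dbo.StagingLoad and dbo.DailyExtract are assumptions for the example:

-- TRUNCATE deallocates all pages of the heap, unlike DELETE
TRUNCATE TABLE dbo.StagingLoad;

-- Daily reload from the source (hypothetical names)
INSERT INTO dbo.StagingLoad (Id, Payload)
SELECT Id, Payload
FROM dbo.DailyExtract;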

Another workaround, which is not mentioned in the documentation, is to periodically rebuild the HEAP with ALTER TABLE … REBUILD: this defragments the heap, makes the space more contiguous, and releases the empty pages.

IndexOptimize and HEAP fragmentation

The Ola Hallengren maintenance script is a widely-used open-source solution for maintaining the performance and integrity of SQL Server databases. One of the features of the script is its ability to rebuild indexes to fix fragmentation.

However, the script does not include any specific functionality for rebuilding heaps or fixing heap fragmentation.

It has been two years since an issue was opened on GitHub for this feature, and we are still waiting for it to be implemented.

Heap empty pages and IndexOptimize demo

Here is an example that creates a HEAP table, loads 3000 rows into it and then deletes them, leaving empty pages that are still allocated. Running the IndexOptimize procedure will not address this table and therefore will not release the unused space.

use master
go
DROP DATABASE IF EXISTS Heapo;
CREATE DATABASE Heapo;
GO

USE Heapo;
GO
CREATE TABLE HeapTest (
	[c1] INT IDENTITY
	, [c2] VARCHAR(4000) DEFAULT replicate('a', 4000)
	, [c3] VARCHAR(4000) DEFAULT ''
);
GO

-- Insert 3000 rows, which will allocate 1500 pages
INSERT INTO HeapTest DEFAULT VALUES;
GO 3000

With the following query, we can observe that the table contains 3000 rows stored in 1500 pages.

SELECT [page_count], [record_count], [avg_page_space_used_in_percent]
FROM sys.dm_db_index_physical_stats (DB_ID (N'Heapo'), OBJECT_ID (N'HeapTest'), 0,  DEFAULT, 'DETAILED');
GO

Let’s delete all the rows.

-- Delete all the rows
DELETE FROM HeapTest;
GO

The pages are still allocated even though the table is now empty.
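
A quick way to see it (re-using the physical stats query from above; the exact figures are indicative) is to query the DMV again, which now returns the same page count with a record count of 0:

SELECT [page_count], [record_count], [avg_page_space_used_in_percent]
FROM sys.dm_db_index_physical_stats (DB_ID (N'Heapo'), OBJECT_ID (N'HeapTest'), 0,  DEFAULT, 'DETAILED');
GO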

Now, what happens if we run Ola Hallengren’s index maintenance script?

exec master.dbo.IndexOptimize @Databases = 'Heapo'

Same thing: IndexOptimize does not do anything about HEAPs, so the empty pages remain allocated.

Rebuilding the heap with ALTER TABLE … REBUILD will release the unused pages.

ALTER TABLE HeapTest REBUILD
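
Running the physical stats query one last time (again a verification step I added, not shown in the original) should confirm that the empty pages have been released:

SELECT [page_count], [record_count], [avg_page_space_used_in_percent]
FROM sys.dm_db_index_physical_stats (DB_ID (N'Heapo'), OBJECT_ID (N'HeapTest'), 0,  DEFAULT, 'DETAILED');
GO
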
Conclusion

HEAP tables in SQL Server are a powerful data structure that can be used to store temporary data or large volumes of data that are inserted, updated, and deleted frequently.
However, it is important for DBAs to be aware of this page deallocation behavior and to address it manually when needed.
One strategy is to periodically rebuild the HEAPs using the ALTER TABLE … REBUILD command.

The article SQL Server: Heap page deallocation and IndexOptimize appeared first on dbi Blog.
