Feed aggregator

Bloom Filter Efficiency And Cardinality Estimates

Randolf Geist - Tue, 2019-04-23 18:45
I recently came across an interesting observation that I haven't seen documented yet, so I'm publishing a simple example here to demonstrate the issue.

In principle it looks like the efficiency of Bloom Filter operations depends on the cardinality estimates. This means that cardinality under-estimates by the optimizer in particular can make a dramatic difference to how efficiently a corresponding Bloom Filter operation based on such an estimate works at runtime. Since Bloom Filters are crucial for efficient processing, in particular when using Exadata or the In-Memory column store, this can have a significant impact on the performance of affected operations.

While other operations based on SQL workareas, like hash joins for example, can be affected by such cardinality mis-estimates too, these seem to be capable of adapting at runtime - at least to a certain degree. However, I haven't seen such adaptive behaviour from Bloom Filter operations at runtime (not even when executing the same statement multiple times, with statistics feedback not kicking in).

To demonstrate the issue I'll create two simple tables that get joined and one of them gets a filter applied:

create table t1 parallel 4 nologging compress
as
with
generator1 as
(
  select
         rownum as id
       , rpad('x', 100) as filler
  from   dual
  connect by
         level <= 1e3
),
generator2 as
(
  select
         rownum as id
       , rpad('x', 100) as filler
  from   dual
  connect by
         level <= 1e4
)
select
       id
     , id as id2
     , rpad('x', 100) as filler
from (
  select /*+ leading(b a) */
         (a.id - 1) * 1e4 + b.id as id
  from
         generator1 a
       , generator2 b
);

alter table t1 noparallel;

create table t2 parallel 4 nologging compress as select * from t1;

alter table t2 noparallel;

All I did here is create two tables with 10 million rows each, and I'll look at the runtime statistics of the following query:

select /*+ no_merge(x) */ * from (
select /*+
          opt_estimate(table t1 rows=1)
          --opt_estimate(table t1 rows=250000)
       */
       t1.id
     , t2.id2
from
       t1
     , t2
where
       mod(t1.id2, 40) = 0
       --t1.id2 between 1 and 250000
and    t1.id = t2.id
) x
where rownum > 1;

Note: If you try to reproduce this, make sure you actually get a Bloom Filter operation - in an unpatched version I had to add a PARALLEL(2) hint to get one.

The query filters on T1 so that 250K rows will be returned, and then joins to T2. The first interesting observation regarding the efficiency of the Bloom Filter is that the actual data pattern makes a significant difference: when using the commented filter "T1.ID2 BETWEEN 1 AND 250000" the resulting cardinality will be the same as with "MOD(T1.ID2, 40) = 0", but the former results in perfect filtering by the Bloom Filter regardless of the OPT_ESTIMATE hint used, whereas with the latter the efficiency differs dramatically.

This is what I get when using version 18.3 (another version showed very similar results) and forcing the under-estimate using the OPT_ESTIMATE ROWS=1 hint - the output is from my XPLAN_ASH script and edited for brevity:

| Id | Operation | Name | Rows | Bytes | Execs | A-Rows | PGA |
| 0 | SELECT STATEMENT | | | | 1 | 0 | |
| 1 | COUNT | | | | 1 | 0 | |
|* 2 | FILTER | | | | 1 | 0 | |
| 3 | VIEW | | 1 | 26 | 1 | 250K | |
|* 4 | HASH JOIN | | 1 | 24 | 1 | 250K | 12556K |
| 5 | JOIN FILTER CREATE| :BF0000 | 1 | 12 | 1 | 250K | |
|* 6 | TABLE ACCESS FULL| T1 | 1 | 12 | 1 | 250K | |
| 7 | JOIN FILTER USE | :BF0000 | 10M| 114M| 1 | 10000K | |
|* 8 | TABLE ACCESS FULL| T2 | 10M| 114M| 1 | 10000K | |

The Bloom Filter didn't help much - only a few rows were actually filtered (otherwise my XPLAN_ASH script would have shown "10M" as the actual cardinality instead of "10000K", which stands for something slightly less than 10M).

Repeat the same but this time using the OPT_ESTIMATE ROWS=250000 hint:

| Id | Operation | Name | Rows | Bytes |TempSpc| Execs | A-Rows| PGA |
| 0 | SELECT STATEMENT | | | | | 1 | 0 | |
| 1 | COUNT | | | | | 1 | 0 | |
|* 2 | FILTER | | | | | 1 | 0 | |
| 3 | VIEW | | 252K| 6402K| | 1 | 250K | |
|* 4 | HASH JOIN | | 252K| 5909K| 5864K| 1 | 250K | 12877K |
| 5 | JOIN FILTER CREATE| :BF0000 | 250K| 2929K| | 1 | 250K | |
|* 6 | TABLE ACCESS FULL| T1 | 250K| 2929K| | 1 | 250K | |
| 7 | JOIN FILTER USE | :BF0000 | 10M| 114M| | 1 | 815K | |
|* 8 | TABLE ACCESS FULL| T2 | 10M| 114M| | 1 | 815K | |

So we end up with exactly the same execution plan but the efficiency of the Bloom Filter at runtime has changed dramatically due to the different cardinality estimate the Bloom Filter is based on.

I haven't spent much time yet with the corresponding undocumented parameters that might influence the Bloom Filter behaviour, but I repeated the test with the following setting in the session (while ensuring an adequate PGA_AGGREGATE_TARGET, since otherwise the hash join might start spilling to disk - the Bloom Filter size is considered when calculating SQL workarea sizes):

alter session set "_bloom_filter_size" = 1000000;

I got the following result:

| Id | Operation | Name | Rows | Bytes | Execs | A-Rows| PGA |
| 0 | SELECT STATEMENT | | | | 1 | 0 | |
| 1 | COUNT | | | | 1 | 0 | |
|* 2 | FILTER | | | | 1 | 0 | |
| 3 | VIEW | | 1 | 26 | 1 | 250K | |
|* 4 | HASH JOIN | | 1 | 24 | 1 | 250K | 12568K |
| 5 | JOIN FILTER CREATE| :BF0000 | 1 | 12 | 1 | 250K | |
|* 6 | TABLE ACCESS FULL| T1 | 1 | 12 | 1 | 250K | |
| 7 | JOIN FILTER USE | :BF0000 | 10M| 114M| 1 | 815K | |
|* 8 | TABLE ACCESS FULL| T2 | 10M| 114M| 1 | 815K | |

which shows slightly increased PGA usage compared to the first output, but the same efficiency as when the better cardinality estimate was in place.

However, by increasing the size further I couldn't convince Oracle to make the Bloom Filter even more efficient, even when the better cardinality estimate was in place.


Obviously the efficiency / internal sizing of the Bloom Filter vector at runtime depends on the cardinality estimates of the optimizer. Depending on the actual data pattern this can make a significant difference in terms of efficiency. Yet another reason why having good cardinality estimates is a good thing and yet sometimes so hard to achieve, in particular for join cardinalities.


On MyOracleSupport I've found the following note regarding Bloom Filter efficiency:

Bug 8932139 - Bloom filtering efficiency is inversely proportional to DOP (Doc ID 8932139.8)

Another interesting detail - the bug is only fixed in version 19.1, but the fix is also included in the latest RU(R)s of 18c and 12.2 from January 2019 onwards.

Chinar Aliyev's Blog

Randolf Geist - Tue, 2019-04-23 17:04
Chinar Aliyev has recently started to pick up on several of my blog posts regarding Parallel Execution and the corresponding new features introduced in Oracle 12c.

It is good to see that Oracle has since improved some of these features and added new ones as well.

Here are some links to the corresponding posts:

New automatic Parallel Outer Join Null Handling in 18c

Improvements regarding automatic parallel distribution skew handling in 18c

Chinar has also put some more thoughts on the HASH JOIN BUFFERED operation:

New thoughts about the HASH JOIN BUFFERED operation

There are also a number of posts on his blog regarding histograms, and in particular how to properly calculate the join cardinality in the presence of additional filters and resulting skew - a very interesting topic that has yet to be handled properly by the optimizer, even in the latest versions.

Parse Calls

Jonathan Lewis - Tue, 2019-04-23 12:31

When dealing with the library cache / shared pool it’s always worth checking from time to time to see if a new version of Oracle has changed any of the statistics you rely on as indicators of potential problems. Today is also (coincidentally) a day when comments about “parses” and “parse calls” entered my field of vision from two different directions. I’ve tweeted out references to a couple of quirky little posts I did some years ago about counting parse calls and what a parse call may entail, but I thought I’d finish the day off with a little demo of what the session cursor cache does for you when your client code issues parse calls.

There are two bits of information I want to highlight – activity in the library cache and a number that shows up in the session statistics. Here’s the code to get things going:

rem     Script:         12c_session_cursor_cache.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2019
rem     Note:
rem     start_1.sql contains the one line
rem          select * from t1 where n1 = 0;

create table t1 as
select 99 n1 from dual
;

execute dbms_stats.gather_table_stats(user,'t1')

spool 12c_session_cursor_cache

prompt  =======================
prompt  No session cursor cache
prompt  =======================

alter session set session_cached_cursors = 0;

set serveroutput off
set feedback off

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

rem     run the statement in start_1.sql 1,000 times here
rem     (using the "repetition" framework mentioned below)


set feedback on
set serveroutput on

execute snap_my_stats.end_snap
execute snap_libcache.end_snap

prompt  ============================
prompt  Session cursor cache enabled
prompt  ============================

alter session set session_cached_cursors = 50;

set serveroutput off
set feedback off

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

rem     run the statement in start_1.sql 1,000 times here
rem     (using the "repetition" framework mentioned below)


set feedback on
set serveroutput on

execute snap_my_stats.end_snap
execute snap_libcache.end_snap

spool off

I’ve made use of a couple of little utilities I wrote years ago to take snapshots of my session statistics and the library cache (v$librarycache) stats. I’ve also used my “repetition” framework to execute a basic query 1,000 times. The statement is a simple “select from t1 where n1 = 0”, chosen to return no rows.

The purpose of the whole script is to show you the effect of running exactly the same SQL statement many times – first with the session cursor cache disabled (session_cached_cursors = 0) then with the cache enabled at its default size.

Here are some results from my test instance – which I’ve edited down by eliminating most of the single-digit numbers.

No session cursor cache
Session stats - 23-Apr 17:41:06
Interval:-  4 seconds
Name                                                                         Value
----                                                                         -----
Requests to/from client                                                      1,002
opened cursors cumulative                                                    1,034
user calls                                                                   2,005
session logical reads                                                        9,115
non-idle wait count                                                          1,014
session uga memory                                                          65,488
db block gets                                                                2,007
db block gets from cache                                                     2,007
db block gets from cache (fastpath)                                          2,006
consistent gets                                                              7,108
consistent gets from cache                                                   7,108
consistent gets pin                                                          7,061
consistent gets pin (fastpath)                                               7,061
logical read bytes from cache                                           74,670,080
calls to kcmgcs                                                              5,005
calls to get snapshot scn: kcmgss                                            1,039
no work - consistent read gets                                               1,060
table scans (short tables)                                                   1,000
table scan rows gotten                                                       1,000
table scan disk non-IMC rows gotten                                          1,000
table scan blocks gotten                                                     1,000
buffer is pinned count                                                       2,000
buffer is not pinned count                                                   2,091
parse count (total)                                                          1,035
parse count (hard)                                                               8
execute count                                                                1,033
bytes sent via SQL*Net to client                                           338,878
bytes received via SQL*Net from client                                     380,923
SQL*Net roundtrips to/from client                                            1,003

PL/SQL procedure successfully completed.

Library Cache - 23-Apr 17:41:06
Interval:-      4 seconds
Type      Cache                           Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
----      -----                           ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE SQL AREA                       1,040       1,032   1.0       1,089       1,073   1.0         0         1
NAMESPACE TABLE/PROCEDURE                   17          16    .9         101          97   1.0         0         0
NAMESPACE BODY                               9           9   1.0          26          26   1.0         0         0
NAMESPACE SCHEDULER GLOBAL ATTRIBU          40          40   1.0          40          40   1.0         0         0

PL/SQL procedure successfully completed.

The thing to notice, of course, is the large number of statistics that are (close to) multiples of 1,000 – i.e. the number of executions of the SQL statement. In particular you can see the ~1,000 “parse count (total)” which is not reflected in the “parse count (hard)” because the statement only needed to be loaded into the library cache and optimized once.

The other notable statistics come from the library cache, where we do 1,000 gets and pins on the “SQL AREA” – each “get” creates a “KGL Lock” (the “breakable parse lock”) that is made visible as an entry in v$open_cursor (x$kgllk), and each “pin” creates a “KGL Pin” that makes it impossible for anything to flush the child cursor from memory while we’re executing it.

So what changes when we enable the session cursor cache?

Session cursor cache enabled

Session altered.

Session stats - 23-Apr 17:41:09
Interval:-  3 seconds
Name                                                                         Value
----                                                                         -----
Requests to/from client                                                      1,002
opened cursors cumulative                                                    1,004
user calls                                                                   2,005
session logical reads                                                        9,003
non-idle wait count                                                          1,013
db block gets                                                                2,000
db block gets from cache                                                     2,000
db block gets from cache (fastpath)                                          2,000
consistent gets                                                              7,003
consistent gets from cache                                                   7,003
consistent gets pin                                                          7,000
consistent gets pin (fastpath)                                               7,000
logical read bytes from cache                                           73,752,576
calls to kcmgcs                                                              5,002
calls to get snapshot scn: kcmgss                                            1,002
no work - consistent read gets                                               1,000
table scans (short tables)                                                   1,000
table scan rows gotten                                                       1,000
table scan disk non-IMC rows gotten                                          1,000
table scan blocks gotten                                                     1,000
session cursor cache hits                                                    1,000
session cursor cache count                                                       3
buffer is pinned count                                                       2,000
buffer is not pinned count                                                   2,002
parse count (total)                                                          1,002
execute count                                                                1,003
bytes sent via SQL*Net to client                                           338,878
bytes received via SQL*Net from client                                     380,923
SQL*Net roundtrips to/from client                                            1,003

PL/SQL procedure successfully completed.

Library Cache - 23-Apr 17:41:09
Interval:-      3 seconds
Type      Cache                           Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
----      -----                           ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE SQL AREA                           5           5   1.0       1,014       1,014   1.0         0         0
NAMESPACE TABLE/PROCEDURE                    7           7   1.0          31          31   1.0         0         0
NAMESPACE BODY                               6           6   1.0          19          19   1.0         0         0

PL/SQL procedure successfully completed.

The first thing to note is that “parse count (total)” still shows 1,000 parse calls. However, we also see the statistic “session cursor cache hits” at 1,000. Allowing for a little noise around the edges, virtually every parse call has turned into a short-cut that takes us through the session cursor cache directly to the correct cursor.

This difference shows up in the library cache activity, where we still see 1,000 pins – we have to pin the cursor to execute it – but we no longer see 1,000 “gets”. In the absence of the session cursor cache the session has to keep searching for the statement, then creating and holding a KGL Lock while executing it – but when the cache is enabled the session will very rapidly recognise that the statement is one we are likely to re-use, so it will continue to hold the KGL Lock after we have finished executing the statement, and we can record the location of the KGL Lock in a session state object. After the first couple of executions of the statement we no longer have to search for the statement and attach a spare lock to it; we can simply navigate from our session state object to the cursor.

As before, the KGL Lock will show up in v$open_cursor – though this time it will not disappear between executions of the statement. Over the history of Oracle versions the contents of v$open_cursor have become increasingly helpful, so I’ll just show you what the view held for my session by the end of the test:

SQL> select cursor_type, sql_text from V$open_cursor where sid = 250 order by cursor_type, sql_text;

CURSOR_TYPE                                                      SQL_TEXT
---------------------------------------------------------------- ------------------------------------------------------------
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN snap_libcache.end_snap; END;
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN snap_my_stats.end_snap; END;
DICTIONARY LOOKUP CURSOR CACHED                                  SELECT DECODE('A','A','1','2') FROM SYS.DUAL
OPEN                                                             begin         dbms_application_info.set_module(
OPEN                                                             table_1_ff_2eb_0_0_0
OPEN-RECURSIVE                                                    SELECT VALUE$ FROM SYS.PROPS$ WHERE NAME = 'OGG_TRIGGER_OPT
OPEN-RECURSIVE                                                   select STAGING_LOG_OBJ# from sys.syncref$_table_info where t
OPEN-RECURSIVE                                                   update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0
SESSION CURSOR CACHED                                            BEGIN DBMS_OUTPUT.ENABLE(1000000); END;
SESSION CURSOR CACHED                                            BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
SESSION CURSOR CACHED                                            BEGIN snap_libcache.start_snap; END;
SESSION CURSOR CACHED                                            BEGIN snap_my_stats.start_snap; END;
SESSION CURSOR CACHED                                            select * from t1 where n1 = 0
SESSION CURSOR CACHED                                            select /*+ no_parallel */ spare4 from sys.optstat_hist_contr

17 rows selected.

The only one of specific interest is the penultimate one in the output – its type is “SESSION CURSOR CACHED” and we can recognise our “select from t1” statement.

Deploying A Micronaut Microservice To The Cloud

OTN TechBlog - Tue, 2019-04-23 10:17

So you've finally done it. You created a shiny new microservice. You've written tests that pass, run it locally and everything works great. Now it's time to deploy, and you're ready to jump to the cloud. That may seem intimidating, but honestly there's no need to worry. Deploying your Micronaut application to the Oracle Cloud is really quite easy and there are several options to choose from. In this post I'll show you a few of those options, and by the time you're done reading you'll be ready to get your app up and running.

If you haven't yet created an application, feel free to check out my last post and use that code to create a simple app that uses GORM to interact with an Oracle ATP instance.  Once you've created your Micronaut application you'll need to create a runnable JAR file. For this blog post I'll assume you followed my blog post, and any assets that I refer to will reflect that assumption. With Micronaut, creating a runnable JAR is as easy as using ./gradlew assemble or ./mvnw package (depending on which build automation tool your project uses). Creating the artifact will take a bit longer than you're probably used to if you haven't used Micronaut before. That's because Micronaut precompiles all necessary metadata for dependency injection so that it can minimize runtime reflection when obtaining that metadata. Once your task completes you will have a runnable JAR file in the build/libs directory of your project. You can launch your application locally by running java -jar /path/to/your.jar. So to launch the JAR created from the previous blog post, I set some environment variables and run:
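That sequence looks roughly like the following sketch. The wallet path, credentials and JAR name are placeholders from my setup, so substitute your own; the variable names rely on Micronaut's convention of mapping environment variables onto the datasources.default.* properties:

```shell
# hypothetical values -- substitute your own wallet location and credentials
export TNS_ADMIN=/path/to/wallet                 # directory containing the unzipped ATP wallet
export DATASOURCES_DEFAULT_USERNAME=mnuser       # schema user created for the app
export DATASOURCES_DEFAULT_PASSWORD=example      # that user's password

# launch the runnable JAR produced by ./gradlew assemble (name is a placeholder)
java -jar build/libs/micronautgorm-0.1-all.jar
```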

Which results in the application running locally:

So far, pretty easy. But we want to do more than launch a JAR file locally. We want to run it in the cloud, so let's see what that takes. The first method I want to look at is more of a "traditional" approach: launching a simple compute instance and deploying the JAR file.

Creating A Virtual Network

If this is your first time creating a compute instance you'll need to set up virtual networking.  If you have a network ready to go, skip down to "Creating An Instance" below. 

Your instance needs to be associated with a virtual network in the Oracle Cloud. Virtual cloud networks (hereafter referred to as VCNs) can be pretty complicated, but as a developer you need to know enough about them to make sure that your app is secure and accessible from the internet. To get started creating a VCN, either click "Create a virtual cloud network" from the dashboard:

Or select "Networking" -> "Virtual Cloud Networks" from the sidebar menu and then click "Create Virtual Cloud Network" on the VCN overview page:

In the "Create Virtual Cloud Network" dialog, populate a name and choose the option "Create Virtual Cloud Network Plus Related Resources" and click "Create Virtual Cloud Network" at the bottom of the dialog:

The "related resources" here refers to the necessary Internet Gateways, Route Table, Subnets and related Security Lists for the network. The security list by default will allow SSH, but not much else, so we'll edit that once the VCN is created.  When everything is complete, you'll receive confirmation:

Close the dialog and back on the VCN overview page, click on the name of the new VCN to view details:

On the details page for the VCN, choose a subnet and click on the Security List to view it:

On the Security List details page, click on "Edit All Rules":

And add a new rule that will expose port 8080 (the port that our Micronaut application will run on) to the internet:

Make sure to save the rules and close out. This VCN is now ready to be associated with an instance running our Micronaut application.

Creating An Instance

To get started with an Oracle Cloud compute instance log in to the cloud dashboard and either select "Create a VM instance":

Or choose "Compute" -> "Instances" from the sidebar and click "Create Instance" on the Instance overview page:

In the "Create Instance" dialog you'll need to populate a few values and make some selections. It seems like a long form, but there aren't many changes necessary from the default values for our simple use case. The first part of the form requires us to name the instance, select an Availability Domain, OS and instance type:


The next section asks for the instance shape and boot volume configuration, both of which I leave as the default. At this point I select a public key that I can use later on to SSH in to the machine:

Finally, select a VCN that is internet accessible with port 8080 open:

Click "Create" and you'll be taken to the instance details page where you'll notice the instance in a "Provisioning" state.  Once the instance has been provisioned, take note of the public IP address:

Deploying Your Application To The New Instance

Using the instance public IP address, SSH in via the private key associated with the public key used to create the instance:

We're almost ready to deploy our application; we just need a few things.  First, we need a JDK.  I like to use SDKMAN for that, so I first install SDKMAN, then use it to install the JDK with sdk install java 8.0.212-zulu and confirm the installation:
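The SDKMAN steps look something like this (a sketch run on the instance over SSH; it needs outbound network access):

```shell
# install SDKMAN by downloading and running its installer script
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"

# install the Zulu 8 JDK mentioned above and confirm it works
sdk install java 8.0.212-zulu
java -version
```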

We'll also need to open port 8080 on the instance firewall so that our instance will allow the traffic:
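On an Oracle Linux instance the OS firewall is managed by firewalld, so opening the port looks something like this (a sketch; the zone name may differ on your image):

```shell
# permanently open port 8080/tcp for the Micronaut app, then reload the rules
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --reload

# confirm 8080/tcp now appears in the open-port list
sudo firewall-cmd --list-ports
```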

We can now upload everything to our instance with SCP:
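The upload is one scp call per asset; the key path, JAR name and the <public-ip> placeholder below are assumptions from my setup (opc is the default user on Oracle Linux images):

```shell
# copy the runnable JAR, the unzipped ATP wallet directory and the helper scripts
scp -i ~/.ssh/id_rsa build/libs/micronautgorm-0.1-all.jar opc@<public-ip>:~
scp -i ~/.ssh/id_rsa -r wallet/ opc@<public-ip>:~
scp -i ~/.ssh/id_rsa env.sh run.sh opc@<public-ip>:~
```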

I've copied the JAR file, my Oracle ATP wallet and 2 simple scripts to help me out. The first script sets some environment variables:
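A minimal env.sh might look like this - the values are placeholders for my wallet location and schema credentials, and the variable names assume Micronaut's mapping of environment variables onto datasources.default.* properties:

```shell
# env.sh -- hypothetical values; load into the session with: . ./env.sh
export TNS_ADMIN=/wallet                        # wallet directory (we move it to / below)
export DATASOURCES_DEFAULT_USERNAME=mnuser      # application schema user
export DATASOURCES_DEFAULT_PASSWORD=example     # application schema password
```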

The second script is what we'll use to launch the application:
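A minimal version of that launcher could be written like this (the JAR name is a placeholder for whatever your build produced):

```shell
# create run.sh, a one-line launcher for the uploaded application JAR
cat > run.sh <<'EOF'
#!/bin/bash
java -jar micronautgorm-0.1-all.jar
EOF

# make it executable so it can be started with ./run.sh
chmod +x run.sh
```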

Next, move the wallet directory from the user home directory to the root with sudo mv wallet/ /wallet and source the environment variables with . ./env.sh. Now run the application with ./run.sh:

And hit the public IP in your browser to confirm the app is running and returning data as expected!

You've just deployed your Micronaut application to the Oracle Cloud! Of course, a manual VM install is just one method for deployment and isn't very maintainable long term for many applications, so in future posts we'll look at some other options for deploying that fit in the modern application development cycle.


Latest Blog Posts from Oracle ACEs: April 14-20, 2019

OTN TechBlog - Tue, 2019-04-23 10:06

In writing the blog posts listed below, the endgame for the Oracle ACE program members is simple: sharing their experience and expertise with the community. That doesn't make them superheroes, but you have to marvel at their willingness to devote time and energy to helping others.

Here's what they used their powers to produce for the week of April 14-20, 2019.


Oracle ACE Director Francisco Munoz Alvarez
CEO, CloudDB
Sydney, Australia


Oracle ACE Director Ludovico Caldara
Computing Engineer, CERN
Nyon, Switzerland


Oracle ACE Director Martin Giffy D'Souza
Director of Innovation, Insum Solutions
Alberta, Canada


Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, Texas


Oracle ACE Director Syed Jaffar Hussain
CTO, eProseed
Riyadh, Saudi Arabia


Oracle ACE Alfredo Krieg
Senior Principal Consultant, Viscosity North America
Dallas, Texas


Oracle ACE Marco MischkeMarco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany


Oracle ACE Noriyoshi Shinoda
Database Consultant, Hewlett Packard Enterprise Japan
Tokyo, Japan



Oracle ACE Patrick Jolliffe
Manager, Li & Fung Limited
Hong Kong


Oracle ACE Phil WilkinsPhil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom


Oracle ACE Zaheer Syed
Oracle Application Specialist, Tabadul
Riyadh, Saudi Arabia


Oracle ACE Associate Batmunkh Moltov
Chief Technology Officer, Global Data Engineering Co.
Ulaanbaatar, Mongolia


Oracle ACE Associate Flora Barriele
Oracle Database Administrator, Etat de Vaud
Lausanne, Switzerland




[Video] Critical Patch Update(CPU) for April 2019 is Now Available: Apply Now

Online Apps DBA - Tue, 2019-04-23 07:22

Oracle has released their quarterly security patches, i.e. the Critical Patch Update (CPU), on 16th April 2019, and it’s very important that you apply them immediately. The above video is for you if you are working as an Apps DBA, Architect, Fusion Middleware admin, DBA, or WebLogic Admin and want to learn: ☑ What these Critical Patch […]

The post [Video] Critical Patch Update(CPU) for April 2019 is Now Available: Apply Now appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

62 Percent of Restaurants Feel Unprepared for a Mobile Future

Oracle Press Releases - Tue, 2019-04-23 07:00
Press Release
62 Percent of Restaurants Feel Unprepared for a Mobile Future Restaurateurs engaging customers with mobile offerings today, but are not confident in keeping pace with the mobile innovations of tomorrow

Redwood Shores, Calif.—Apr 23, 2019

A recent survey of food and beverage leaders highlights that while a large percentage feel confident in their restaurant’s current use of mobile technology, only 48 percent feel prepared to capitalize on future innovations. Sixty-two percent of respondents expressed doubts over their ability to keep up with the speed of mobile technology changes. And more than half (59 percent) agreed that their company faces the threat of disruption from their more mobile-enabled competitors.

“The rise of mobile ordering and on-demand food delivery services are completely changing the restaurant and guest experience,” said Simon de Montfort Walker, senior vice president and general manager for Oracle Food and Beverage. “In order to remain relevant to a rapidly evolving audience, restaurants must act quickly to modernize their mobile strategy and offerings. Today, the experience a customer has ordering online or from a kiosk can be just as essential as if they were ordering in the store.”

The study findings point to a clear and urgent need for restaurants to embrace the right mobile and back-end technology to drive higher ticket value, turn tables faster and enable more cross and upsell. In addition, the findings highlight the need to embrace mobile technology to avoid being outpaced by the competition, help cut labor costs and improve the guest experience—all critical components to revenue growth.

Improving Loyalty and the Dining Experience

Today’s foodies want choices. In addition to great food, what drives their loyalty is easy ordering and delivery; fast, seamless payments; and a personalized experience.

  • 86 percent of operators say branded mobile apps increase their speed of service and therefore revenue
  • 93 percent believe their guest-facing apps enhance the guest experience, promote loyalty and drive repeat business

Cutting Costs, Saving Time Equals Increased Revenues

Restaurants are investing in mobile technology to cut costs and save time in areas such as hiring fewer serving staff and more runners, keeping a close eye on stock levels to avoid over-ordering and waste, and quickly changing the menu to offer specials when there is an over-stock of inventory.

  • 84 percent of food and beverage executives believe the adoption of guest-facing apps drives down labor costs
  • 96 percent agree, with 40 percent strongly agreeing, that expanded mobile inventory management will drive time and money savings

Perceived Future Benefits of Mobile Technology

Restaurants are already using mobile devices for table reservations, taking orders, and processing payments, but what value do restaurateurs believe will come from future mobile innovations?

  • 82 percent believe partnerships with third-party delivery services like Uber Eats and GrubHub will help grow their business
  • 89 percent believe check averages will increase thanks to in-app recommendations
  • 95 percent believe the guest experience and customer loyalty will continue to improve

The Road Ahead

While most organizations rated themselves as highly able to meet new consumer demands, an undercurrent of anxiety about the future was also apparent, with only 48 percent of respondents reporting that they have the tools they need to meet the mobile demands of tomorrow. The mobility study findings show a clear path for restaurateurs, including applying mobile innovation to broader areas such as inventory efficiency, getting new customers in the door, serving them more efficiently, and keeping them coming back.


For this survey, conducted during the summer of 2018, Oracle queried 279 leaders in the food and beverage industry who use mobile technology in their organizations. Forty-five percent of those surveyed were from full-service restaurants, 24 percent from fast casual and 23 percent from quick service. Seventy-one percent of respondents are director level or higher, with 45 percent hailing from companies that generate more than $500M in annual revenue.

Contact Info
Valerie Beaudett
+1 650 400 7833
About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Mobile Is Key to Boosting Guest Experiences Say Hoteliers

Oracle Press Releases - Tue, 2019-04-23 07:00
Press Release
Mobile Is Key to Boosting Guest Experiences Say Hoteliers But many not prepared to deliver forward-thinking mobile innovations shows new survey

Redwood Shores, Calif.—Apr 23, 2019

A whopping 91 percent of hotel executives surveyed said mobile technologies are critical to improving guest experience and cultivating loyalty. But only 69 percent were confident in their organization’s ability to adopt and deliver those mobile experiences.

“It’s clear that hotels need to provide mobile innovations to meet the requirements of today’s savvy consumers, yet some haven’t started their mobile journey. Customers want to be able to engage with brands wherever they are—booking a room from their child’s soccer game or ordering drinks while sitting poolside at the hotel. The properties that can’t deliver these kinds of mobile experiences will quickly lose to those that can make the engagement simple and seamless for their customers,” said Greg Webb, senior vice president and general manager of Oracle Hospitality.

The 2019 Hospitality Benchmark - Mobile Maturity Analysis study, which was conducted by Oracle, focused on three key areas of mobility:

  • The ability to offer Wi-Fi to guests throughout the property;
  • Guest-facing apps to enhance the customer experience; and
  • Staff-facing mobile to improve the hotel team’s daily operational workflow

Despite high self-ratings for mobile utilization prowess, 50 percent of respondents expressed fear that their organization would be disrupted by more mobile-friendly competitors. So it was not surprising that 90 percent of the hotel executives surveyed agreed that mobile was critical to maintaining a competitive advantage. Ninety percent also added that guest experience could be improved by the ability to use smartphones to manage basic services such as booking a room and managing the check-in and check-out processes. And 91 percent said their guest-facing mobile app is the preferred way they’d like guests to request service from hotel staff. 

In addition to enhancing guest experience, 66 percent of respondents said reducing operational costs was another major driver for embracing mobility.

Even with the high ratings for hotel mobile adoption, there is room for improvement in elevating the guest experience and providing personalized services via mobile—starting with awareness. Twenty-three percent of respondents agreed that they struggle to promote their guest-facing mobile app technology. The survey underscores the importance of offering guests incentives—such as free perks, drinks or discounted room service—to download and use hotel apps. In the absence of such mobile initiatives, it is essential for hoteliers to provide guests with other communication channels, such as texting, to quickly respond to their needs.

The majority of hotel executives believe that mobile technologies are critical to guest experiences, and Oracle believes there are three areas they can focus on to improve the guest experience: empowering guests to take advantage of self-service tools, allowing guests to communicate with the hotel through their preferred channel, and continuing to invest in mobile technologies to reduce friction.


199 executive leaders in the hospitality industry were surveyed regarding the current use of mobile technology within their organizations. Seventy-seven percent of respondents were director level or higher, with 53 percent from companies whose annual revenue is greater than $500M.

Contact Info
Valerie Beaudett
+1 650 400 7833
About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation. 

For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Bitmap Index On Column With 212552698 Distinct Values, What Gives? (I’d Rather Be High)

Richard Foote - Mon, 2019-04-22 21:45
In my previous post on Indexing The Autonomous Warehouse, I highlighted how it might be necessary to create indexes to improve the performance and scalability of highly selective queries, as it might be on any Data Warehouse running on an Exadata platform. In the post, I created a Bitmap Index and showed how it can improve SQL performance […]
Categories: DBA Blogs

Utilities Testing Accelerator Now Available

Anthony Shorten - Mon, 2019-04-22 14:19

Oracle Utilities is pleased to announce the general availability of Oracle Utilities Testing Accelerator Version via the Oracle Software Delivery Cloud with exciting new features which provide improved test asset building and execution capabilities. This release is a foundation release for future releases with key new and improved features.

Last year, the first release of the Oracle Utilities Testing Accelerator replaced the Oracle Functional Testing Advanced Pack for Oracle Utilities product, optimizing the functional testing of Oracle Utilities products. The new version extends the existing feature set and adds new capabilities for the testing of Oracle Utilities products.

The key changes and new capabilities in this release include the following:

  • Accessible. This release is now accessible, making the product available to a wider user audience.
  • Extensions to Test Accelerator Repository. The Oracle Utilities Testing Accelerator was shipped with a database repository, Test Accelerator Repository, to store test assets. This repository has been extended to accommodate new objects introduced in this release including a newly redesigned Test Results API to provide comprehensive test execution information. 
  • New! Server Execution Engine. In past releases, the only way to execute tests was using the provided Oracle Utilities Testing Accelerator Eclipse Plugin. Whilst that plugin is still available and will continue to be provided, an embedded scalable server execution engine has been implemented directly in the Oracle Utilities Testing Accelerator Workbench. This allows testers to build and execute test assets without leaving the browser. This engine will be the premier method of executing tests in this release and in future releases of the Oracle Utilities Testing Accelerator.
  • New! Test Data Management. One of the identified bottlenecks in automation is the provision and re-usability of test data for testing activities. The Oracle Utilities Testing Accelerator has added an additional capability to extend the original test data capabilities by allowing test users to extract data from non-production sources for reuse in test data. The principle is based upon the notion that it is quicker to update data than create it. The tester can specify a secure connection to a non-production source to pull the data from and allow manipulation at the data level for testing complex scenarios. This test data can be stored at the component level to create reusable test data banks or at the flow level to save a particular set of data for reuse. With this capability testers can quickly get sets of data to be reused within and across flows. The capability includes the ability to save and name test data within the extended Test Accelerator repository.
  • New! Flow Groups are now supported. The Oracle Utilities Testing Accelerator supports the concept of Flow Groups. These are groups of flows that can be executed as a set, in parallel or serially, to reduce test execution time. This capability is used by the Server Execution Engine to execute groups of flows efficiently. It is also the foundation of future functionality.
  • New! Groovy Support for Validation. In this release, it is possible to use Groovy to express rules for validation in addition to the component validation language already supported. This capability allows partners and testers to add complex rule logic at the component and flow level. As with the Groovy support within the Oracle Utilities Application Framework, the language is whitelisted and does not support external Groovy frameworks.
  • Annotation Support. In the component API, it is possible to annotate each step in the process to make it more visible. This information, if populated, is now displayed on the flow tree for greater visibility. For backward compatibility, this information may be blank on the tree unless it is already populated.
  • New! Test Dashboard Zones. An additional set of test dashboard zones have been added to cover the majority of the queries needed for test execution and results.
  • New! Security Enhancements. For the Oracle Utilities SaaS Cloud releases of the product, the Oracle Utilities Testing Accelerator has been integrated with Oracle Identity Cloud Service to manage identity in the product as part of the related Oracle Utilities SaaS Cloud Services.
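The serial-versus-parallel idea behind Flow Groups can be sketched in plain shell. This is a toy illustration only: `run_flow` is a hypothetical stand-in for launching a real test flow and is not a product command.

```shell
# Toy sketch of flow-group execution modes.
# run_flow is a hypothetical stand-in for invoking a real flow.
run_flow() { echo "flow $1 done"; }

# Serial execution: flows run one after another
for f in A B C; do run_flow "$f"; done

# Parallel execution: launch every flow in the group, then wait for the set
for f in A B C; do run_flow "$f" & done
wait
```

The parallel variant finishes when the slowest flow in the group finishes, which is the source of the execution-time reduction the release notes describe.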

Note: This upgrade is backward compatible with test assets built with previous Oracle Utilities Testing Accelerator releases, so no rework is anticipated on existing assets as part of the upgrade process.

For more details of this release and the capabilities of the Oracle Utilities Testing Accelerator product refer to Oracle Utilities Testing Accelerator (Doc Id: 2014163.1) available from My Oracle Support.

Automating DevSecOps for Java Apps with Oracle Developer Cloud

OTN TechBlog - Mon, 2019-04-22 11:32

Looking to improve your application's security? Automating vulnerability reporting helps you prevent attacks that leverage known security problems in code that you use. In this blog we'll show you how to achieve this with Oracle's Developer Cloud.

Most developers rely on third party libraries when developing applications. This helps them reduce the overall development timelines by providing working code for specific needs. But are you sure that the libraries you are using are secure? Are you keeping up to date with the latest reports about security vulnerabilities that were found in those libraries? What about apps that you developed a while back and are still running but might be using older versions of libraries that don't contain the latest security fixes?

DevSecOps aims to integrate security aspects into the DevOps cycle, ideally automating security checks as part of the dev-to-release lifecycle. The latest release of Oracle Developer Cloud Service - Oracle's cloud-based DevOps and Agile team platform - includes a new capability to integrate security checks into your DevOps pipelines.

Relying on the public National Vulnerability Database, the new dependency vulnerability analyzer scans the libraries used in your application against the database of known issues and flags any security risks your app might have based on this data. The current version of DevCS supports this for any Maven-based Java project, leveraging the pom files as the source of truth for the list of libraries used in your code.
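As a rough illustration of what "pom as source of truth" means, the groupId:artifactId:version coordinates a scanner would match against the NVD can be pulled straight out of a pom file. The sketch below uses a made-up sample pom (commons-collections 3.2.1 is a commonly cited example of a library with reported vulnerabilities); the real analyzer does this parsing internally as part of the pipeline step.

```shell
# Write a minimal sample pom (illustration only; not a real project file)
cat > /tmp/sample-pom.xml <<'EOF'
<project>
  <dependencies>
    <dependency>
      <groupId>commons-collections</groupId>
      <artifactId>commons-collections</artifactId>
      <version>3.2.1</version>
    </dependency>
  </dependencies>
</project>
EOF

# Extract groupId/artifactId/version and join them into one coordinate
# per dependency -- these are the identifiers matched against the NVD.
sed -n 's/.*<groupId>\(.*\)<\/groupId>.*/\1/p;
        s/.*<artifactId>\(.*\)<\/artifactId>.*/\1/p;
        s/.*<version>\(.*\)<\/version>.*/\1/p' /tmp/sample-pom.xml |
  paste -d: - - -
# prints: commons-collections:commons-collections:3.2.1
```

A scanner only has to look up each such coordinate in the vulnerability database, which is why keeping pom versions current directly reduces your exposure.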

Vulnerability Analyzer Step

When running the check, you can specify your level of tolerance to issues - for example, defining that you are ok with low-risk issues, but not with medium- to high-risk vulnerabilities. When a check finds issues, you can fail the build pipeline, send notifications, and add an issue to the issue tracking system provided for free with Developer Cloud.

Check out this demo video to see the process in action.

Having these types of vulnerability scans applied to your platform can save you from situations where hackers leverage publicly known issues and out-of-date libraries to break into your systems. These checks can be part of your regular build cycle, and can also be scheduled to run on a regular basis against systems that have already been deployed - to verify that they stay up to date with the latest security fixes.


I gave up my cell phone & laptop for the weekend: This is what I learned

Look Smarter Than You Are - Mon, 2019-04-22 10:10
It was time for a technology detox. When I left work on Good Friday, I left my laptop at the office. I got home at 3PM and put my mobile phone on a charger that I wouldn't see until Monday at 9AM. And my life free of external, involuntary, technological distraction began... along with the stress of being out of touch for the next 3 days. Here's what I learned.

Biggest Lessons
  1. It's really stressful at first, but you get over it.
  2. All those people you told "if it's an emergency, contact my significant other" will not have any emergencies suitable for contacting your significant other.
  3. It will leave you wanting more.
I learned far more about myself and we'll get to that in a second.

Why in the name of God?
Thanks to the cruel "Screen Time" tracking feature of my Apple iPhone, I found that I pick up my phone more than 30 times before 11AM every day, and then it gets worse from there. In general, I am using my phone 6+ hours per day, and many days are a lot worse. I pay more attention to my phone than to the people around me: it's always within arm's reach and I use it for everything. As a CEO, my outward reason for my phone addiction is that I have to be connected: emails and text messages must be dealt with immediately, and without my calendar, I might miss a Very Important Meeting. In reality, I am completely addicted to my cell phone and the whole "I have to stay connected" thing is largely rationalization.

But about a week ago, I looked around at the people in my life and realized that we're all addicted: for some of us, it's about communication. Others live in their games. Some people are on Instagram looking at puppies and kittens. Whatever your thing, you're getting it through either your phone or your laptop.

So why take a break? Mostly to find out 1) if I could make it for 66 hours; and 2) what I could learn from the experience. I settled on Easter weekend (April 19-22).

Things I thought I couldn't live without
Texting. According to the aforementioned Evil Screen Time, I knew that I spent 1.5 hours a day on text messaging. To be clear, I'm not a tween: my company uses text messaging more than any other communication vehicle, it's how I stay in contact with friends (who has time for phone calls?), and it's about the only way my kids will talk to me.

Email. While texting is great for short communications and quick back-and-forths, I get around 200 non-spam emails on the average day and about 50 on the average weekend. When you have something longer to say or it's not urgent, email is the way to go.

Navigation. I have long since forgotten how to drive without the little blue dot directing me. There are about four places I felt I could find on my own (work, home, airport, grocery store), but I was sure that I would be lost without Google Maps or Waze.

Games. I am level 40 on Pokemon Go (humble brag) and I have played it every day since July 2016. It's literally the only game on my phone, but I have to keep my daily streak going lest... I don't know, actually, but the stress of missing out on my 7-day rewards was seriously getting to me.

Turns out, I didn't miss Pokemon Go, I'm actually a decent driver without a phone (it's like falling off a bike: you never forget how), and if you're off email, you never know what you're missing. I did miss texting, but not in the way I thought I would. So what did I actually miss?

Things I actually missed
Bitmoji. I genuinely missed sending cute pictures around to my friends of me as the Easter Bunny and receiving their pictures dressed up inside Easter eggs. I kept wanting to sneak peeks at my wife's phone to see if she was getting anything cute, though I did manage to resist.

Information. I had forgotten the days when questions didn't have answers. What's the address of Academy Sports? I didn't know, so I just had to drive in the general area where I thought it was. What time does Salata open? No idea, so I drove there and got to wander outside for a bit until they opened for the day (fun fact: stores still post actual opening/closing hours on their front doors!). What time is the movie Little playing at the AMC Grapevine Mills 30? Who won the Texas Rangers game (when in doubt, assume it's the team they're playing against)? Who is the actor that plays that one character in that movie, oh, come on, you know who I'm talking about, that guy, let me just look it up for you, oh, damn, I can't until Monday, FML?

Calendar. I worried all weekend about my schedule for the upcoming week: when was my first appointment on Monday, what did I have scheduled for after work, was there anything I should be preparing for, when was I leaving town next, where was I supposed to be for Memorial Day weekend? It went on-and-on, and it turns out that none of it matters.

Photos. I didn't realize how many photos I take of the world around me, until I couldn't take any photos at all. I had to use a long-forgotten mental trick called "memory." It made me pay a lot more attention to the world around me, and I genuinely remember more of how I experienced the weekend than if I had been trying to catalog everything through pictures. I'm sure photos would have made this blog more appealing, but I'm doing all this from memory, so all we have are words.

Connection. I wanted to know what my friends and family were doing and to let them know I was thinking of them. Without technology, this is almost impossible nowadays. I had to resort to seeing them in-person: I met a couple of them at a restaurant and we got together with another friend for cycling, a movie, and Game of Thrones. But it turns out that those friends - the ones I spent time with in-person - I felt more deeply connected to than before the weekend started. Texting is about surface-level connecting, but facetime (note that this is different than FaceTime) is about bonding.

What changed over the weekend?
For one, I spent a lot more time outside. I played frisbee, went on a fourteen-mile bike ride, worked out at the gym, walked around some, went to the mall, saw a movie, and in general, I actually experienced more of the world than I normally do. I also didn't trip over a curb once, because unlike normal, I was looking up the whole time.

I read more instead of looking at my phone each night to fall asleep. I made it 100 pages into a book that I've been meaning to read for a year now. And in the morning I didn't reach for my phone on my bedside table either. I tend to forget how immersed you can get in a book when you don't have notifications popping up constantly telling you what you should be doing instead of reading in peace.

I spent a lot of time with my wife this weekend to the point that she was probably sick of me by Sunday night, but we spent real time with each other without any technological distractions. I finally gave her an Edward Break last night by heading off to take a long bath while reading more of my book (Stealing Snow, if you're curious). She fell asleep and I stayed up reading until midnight.

Any lasting effects?
I thought I would be longing for my phone and my laptop (particularly texts and emails) at exactly 9AM this morning. I waited until 9AM and opened up my laptop to see what appointment I had at 9AM. It turns out no one needs me - or loves me? - until 10:30AM, so I opened up a browser window to write my first blog entry in many, many months. My cell phone is still face down, and as of 10AM, I still have no idea who texted or emailed me all weekend. I'm blissfully writing away, and I have to admit, I'm not looking forward to going back to my constantly-connected world.

Will giving up your technology addiction for a weekend give you some sort of mystical clarity, a purity of soul that lets you know how the Dalai Lama must feel when he's between text messages? No, but it will help you find out just how addicted you are, and how strong your willpower is. It'll help you understand what you're missing when you're disconnected, and if you're like me, you'll find that in some ways, you actually like it.

Now will I ever do this again? I'll let you know after I log into my email, read all my texts, and see just how bad the world got over the weekend. Until then, I'm blissfully unaware.
Categories: BI & Warehousing

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Michael Dinh - Sun, 2019-04-21 22:46

Finally, I have reached a point that I can live with for the Grid 18c upgrade, because the process runs to completion without any error or intervention.

Note that ACFS Volume is created in CRS DiskGroup which may not be ideal for production.

Rapid Home Provisioning Server is configured but is not running.

The outcome is different depending on whether the upgrade is performed via the GUI or silently, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL].

Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL.

While we both encountered the same error, “Upgrading RHP Repository failed”, we accomplished the same results via different courses of action.

The unexplained and unanswered question is, “Why is the RHP Repository being upgraded?”

Ultimately, it is cluvfy that changes the cluster upgrade state, and this is shown in gridSetupActions2019-04-21_02-10-47AM.log:

INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE

INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'

INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/ stage -post crsinst -collect cluster -gi_upgrade -n all

INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'

INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer
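The step boundaries can be confirmed straight from the log with a grep. The sketch below uses the two lines quoted above as a stand-in file; against a real system you would point the grep at the actual gridSetupActions log instead.

```shell
# Stand-in log excerpt (lines quoted from this run's gridSetupActions log)
cat > /tmp/gridSetupActions_excerpt.log <<'EOF'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
EOF

# Show when the RHP Repository upgrade started and completed
grep -E "(Starting|Completed) 'Upgrading RHP Repository'" \
  /tmp/gridSetupActions_excerpt.log
```

The timestamps on the matched lines show the step took under a minute in this run, which makes it easy to spot when the RHP upgrade hangs or fails instead.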

I would suggest running the last step using the GUI, if feasible, rather than silent mode, to see what is happening:

/u01/ -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

So how did I get myself into this predicament? I followed blindly. I trusted but did not verify. Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running and later on Oracle Linux (Doc ID 2369422.1)

Step 2.1 - Understand how MGMTDB is handled during upgrade

Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist. 
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.

Note: MGMTDB is required when using Rapid Host Provisioning. 
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you consider installing a MGMTDB later, it is configured to use 1G of SGA and 500 MB of PGA. 
MGMTDB SGA will not be allocated in hugepages (this is because its init.ora setting 'use_large_pages' is set to false).

The following parameters from (Doc ID 2369422.1) were the root cause of all the issues in my test cases.

Because MGMTDB is not required, it made sense to set the following, but doing so resulted in chaos.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false

How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)

Starting with Oracle Grid Infrastructure, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. 

Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified for each of the test cases.

/u01/ -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

Here is what the environment looks like after the 18c GI upgrade.

Notice ACFS is configured for RHP.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is []
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is []
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is []. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
Oracle Instance alive for sid "+ASM1"
+ /u01/ lspatches
28864607;ACFS RELEASE UPDATE (28864607)
28864593;OCW RELEASE UPDATE (28864593)
28822489;Database Release Update : (28822489)
28547619;TOMCAT RELEASE UPDATE (28547619)
28435192;DBWLM RELEASE UPDATE (28435192)
27923415;OJVM RELEASE UPDATE: (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/ lspatches
28731800;Database Bundle Patch : (28731800)
28729213;OCW PATCH SET UPDATE (28729213)

OPatch succeeded.
+ exit

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
Name           Target  State        Server                   State details
Local Resources
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
Cluster Resources
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      1        ONLINE  ONLINE       racnode-dc1-1   172.16
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            STABLE
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
      1        ONLINE  ONLINE       racnode-dc1-1            Open,HOME=/u01/app/o
      2        ONLINE  ONLINE       racnode-dc1-2            Open,HOME=/u01/app/o
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
      1        OFFLINE OFFLINE                               STABLE
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE

[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
  /u01/ on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
Name           Target  State        Server                   State details
Local Resources
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
Name           Target  State        Server                   State details
Local Resources
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
STATE=ONLINE on racnode-dc1-1

[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)

[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version:

[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
  Device: /dev/asm/ghchkpt-61
  Mount Point: /opt/oracle/rhp_images/chkbase
  Disk Group: CRS
  Volume: GHCHKPT
  Options: none
  Nodes: all
  Accelerator Volumes:

[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup      Consumer      Space     Size With Mirroring  Usable Free  %Free   Path
CRS                          59.99              59.99          34.95       58%
DATA                         99.99              99.99          94.76       94%
FRA                          59.99              59.99          59.43       99%
unit of measurement: GB

[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version

[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    35784                0           35784                        Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304    102396    97036                0           97036                        N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    60856                0           60856                        N  FRA/

[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle    3571     1  0 02:40 ?        00:00:03 mdb_pmon_-MGMTDB
oracle   17109     1  0 Apr20 ?        00:00:04 asm_pmon_+ASM1
oracle   17531     1  0 Apr20 ?        00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$

Let me show you how convoluted this is.
In my case, it’s easy because there were only 2 actions performed.
Do you know what GridSetupAction was performed based on the directory name?

$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM

This is how you can find out.

$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM

### gridSetup.sh -silent -skipPrereqs -applyRU
$ ll
total 13012
-rw-r----- 1 oracle oinstall   20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall       0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall       0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall       0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall     157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall      29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall    4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall   12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log

$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/'. Received the value from a code block.

$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914

### gridSetup.sh -executeConfigTools -silent
$ ll
total 1116
-rw-r----- 1 oracle oinstall       0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall  122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall     129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall    3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log

$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/ upgradeSchema -fromversion

$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO:  [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method

It might be better to use the GUI if available, but be careful.

For OUI installations or execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection with the server is lost.

I was using X and connection was lost during the upgrade. It was a kiss of death with this being the last screen capture.

Rene’s quote:

After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.

I was lucky enough to get on a call with a good friend (@_rickgonzalez ) who is the PM of the RHP and we were able to work through it. So below is what I was able to do to solve the issue.

It was also confirmed by them that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.

I concur and conclude: the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, all having to do with MGMTDB and the Rapid Home Provisioning (RHP) repository.

Unless you’re lucky or know someone, good luck with your upgrade.

Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.

Did you use GUI or silent?

[Solved] OCI Load Balancer Throwing Error: 502 Bad Gateway

Online Apps DBA - Sun, 2019-04-21 01:04

[Solved] Load Balancer On Cloud (OCI) Throwing Error: 502 Bad Gateway Configured load balancer on Oracle Cloud (OCI) and hitting 502 Bad Gateway while accessing on Oracle Cloud Infrastructure(OCI), Then Check https://k21academy.com/oci33 to learn Step-wise & Get: ✔ Overview Of Load Balancer In Oracle Cloud (OCI) ✔ What are the Types Of Load Balancer In […]

The post [Solved] OCI Load Balancer Throwing Error: 502 Bad Gateway appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Ksplice introduces Known Exploit Detection functionality

Wim Coekaerts - Sat, 2019-04-20 12:04

The Oracle Ksplice team has added some really cool new functionality to Oracle Ksplice. Instead of me pretty much rewriting and copying their blog post, just go directly to the source:

It's unique, it's awesome, it's part of Oracle Linux premier subscription and it's included in Oracle Cloud instances at no extra cost for all customers using Oracle Linux. 



Understanding Nested Lists Dictionaries of JSON in Python and AWS CLI

Pakistan's First Oracle Blog - Sat, 2019-04-20 03:01

After lots of hair pulling and bouts of frustration, I was able to grasp this nested list and dictionary thingie in the JSON output of AWS CLI commands such as describe-db-instances and others. If you run describe-db-instances for RDS or describe-instances for EC2, you get a huge pile of JSON mumbo-jumbo with all those curly and square brackets studded with colons and commas. The output is heavily nested.

For example, if you do :

aws rds describe-db-instances

you get all the information, but heavily nested within. Now if you only want to extract or iterate through, say, the VpcSecurityGroupId of all database instances, then you have to traverse all that nested information, which comprises dictionaries whose keys have arrays as values, and those arrays contain more dictionaries, and so on.

After the above rant, let me try to ease the pain a bit by explaining this. For clarity, I have taken just the following chunk from the describe-db-instances output. Suppose the only thing you are interested in is the value of VpcSecurityGroupId from this chunk:

mydb = {'DBInstances':
            [ {'VpcSecurityGroups': [ {'VpcSecurityGroupId': 'sg-0ed48bab1d54e9554', 'Status': 'active'}]} ]}

The variable mydb is a dictionary with the key DBInstances. This key DBInstances has an array as its value. Now the first item of that array is another dictionary, and the first key of that dictionary is VpcSecurityGroups. The value of this key VpcSecurityGroups is another array. That array's first item is again a dictionary. This last dictionary has a key VpcSecurityGroupId, and we want the value of this key.

If your head has stopped spinning, then read on and stop cursing me as I am going to demystify it now.

If you want to print that value, just use the following command:
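A self-contained sketch of that command, walking the key/index chain described above step by step (using the chunk shown earlier, with the brackets balanced):

```python
# The chunk from above, as a standalone dictionary
mydb = {'DBInstances':
            [{'VpcSecurityGroups': [{'VpcSecurityGroupId': 'sg-0ed48bab1d54e9554',
                                     'Status': 'active'}]}]}

# dict key -> list index -> dict key -> list index -> dict key
print(mydb['DBInstances'][0]['VpcSecurityGroups'][0]['VpcSecurityGroupId'])
# prints: sg-0ed48bab1d54e9554
```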


So the secret is that if it's a dictionary, then use the key name, and if it's an array, then use the index, and keep going. That's all there is to it. The full code to print this using Python, boto3, etc. is as follows:

import boto3
import click

rds = boto3.client('rds', region_name='ap-southeast-2')
dbs = rds.describe_db_instances()

@click.group()
def cli():
    "Gets RDS data"

@cli.command()
def list_database():
    "List info about one database"
    for onedb in dbs['DBInstances']:
        # Following line only prints value of VpcSecurityGroupId of RDS instance
        print(onedb['VpcSecurityGroups'][0]['VpcSecurityGroupId'])
        # Following line only prints value of OptionGroup of RDS instance
        print(onedb['OptionGroupMemberships'][0]['OptionGroupName'])
        # Following line only prints value of Parameter Group of RDS instance
        print(onedb['DBParameterGroups'][0]['DBParameterGroupName'])

if __name__ == '__main__':
    cli()

I hope that helps. If you know any easier way, please do me a favor and let us know in the comments. Thanks.

Categories: DBA Blogs

Economics and Innovations of Serverless

OTN TechBlog - Fri, 2019-04-19 13:08

The term serverless has been one of the biggest mindset changes since the term cloud, and learning how to “think serverless” should be part of every developer's cloud-native journey. This is why one of Oracle's 10 Predictions for Developers in 2019 is “The Economics of Serverless Drives Innovation on Multiple Fronts”. Let's unpack what we mean by economics and innovation while covering a few common misconceptions.

The Economics

Cost is only part of the story

I often hear “cost reduction” as a key driver of serverless architectures. Everyone wants to save money and be a hero for their organization. Why pay for a full time server when you can pay per function millisecond? The ultimate panacea of utility computing — pay for exactly what you need and no more. This is only part of the story.

Economics is a broad term for the production, distribution, and consumption of things. Serverless is about producing software. And software is about using computers as leverage to produce non-linear value. Facebook (really MySpace) leveraged software to change the way the world connected. Uber leveraged software to transform the transportation industry. Netflix leveraged software to change the way the world consumed movies. Software is transforming every major company in every major industry, and for most, is now at the heart of how they deliver value to end users. So why the fuss about serverless?

Serverless is About Driving Non-Linear Value

Because serverless is ultimately about driving non-linear business value which can fundamentally change the economics of your business. I've talked about this many times, but Ben nails it — “serverless is a ladder. You're climbing to some nirvana where you get to deliver pure business value with no overhead.”

Pundits point out that “focus on business value” has been said many times over the years, and they're right. But every software architecture cycle learns from past cycles and incorporates new ways to achieve this goal of greater focus, which is why serverless is such an important cycle to watch. It effectively incorporates the promise (and best) of cloud with the promise (and learnings) of SOA.

Ultimately the winning businesses reduce overhead while increasing value to their customers by empowering their developers. That’s why the economics are too compelling to ignore. Not because your CRON job server goes from $30 to $0.30/month (although a nice use case), but because creating a culture of innovation and focus on driving business value is a formula for success.

So we can’t ignore the economics. Let’s move to the innovations.

The Innovations

The tech industry is in constant motion. Apps, infrastructure, and the delivery process drive each other forward together in a ping-pong fashion. Here are a few of the key areas to watch that are contributing to forward movement in the innovation cycle, as illustrated in the “Digital Trialectic”:

Depth of Services

The web is fundamentally changing how we deliver services. We’re moving towards an “everything-as-a-service” world where important bits of functionality can be consumed by simply calling an API. Programming is changing, and this is driven largely by the depth of available services to solve problems that once plagued developers working hours.

Twilio now removes the need for SMS, voice, and now email (acquired Sendgrid) code and infrastructure. Google’s Cloud Vision API removes the need for complex object and facial detection code and infrastructure. AWS’s Ground Station removes the need for satellite communications code and infrastructure (finally?), and Oracle’s Autonomous Database replaces your existing Oracle Database code and infrastructure.

Pizzas, weather, maps, automobile data, cats – you have an endless list of things accessible across simple API calls.

Open Source

As always, serverless innovation is happening in the world of open source as well, many of which end up as part of the list of services above. The Fn Project is fully open source code my team is working on which will allow anyone to run their own serverless infrastructure on any cloud, starting with Functions-as-a-service and moving towards things like workflow as well. Come say hi in our Slack.

But you can get to serverless faster with the managed Fn service, Oracle Functions. And there are other great industry efforts as well including Knative by Google, OpenFaas by Alex Ellis, and OpenWhisk by IBM.

All of these projects focus mostly on the compute aspect of a serverless architecture. There are many projects that aim to make other areas easier such as storage, networking, security, etc, and all will eventually have their own managed service counterparts to complete the picture. The options are a bit bewildering, which is where standards can help.


Standards

With a paradox of choice emerging in serverless, standards aim to ease the pain in providing common interfaces across projects, vendors, and services. The most active forum driving these standards is the Serverless Working Group, a subgroup of the Cloud Native Compute Foundation. Like cats and dogs living together, representatives from almost every major vendor and many notable startups and end users have been discussing how to “harmonize” the quickly-moving serverless space. CloudEvents has been the first major output from the group, and it’s a great one to watch. Join the group during the weekly meetings, or face-to-face at any of the upcoming KubeCon’s.

Expect workflow, function signatures, and other important aspects of serverless to come next. My hope is that the group can move quickly enough to keep up with the quickly-moving space and have a material impact on the future of serverless architectures, further increasing the focus on business value for developers at companies of all sizes.

A Final Word

We’re all guilty of skipping to the end in long posts. So here’s the net net: serverless is the next cycle of software architecture, its roots and learnings coming from best-of SOA and cloud. Its aim is to change the way in which software is produced by allowing developers to focus on business value, which in turn drives non-linear business value. The industry is moving quickly with innovation happening through the proliferation of services, open source, and ultimately standards to help harmonize this all together.

Like anything, the best way to get started is to just start. Pick your favorite cloud, and start using functions. You can either install Fn manually or sign up for early access to Oracle Functions.

If you don’t have an Oracle Cloud account, take a free trial today.

Oracle VM Server: Working with ovm cli

Dietrich Schroff - Fri, 2019-04-19 06:01
After getting ovmcli running, here are some commands that are quite helpful when working with Oracle VM Server.
But first:
Starting the ovmcli is done via
ssh admin@localhost -p 10000
at the OVM Manager.

After that you can get some overviews:
OVM> list server
Command: list server
Status: Success
Time: 2019-01-25 06:56:55,065 EST
  id:18:e2:a6:9d:5c:b6:48:3a:9b:d2:b0:0f:56:7e:ab:e9  name:oraclevm
OVM> list vm
Command: list vm
Status: Success
Time: 2019-01-25 06:56:57,357 EST
  id:0004fb0000060000fa3b1b883e717582  name:myAlpineLinux
OVM> list ServerPool
Command: list ServerPool
Status: Success
Time: 2019-01-25 06:57:12,165 EST
  id:0004fb0000020000fca85278d951ce27  name:MyServerPool
A complete list of all list commands can be obtained like this:
OVM> list ?
An overview of which kinds of commands can be used, like list:
OVM> help
For Most Object Types:
    create [(attribute1)="value1"] ... [on ]
    edit   (attribute1)="value1" ...
For Most Object Types with Children:
    add to
    remove from
Client Session Commands:
    set alphabetizeAttributes=[Yes|No]
    set commandMode=[Asynchronous|Synchronous]
    set commandTimeout=[1-43200]
    set endLineChars=[CRLF,CR,LF]
    set outputMode=[Verbose,XML,Sparse]
Other Commands:
If you want to get your vm.cfg file, you can use the id from "list vm" and type:
OVM> getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Command: getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Status: Success
Time: 2019-01-25 06:59:46,875 EST
  OVM_domain_type = xen_pvm
  bootargs =
  disk = [file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/ISOs/0004fb0000150000226a713414eaa501.iso,xvda:cdrom,r,file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualDisks/0004fb0000120000f62a7bba83063840.img,xvdb,w]
  bootloader = /usr/bin/pygrub
  vcpus = 1
  memory = 512
  on_poweroff = destroy
  OVM_os_type = Other Linux
  on_crash = restart
  cpu_weight = 27500
  OVM_description =
  cpu_cap = 0
  on_reboot = restart
  OVM_simple_name = myAlpineLinux
  name = 0004fb0000060000fa3b1b883e717582
  maxvcpus = 1
  vfb = [type=vnc,vncunused=1,vnclisten=,keymap=en-us]
  uuid = 0004fb00-0006-0000-fa3b-1b883e717582
  guest_os_type = linux
  OVM_cpu_compat_group =
  OVM_high_availability = false
  vif = []
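The `id:`/`name:` pairs printed by the list commands follow a fixed format, which makes them easy to post-process when scripting against ovmcli. A small sketch (the `parse_list_line` helper is my own, not part of any Oracle tooling):

```python
import re

def parse_list_line(line):
    """Extract the id and name fields from one line of ovmcli 'list' output."""
    m = re.match(r"\s*id:(\S+)\s+name:(\S+)", line)
    return m.groups() if m else None

# Sample line in the format printed by 'OVM> list vm' above
vm_id, vm_name = parse_list_line("  id:0004fb0000060000fa3b1b883e717582  name:myAlpineLinux")
print(vm_id)    # the id to pass to: getVmCfgFileContent Vm id=...
print(vm_name)
```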
Very helpful is the Oracle documentation (here).

Creating A Microservice With Micronaut, GORM And Oracle ATP

OTN TechBlog - Thu, 2019-04-18 12:56

Over the past year, the Micronaut framework has become extremely popular. And for good reason, too. It's a pretty revolutionary framework for the JVM world that uses compile-time dependency injection and AOP that does not use any reflection. That means huge gains in startup time, runtime performance, and memory consumption. But it's not enough to just be performant; a framework has to be easy to use and well documented. The good news is, Micronaut is both of these. And it's fun to use and works great with Groovy, Kotlin and GraalVM. In addition, the people behind Micronaut understand the direction that the industry is heading and have built the framework with that direction in mind. This means that things like Serverless and Cloud deployments are easy and there are features that provide direct support for them.

In this post we'll look at how to create a Microservice with Micronaut which will expose a "Person" API. The service will utilize GORM which is a "data access toolkit" - a fancy way of saying it's a really easy way to work with databases (from traditional RDBMS to MongoDB, Neo4J and more). Specifically, we'll utilize GORM for Hibernate to interact with an Oracle Autonomous Transaction Processing DB. Here's what we'll be doing:

  1. Create the Micronaut application with Groovy support
  2. Configure the application to use GORM connected to an ATP database.
  3. Create a Person model
  4. Create a Person service to perform CRUD operations on the Person model
  5. Create a controller to interact with the Person service

First things first, make sure you have an Oracle ATP instance up and running. Luckily, that's really easy to do and this post by my boss Gerald Venzl will show you how to set up an ATP instance in less than 5 minutes. Once you have a running instance, grab a copy of your Client Credentials "Wallet" and unzip it somewhere on your local system.

Before we move on to the next step, create a new schema in your ATP instance and create a single table using the following DDL:
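The DDL itself isn't shown here, but a minimal sketch consistent with the Person model described later in this post (an implicit ID, a version column for optimistic locking, and the three model properties) might look like this; treat the exact column types and identity strategy as assumptions:

```sql
-- Hypothetical reconstruction; column names follow GORM's default snake_case mapping
CREATE TABLE person (
    id         NUMBER(19) PRIMARY KEY,
    version    NUMBER(19) NOT NULL,
    first_name VARCHAR2(50) NOT NULL,
    last_name  VARCHAR2(50) NOT NULL,
    is_cool    NUMBER(1)
);
```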

You're now ready to move on to the next step, creating the Micronaut application.

Create The Micronaut Application

If you've never used it before, you'll need to install Micronaut, which includes a helpful CLI for scaffolding certain elements like the application itself, controllers, etc. as you work with your application. Once you've confirmed the install, run the following command to generate your basic application:

Take a look inside that directory to see what the CLI has generated for you. 

As you can see, the CLI has generated a Gradle build script, a Dockerfile and some other config files as well as a `src` directory. That directory looks like this:

At this point you can import the application into your favorite IDE, so do that now. The next step is to generate a controller:

We'll make one small adjustment to the generated controller, so open it up and add the `@CompileStatic` annotation to the controller. It should like so once you're done:

Now run the application using `gradle run` (we can also use the Gradle wrapper with `./gradlew run`) and our application will start up and be available via the browser or a simple curl command to confirm that it's working.  You'll see the following in your console once the app is ready to go:

Give it a shot:

We aren't returning any content, but we can see the '200 OK' which means the application received the request and returned the appropriate response.

To make things easier for development and testing the app locally I like to create a custom Run/Debug configuration in my IDE (IntelliJ IDEA) and point it at a custom Gradle task. We'll need to pass in some System properties eventually, and this enables us to do that when launching from the IDE. Create a new task in `build.gradle` named `myTask` that looks like so:

Now create a custom Run/Debug configuration that points at this task and add the VM options that we'll need later on for the Oracle DB connection:

Here are the properties we'll need to populate for easier copy/pasting:

Let's move to the next step and get the application ready to talk to ATP!

Configure The Application For GORM and ATP

Before we can configure the application we need to make sure we have the Oracle JDBC drivers available. Download them, create a directory called `libs` in the root of your application and place them there.  Make sure that you have the following JARs in the `libs` directory:

Modify your `dependencies` block in your `build.gradle` file so that the Oracle JDB JARs and the `micronaut-hibernate-gorm` artifacts are included as dependencies:

Now let's modify the file located at `src/main/resources/application.yml` to configure the datasource and Hibernate.  
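The configuration itself isn't shown here; a hypothetical sketch of the GORM-style datasource block, with the `${...}` placeholders standing in for the system properties set up in the custom run configuration earlier (the property names are mine, not from the post):

```yaml
# Hypothetical sketch: GORM for Hibernate uses a Grails-style 'dataSource' block
dataSource:
  pooled: true
  dbCreate: none
  url: jdbc:oracle:thin:@${dbUrl}?TNS_ADMIN=${walletDir}
  driverClassName: oracle.jdbc.OracleDriver
  username: ${dbUser}
  password: ${dbPassword}
hibernate:
  hbm2ddl:
    auto: none
  show_sql: true
```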

Our app is now ready to talk to ATP via GORM, so it's time to create a service, model and some controller methods! We'll start with the model.

Creating A Model

GORM models are super easy to work with.  They're just POGO's (Plain Old Groovy Objects) with some special annotations that help identify them as model entities and provide validation via the Bean Validation API. Let's create our `Person` model object by adding a Groovy class called 'Person.groovy' in a new directory called `model`.  Populate the model as such:
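The code listing isn't shown here; a minimal sketch consistent with the description below (the package name and exact validation annotations are assumptions):

```groovy
// Hypothetical sketch of the Person model
package example.model

import grails.gorm.annotation.Entity
import javax.validation.constraints.NotNull
import javax.validation.constraints.Size

@Entity
class Person {
    @NotNull
    @Size(min = 5, max = 50)   // firstName must be between 5 and 50 characters
    String firstName

    @NotNull
    String lastName

    Boolean isCool
}
```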

Take note of a few items here. We've annotated the class with @Entity (`grails.gorm.annotation.Entity`) so GORM knows that this is an entity it needs to manage. Our model has 3 properties: firstName, lastName and isCool. If you look back at the DDL we used to create the `person` table above you'll notice that we have two additional columns that aren't addressed in the model: ID and version. The ID column is implicit with a GORM entity and the version column is auto-managed by GORM to handle optimistic locking on entities. You'll also notice a few annotations on the properties which are used for data validation as we'll see later on.

We can start the application up again at this point and we'll see that GORM has identified our entity and Micronaut has configured the application for Hibernate:

Let's move on to creating a service.

Creating A Service

I'm not going to lie to you. If you're waiting for things to get difficult here, you're going to be disappointed. Creating the service that we're going to use to manage `Person` CRUD operations is really easy to do. Create a Groovy class called `PersonService` in a new directory called `service` and populate it with the following:

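The listing isn't shown here; a hypothetical sketch of such an abstract service using GORM Data Services' `@Service` annotation (the method names follow the descriptions in this post, but the exact signatures are assumptions):

```groovy
// Hypothetical sketch of the PersonService
package example.service

import example.model.Person
import grails.gorm.services.Service

@Service(Person)
abstract class PersonService {
    abstract Person save(String firstName, String lastName, Boolean isCool)
    abstract Person get(Long id)
    abstract List<Person> findAll()
    abstract List<Person> findAll(Map args)   // args can carry pagination params
    abstract void delete(Long id)
}
```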
That's literally all it takes. This service is now ready to handle operations from our controller. GORM is smart enough to take the method signatures that we've provided here and implement the methods. The nice thing about using an abstract class approach (as opposed to using the interface approach) is that we can manually implement the methods ourselves if we have additional business logic that requires us to do so.

There's no need to restart the application here, as we've made no changes that would be visible at this point. We're going to need to modify our controller for that, so let's create one!

Creating A Controller

Lets modify the `PersonController` that we created earlier to give us some endpoints that we can use to do some persistence operations. First, we'll need to inject our PersonService into the controller.  This too is straightforward by simply including the following just inside our class declaration:

The first step in our controller should be a method to save a `Person`.  Let's add a method annotated with `@Post` to handle this and within the method we'll call the `PersonService.save()` method.  If things go well, we'll return the newly created `Person`, if not we'll return a list of validation errors. Note that Micronaut will bind the body of the HTTP request to the `person` argument of the controller method meaning that inside the method we'll have a fully populated `Person` bean to work with.
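A hypothetical sketch of such a save endpoint (the error-handling shape, service call, and names are assumptions based on the 200/422 responses this post describes):

```groovy
// Hypothetical sketch of the save endpoint in PersonController
package example

import example.model.Person
import example.service.PersonService
import groovy.transform.CompileStatic
import io.micronaut.http.HttpResponse
import io.micronaut.http.annotation.Body
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Post

import javax.inject.Inject
import javax.validation.ConstraintViolationException

@CompileStatic
@Controller("/person")
class PersonController {

    @Inject PersonService personService

    @Post("/save")
    HttpResponse save(@Body Person person) {
        try {
            // 200 OK with the newly created Person on success
            return HttpResponse.ok(personService.save(person.firstName, person.lastName, person.isCool))
        } catch (ConstraintViolationException e) {
            // 422 Unprocessable Entity with the list of validation errors
            return HttpResponse.unprocessableEntity().body(e.constraintViolations*.message)
        }
    }
}
```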

If we start up the application we are now able to persist a `Person` via the `/person/save` endpoint:

Note that we've received a 200 OK response here with an object containing our `Person`.  However, if we tried the operation with some invalid data, we'd receive some errors back:

Since our model (very strangely) indicated that the `Person` firstName must be between 5 and 50 characters we receive a 422 Unprocessable Entity response that contains an array of validation errors back with this response.

Now we'll add a `/list` endpoint that users can hit to list all of the Person objects stored in the ATP instance. We'll set it up with two optional parameters that can be used for pagination.

Remember that our `PersonService` had two signatures for the `findAll` method - one that accepted no parameters and another that accepted a `Map`.  The Map signature can be used to pass additional parameters like those used for pagination.  So calling `/person/list` without any parameters will give us all `Person` objects:

Or we can get a subset via the pagination params like so:

We can also add a `/person/get` endpoint to get a `Person` by ID:

And a `/person/delete` endpoint to delete a `Person`:


We've seen here that Micronaut is a simple but powerful way to create performant Microservice applications and that data persistence via Hibernate/GORM is easy to accomplish when using an Oracle ATP backend.  Your feedback is very important to me so please feel free to comment below or interact with me on Twitter (@recursivecodes).

If you'd like to take a look at this entire application you can view it or clone via Github.
