Feed aggregator

Email Spoofing

Yann Neuhaus - Mon, 2019-07-15 11:07

Have you ever had that unpleasant feeling of being accused of something that has nothing to do with you? Of feeling helpless in the face of an accusatory email whose imperative, accusing tone seems designed to heap shame on you?

That is exactly what this particular kind of sextortion mail, which relies on spoofing, tries to do in order to extort money from you: a message from a supposed “hacker” who claims to have broken into your computer. He threatens to publish compromising images supposedly taken without your knowledge through your webcam and demands a ransom, usually in virtual currency.

Something like that:

 

Date:  Friday, 24 May 2019 at 09:19 UTC+1
Subject: oneperson
Your account is hacked! Renew the pswd immediately!
You do not heard about me and you are definitely wondering why you’re receiving this particular electronic message, proper?
I’m ahacker who exploitedyour emailand digital devicesnot so long ago.
Do not waste your time and make an attempt to communicate with me or find me, it’s not possible, because I directed you a letter from YOUR own account that I’ve hacked.
I have started malware to the adult vids (porn) site and suppose that you watched this website to enjoy it (you understand what I mean).
Whilst you have been keeping an eye on films, your browser started out functioning like a RDP (Remote Control) that have a keylogger that gave me authority to access your desktop and camera.
Then, my softaquiredall data.
You have entered passcodes on the online resources you visited, I intercepted all of them.
Of course, you could possibly modify them, or perhaps already modified them.
But it really doesn’t matter, my app updates needed data regularly.
And what did I do?
I generated a reserve copy of every your system. Of all files and personal contacts.
I have managed to create dual-screen record. The 1 screen displays the clip that you were watching (you have a good taste, ha-ha…), and the second part reveals the recording from your own webcam.
What exactly must you do?
So, in my view, 1000 USD will be a reasonable amount of money for this little riddle. You will make the payment by bitcoins (if you don’t understand this, search “how to purchase bitcoin” in Google).
My bitcoin wallet address:
1816WoXDtSmAM9a4e3HhebDXP7DLkuaYAd
(It is cAsE sensitive, so copy and paste it).
Warning:
You will have 2 days to perform the payment. (I built in an exclusive pixel in this message, and at this time I understand that you’ve read through this email).
To monitorthe reading of a letterand the actionsin it, I utilizea Facebook pixel. Thanks to them. (Everything thatis usedfor the authorities may helpus.)

In the event I do not get bitcoins, I shall undoubtedly give your video to each of your contacts, along with family members, colleagues, etc?

 

Users who fall victim to these scams receive a message from a stranger who presents himself as a hacker. This alleged “hacker” claims to have taken control of the victim’s computer after the victim supposedly visited a pornographic site (or any other site that morality would frown upon). The cybercriminal then announces that he has compromising videos of the victim made with the victim’s webcam and threatens to publish them to the victim’s personal or even professional contacts unless a ransom is paid. This ransom, which ranges from a few hundred to several thousand dollars, is demanded in a virtual currency (usually Bitcoin, but not exclusively).

To scare the victim even more, cybercriminals sometimes go so far as to write to the victim from his or her own email address, in order to make the victim believe that they have actually taken control of the account.

First of all, there is no need to be afraid. While the “hack” announced by the cybercriminals is not in theory impossible to achieve, in practice it remains technically complex and, above all, time-consuming to implement. Since scammers target their victims by the thousands, it is safe to conclude that they would not have the time to do what they claim to have done.

These messages are just an attempt at a scam. In other words, if you receive such a blackmail message and do not pay, nothing more will happen.

There is also no need to change your email credentials. Your email address is usually already known and circulates on the Internet because you use it regularly on different sites to identify yourself and communicate. Some of these sites have resold or exchanged their address lists with more or less scrupulous partners for marketing purposes.

If cybercriminals have written to you from your own email address to make you believe that they have taken control of it: be aware that the sender’s address in a message is just a simple display field that can very easily be spoofed without much technical skill.

In any case, the way to go is simple: don’t panic, don’t answer, don’t pay, just throw this mail in the trash (and don’t forget to empty it regularly).

On the mail server side, a few measures can help prevent this kind of mail from spreading in your organization. This involves deploying the following standards on your mail server (example DNS records follow the list):

  • SPF (Sender Policy Framework): a standard for verifying the domain name of the sender of an email (standardized in RFC 7208 [1]). Adopting it is likely to reduce spam. It complements SMTP (Simple Mail Transfer Protocol), which does not provide any sender verification mechanism. SPF aims to reduce the possibility of spoofing by publishing a record in the DNS (Domain Name System) indicating which IP addresses are allowed or forbidden to send mail for the domain in question.
  • DKIM (DomainKeys Identified Mail): a reliable authentication standard for the domain name of the sender of an email that provides effective protection against spam and phishing (standardized in RFC 6376 [2]). DKIM works with a cryptographic signature, verifies the authenticity of the sending domain and also guarantees the integrity of the message.
  • DMARC (Domain-based Message Authentication, Reporting and Conformance): a technical specification that helps reduce email misuse by providing a way to deploy and monitor authentication (standardized in RFC 7489 [3]). DMARC standardizes how receiving servers perform email authentication using the SPF and DKIM mechanisms.
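
To illustrate, here is roughly what the corresponding DNS TXT records could look like for a hypothetical domain example.com (the IP range, selector name, truncated public key and reporting address are placeholders, not values from a real setup):

; SPF: only the listed network may send mail for example.com
example.com.                      IN TXT "v=spf1 ip4:203.0.113.0/24 -all"

; DKIM: public key published under a selector chosen by the sending server
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq...AB"

; DMARC: quarantine mail that fails the checks and send aggregate reports
_dmarc.example.com.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"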

 

REFERENCES

[1] S. Kitterman, “Sender Policy Framework (SPF),” ser. RFC7208, 2014, https://tools.ietf.org/html/rfc7208

[2] D. Crocker, T. Hansen, M. Kucherawy, “DomainKeys Identified Mail (DKIM) Signatures” ser. RFC6376, 2011,  https://tools.ietf.org/html/rfc6376

[3] M. Kucherawy, E. Zwicky, “Domain-based Message Authentication, Reporting and Conformance (DMARC)”, ser. RFC7489, 2015, https://tools.ietf.org/html/rfc7489

The post Email Spoofing appeared first on Blog dbi services.

Forecast Model Tuning with Additional Regressors in Prophet

Andrejus Baranovski - Mon, 2019-07-15 04:17
I’m going to share my experiment results with Prophet additional regressors. My goal was to check how an extra regressor influences the forecast calculated by Prophet.

I used the Bike Sharing in Washington D.C. dataset from Kaggle. The data comes with the number of bike rentals per day and the weather conditions. I created and compared three models:

1. A time series Prophet model with date and number of bike rentals
2. A model with one additional regressor: weather temperature
3. A model with two additional regressors: weather temperature and weather state (raining, sunny, etc.)

This lets us see the effect of the regressors by comparing the three models.
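
As a rough sketch of the approach (not the exact notebook code; the column names temp and weather_state are assumptions about how the dataframe was prepared):

from fbprophet import Prophet   # newer releases ship the package simply as "prophet"

# df is assumed to hold the columns: ds (date), y (daily rentals), temp, weather_state
m = Prophet()
m.add_regressor('temp')            # model 2: weather temperature
m.add_regressor('weather_state')   # model 3: numerically encoded weather state
m.fit(df)

future = m.make_future_dataframe(periods=30)
# every row of the future frame needs regressor values: reuse history, forward-fill the horizon
future = future.merge(df[['ds', 'temp', 'weather_state']], on='ds', how='left').ffill()
forecast = m.predict(future)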

Read more in my Towards Data Science post.

I’m Kyle Benson and this is how I work

Duncan Davies - Fri, 2019-07-12 06:00

I’ve not blogged on this site for a while so it takes a special post to break the hiatus. I’m delighted to finally be able to share the “How I Work” entry for Kyle Benson, one half of the all-conquering PSAdmin.io duo. Kyle and Dan are super-busy, splitting their time between PeopleSoft consulting and the PSAdmin.io slack community, their Podcast, their conference and their website.  I’m thrilled that he has added his profile to our ‘How I Work‘ series.

kyle-hike

Name: Kyle Benson

Occupation: Independent PeopleSoft Consultant and Co-owner of psadmin.io
Location: Minneapolis, MN
Current computer: Dell Precision 5510
Current mobile devices: Pixel 2
I work: To keep from getting bored. I have a ton of fun solving tough problems and optimizing things.

What apps/software/tools can’t you live without?

Besides your phone and computer, what gadget can’t you live without?
Saying I “can’t live without” this is overstating it, but I love my home automation gadgets. I have been slowly adding more and more to my home. Lately my pace has slowed down so my family can keep up with my craziness. I’m currently using a SmartThing Gen. 1 HUB and I’m liking the ecosystem.  That reminds me, time to upgrade!

What’s your workspace like?
I split time between client sites and my home office. I like to use a standing desk and keep it rather tidy. I love my ultrawide monitor and have a “studio” step for creating psadmin.io content.

kyle-desk

What do you listen to while you work?
I love to put on mellow, ambient, downtempo style music. I often listen to the same playlist on repeat for months. Something about the relaxing, repetitive sounds helps get me in “flow” faster. The artist Blackmill really started me down this road. The current playlists I’m listening to on Spotify are ‘Atmospheric Calm’ and ‘Soundscapes For Gaming’.

What PeopleSoft-related productivity apps do you use?
I love Phire for development and git for DPK/admin scripts. Having the history and flexibility to migrate is so nice. Using psadmin-plus helps a lot, too!

Do you have a 2-line tip that some others might not know?
Make sure you are using aliases so you aren’t wasting time typing! Here is a short list of aliases I use often, mostly related to changing directories (a rough sketch of how such aliases might be defined follows the list).

  • cddpk
    • Change to the DPK base directory
  • cdcfg
    • When using multiple $PS_CFG_HOMEs on a server, change to the config homes base directory
  • cdweb $domain_name
    • Change to the PORTAL.war directory of a domain
  • pupapp $environment
    • Run puppet apply for an $environment (ie. production)
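
Purely as an illustration, aliases and functions like these could be defined in a shell profile as follows (the paths are assumptions, not Kyle's actual setup):

# Hypothetical definitions - adjust the base directories to your own environment
alias cddpk='cd /opt/oracle/psft/dpk'      # DPK base directory
alias cdcfg='cd /opt/oracle/psft/cfg'      # base directory holding the $PS_CFG_HOMEs
cdweb()  { cd "/opt/oracle/psft/cfg/$1/webserv/$1/applications/peoplesoft/PORTAL.war"; }   # PORTAL.war of a domain
pupapp() { puppet apply --environment "$1" "/etc/puppetlabs/code/environments/$1/manifests/site.pp"; }   # puppet apply for an environment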

What SQL/Code do you find yourself writing most often?
Currently I’ve found myself living in the browser development tools. I’ve been exploring some of the new JavaScript that Fluid and Unified Navigation introduces. I do a lot of debugging, playing in console, etc to find out how some of these features work. This is all pretty complex stuff and you can really get lost down the rabbit hole.

What would be the one item you’d add to PeopleSoft if you could?
Current CPU archives in DPK.

What everyday thing are you better at than anyone else?
I love riddles and puzzles. I’ve been really into escape rooms this past year, too.

How do you keep yourself healthy and happy?
Getting outside with the family year round is key. Living in a place like Minnesota, you learn that a lack of vitamin D and cabin fever are no joke. Walking, hiking, biking in the summer. Biking and cross country skiing in the winter. Also, the family heads up to the North Shore of Lake Superior every few months. These weekend getaways are always a great recharge.

What’s the best advice you’ve ever received?
Find a job you enjoy doing, and you will never have to work a day in your life.

2019 Oracle ITA National Fall Championships Come to Newport Beach, California

Oracle Press Releases - Thu, 2019-07-11 12:00
Press Release
2019 Oracle ITA National Fall Championships Come to Newport Beach, California

TEMPE, Ariz.—Jul 11, 2019

The Intercollegiate Tennis Association (ITA) and Oracle announced today that Newport Beach Tennis Club and The Tennis Club at Newport Beach Country Club will serve as host sites for the 2019 Oracle ITA National Fall Championships November 6–10. The men’s and women’s finals will be held at Newport Beach Tennis Club.

The event returns to Southern California for the second time in the last three years. Arizona’s Surprise Tennis & Racquet Complex held the tournament in 2018. The JW Marriott Desert Springs Resort and Indian Wells Tennis Garden co-hosted in 2017.

“Oracle’s commitment to college tennis continues to help move our sport to the forefront of intercollegiate athletics,” ITA Chief Executive Officer Dr. Timothy Russell said. “The ITA is proud that our championships are some of the best in college sports. We are very excited to come to Newport Beach, which promises to ensure a fantastic student-athlete experience.”

The Newport Beach Tennis Club features 19 lighted tennis courts and a sunken center court with stadium seating. It has hosted numerous professional events throughout its history, including the Davis Cup and Oracle Challenger Series. The Tennis Club at Newport Beach offers 24 outdoor courts.

“Oracle remains committed to collegiate tennis and ensuring young players get the opportunity to improve their games and compete in great venues,” Oracle CEO Mark Hurd said.  “We’re looking forward to seeing American collegians and juniors play some terrific tennis at this year’s Oracle ITA National Championships.”

The Oracle ITA National Fall Championships features 128 of the nation’s top collegiate singles players (64 men and 64 women) and 64 doubles teams (32 men’s teams and 32 women’s teams). It is the only event on the collegiate tennis calendar that highlights competitors from all five divisions (NCAA Divisions I, II, III, NAIA, and Junior College) playing in the same tournament. Now in its third year, the event replaced the ITA National Indoor Intercollegiate Championships.

The Oracle ITA National Fall Championships joins the Oracle ITA Masters as one of two major collegiate tournaments held in the Southern California area and co-sponsored by Oracle and the ITA. The Oracle Masters returns to Pepperdine University and the Malibu Racquet Club for the fifth consecutive year and is scheduled for Sept. 26–29.

Contact Info
Mindi Bach
Oracle Corporate Communications
650-506-3221
mindi.bach@oracle.com
Al Barba
Director of Communications, Marketing & Advanced Media, ITA
602-687-6379
abarba@itatennis.com
About the Intercollegiate Tennis Association

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees men’s and women’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men’s and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Mindi Bach

  • 650-506-3221

Al Barba

  • 602-687-6379

Migrating your users from md5 to scram authentication in PostgreSQL

Yann Neuhaus - Thu, 2019-07-11 03:43

One of the new features in PostgreSQL 10 was the introduction of stronger password authentication based on SCRAM-SHA-256. How can you migrate your existing users that currently use md5 authentication to the new method without any interruption? Actually that is quite easy, as you will see in a few moments, but there is one important point to consider: not every client/driver already supports SCRAM-SHA-256 authentication, so you need to check that first. Here is the list of the drivers and their support for SCRAM-SHA-256.

The default method that PostgreSQL uses to encrypt passwords is defined by the “password_encryption” parameter:

postgres=# show password_encryption;
 password_encryption 
---------------------
 md5
(1 row)

Let’s assume we have a user that was created like this in the past:

postgres=# create user u1 login password 'u1';
CREATE ROLE

With the default method of md5 the hashed password looks like this:

postgres=# select passwd from pg_shadow where usename = 'u1';
               passwd                
-------------------------------------
 md58026a39c502750413402a90d9d8bae3c
(1 row)

As you can see the hash starts with md5, so we know that this hash was generated by the md5 algorithm. When we want this user to use scram-sha-256 instead, what do we need to do? The first step is to change the “password_encryption” parameter:

postgres=# alter system set password_encryption = 'scram-sha-256';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
postgres=# select current_setting('password_encryption');
 current_setting 
-----------------
 scram-sha-256
(1 row)

From now on the server will use scram-sha-256 and no longer md5. But what happens when our user wants to connect to the instance once we have changed that? Currently this is defined in pg_hba.conf:

postgres=> \! grep u1 $PGDATA/pg_hba.conf
host    postgres        u1              192.168.22.1/24         md5

Even though the default is no longer md5, the user can still connect to the instance because the password hash for that user did not change:

postgres=> \! grep u1 $PGDATA/pg_hba.conf
host    postgres        u1              192.168.22.1/24         md5

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -h 192.168.22.100 -p 5433 -U u1 postgres
Password for user u1: 
psql (13devel)
Type "help" for help.

postgres=> 

Once the user changed the password:

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -h 192.168.22.100 -p 5433 -U u1 postgres
Password for user u1: 
psql (13devel)
Type "help" for help.

postgres=> \password
Enter new password: 
Enter it again: 
postgres=> 

… the hash of the new password is not md5 but SCRAM-SHA-256:

postgres=# select passwd from pg_shadow where usename = 'u1';
                                                                passwd                               >
----------------------------------------------------------------------------------------------------->
 SCRAM-SHA-256$4096:CypPmOW5/uIu4NvGJa+FNA==$PNGhlmRinbEKaFoPzi7T0hWk0emk18Ip9tv6mYIguAQ=:J9vr5CQDuKE>
(1 row)

One might expect that from now on the user is no longer able to connect, as we have not yet changed pg_hba.conf:

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -h 192.168.22.100 -p 5433 -U u1 postgres
Password for user u1: 
psql (13devel)
Type "help" for help.

postgres=> 

But in reality that still works, as the server now uses the SCRAM-SHA-256 algorithm for that user. So once all the users have changed their passwords, you can safely switch the rule in pg_hba.conf and you’re done:

postgres=> \! grep u1 $PGDATA/pg_hba.conf
host    postgres        u1              192.168.22.1/24         scram-sha-256

postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

You just need to make sure that no user is left with a hash starting with md5; all hashes should start with SCRAM-SHA-256.
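
A quick way to verify that is to look for any remaining md5 hashes, e.g. with a query like this:

postgres=# select rolname from pg_authid where rolpassword like 'md5%';

Once this query no longer returns any roles, every password has been re-hashed with SCRAM-SHA-256 and the pg_hba.conf rule can be switched safely.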

The post Migrating your users from md5 to scram authentication in PostgreSQL appeared first on Blog dbi services.

GoCardless Banks on NetSuite to Support International Expansion

Oracle Press Releases - Thu, 2019-07-11 03:00
Press Release
GoCardless Banks on NetSuite to Support International Expansion
NetSuite Helps Innovative UK Fintech Company Enhance Financial Operations and Reshape Global Payments Industry

LONDON, UK—Jul 11, 2019

GoCardless, a global direct debit network headquartered in the UK, has selected Oracle NetSuite to support its mission to take the pain out of getting paid for businesses with recurring revenue. With NetSuite, the fintech company, which grew by 60 percent in the last year, has been able to automate financial management and help reduce the complexities of operating across multiple markets, currencies and tax laws as it rapidly expands its international operations.

Founded in 2012, GoCardless has created a global bank debit network to rival credit and debit cards, as well as a platform designed to take invoice, subscription, membership and installment payments. As demand for its services grows, with $10 billion in transactions a year and 40,000 customers around the world, GoCardless needed a single, scalable business platform that could provide the visibility and control required to optimise its financial reporting. After a careful evaluation, GoCardless selected NetSuite to manage and automate core business processes.

“Since implementing NetSuite, we have gone from basic accounting to conducting in-depth financial analysis,” said Catherine Birkett, CFO, GoCardless. “We can now report financial close faster and more accurately, quickly and easily setup new subsidiaries, and efficiently meet our stakeholders’ reporting requirements. This is incredibly valuable as we continue to expand into new markets and the best part about NetSuite is we now have a solution that will scale with our growth path for years to come.”

With NetSuite, GoCardless will be able to increase the agility of its financial operations as it expands globally. By gaining a unified view into the business, GoCardless will be better enabled to address the complexity it faces with entering new international markets and make decisions more confidently and quickly.

“GoCardless has a very advanced business model that is changing the way organisations collect payments,” said Nicky Tozer, VP of EMEA, Oracle NetSuite. “As its network expands to cover North America, Australia and more than 30 European countries, GoCardless needed a single and scalable business platform that could support its future growth and that’s why it selected NetSuite.”

Contact Info
Samuel Jamieson
PR Manager, EMEA
+44 (0)7468 752231
sjamieson@netsuite.com
About GoCardless

GoCardless is a global leader in recurring payments. GoCardless’ global payments network and technology platform take the pain out of getting paid for businesses with recurring revenue. More than 40,000 businesses worldwide, from multinational corporations to SMBs, transact through GoCardless each month, and the business processes $10bn of payments each year. GoCardless now has five offices: UK, France, Australia, Germany and USA.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials/Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Samuel Jamieson

  • +44 (0)7468 752231

Adding a Fluid WorkCenter to a Navigation Collection

Jim Marion - Wed, 2019-07-10 20:36

Oracle has done an outstanding job converting Classic Self-service to Fluid to promote the modern, mobile user experience. But what about back-office functionality? We certainly can't predict the future, but it seems that back-office transactions will remain Classic for a very long time. Rather than change the appearance of the back-office user experience, I believe our best strategy is to build back-office, business process-based navigation. Our users don't seem excited about the NavBar and Navigator and we can nearly eliminate its use through properly constructed business process based navigation. Here are a couple of business process based navigation tools:

  • Navigation Collections
  • Master Detail
  • Dashboards
  • Activity Guides
  • WorkCenters

Because of its simplicity and ease of maintenance, we often recommend customers start with Tile Wizard-based Navigation Collections. Oracle, on the other hand, is providing business process based navigation by converting Classic WorkCenters to Fluid WorkCenters.

In a recent attempt to provide a segue from one business process to another, I added a Fluid WorkCenter to a Navigation Collection. Both a Tile Wizard-based Navigation Collection and a Fluid WorkCenter contain a left-hand sidebar. Embedding one in another creates a Left-panel Collision. To avoid this collision, I marked the Navigation Collection item’s Replace Window property. Unfortunately, trying to launch the Fluid WorkCenter from a Navigation Collection generated an SQL error. This prompted me to try launching the Fluid WorkCenter outside the Navigation Collection. To my surprise, this also generated an SQL error. The WorkCenter worked before adding it to the Navigation Collection, so this was clearly unexpected. After reviewing the app server log, I discovered a single-row subquery within the Fluid WorkCenter framework was returning more than one row. It didn’t do this before adding the Fluid WorkCenter to a Navigation Collection, so what changed? One thing: I added a Fluid WorkCenter to a Navigation Collection. The SQL that caused the problem looks for any CREF that uses the WorkCenter’s target component and is marked as a Fluid WorkCenter (contains &FLWC=Y in the CREF additional parameters). By adding a Fluid WorkCenter CREF to a Navigation Collection, I created a CREF Link to the original CREF. The end result was a second matching row in the portal registry table (PSPRSMDEF).

Lesson learned: Don't add a Fluid WorkCenter to a Navigation Collection or any other structure that will result in a second CREF with the same (or similar) target. This makes sense because Fluid WorkCenters are business process-based navigation. Adding business process-based navigation to business process-based navigation may not make sense.

Is there a workaround? Absolutely! Instead of adding the Fluid WorkCenter directly to a Navigation Collection, create a redirect iScript. The PeopleCode in the iScript will send the user to the existing Fluid WorkCenter content reference rather than duplicating the existing content reference in the Navigation Collection.
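
Purely as an illustration, such a redirect iScript could look roughly like the following PeopleCode sketch (the menu, component and page names are placeholders, not delivered objects):

Function IScript_LaunchFluidWorkCenter()
   /* Build the URL of the existing Fluid WorkCenter CREF and redirect the browser to it. */
   /* Replace the menu, component and page names below with your own WorkCenter's values. */
   Local string &url = GenerateComponentPortalURL(%Portal, %Node, MenuName.MY_WC_MENU, "GBL", Component.MY_FL_WORKCENTER, Page.MY_FL_WC_PAGE, "U");
   %Response.RedirectURL(&url);
End-Function;

The function would live in a WEBLIB record FieldFormula event and, as mentioned below, needs a Permission List entry so users can invoke it.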

Is the workaround worth the effort? That is an entirely different question. First, the effort is minimal and will require just a few lines of PeopleCode and a Permission List update. But what are the savings and the user experience impact? Fluid WorkCenters are designed to be launched as homepage tiles. To launch a homepage tile, you must be on a homepage. The savings, therefore, is that the user won’t have to return to a homepage to launch the next business process but can transfer directly from one to the next. Returning to the prior business process is as simple as clicking the Fluid header back button.

Configuring productive Business Process navigation is critical to successful Fluid implementation. Are you ready to learn more? Register now for our Fluid 1 course online. Do you have a whole team to train? Contact us for group pricing and delivery options.

Oracle Names Rona Fairhead to the Board of Directors

Oracle Press Releases - Wed, 2019-07-10 15:45
Press Release
Oracle Names Rona Fairhead to the Board of Directors

Redwood Shores, Calif.—Jul 10, 2019

The Oracle Board of Directors today announced that it has unanimously elected Rona A. Fairhead to the company’s Board of Directors. The election is effective as of July 31, 2019 and increases the size of the Board to 15 directors.

“I am very pleased to welcome Mrs. Fairhead to the Board,” said Larry Ellison, Chairman of the Board of Directors and Chief Technology Officer. Bruce Chizen, Chair of the Nomination and Governance Committee, added, “Mrs. Fairhead is an accomplished leader with extensive international experience in finance, risk management, government affairs and global operations.  The Board will benefit from her unique perspective.”

Mrs. Fairhead, 57, most recently served as Minister of State for Trade and Export Promotion, Department for International Trade in the United Kingdom from September 2017 to May 2019. She previously served as Chair of the British Broadcasting Corporation Trust (BBC) from 2014 to 2017. From 2006 to 2013, Mrs. Fairhead served as Chair and Chief Executive Officer of the Financial Times Group Limited, which was a division of Pearson plc, and, prior to that, she served as Pearson’s Chief Financial Officer. Before joining Pearson, Mrs. Fairhead held a variety of leadership positions at Bombardier Inc. and Imperial Chemical Industries plc. Mrs. Fairhead previously served as a director of HSBC Holdings plc and PepsiCo, Inc.

Members of Oracle’s Board of Directors serve one-year terms and stand for election at the company’s next annual meeting of stockholders in November 2019.

Contact Info
Ken Bond
Oracle Investor Relations
1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (“SEC”) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC, by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of July 10, 2019. Oracle undertakes no duty to update any statement in light of new information or future events.

Talk to a Press Contact

Ken Bond

  • 1.650.607.0349

Deborah Hellinger

  • 1.212.508.7935

Auto Seat Manufacturer TACHI-S Builds Global Talent Management Foundation with Oracle HCM Cloud

Oracle Press Releases - Wed, 2019-07-10 11:23
Press Release
Auto Seat Manufacturer TACHI-S Builds Global Talent Management Foundation with Oracle HCM Cloud
Oracle HCM will drive innovation and optimize HR for TACHI-S worldwide, across 66 offices in 14 countries

Tokyo, Japan—Jul 10, 2019

Oracle Corporation Japan announced today that TACHI-S Co., Ltd., a leading independent automobile seat supplier, which handles everything from seat design and development to manufacturing, has selected Oracle Human Capital Management (HCM) Cloud to solidify the future of its global HR development. Oracle HCM Cloud will become the company’s global HR management framework, enabling TACHI-S to manage operations for more than 13,000 employees worldwide using one single system.

Established in 1954, TACHI-S has outpaced its rivals with technological advancements to create appealing seats for countless automobile manufacturers. The company has created a unique advantage with its technical and developmental capabilities to handle a wide range of vehicle types—including luxury cars and sports cars, compact cars, and even trucks – on a global scale.

With 66 regional business sites across 14 countries, including the United States, Mexico and China, the team at global headquarters in Japan is currently advancing its "Global Teamwork 2020" management strategy. This strategy is aimed at increasing the business value of TACHI-S through globally-integrated management to ensure that the company continues to be selected based on the trust of its customers. With this goal in mind, TACHI-S established its Global HR Department in April 2019, which has since launched a variety of projects to optimize the deployment of human resources around the globe, including efforts to develop personnel who can succeed internationally and create a worldwide HR management system. This can only be achieved by linking regional workforces and sharing human resources information globally, which is why TACHI-S has chosen Oracle HCM Cloud.

Helping businesses recruit, retain and engage their workforce, Oracle HCM Cloud delivers a complete, global HR solution to streamline processes and optimize performance with workforce analytics. Driven by emerging technologies like artificial intelligence and machine learning, the integrated suite enables customers to make faster and smarter business decisions in order to keep up with changing employee expectations and market demands.

When searching for a solution to support the work of its global HR departments, TACHI-S selected Oracle HCM Cloud for its ability to consolidate all HR-related data in a single database and analyze data in a comprehensive and multi-faceted manner.

"Our overseas sales ratio is around 60%, so all our global offices are working together to make innovation and develop seats that are trusted by customers around the world. We are working to create a 'single global team' as part of our 'Global Teamwork 2020' strategy,” said Shinichi Nakahara, Senior Director, Global HR Department, Human Resources Deprt & Corporate Planning Office and Director, Global HR Department, TACHI-S CO LTD. “Oracle HCM Cloud caught our eye not only because it allows us to analyze HR data in sophisticated ways, but also because it's a cloud-based service, which means we can use it around the world. Another factor is that Oracle uses the same service in its own HR data management. This allows us to absorb the knowledge they have gained to help us bring our talent management to the next level."

The rollout began in April 2019, and the system is expected to be fully operational by August 1. TACHI-S then plans to use the system at a more sophisticated level to determine specific ways to leverage HR data in order to achieve more advanced results, visualize personnel data, develop new talent, and optimize employee deployment for the group as a whole.

The Qunie Corporation, a business consulting company with expert knowledge of talent management and a history of supporting business innovations, acted as a consultant in the implementation of this system.

Contact Info
Norihito Yachita
Oracle Corporation Japan
+81 3 6834 4835
norihito.yachita@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Norihito Yachita

  • +81 3 6834 4835

Huawei Dorado 6000 V3 benchmark

Yann Neuhaus - Wed, 2019-07-10 02:39

I had the opportunity to test the new Dorado 6000 V3 All-Flash storage system.
See what the all-new Dorado 6000 V3 All-Flash storage system is capable of as storage for your database system.

Before you read

This is a series of different blog posts:
In the first blog post, I talk about “What you should measure on your database storage and why”.
The second blog post will talk about “How to do database storage performance benchmark with FIO”.
The third blog post will show “How good is the new HUAWEI Dorado 6000V3 All-Flash System for databases” measured with the methods and tools from post one and two (aka this one here).

The first two posts give you the theory to understand all the graphics and numbers I will show in the third blog post.

So in this post, we see, what are the results when we test a Huawei Dorado 6000V3 All-Flash storage system with these technics.

I uploaded all the files to a github repository: Huawei-Dorado6000V3-Benchmark.

Foreword

The setup was provided by Huawei in Shenzhen, China. I had remote access with a timeout after a certain period. Each test run takes 10 hours, so because of the timeout I was sometimes not able to capture all performance view pictures; that is why some of the pictures are missing. The storage array and servers were provided free of charge, and Huawei did not influence the results or conclusions in any way.

Setup

Four servers were provided, each with 4×16 GBit/s FC adapters directly connected to the storage system.
Each server has 256 GByte of memory and two 14-core 2.6 GHz Intel E5-2690 CPUs.
Hyperthreading is disabled.
The 10 GBit/s network interfaces are irrelevant for this test because all storage traffic runs over FC.

The Dorado 6000 V3 System has 1 TByte of cache and 50x 900 GByte SSD from Huawei.
Deduplication was disabled.
Tests were made with and without compression.

Theoretical max speed

With 4x16GBit/s a maximal throughput of 64 GBit/s or 8 GByte/s is possible.
In IOPS this means we can transmit 8192 IOPS with 1 MByte block size or 1’048’576 IOPS with 8 KByte block size.
As mentioned in the title, this is the theoretical or raw bandwidth; the usable bandwidth or payload is, of course, smaller: an FC frame is 2112 bytes with 36 bytes of protocol overhead.
So in a 64 GBit/s FC network we can transfer: 64GBit/s / 8 ==> 8GByte/s * 1024 ==> 8192 MByte/s (raw) * (100-(36/2.112))/100 ==> 6795MByte/s (payload).

So we end up with a maximum of 6795 IOPS@1MByte or 869’841 IOPS@8KByte (payload). Not included is the effect that we are using multipathing* with 4x16GBit/s, which also adds some overhead.

*If somebody out there has a method to calculate the overhead of multipathing in such a setup, please contact me!

Single-Server Results

General

All single server tests were made on devices with enabled data compression. Unfortunately, I do not have the results from my tests with uncompressed devices for single server anymore, but you can see the difference in the multi-server section.

8 KByte block size

The 8 KByte block size tests on a single server were very performant.
What we can already tell: the higher the parallelity, the better the storage performs. This is not really a surprise; most storage systems perform better the higher the parallel access is.
Especially with 1 thread we see the difference between having one disk in a diskgroup and being able to use 3967 IOPS, or using e.g. 5 disks with 1 thread and being able to use 16700 IOPS.
The latency for all tests was great, with 0.25 ms to 0.4 ms for read operations and 0.1 to 0.4 ms for write operations.
The 0.1 ms for writes is not that impressive, because it is mainly the performance of the write cache, but even when we exceeded the write cache we were never higher than 0.4 ms.

1 MByte block size

On the 1 MByte tests, we see that we already hit the maximum speed with 6 devices (parallelity of 6) to 9 devices (parallelity of 2).

As an example of how to interpret the graphic: when you look at the green line (6 devices), we reach peak performance at a parallelity of 6.
For the dark blue line (7 devices) we hit the peak at a parallelity of 4, and so on.

If we increase the parallelity beyond this point, the latency grows or the throughput even decreases.
For the 1 MByte tests, we hit a limitation at around 6280 IOPS. This is around 90% of the calculated maximum speed.

So if we go with Oracle ASM, we should bundle at least 5 devices together into a diskgroup.
We also see that when we run a diskgroup rebalance we should go for a small rebalance power. A value smaller than 4 should be chosen; every value over 8 is counterproductive and will consume all available I/O on your system and slow down all databases on this server.
Monitoring / Verification

To verify the results, I am using dbms_io_calibration on the very same devices the performance test was running on. The expectation is that we will see more or less the same results.
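
For reference, I/O calibration is typically invoked through DBMS_RESOURCE_MANAGER.CALIBRATE_IO, roughly like this (a minimal sketch; the disk count and latency limit are assumptions and need to match your own setup):

SET SERVEROUTPUT ON
DECLARE
  l_max_iops  PLS_INTEGER;
  l_max_mbps  PLS_INTEGER;
  l_latency   PLS_INTEGER;
BEGIN
  -- num_physical_disks and max_latency (ms) are assumptions for this sketch
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 10,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops       = ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps       = ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('actual_latency = ' || l_latency);
END;
/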

On large IO, the 6231 IOPS measured by IO calibration is almost the same as measured by FIO (+/- 1%).
IO calibration measured 604k IOPS for small IO, which is significantly more than the +/- 340k IOPS measured with FIO. This is explainable because IO calibration uses the number of disks for the parallelity and I ran this test with 20 disks instead of 10. Sadly, when I realized my mistake, I no longer had access to the system.

In the following pictures you see the performance view of the storage system with the data measured by FIO as an overlay. As we can see, the values for the IOPS match perfectly.
The value for latency was lower on the storage side, which is explained by the different points where we are measuring (once on the storage side, once on the server side).
All screenshots of the live performance view of the storage can be found in the git repository. The values for queue depth, throughput and IOPS matched the measured results perfectly.


Multi-Server Results with compression

General

The tests for compressed and uncompressed devices were made with 3 parallel servers.

8 KByte block size

For random read with 8 KByte blocks, the IOPS increased almost linearly from 1 to 3 nodes and we hit a peak of 655'000 IOPS with 10 devices / 10 threads. The answer time was between 0.3 and 0.45 ms.
For random write, we hit some kind of limitation at around 250k IOPS. We could not get a higher value than that, which was surprising to me; I would have expected better results here.
From the point where we hit the maximum number of IOPS, we see the same behavior as with 1 MByte blocks: more threads only increase the answer time but do not give better throughput.
So for random write with 8 KByte blocks, the maximum is reached at around 3 devices with 10 threads or 10 devices with 3 threads, in other words a parallelity of 30.
As long as we stay under this limit we see answer times between 0.15 and 0.5 ms; above it, the answer times can grow towards 10 ms.
1 MByte block size

The multi-server tests showed some interesting behavior with large reads on this storage system.
We hit a limitation at around 7500 to 7800 IOPS. For sequential writes we could achieve almost double that, with up to 14.5k IOPS.

Of course, I discussed all the results with Huawei to see their view on my tests.
Their explanation for the much better write than read performance was that writes go straight to the 1 TByte cache, whereas reads had to fetch everything from disk: this beta firmware version did not have any read cache, which is why the read results were lower. All firmware versions from the end of February onwards also include a read cache.
I accept this answer and hope to retest with the newest firmware in the future, but I still think 7500 IOPS is a little low even without a read cache.
Multi-Server Results without compression

Comparing the results for compressed devices to uncompressed devices, we see an increase in IOPS of up to 30% and a corresponding decrease in latency for the 8 KByte block size.
For 1 MByte sequential read the difference was smaller, at around 10%; for 1 MByte sequential write we gained an increase of around 15-20%.

Multi-Server Results with high parallelity

General

Because the tests with 3 servers did not max out the storage at the 8 KByte block size, I decided to do a max test with 4 parallel servers and a parallelity from 1-100 instead of 1-10.
The steps were 1,5,10,15,20,30,40,50,75 and 100.
These tests were only performed on uncompressed devices.

8 KByte block size

It took 15 threads per server with 10 devices (60 processes in total) to reach the peak performance of the Dorado 6000 V3 system.
At this point we reached 940k IOPS @ 0.637 ms for 8 KByte random read. Remembering the answer that this firmware version does not have any read cache, this performance is achieved entirely from the SSDs and could theoretically be even better with a read cache enabled.
If we increase the parallelity further, we see the same effect as with 1 MByte blocks: the answer time is increasing (dramatically) and the throughput is decreasing.

Depending on the number of parallel devices, we need between 60 parallel processes (with 10 devices) and 300 parallel processes (with 3 parallel devices) to reach this peak.

1 MByte block size

For the large IOs, we see the same picture as with 1 or 3 servers: a combined parallelity of 20-30 can max out the storage system, so be very careful that your large IO tasks do not affect the other operations on the storage system.

Mixed Workload

After these tests, we know the upper limits of this storage for isolated workloads. In a normal workload we will never see only one kind of IO: there will always be a mixture of 8 KByte read and write IOPS side by side with 1 MByte IO. To simulate this, we create two FIO job files. One generates approx. 40k-50k IOPS of random read and random write in a 50/50 split.
This is our baseline; then we add approx. 1000 1 MByte IOPS every 60 seconds and watch how the answer time reacts.
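
As a rough sketch (not the exact job files from the repository), the rate-limited 1 MByte part of such a mixed test could look like this as an FIO job section:

#Illustrative values, not the original job file
[mixed-1M-read]
stonewall
filename=/dev/mapper/device01
bs=1M
rw=read
rate_iops=1000      #cap this job at roughly 1000 IOPS of 1 MByte reads
iodepth=16
numjobs=1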


As seen in this picture from the performance monitor of the storage system, the 1 MByte IOPS blocks had two effects on the smaller IOPS:
  • The throughput of the small IOPS decreases
  • The latency increases
In the middle of the test, we stop the small IOPS to see the latency of just the 1 MByte IOPS.

Both effects are expected and within the expected parameters: Test passed.

So with a base workload of 40k-50k IOPS, we can run e.g. backups in parallel with a bandwidth of up to 5.5 GByte/s without interfering with the database work, or we can run up to 5 active duplicates on the same storage without interfering with the other databases.

Summary

This storage system showed fantastic performance at 8 KByte block size with very low latency. Especially the high number of parallel processes we can run against it before we hit peak performance makes it a good choice for serving a large number of Oracle databases.

The large IO (1 MByte) performance for write operations was good, but not on the same level as the excellent 8 KByte performance. The sequential read part badly misses the read cache compared to what is possible for writes. And even the write figure is not top of the line compared to other storage systems: I have seen other systems with a comparable configuration deliver up to 12k IOPS@1MByte.

Remember the questions from the first blog post:
-How many devices should I bundle into a diskgroup for best performance?
As many as possible.

-How many backups/duplicates can I run in parallel to my normal database workload without interfering with it?
You can run 5 parallel backups/duplicates with 1000 IOPS each without interfering with a baseline of 40-50k IOPS@8KByte.

-What is the best rebalance power I can use on my system?
A rebalance power of 2-4 is absolutely enough for this system. More will only slow down the other operations on the server.

The post Huawei Dorado 6000 V3 benchmark appeared first on Blog dbi services.

Storage performance benchmarking with FIO

Yann Neuhaus - Wed, 2019-07-10 02:18

Learn how to do storage performance benchmarks for your database system with the open source tool FIO.

Before you read

This is a series of different blog posts:
In the first blog post, I talk about “What you should measure on your database storage and why”.
The second blog post will talk about “How to do database storage performance benchmark with FIO” (aka this one here).
The third blog post will show “How good is the new HUAWEI Dorado 6000V3 All-Flash System for databases” measured with the methods and tools from post one and two.

The first two posts give you the theory to understand all the graphics and numbers I will show in the third blog post.

Install FIO

Many distributions have FIO in their repositories. On a Fedora/RHEL system, you can just use
yum install fio
and you are ready to go.

Start a benchmark with FIO

There are mainly two different ways to start a benchmark with FIO

Command line

Starting from the command line is the way to go when you just wanna have a quick feeling about the system performance.
I prefer to do more complex setups with job files. It is easier to create and debug.
Here is a small example of how to start a benchmark directly from the command line:
fio --filename=/dev/xvdf --direct=1 --rw=randwrite --refill_buffers --norandommap \
--randrepeat=0 --ioengine=libaio --bs=128k --rate_iops=1280 --iodepth=16 --numjobs=1 \
--time_based --runtime=86400 --group_reporting --name=benchtest

FIO Job files

An FIO job file holds a [global] section and one or more job sections. The global section holds the shared parameters that are used for all jobs unless you override them in the job sections.
Here is what a typical GLOBAL section from my files looks like:
[global]
ioengine=libaio    #ASYNCH IO
invalidate=1       #Invalidate buffer-cache for the file prior to starting I/O.
                   #Should not be necessary because of direct IO but just to be sure ;-)
ramp_time=5        #First 5 seconds do not count to the result.
iodepth=1          #Number of I/O units to keep in flight against the file
runtime=60         #Runtime for every test
time_based         #If given, run for the specified runtime duration even if the files are completely read or written.
                   #The same workload will be repeated as many times as runtime allows.
direct=1           #Use non buffered I/O.
group_reporting=1  #If set, display per-group reports instead of per-job when numjobs is specified.
per_job_logs=0     #If set, this generates bw/clat/iops log with per file private filenames.
                   #If not set, jobs with identical names will share the log filename.
bs=8k              #Block size
rw=randread        #I/O Type

Now that we have defined the basics, we can start with the JOBS section:
Example of single device test with different parallelity:


#
#Subtest: 1
#Total devices = 1
#Parallelity = 1
#Number of processes = devices*parallelity ==> 1*1 ==> 1
#
[test1-subtest1-blocksize8k-threads1-device1of1]     #Parallelity 1, Number of device: 1/1
stonewall                               #run this test until the next [JOB SECTION] with the “stonewall” keyword
filename=/dev/mapper/device01           #Device to use
numjobs=1                               #Create the specified number of clones of this job.
                                        #Each clone of job is spawned as an independent thread or process.
                                        #May be used to setup a larger number of threads/processes doing the same thing.
                                        #Each thread is reported separately: to see statistics for all clones as a whole
                                        #use group_reporting in conjunction with new_group.
#
#Subtest: 5
#Total devices = 1
#Parallelity = 5
#Number of processes = devices*parallelity ==> 1*5 ==> 5
#
[test1-subtest5-blocksize8k-threads5-device1of1]     #Parallelity 5, Number of device: 1/1
stonewall
numjobs=5
filename=/dev/mapper/device01

Example of multi device test with different parallelity:

#Subtest: 1
#Total devices = 4
#Parallelity = 1
#Number of processes = devices*parallelity ==> 4
#
[test1-subtest1-blocksize8k-threads1-device1of4]     # Parallelity 1, Number of device 1/4
stonewall
numjobs=1
filename=/dev/mapper/device01
[test1-subtest1-blocksize8k-threads1-device2of4]     # Parallelity 1, Number of device 2/4
numjobs=1
filename=/dev/mapper/device02
[test1-subtest1-blocksize8k-threads1-device3of4]     # Parallelity 1, Number of device 3/4
numjobs=1
filename=/dev/mapper/device03
[test1-subtest1-blocksize8k-threads1-device4of4]     # Parallelity 1, Number of device 4/4
numjobs=1
filename=/dev/mapper/device04
#
#Subtest: 5
#Total devices = 3
#Parallelity = 5
#Number of processes = devices*parallelity ==> 5
#
[test1-subtest5-blocksize8k-threads5-device1of3]     # Parallelity 5, Number of device 1/3
stonewall
numjobs=5
filename=/dev/mapper/device01
[test1-subtest5-blocksize8k-threads5-device2of3]     # Parallelity 5, Number of device 2/3
filename=/dev/mapper/device02
[test1-subtest5-blocksize8k-threads5-device3of3]     # Parallelity 5, Number of device 3/3
filename=/dev/mapper/device03

You can download a complete set of FIO job files for running the described test case from my github repository: Huawei-Dorado6000V3-Benchmark.
Job files list

To run a complete test with my job files you have to replace the devices. There is a small shell script called “replaceDevices.sh” to do this:

#!/bin/bash
######################################################
# dbi services michael.wirz@dbi-services.com
# Vesion: 1.0
#
# usage: ./replaceDevices.sh
#
# todo before use: modify newname01-newname10 with
# the name of your devices
######################################################
sed -i -e 's_/dev/mapper/device01_/dev/mapper/newname01_g' *.fio
sed -i -e 's_/dev/mapper/device02_/dev/mapper/newname02_g' *.fio
sed -i -e 's_/dev/mapper/device03_/dev/mapper/newname03_g' *.fio
sed -i -e 's_/dev/mapper/device04_/dev/mapper/newname04_g' *.fio
sed -i -e 's_/dev/mapper/device05_/dev/mapper/newname05_g' *.fio
sed -i -e 's_/dev/mapper/device06_/dev/mapper/newname06_g' *.fio
sed -i -e 's_/dev/mapper/device07_/dev/mapper/newname07_g' *.fio
sed -i -e 's_/dev/mapper/device08_/dev/mapper/newname08_g' *.fio
sed -i -e 's_/dev/mapper/device09_/dev/mapper/newname09_g' *.fio
sed -i -e 's_/dev/mapper/device10_/dev/mapper/newname10_g' *.fio

!!! After you have replaced the filenames, double-check that you have the correct devices, because when you start the test, all data on these devices will be lost !!!

grep filename *.fio|awk -F '=' '{print $2}'|sort -u
/dev/mapper/device01
/dev/mapper/device02
/dev/mapper/device03
/dev/mapper/device04
/dev/mapper/device05
/dev/mapper/device06
/dev/mapper/device07
/dev/mapper/device08
/dev/mapper/device09
/dev/mapper/device10

To start the test run:

for job_file in $(ls *.fio)
do
    fio ${job_file} --output /tmp/bench/${job_file%.fio}.txt
done

Multiple Servers

FIO supports running tests on multiple servers in parallel, which is very nice! Often a single server cannot max out a modern all-flash storage system, whether because of bandwidth limitations (e.g. not enough adapters per server) or because a single server is simply not powerful enough.

You need to start FIO in server mode on all machines you wanna test:
fio --server

Then you start the test with
fio --client=serverA,serverB,serverC /path/to/fio_jobs.file

Should you have a lot of servers you can put them in a file and use this as input for your fio command:


cat fio_hosts.list
serverA
serverB
serverC
serverD
...

fio --client=fio_hosts.list /path/to/fio_jobs.file

Results

The output files are not really human readable, so you can use my getResults.sh script, which formats the output ready to copy/paste into Excel:


cd /home/user/Huawei-Dorado6000V3-Benchmark/TESTRUN5-HOST1_3-COMPR/fio-benchmark-output
bash ../../getResults.sh
###########################################
START :Typerandread-BS8k
FUNCTION: getResults
###########################################
Typerandread-BS8k
LATENCY IN MS
.399 .824 1.664 2.500 3.332 5.022 6.660 8.316 12.464 16.683
.392 .826 1.667 2.495 3.331 4.995 6.680 8.344 12.474 16.637
.397 .828 1.661 2.499 3.330 4.992 6.656 8.329 12.505 16.656
.391 .827 1.663 2.493 3.329 5.002 6.653 8.330 12.482 16.656
.398 .827 1.663 2.497 3.327 5.005 6.660 8.327 12.480 16.683
.403 .828 1.662 2.495 3.326 4.995 6.663 8.330 12.503 16.688
.405 .825 1.662 2.496 3.325 4.997 6.648 8.284 12.369 16.444
.417 .825 1.661 2.497 3.326 4.996 6.640 8.256 12.303 16.441
.401 .826 1.661 2.500 3.327 4.999 6.623 8.273 12.300 16.438
.404 .826 1.661 2.500 3.327 4.993 6.637 8.261 12.383 16.495
IOPS
2469 6009 5989 5986 5991 5966 5998 6006 6012 5989
5004 12000 11000 11000 11000 11000 11000 11000 12000 12000
7407 17000 18000 17000 17000 18000 18000 17000 17000 17000
10000 23000 23000 24000 23000 23000 24000 23000 24000 23000
12300 29000 29000 29000 30000 29900 29000 29000 30000 29900
14600 35900 35000 35000 36000 35000 35000 35000 35000 35900
16000 42100 41000 41000 42000 41000 42100 42200 42400 42500
16500 42100 41000 41900 42000 41000 42100 42400 42600 42500
19600 48000 47000 47900 47000 47900 48300 48300 48700 48600
21900 54000 53000 53900 53000 53000 54200 54400 54400 54400
###########################################
START :Typerandwrite-BS8k
FUNCTION: getResults
###########################################
Typerandwrite-BS8k
LATENCY IN MS
.461 .826 1.662 2.501 3.332 5.022 6.660 8.317 12.467 16.676
.457 .826 1.668 2.495 3.330 5.002 6.681 8.346 12.473 16.635
.449 .826 1.662 2.499 3.327 4.991 6.664 8.326 12.497 16.649
.456 .828 1.661 2.496 3.331 4.997 6.663 8.329 12.477 16.651
.460 .827 1.663 2.495 3.327 5.001 6.660 8.333 12.484 16.676
.463 .830 1.663 2.495 3.325 4.997 6.661 8.330 12.503 16.684
.474 .827 1.661 2.495 3.324 4.999 6.665 8.334 12.451 16.580
.469 .828 1.661 2.497 3.324 5.002 6.668 8.322 12.489 16.594
.471 .827 1.660 2.499 3.327 4.998 6.663 8.335 12.481 16.609
.476 .825 1.675 2.500 3.328 4.992 6.675 8.334 12.480 16.623
IOPS
2137 5997 5990 5985 5991 5966 5998 6005 6010 5992
4306 12000 11900 11000 11000 11000 11000 11000 12000 12000
6571 17000 17000 17000 18000 18000 17000 17000 17000 18000
8635 23900 23000 23000 23000 23000 23000 23000 24000 24000
10700 29000 29000 29000 30000 29900 29000 29000 30000 29000
12800 35900 35000 35000 36000 35000 35000 35000 35000 35900
14500 41000 41000 41000 42000 41000 41000 41000 42100 42200
14700 41000 41000 41900 42000 41900 41900 42000 42000 42100
16700 48000 48000 47900 47000 47000 47000 47900 47000 48100
18600 54100 53500 53900 53000 54000 53900 53900 53000 54100
...

Copy & paste the result into the excel template and you can have an easy over view of the results:
(fio summary Excel template)

Troubleshooting

If you’ve got a libaio error you have to install the libaio libraries:

fio: engine libaio not loadable
fio: failed to load engine
fio: file:ioengines.c:89, func=dlopen, error=libaio: cannot open shared object file: No such file or directory

yum install libaio-devel
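On Debian or Ubuntu based systems the package names differ; as far as I know the equivalent would be something like this (not tested here):

apt-get install libaio1 libaio-dev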

The article Storage performance benchmarking with FIO appeared first on Blog dbi services.

Witty Screen Names and Why You Should Use Them

VitalSoftTech - Tue, 2019-07-09 10:08

There are several reasons why someone would require a screen name for social media. Everyone manages their privacy in their unique ways. Some are more comfortable letting on about themselves to the oldest and most trusted friends. Similarly, others tell their grave dark tales to strangers on trains or in these days, social media. Considering […]

The post Witty Screen Names and Why You Should Use Them appeared first on VitalSoftTech.

Categories: DBA Blogs

Change Item Icon Dynamically

Jeff Kemp - Tue, 2019-07-09 04:18

The floating item type has an optional “Icon” property that allows you to render an icon next to the item, which can help users quickly identify what the item is for. This is especially helpful when the form has a lot of items.

The icon attribute can be static, e.g. fa-hashtag, or it can be chosen based on the value of another item, e.g. &P1_FA_ICON..

If you want the icon to change dynamically as the user enters or modifies data, it’s a little bit more complicated. I have a list item based on a table of asset categories, and each asset category has an icon assigned to it. When the user selects an asset category from the list I want it to get the icon from the table and show it in the item straight away.

To do this, I use two Dynamic Actions: (1) a PL/SQL action which updates the hidden Pn_FA_ICON item, and (2) a Javascript action which manipulates the displayed icon next to the list item.

This is my item and its two dynamic actions. The Icon attribute causes the icon to be shown when the page is loaded.

The Execute PL/SQL Code action is a simple PL/SQL block which gets the icon from the reference table for the selected category code. Make sure the “Wait for Result” is “Yes”, and make sure the Items to Submit and Items to Return are set to P260_CATEGORY_CODE and P260_CATEGORY_FA_ICON, respectively.

select x.fa_icon
into   :P260_CATEGORY_FA_ICON
from   asset_categories x
where  x.code = :P260_CATEGORY_CODE;

On examining the source of the page, we see that the select item is immediately followed by a span which shows the icon:

The Execute JavaScript Code action finds the item (in this case, the triggering element), then searches the DOM for the following span with the apex-item-icon class. Once found, it resets the classes on the span with a new set of classes, including the new icon.
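The exact code is not reproduced here, but a minimal sketch of that JavaScript action could look like the following (the jQuery calls and the class list are assumptions based on the description above, not the code from my page):

// get the icon chosen by the PL/SQL action
var newIcon = $v("P260_CATEGORY_FA_ICON");
// find the span that immediately follows the triggering select item
// and reset its classes, including the new icon
$(this.triggeringElement)
  .next("span.apex-item-icon")
  .attr("class", "fa apex-item-icon " + newIcon);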

It’s a little gimmicky but it’s an easy way to delight users, and it might help them to quickly identify data entry mistakes.

Warning: due to the way the JavaScript manipulates the DOM, this method is not guaranteed to work correctly in future releases of APEX, so it will need to be retested after upgrades.

Pepkor Europe Selects Oracle Cloud as a Platform for Growth

Oracle Press Releases - Tue, 2019-07-09 04:00
Press Release
Pepkor Europe Selects Oracle Cloud as a Platform for Growth

London and Redwood Shores, Calif.—Jul 9, 2019

Pepkor Europe, the leading pan-European variety discount retailer, has chosen Oracle Cloud to support the planned future growth of its brands, PEPCO, Poundland and Dealz. Pepkor sells clothing and fast-moving consumer goods such as food, health, beauty products, and general merchandise to families on a budget across Europe.

“The Pepkor Europe brands serve customers in 14 countries through over 2,000 stores, offering a diverse and constantly evolving range of products, delivering great value to our customers, aided by being a high-volume business. We are confident that the centralised and enhanced inventory management capability that Oracle Retail provides, will improve our operational agility and flexibility through better visibility into inventory and margins,” said Andy Bond, chief executive officer, Pepkor Europe. “After a rigorous evaluation, we chose Oracle as our partner for this key element of our infrastructure transformation.”

Pepkor Europe will leverage Oracle Retail Merchandising Cloud Service to unify inventory management and Oracle Enterprise Resource Planning (ERP) Cloud to automate and streamline the organisation’s end-to-end financial management processes.

“Pepkor Europe needed a technology foundation that would match the requirements of its business and deliver a new level of insight and operational efficiency,” said Mike Webster, senior vice president and general manager, Oracle Retail. “From backend financials to managing complex retail operations, only Oracle Cloud can provide the end-to-end solutions Pepkor Europe needs to continue its international expansion while supporting multiple accounting approaches, currencies, languages, and legal entities.”

Contact Info
Kris Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
Nick Wharton
Pepkor Europe
07880 784319
nick.w@pepkor.co.uk
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Pepkor Europe

Pepkor Europe was established in 2015 and comprised three strong, independent value retailers: PEPCO, Poundland and Dealz. Its vertically-integrated global sourcing arm, PGS, enables its retail brands to deliver the value its customers demand in general merchandise and apparel. In FMCG, thanks to its scale, it can offer widely recognised grocery brands at a significant discount.

PEPCO, Poundland & Dealz operate across some of Europe’s largest economies. Pepkor Europe now has 2,473 stores in 14 countries including the UK, the Republic of Ireland, Spain and across the CEE region, employing over 33,000 people.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Nick Wharton

  • 07880 784319

[Video] Oracle Exadata Cloud Service(ExaCS) Offerings

Online Apps DBA - Tue, 2019-07-09 00:50

[Video] Oracle Exadata Cloud Service(ExaCS) Offerings Exadata Cloud Service is available in 4 different configurations or shapes and 2 models. 1. What are the 4 shapes available in ExaCS? 2. Which is the newly released shape of ExaCS? 3. What are the specifications of each shape? 4. How does the Exadata Machine Model affect the […]

The post [Video] Oracle Exadata Cloud Service(ExaCS) Offerings appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Converting columns from one data type to another in PostgreSQL

Yann Neuhaus - Mon, 2019-07-08 00:19

Usually you should use the data type that best fits the representation of your data in a relational database. But how many times did you see applications that store dates or numbers as text or dates as integers? This is not so uncommon as you might think and fixing that could be quite a challenge as you need to cast from one data type to another when you want to change the data type used for a specific column. Depending on the current format of the data it might be easy to fix or it might become more complicated. PostgreSQL has a quite clever way of doing that.

Frequent readers of our blog might know that already: We start with a simple, reproducible test setup:

postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values ( 1, '20190101');
INSERT 0 1
postgres=# insert into t1 values ( 2, '20190102');
INSERT 0 1
postgres=# insert into t1 values ( 3, '20190103');
INSERT 0 1
postgres=# select * from t1;
 a |    b     
---+----------
 1 | 20190101
 2 | 20190102
 3 | 20190103
(3 rows)

What do we have here? A simple table with two columns: Column “a” is an integer and column “b” is of type text. For humans it seems obvious that the second column in reality contains a date but stored as text. What options do we have to fix that? We could try something like this:

postgres=# alter table t1 add column c date default (to_date(b,'YYYYMMDD'));
psql: ERROR:  cannot use column reference in DEFAULT expression

That obviously does not work. Another option would be to add another column with the correct data type, populate that column and then drop the original one:

postgres=# alter table t1 add column c date;
ALTER TABLE
postgres=# update t1 set c = to_date(b,'YYYYMMDD');
UPDATE 3
postgres=# alter table t1 drop column b;
ALTER TABLE

But what is the downside of that? This will probably break the application as the column name changed and there is no way to avoid that. Is there a better way of doing that? Let’s start from scratch:

postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values ( 1, '20190101');
INSERT 0 1
postgres=# insert into t1 values ( 2, '20190102');
INSERT 0 1
postgres=# insert into t1 values ( 3, '20190103');
INSERT 0 1
postgres=# select * from t1;
 a |    b     
---+----------
 1 | 20190101
 2 | 20190102
 3 | 20190103
(3 rows)

The same setup as before. What other options do we have to convert “b” to a real date without changing the name of the column? Let’s try the most obvious way and let PostgreSQL decide what to do:

postgres=# alter table t1 alter column b type date;
psql: ERROR:  column "b" cannot be cast automatically to type date
HINT:  You might need to specify "USING b::date".

This does not work as PostgreSQL in this case can not know how to go from one data type to another. But the “HINT” does already tell us what we might need to do:

postgres=# alter table t1 alter column b type date using (b::date);
ALTER TABLE
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 

postgres=# 

For our data in the “b” column that works, but consider you have data like this:

postgres=# drop table t1;
DROP TABLE
postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values (1,'01-JAN-2019');
INSERT 0 1
postgres=# insert into t1 values (2,'02-JAN-2019');
INSERT 0 1
postgres=# insert into t1 values (3,'03-JAN-2019');
INSERT 0 1
postgres=# select * from t1;
 a |      b      
---+-------------
 1 | 01-JAN-2019
 2 | 02-JAN-2019
 3 | 03-JAN-2019
(3 rows)

Would that still work?

postgres=# alter table t1 alter column b type date using (b::date);
ALTER TABLE
postgres=# select * from t1;
 a |     b      
---+------------
 1 | 2019-01-01
 2 | 2019-01-02
 3 | 2019-01-03
(3 rows)

Yes, but in this case it will not:

postgres=# drop table t1;
DROP TABLE
postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values (1,'First--January--19');
INSERT 0 1
postgres=# insert into t1 values (2,'Second--January--19');
INSERT 0 1
postgres=# insert into t1 values (3,'Third--January--19');
INSERT 0 1
postgres=# select * from t1;
 a |          b           
---+---------------------
 1 | First--January--19
 2 | Second--January--19
 3 | Third--January--19
(3 rows)

postgres=# alter table t1 alter column b type date using (b::date);
psql: ERROR:  invalid input syntax for type date: "First--January--19"
postgres=# 

As PostgreSQL has no idea how to do the conversion, this will fail, no surprise here. But you can still do it by providing a function that does the conversion in exactly the way you want:

create or replace function f_convert_to_date ( pv_text in text ) returns date
as $$
declare
begin
  return date('20190101');
end;
$$ language plpgsql;

Of course you would add logic to parse the input string so that the function will return the matching date and not a constant as in this example. For demonstration purposes we will go with this fake function:

postgres=# alter table t1 alter column b type date using (f_convert_to_date(b));
ALTER TABLE
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 

postgres=# select * from t1;
 a |     b      
---+------------
 1 | 2019-01-01
 2 | 2019-01-01
 3 | 2019-01-01
(3 rows)

… and here we go. The column was converted from text to date and we provided the exact conversion logic ourselves in a function. As long as the output of the function conforms to the target data type and you did not make any mistakes, you can potentially go from any source data type to any target data type.
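Just to illustrate what a real implementation might look like for the ‘First--January--19’ style strings above, here is a sketch (only a few ordinals are handled; the parsing logic and format are assumptions for demonstration only):

create or replace function f_convert_to_date ( pv_text in text ) returns date
as $$
declare
  lv_parts text[] := string_to_array(pv_text, '--');
  lv_day   int;
begin
  -- map the ordinal word to a day number (only a few handled here)
  lv_day := case lower(lv_parts[1])
              when 'first'  then 1
              when 'second' then 2
              when 'third'  then 3
            end;
  -- build a string PostgreSQL can parse, e.g. '1 January 2019'
  return to_date(lv_day::text || ' ' || lv_parts[2] || ' 20' || lv_parts[3], 'DD Month YYYY');
end;
$$ language plpgsql;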

There is one remaining question: Will that block other sessions selecting from the table while the conversion is ongoing?

postgres=# drop table t1;
DROP TABLE
postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 select a, '20190101' from generate_series(1,1000000) a;
INSERT 0 1000000
postgres=# create index i1 on t1(a);
CREATE INDEX

In one session we will do the conversion and in the other session we will do a simple select that goes over the index:

-- first session
postgres=# alter table t1 alter column b type date using (f_convert_to_date(b));

Second one at the same time:

-- second session
postgres=# select * from t1 where a = 1;
-- blocks

Yes, that will block, so you should plan such actions carefully when you have a busy system. But this is still better than adding a new column.

The article Converting columns from one data type to another in PostgreSQL appeared first on Blog dbi services.

Telling the PostgreSQL optimizer more about your functions

Yann Neuhaus - Sun, 2019-07-07 05:29

When you reference/call functions in PostgreSQL the optimizer does not really know much about the cost nor the number of rows that a function returns. This is not really surprising as it is hard to predict what the function is doing and how many rows will be returned for a given set of input parameters. What you might not know is that you can indeed tell the optimizer a bit more about your functions.

As usual let’s start with a little test setup:

postgres=# create table t1 ( a int, b text, c date );
CREATE TABLE
postgres=# insert into t1 select a,a::text,now() from generate_series(1,1000000) a;
INSERT 0 1000000
postgres=# create unique index i1 on t1(a);
CREATE INDEX
postgres=# analyze t1;
ANALYZE

A simple table containing 1’000’000 rows and one unique index. In addition let’s create a simple function that will return exactly one row from that table:

create or replace function f_tmp ( a_id in int ) returns setof t1
as $$
declare
begin
  return query select * from t1 where a = $1;
end;
$$ language plpgsql;

What is the optimizer doing when you call that function?

postgres=# explain (analyze) select f_tmp (1);
                                         QUERY PLAN                                         
--------------------------------------------------------------------------------------------
 ProjectSet  (cost=0.00..5.27 rows=1000 width=32) (actual time=0.654..0.657 rows=1 loops=1)
   ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.003..0.004 rows=1 loops=1)
 Planning Time: 0.047 ms
 Execution Time: 0.696 ms
(4 rows)

We know that only one row will be returned, but the optimizer assumes that 1000 rows will be returned. This is the documented default. So, no matter how many rows are really returned, PostgreSQL will always estimate 1000. But you have some control and can tell the optimizer that the function will return one row only:

create or replace function f_tmp ( a_id in int ) returns setof t1
as $$
declare
begin
  return query select * from t1 where a = $1;
end;
$$ language plpgsql
   rows 1;

Looking again at the execution plan again:

postgres=# explain (analyze) select f_tmp (1);
                                        QUERY PLAN                                        
------------------------------------------------------------------------------------------
 ProjectSet  (cost=0.00..0.27 rows=1 width=32) (actual time=0.451..0.454 rows=1 loops=1)
   ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.003..0.004 rows=1 loops=1)
 Planning Time: 0.068 ms
 Execution Time: 0.503 ms
(4 rows)

Instead of 1000 rows we now see that only 1 row was estimated, which is what we specified when we created the function. Of course this is a very simple example and in reality you often might not be able to tell exactly how many rows will be returned from a function. But at least you can provide a better estimate than the default of 1000. In addition you can also specify a cost for your function (expressed in units of cpu_operator_cost; the default cost for PL/pgSQL functions is 100):

create or replace function f_tmp ( a_id in int ) returns setof t1
as $$
declare
begin
  return query select * from t1 where a = $1;
end;
$$ language plpgsql
   rows 1
   cost 1;
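As a side note (not part of the original test), the planner's assumptions for a function are stored in pg_proc, so a quick check after the last definition should show something like this:

postgres=# select proname, procost, prorows from pg_proc where proname = 'f_tmp';
 proname | procost | prorows 
---------+---------+---------
 f_tmp   |       1 |       1
(1 row)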

If you use set-returning functions, remember that you can give the optimizer more information and that the default row estimate is 1000.

The article Telling the PostgreSQL optimizer more about your functions appeared first on Blog dbi services.

[Video] Oracle Autonomous Database Overview : ADW, ATP, Serverless & Dedicated Infrastructure

Online Apps DBA - Fri, 2019-07-05 02:18

[Video] Oracle Autonomous Database Overview : ADW, ATP, Serverless & Dedicated Infrastructure Oracle Autonomous Database is a combination of Exadata with Database and Infrastructure Automation on Oracle Gen 2 Cloud. Autonomous Databases are of two types based on workload: 1. Autonomous Data Warehouse (ADW) 2. Autonomous Transaction Processing (ATP) Autonomous Databases can be deployed in […]

The post [Video] Oracle Autonomous Database Overview : ADW, ATP, Serverless & Dedicated Infrastructure appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

SQL Server containers and docker network driver performance considerations

Yann Neuhaus - Fri, 2019-07-05 01:45

A few months ago I attended Franck Pachot’s session about microservices and databases at SOUG Romandie in Lausanne on May 21st, 2019. He covered some performance challenges that can be introduced by a microservices architecture design, especially when database components come into the game with chatty applications. One year ago, I was in a situation where a customer installed some SQL Server 2017 Linux containers in a Docker infrastructure with the user applications located outside of this infrastructure. It is probably an uncommon way to start with containers, but when you immerse yourself in the Docker world you quickly notice there are a lot of network drivers and considerations you should be aware of. So, for the sake of curiosity, I proposed to my customer to perform some network benchmark tests to get a clear picture of these network drivers and their related overhead, in order to design the Docker infrastructure correctly from a performance standpoint.

The initial customer scenario involved a standalone Docker infrastructure, and we compared different application network configurations from a performance perspective. We did the same for the second scenario, which concerned a Docker Swarm infrastructure we installed in a second step.

The Initial reference – Host network and Docker host network

The first step was to get an initial reference with no network management overhead, directly from the network host. We used the iperf3 tool for the tests. This is the kind of tool I also use with virtual environments to ensure the network throughput is what we really expect, and I have had some surprises on this topic. So, back to the container world: each test was performed from a Linux host outside of the concerned Docker infrastructure, according to the customer scenario.

The link speed of the Docker host’s attached network card is supposed to be 10 GBits/sec …

$ sudo ethtool eth0 | grep "Speed"
        Speed: 10000Mb/s

 

… and this was confirmed by the first iperf3 run.

We tested the Docker host network driver as well and got similar results:

$ docker run  -it --rm --name=iperf3-server  --net=host networkstatic/iperf3 -s
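For completeness, the client side of each test was a plain iperf3 run from the external Linux host against the Docker host (a sketch only; the address is a placeholder and the exact options used are not important here):

$ iperf3 -c <docker_host_ip> -t 30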

 

Docker bridge mode

The default modus operandi for a Docker host is to create a virtual ethernet bridge (called docker0), attach each container’s network interface to the bridge, and use network address translation (NAT) when containers need to make themselves visible to the Docker host and beyond. Unless specified otherwise, a docker container will use this driver by default, and this is exactly the network driver used by the containers in the context of my customer. In fact, we used a user-defined bridge network, but I would say it doesn’t matter for the tests we performed here.

$ ip addr show docker0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:70:0a:e8:7a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:70ff:fe0a:e87a/64 scope link
       valid_lft forever preferred_lft forever

 

The iperf3 docker container I ran for my tests is using the default bridge network as shown below. The interface with index 24 corresponds to the veth0bfc2dc peer of the concerned container.

$ docker run  -d --name=iperf3-server -p 5204:5201 networkstatic/iperf3 -s
…
$ docker ps | grep iperf
5c739940e703        networkstatic/iperf3              "iperf3 -s"              38 minutes ago      Up 38 minutes                0.0.0.0:5204->5201/tcp   iperf3-server
$ docker exec -ti 5c7 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

[clustadmin@docker1 ~]$ ethtool -S veth0bfc2dc
NIC statistics:
     peer_ifindex: 24

 

Here is the output after running the iperf3 benchmark:

It’s worth noting that the bridge network adds some overhead, with an impact of 13% in my tests. This is, to be honest, an expected outcome, especially if we refer to the Docker documentation:

Compared to the default bridge mode, the host mode gives significantly better networking performance since it uses the host’s native networking stack whereas the bridge has to go through one level of virtualization through the docker daemon.

 

When the docker-proxy comes into play

The next scenario we wanted to test concerned the closest network proximity we may have between the user applications and the SQL Server containers in the Docker infrastructure. In other words, we assumed the application resides on the same host as the SQL Server container, and we got some surprises from the docker-proxy itself.

Before looking at the iperf3 result, I think we have to answer the million-dollar question here: what is the docker-proxy? Have you ever paid attention to this process on your docker host? Let’s run a pstree command:

$ pstree
systemd─┬─NetworkManager───2*[{NetworkManager}]
        ├─agetty
        ├─auditd───{auditd}
        ├─containerd─┬─containerd-shim─┬─npm─┬─node───9*[{node}]
        │            │                 │     └─9*[{npm}]
        │            │                 └─12*[{containerd-shim}]
        │            ├─containerd-shim─┬─registry───9*[{registry}]
        │            │                 └─10*[{containerd-shim}]
        │            ├─containerd-shim─┬─iperf3
        │            │                 └─9*[{containerd-shim}]
        │            └─16*[{containerd}]
        ├─crond
        ├─dbus-daemon
        ├─dockerd─┬─docker-proxy───7*[{docker-proxy}]
        │         └─20*[{dockerd}]

 

Well, if I understand the Docker documentation correctly, the purpose of this process is to enable a service consumer to communicate with the service-providing container … but it’s only used in particular circumstances. Just bear in mind that controlling access to a container’s service is mostly done through the host netfilter framework, in both the NAT and filter tables, and the docker-proxy mechanism is required only when this method of control is not available:

  • When the Docker daemon is started with --iptables=false or --ip-forward=false, or when the Linux host cannot act as a router because the kernel parameter ipv4.ip_forward is set to 0. This is not my case here.
  • When you are using localhost in the connection string of your application, which implies using the loopback interface (127.0.0.0/8), and the kernel doesn’t allow routing traffic from it. Therefore, it’s not possible to apply netfilter NAT rules and instead netfilter sends packets through the filter table’s INPUT chain to a local process listening on the published port: the docker-proxy.
$ sudo iptables -L -n -t nat | grep 127.0.0.0
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

 

In the picture below you will notice I’m using the localhost keyword in my connection string, so the docker-proxy comes into play.

A huge performance impact for sure, about 28%. This performance drop may be explained by the fact that the docker-proxy process is consuming 100% of my CPUs:

The docker-proxy operates in userland and I may simply disable it with the docker daemon parameter “userland-proxy”: false, but I would say this is a case we would not encounter in practice because applications will never use localhost in their connection strings. By the way, changing the connection string from localhost to the IP address of the container host gives a very different outcome, similar to the Docker bridge network scenario.
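For reference, disabling the userland proxy is done in the daemon configuration followed by a daemon restart (a sketch only, not something we applied at the customer site):

$ cat /etc/docker/daemon.json
{
  "userland-proxy": false
}
$ sudo systemctl restart docker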

 

Using an overlay network

Using a single docker host doesn’t fit well with HA or scalability requirements, and in a mission-critical environment I strongly doubt any customer will go this way. I recommended to my customer to consider using an orchestrator like Docker Swarm or K8s to anticipate the container workload coming from future projects. The customer picked Docker Swarm for its easier implementation compared to K8s.

 

After implementing a proof of concept for testing purposes (3 nodes: one manager and two worker nodes), we took the opportunity to measure the potential overhead implied by the overlay network, which is the common driver used by containers through stacks and services in such a situation. Referring to the Docker documentation, overlay networks manage communication among the Docker daemons participating in the swarm and are used by the services deployed on it. Here are the docker nodes in the swarm infrastructure:

$ docker node ls
ID                            HOSTNAME                    STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
vvdofx0fjzcj8elueoxoh2irj *   docker1.dbi-services.test   Ready               Active              Leader              18.09.5
njq5x23dw2ubwylkc7n6x63ly     docker2.dbi-services.test   Ready               Active                                  18.09.5
ruxyptq1b8mdpqgf0zha8zqjl     docker3.dbi-services.test   Ready               Active                                  18.09.5

 

An ingress overlay network is created by default when setting up a swarm cluster. User-defined overlay networks may be created afterwards and extend to the other nodes only when needed by containers.

$ docker network ls | grep overlay
NETWORK ID    NAME              DRIVER   SCOPE
ehw16ycy980s  ingress           overlay  swarm
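For the record, deploying the iperf3 server as a swarm service on a user-defined overlay network could look like this (network and service names are illustrative, not the exact commands used for the test):

$ docker network create --driver overlay --attachable iperf3-net
$ docker service create --name iperf3-server --network iperf3-net -p 5201:5201 networkstatic/iperf3 -s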

 

Here is the result of the iperf3 benchmark:

Well, the same result as the previous test, with roughly a 30% performance drop. Compared to the initial reference this is again an expected outcome, but I didn’t imagine how big the impact could be in such a case. The overlay network introduces additional overhead by putting together, behind the scenes, a VXLAN tunnel (a virtual Layer 2 network on top of an existing Layer 3 infrastructure), VTEP endpoints for encapsulation/de-encapsulation, and traffic encryption by default.

Here is a summary of the different scenarios and their performance impact:

Scenario                  Throughput (GB/s)   Performance impact
Host network              10.3
Docker host network       10.3
Docker bridge network      8.93               0.78
Docker proxy               7.37               0.71
Docker overlay network     7.04               0.68

 

In the particular case of my customer, where the SQL Server instances sit on the Docker infrastructure and the applications reside outside of it, it’s clear that using the Docker host network directly may be a good option from a performance standpoint, assuming this infrastructure remains simple with a few SQL Server containers. But in this case we have to change the SQL Server default listen port with the MSSQL_TCP_PORT parameter, because Docker host networking doesn’t provide port mapping capabilities. According to our tests we didn’t see any difference in application response time between the Docker network drivers, probably because these applications are not network bound here, but I can imagine scenarios where they would be. Finally, the scenario encountered here is likely uncommon, and I more often see containerized apps with database components outside the Docker infrastructure, but that doesn’t change the game at all and the same considerations apply … Today I’m very curious to test real microservices scenarios where database and application components all sit on a Docker infrastructure.
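As a sketch of what that host-network setup with a non-default port could look like (the container name, password, port and image tag are illustrative only):

$ docker run -d --name sqlserver01 --net=host \
    -e "ACCEPT_EULA=Y" \
    -e "SA_PASSWORD=MyStr0ngP@ssw0rd" \
    -e "MSSQL_TCP_PORT=1455" \
    mcr.microsoft.com/mssql/server:2017-latest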

See you!

 

The article SQL Server containers and docker network driver performance considerations appeared first on Blog dbi services.

Wipe APEX mail queue

Jeff Kemp - Thu, 2019-07-04 01:32

Refreshing any of our non-prod environments (e.g. dev, test, etc.) with a clone from production is a fairly regular process at my client. A recurring issue with this is emails: we’ve had occasion where users have received a second copy of an email immediately after the clone has completed. This was confusing because they thought the event that had triggered the email actually occurred twice.

As it turns out, the duplicate emails were caused by the fact that the emails happened to be waiting in the APEX mail queue in production at the time of the export. After the export, the APEX mail queue was processed normally in production and the users received their emails as expected; after the clone was completed, the database jobs were restarted in the cloned environment which duly processed the emails sitting in the cloned queue and the users effectively got the same emails a second time.

What’s worse, if the same export were to be used for multiple clones, the users might get the same emails again and again!

A good way to solve this sort of issue would be to isolate the non-prod environments behind a specially configured mail server with a whitelist of people who want (and expect) to get emails from the non-prod systems. We don’t have this luxury at this client, however.

Instead, we have a post_clone.sql script which is run by the DBAs immediately after creating the clone. They already stop all the jobs by setting job_queue_processes=0.
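For context, that job-stopping step is essentially a single statement (a sketch; the exact scope clause depends on your environment):

alter system set job_queue_processes = 0;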

In case the mail queue happens to have any emails waiting to be sent, the post clone script now includes the following step:

begin
  -- *** WARNING: DO NOT RUN THIS IN PRODUCTION! ***
  for r in (
    select workspace_id
          ,workspace
    from apex_workspaces
    ) loop
    apex_application_install.set_workspace_id (r.workspace_id);
    apex_util.set_security_group_id
      (p_security_group_id => apex_application_install.get_workspace_id);
    delete apex_mail_queue;
  end loop;
  commit;
end;
/

This script is run as SYS but it could also be run as SYSTEM or as APEX_nnnnnn, depending on your preference.

ADDENDUM: Overriding the From Email Address

Christian Neumüller commented that an additional technique that might be useful is to override the From (sender) email address to indicate which environment each email was sent from. To do this, run something like the following:

begin
  apex_instance_admin.set_parameter('EMAIL_FROM_OVERRIDE',
    'apex-' || sys_context('userenv','db_name') || '@mydomain');
end;

I’ve tested this in APEX 19.1 and it seems to work fine. Regardless of the p_from parameter that the code passes to apex_mail.send, the EMAIL_FROM_OVERRIDE email address is used instead.
Note that this is currently undocumented, so this may stop working or change in a future release.
