Feed aggregator

Patching E-Business Suite without Downtime

Gerger Consulting - 8 hours 40 min ago

Downtime during upgrades can cause a lot of headaches for E-Business Suite customers. No more!

Attend the free webinar by Oracle ACE Associate and Chicago OUG President Alfredo Abate and learn how you can patch your EBS installations without any downtime.

Register at this link.


Categories: Development

August 2018 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - 8 hours 42 min ago

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier overlay patches that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing overlay patches to the default corresponding Database Patch Set Update (PSU) patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the following recommended updates:

  • Oracle Database Proactive BP 12.1.0.2.180717
  • Oracle Database PSU 12.1.0.2.180717
  • Oracle JavaVM Component Database PSU 12.1.0.2.180717
  • Oracle Database Patch for Exadata BP 12.1.0.2.180717
  • Microsoft Windows Database BP 12.1.0.2.180717
  • Oracle JavaVM Component 12.1.0.2.180717 on Windows

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.
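
As a usage sketch, after unzipping the patch you run the checker once on each tier; the script names below (checkDBpatch.sh for the database tier, checkMTpatch.sh for the application tier) are from memory and should be confirmed against the patch readme:

$ unzip p17537119_*.zip -d /u01/etcc      # hypothetical staging directory
$ cd /u01/etcc
$ ./checkDBpatch.sh                       # database tier, as the database OS user
$ ./checkMTpatch.sh                       # application tier, as the applications OS user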


Categories: APPS Blogs

Introducing GraphPipe

OTN TechBlog - 9 hours 38 min ago
Dead Simple Machine Learning Model Serving

There has been rapid progress in machine learning over the past few years. Today, you can grab one of a handful of frameworks, follow some online tutorials, and have a working machine learning model in a matter of hours. Unfortunately, when you are ready to deploy that model into production you still face several unique challenges.

First, there is no standard for model serving APIs, so you are likely stuck with whatever your framework gives you. This might be protocol buffers or custom JSON. Your business application will generally need a bespoke client just to talk to your deployed model. And it's even worse if you are using multiple frameworks. If you want to create ensembles of models from multiple frameworks, you'll have to write custom code to combine them.

Second, building your model server can be incredibly complicated. Deployment gets much less attention than training, so out-of-the-box solutions are few and far between. Try building a GPU version of TensorFlow-serving, for example. You better be prepared to bang your head against it for a few days.

Finally, many of the existing solutions don't focus on performance, so for certain use cases they fall short. Serving a bunch of tensor data from a complex model via a python-JSON API is not going to cut it for performance-critical applications.

We created GraphPipe to solve these three challenges. It provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers that make deploying and querying machine learning models from any framework a breeze. GraphPipe's efficient servers can serve models built in TensorFlow, PyTorch, mxnet, CNTK, or caffe2. We are pleased to announce that GraphPipe is available on Oracle's GitHub. Documentation, examples, and other relevant content can be found at https://oracle.github.io/graphpipe.
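
As a quick illustration of how simple serving can be, a TensorFlow model can be served with a single docker command. This is only a sketch: the image name (sleepsonthefloor/graphpipe-tf) and flags are from memory of the project quickstart and should be checked against the documentation, and the model path is a hypothetical local file:

docker run -it --rm -p 9000:9000 \
    -v "$PWD/models:/models" \
    sleepsonthefloor/graphpipe-tf:cpu \
    --model=/models/squeezenet.pb \
    --listen=0.0.0.0:9000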

The Business Case

In the enterprise, machine-learning models are often trained individually and deployed using bespoke techniques. This impacts an organization's ability to derive value from its machine learning efforts. If marketing wants to use a model produced by the finance group, they will have to write custom clients to interact with the model. If the model becomes popular and sales wants to use it as well, the custom deployment may crack under the load.

It only gets worse when the models start appearing in customer-facing mobile and IoT applications. Many devices are not powerful enough to run models locally and must make a request to a remote service. This service must be efficient and stable while running models from varied machine learning frameworks.

A standard allows researchers to build the best possible models, using whatever tools they desire, and be sure that users can access their models' predictions without bespoke code. Models can be deployed across multiple servers and easily aggregated into larger ensembles using a common protocol. GraphPipe provides the tools that the business needs to derive value from its machine learning investments.

Implementation Details

GraphPipe is an efficient network protocol designed to simplify and standardize transmission of machine learning data between remote processes. Presently, no dominant standard exists for how tensor-like data should be transmitted between components in a deep learning architecture. As such it is common for developers to use protocols like JSON, which is extremely inefficient, or TensorFlow-serving's protocol buffers, which carries with it the baggage of TensorFlow, a large and complex piece of software. GraphPipe is designed to bring the efficiency of a binary, memory-mapped format while remaining simple and light on dependencies.

GraphPipe includes:

  • A set of flatbuffer definitions
  • Guidelines for serving models consistently according to the flatbuffer definitions
  • Examples for serving models from TensorFlow, ONNX, and caffe2
  • Client libraries for querying models served via GraphPipe

In essence, a GraphPipe request behaves like a TensorFlow-serving predict request, but using flatbuffers as the message format. Flatbuffers are similar to google protocol buffers, with the added benefit of avoiding a memory copy during the deserialization step. The flatbuffer definitions provide a request message that includes input tensors, input names and output names. A GraphPipe remote model accepts the request message and returns one tensor per requested output name. The remote model also must provide metadata about the types and shapes of the inputs and outputs that it supports.
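
For illustration, here is a minimal client call in Python, assuming the graphpipe pip package and its remote.execute helper as shown in the project examples (names unverified here):

import numpy as np
from graphpipe import remote   # assumed package and module names

# dummy input shaped like a single 224x224 RGB image
data = np.random.rand(1, 224, 224, 3).astype(np.float32)

# the request is flatbuffer-encoded; the server returns one tensor per
# requested output name (here we just take the default output)
pred = remote.execute("http://127.0.0.1:9000", data)
print(pred.shape)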

Performance

First, we compare serialization and deserialization speed of float tensor data in python using a custom ujson API, protocol buffers using a TensorFlow-serving predict request, and a GraphPipe remote request. The request consists of about 19 million floating-point values (consisting of 128 224x224x3 images) and the response is approximately 3.2 million floating point values (consisting of 128 7x7x512 convolutional outputs). The units on the left are in seconds.

GraphPipe is especially performant on the deserialize side, because flatbuffers provide access to underlying data without a memory copy.

Second, we compare end-to-end throughput using a Python-JSON TensorFlow model server, TensorFlow-serving, and the GraphPipe-go TensorFlow model server. In each case the backend model is the same. Large requests are made to the server using 1 thread and then again with 5 threads. The units on the left are rows calculated by the model per second.

Note that this test uses the recommended parameters for building TensorFlow-serving. Although the recommended build parameters for TensorFlow-serving do not perform well, we were ultimately able to discover compilation parameters that allow it to perform on par with our GraphPipe implementation. In other words, an optimized TensorFlow-serving performs similarly to GraphPipe, although building TensorFlow-serving to perform optimally is neither documented nor easy.

Where Do I Get it?

You can find plenty of documentation and examples at https://oracle.github.io/graphpipe. The GraphPipe flatbuffer spec can be found on Oracle's GitHub along with servers that implement the spec for Python and Go. We also provide clients for Python, Go, and Java (coming soon), as well as a plugin for TensorFlow that allows the inclusion of a remote model inside a local TensorFlow graph.

Oracle Ksplice patch for CVE-2018-3620 and CVE-2018-3646 for Oracle Linux UEK r4

Wim Coekaerts - 9 hours 46 min ago

There was an Intel disclosure yesterday of a set of vulnerabilities around L1TF. You can read a summary here.

We released, as you can see from the blog, a number of kernel updates for Oracle Linux and a Ksplice patch for the same.  I wanted to take the opportunity again to show off how awesome Oracle Ksplice is.

The kernel patch we have for L1TF combined about 106 different patches: 54 files changed, 2079 insertions(+), 501 deletions(-). The Ksplice kernel module for this patch is about 1.2 MB. All this went into a single Ksplice patch!

Applied in a few microseconds. On one server I have in Oracle Cloud, I always run # uptrack-upgrade manually; on another server I have autoinstall=yes.

# uptrack-upgrade
The following steps will be taken:
Install [1vao34m9] CVE-2018-3620, CVE-2018-3646: Information leak in Intel CPUs under terminal fault.
Go ahead [y/N]? y
Installing [1vao34m9] CVE-2018-3620, CVE-2018-3646: Information leak in Intel CPUs under terminal fault.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.18.1.el7uek

My other machine was up to date automatically and I didn't even know it. I had to run # uptrack-show to see that it had already been applied. No reboot, no impact on the stuff I run here. Just autonomously done. Patched. Current.
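
For the automatic case, this is a single option in the Uptrack configuration; a minimal sketch, assuming the default /etc/uptrack/uptrack.conf location:

# excerpt from /etc/uptrack/uptrack.conf (assumed default location)
# enable automatic installation of new Ksplice updates as they are released
autoinstall = yes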

Folks sometimes ask me about live patching abilities from other vendors. Well, we have the above for every errata kernel released since the Spectre/Meltdown CVEs (as this is a layer on top of that code), at the same time as the kernel RPMs were released, as an integrated service. 'nuf said.

Oh and everyone in Oracle Cloud, remember, the Oracle Ksplice tools (uptrack) are installed in every OL image by default and you can run this without any additional configuration (or additional charges).

Easily manage dual backup destination with RMAN

Yann Neuhaus - 9 hours 58 min ago

Backup on disk with RMAN is great. It's fast, and you can set as many channels as your platform can handle for faster backups. And you can restore as fast as you can read and write files on disk with these multiple channels. As long as you're using Enterprise Edition, that is, because Standard Edition is limited to a single channel.

Disk space is very often limited and you'll probably have to find another solution if you want to keep backups longer. You can think about tapes, or you can connect RMAN to a global backup tool, but that requires additional libraries that are not free, and it definitely adds complexity.

The other solution is to have a dual disk destination for the backups. The first one will be the main destination for your daily backups; the other one will be dedicated to long-term backups, maybe on slower disks but with more free space available. This second destination can also be backed up with another tool without using any library.

For the demonstration, assume you have 2 filesystems: /backup is dedicated to the latest daily backups and /lt_backup is for long-term backups.

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

First of all, take a backup on the first destination:

RMAN> backup as compressed backupset database format '/oracle/backup/%U';

 

This is a small database and the backup is done with the default single channel, so there are only two backupsets: one for the datafiles and the other for the controlfile and the spfile:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:27 backup/2btaj0mt_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:27 backup/2ctaj0nm_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

It’s quite easy to move the backup to the long term destination with RMAN:

RMAN> backup backupset all format '/oracle/lt_backup/%U' delete input;

 

BACKUP BACKUPSET with DELETE INPUT is basically the same as a system mv or move. But it does not require recataloging the backup files, as RMAN does this automatically.

Now our backup is located in the second destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:28 lt_backup/2btaj0mt_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:28 lt_backup/2ctaj0nm_1_2

 

You can see here that the backup filename has changed: the last number increased. Oracle knows that this is the second copy of these backupsets (even though the first ones no longer exist).

As with a mv command, you can move your backup back to the previous destination:

RMAN> backup backupset all format '/oracle/backup/%U' delete input;

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

All the backupsets are now back in the first destination only, and you can see another increase in the filename number. And the RMAN catalog is up to date.

Now let’s make the first folder the default destination for the backups, and go for compressed backupsets as the default behavior:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET ;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/backup/%U';

 

Now you only need a 2-word command to back up the database:

RMAN> backup database;

 

New backup is in first destination as expected:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Suppose you want to move the oldest backups, those done before 1:30 AM:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+1.5/24' format '/oracle/lt_backup/%U' delete input;

 

Everything is working as expected: the latest backup is still in the first destination, and the oldest one is in the lt_backup filesystem, with another increase of the number ending the filename:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:38 lt_backup/2btaj0mt_1_4
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:38 lt_backup/2ctaj0nm_1_4

 

Now that the tests are OK, let’s simulate a real world example. First, tidy up all the backups:

RMAN> delete noprompt backupset;

 

Let’s take a new backup.

RMAN> backup database;

 

Backup is in default destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Let’s take another backup later:

RMAN> backup database;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Now let’s move the oldest backup to the other folder:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2

 

Storing only the oldest backups in the long-term destination is not so clever: imagine you lose your first backup destination. It would be better to have the latest backup in both destinations. You can do that with BACKUP BACKUPSET COMPLETED AFTER and no DELETE INPUT, which is basically the same as a cp or copy command:

RMAN> backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

323M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:03 lt_backup/2htaj2m4_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:03 lt_backup/2itaj2mt_1_2

 

That’s it: you now have a first destination for the newest backups, and a second one for all the backups. And you just have to schedule these two BACKUP BACKUPSET commands after the daily backup of your database, as in the sketch below.
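
A minimal sketch of a nightly RMAN command file tying these together (the file name daily_backup.rman is hypothetical; the time windows and paths are taken from the examples above and should be adapted to your own schedule):

# daily_backup.rman -- run with: rman target / @daily_backup.rman
backup database;
# copy tonight's backupsets to the long-term destination (like a cp)
backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';
# move older backupsets out of the daily destination (like a mv)
backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;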

Note that backups will stay in both destinations until they reach the retention limit you defined for your database. A DELETE OBSOLETE will purge the backupsets wherever they are and delete all the known copies.
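
A minimal sketch of this periodic cleanup (the 14-day recovery window is only an example; adapt it to your retention policy):

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
RMAN> delete noprompt obsolete;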

 

The post Easily manage dual backup destination with RMAN appeared first on Blog dbi services.

Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE

Yann Neuhaus - 10 hours 30 min ago

With Oracle Database 18c, we can now refresh a standby database over the network using one RMAN command, RECOVER STANDBY DATABASE.

The RECOVER STANDBY DATABASE command restarts the standby instance, refreshes the control file from the primary database, and automatically renames data files, temp files, and online logs. It restores new data files that were added to the primary database and recovers the standby database up to the current time.
When you use the RECOVER STANDBY DATABASE command to refresh a standby database, you specify either a FROM SERVICE clause or a NOREDO clause. The FROM SERVICE clause specifies the name of a primary service. The NOREDO clause specifies that backups should be used for the refresh, which allows a standby to be rolled forward to a specific time or SCN.
The MRP must be manually stopped on the standby before any attempt is made to sync with the primary database.
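
In this post the apply is stopped and restarted through the broker, but as a minimal sketch, the same can be done directly in SQL*Plus on the standby if you do not use the broker: stop the MRP before the refresh and restart it afterwards.

SQL> alter database recover managed standby database cancel;
-- run the RMAN RECOVER STANDBY DATABASE refresh, then restart the apply:
SQL> alter database recover managed standby database disconnect from session;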

In this blog I am doing some tests of standby refresh using the Recover Standby Database command.

Starting from a healthy Data Guard configuration, let's set the property StandbyFileManagement to MANUAL.

DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 18 seconds ago)

DGMGRL>

DGMGRL> edit database 'CONT18C_SITE' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> edit database 'CONT18C_SITE1' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> show  database 'CONT18C_SITE' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL> show  database 'CONT18C_SITE1' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL>

And then I add a new tablespace and a new table on the primary:

SQL> create tablespace TBS_2 datafile '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf' size 5M ;

Tablespace created.

SQL> create table test (id number) tablespace TBS_2;

Table created.

SQL> insert into test values (1);

1 row created.

SQL> insert into test values (2);

1 row created.

SQL> commit;

Commit complete.

SQL>

As expected, the changes are not replicated, as shown in the standby alert logfile and in the broker configuration.

(3):File #14 added to control file as 'UNNAMED00014' because
(3):the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
(3):The file should be manually created to continue.
MRP0 (PID:6307): MRP0: Background Media Recovery terminated with error 1274
2018-08-15T13:31:08.343276+02:00
Errors in file /u01/app/oracle/diag/rdbms/cont18c_site1/CONT18C/trace/CONT18C_mrp0_6307.trc:
ORA-01274: cannot add data file that was originally created as '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf'
MRP0 (PID:6307): Managed Standby Recovery not using Real Time Apply
Recovery interrupted!

Using the broker

DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          4 minutes 33 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    CONT18C

  Database Error(s):
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold

Database Status:
ERROR

DGMGRL>

Now let’s try to sync the standby database using the RECOVER command. First let’s stop the recovery process.

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-OFF';
Succeeded.
DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-OFF
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          26 minutes 28 seconds (computed 0 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    CONT18C

Database Status:
SUCCESS

DGMGRL>

Next, let's connect with RMAN to the standby as the target and run the command. Note that if we try to run the command while connected to the primary as the target, we get the following error:

RMAN> RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/15/2018 14:00:15
RMAN-05146: must be connected to standby database to issue RECOVER STANDBY DATABASE

RMAN>

So let's run it from the standby as the target. Note that the outputs below are truncated.

[oracle@primaserver admin]$ rman target sys/root@cont18c_site1

Recovery Manager: Release 18.0.0.0.0 - Production on Wed Aug 15 14:03:55 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONT18C (DBID=4292751651)

RMAN>  RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
using target database control file instead of recovery catalog
Executing: alter database flashback off
Oracle instance started

Total System Global Area     956299440 bytes

Fixed Size                     8902832 bytes
Variable Size                348127232 bytes
Database Buffers             595591168 bytes
Redo Buffers                   3678208 bytes

contents of Memory Script:
{
   restore standby controlfile from service  'CONT18c_SITE';
   alter database mount standby database;
}
executing Memory Script

Starting restore at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=39 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
output file name=/u01/app/oracle/oradata/CONT18C/control01.ctl
output file name=/u01/app/oracle/oradata/CONT18C/control02.ctl
Finished restore at 15-AUG-18

released channel: ORA_DISK_1
Statement processed

contents of Memory Script:
{
set newname for datafile  14 to
 "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   restore from service  'CONT18c_SITE' datafile
    14;
   catalog datafilecopy  "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   switch datafile all;
}
executing Memory Script

executing command: SET NEWNAME

Starting restore at 15-AUG-18
Starting implicit crosscheck backup at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
Crosschecked 5 objects
Finished implicit crosscheck backup at 15-AUG-18

Starting implicit crosscheck copy at 15-AUG-18
using channel ORA_DISK_1
Finished implicit crosscheck copy at 15-AUG-18

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_47_fq7q5ls5_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_48_fq7qn5s3_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_49_fq7r0715_.arc
File Name: 
…
…

contents of Memory Script:
{
  recover database from service  'CONT18c_SITE';
}
executing Memory Script

Starting recover at 15-AUG-18
using channel ORA_DISK_1
skipping datafile 5; already restored to SCN 1550044
skipping datafile 6; already restored to SCN 1550044
skipping datafile 8; already restored to SCN 1550044
skipping datafile 14; already restored to SCN 2112213
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00001: /u01/app/oracle/oradata/CONT18C/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00003: 
…
…
destination for restore of datafile 00012: /u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00013: /u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

media recovery complete, elapsed time: 00:00:00
Finished recover at 15-AUG-18
flashback needs to be reenabled on standby open
Finished recover at 15-AUG-18

RMAN>

And we can verify that the configuration is now in sync:

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-ON';
Succeeded.
DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>

After opening the standby in read-only mode, we can verify that everything is now fine:

SQL> alter session set container=pdb1;

Session altered.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
TBS_NOLOG
TBS_2

7 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/CONT18C/PDB1/system01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf

6 rows selected.

SQL> select * from test;

        ID
----------
         1
         2

SQL>
 

The post Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE appeared first on Blog dbi services.

Podcast: Developer Evolution: What's rockin’ roles in IT?

OTN TechBlog - Tue, 2018-08-14 23:00

The good news is that the US Bureau of Labor Statistics predicts 24% growth in software developer jobs through 2026. That’s well above average. The outlook for Database administrators certainly isn’t bleak, but with projected job growth of 11% to 2026, that’s less than half the growth projected for developers. Job growth for System administrators, at 6% through 2026, is considered average by the BLS. So while the news is positive all around, developers certainly have an advantage. Each of these roles certainly has separate and distinct responsibilities. But why is the outlook so much better for developers, and what does this say about what’s happening in the IT ecosystem?

"More than ever," says Oracle Developer Champion Rolando Carrasco, "institutions, organizations, and governments are keen to generate a new crop of developers that can help them to to create something new." In today's business climate competition is tough, and high premium is placed on innovation. "But developers have a lot of tools,  a lot of abilities within reach, and the opportunity to make something that can make a competitive difference."

But the role of the developer is morphing into something new, according to Oracle ACE Director Martin Giffy D'Souza. "In the next couple years we're also going to see that the typical developer is not going to be the traditional developer that went to school, or the script kitties that just got into the business. We're going to see what is called the citizen developer. We're going to see a lot more people transition to that simply because it adds value to their job. Those people are starting to hit the limits of writing VBA macros in Excel and they want to write custom apps. I think that's what we're going to see more and more of, because we already know there's a developer job shortage."

But why is the job growth for developers outpacing that for DBAs and SysAdmins? "If you take it at a very high level, devs produce things," Martin says. "They produce value. They produce products. DBAs and IT people are maintainers. They’re both important, but the more products and solutions we can create," the more value to the business.

Oracle ACE Director Mark Rittman has spent the last couple of years working as a product manager in a start-up, building a tech platform. "I never saw a DBA there," he admits. "It was at the point that if I were to try to explain what a DBA was to people there, all of whom are uniformly half my age, they wouldn't know what I was talking about. That's because the platforms people use these days, within the Oracle ecosystem or Google or Amazon or whatever, it's all very much cloud, and it's all very much NoOPs, and it's very much the things that we used to spend ages worrying about,"

This frees developers to do what they do best. "There are far fewer people doing DBA work and SysAdmin work," Mark says. "That’s all now in the cloud. And that also means that developers can also develop now. I remember, as a BI developer working on projects, it was surprising how much of my time was spent just getting the system working in the first place, installing things, configuring things, and so on. Probably 75% of every project was just getting the thing to actually work."

Where some roles may vanish altogether, others will transform. DBAs have become data engineers or infrastructure engineers, according to Mark. "So there are engineers around and there are developers around," he observes, "but I think administrator is a role that, unless you work for one of the big cloud companies in one of those big data centers, is largely kind of managed away now."

Phil Wilkins, an Oracle ACE, has witnessed the changes. DBAs in particular, as well as network people focused on infrastructure, have been dramatically affected by cloud computing, and the ground is still shaking. "With the rise and growth in cloud adoption these days, you're going to see the low level, hard core technical skills that the DBAs used to bring being concentrated into the cloud providers, where you're taking a database as a service. They're optimizing the underlying infrastructure, making sure the database is running. But I'm just chucking data at it, so I don't care about whether the storage is running efficiently or not. The other thing is that although developers now get more freedom, and we've got NoSQL and things like that, we're getting more and more computing power, and it's accelerating at such a rate now that, where 10 years ago we used to have to really worry about the tuning and making sure the database was performant, we can now do a lot of that computing on an iPhone. So why are we worrying when we've got huge amounts of cloud and CPU to the bucketload?"

These comments represent just a fraction of the conversation captured in this latest Oracle Developer Community Podcast, in which the panelists dive deep into the forces that are shaping and re-shaping roles, and discuss their own concerns about the trends and technologies that are driving that evolution. Listen!

The Panelists

Rolando Carrasco
Oracle Developer Champion
Oracle ACE
Co-owner, Principal SOA Architect, S&P Solutions
Twitter LinkedIn

Martin Giffy D'Souza
Oracle ACE Director
Director of Innovation, Insum Solutions
Twitter LinkedIn 

Mark Rittman
Oracle ACE Director
Chief Executive Officer, MJR Analytics
Twitter LinkedIn 

Phil Wilkins
Oracle ACE
Senior Consultant, Capgemini
Twitter LinkedIn 

Related Oracle Code One Sessions

The Future of Serverless is Now: Ci/CD for the Oracle Fn Project, by Rolando Carrasco and Leonardo Gonzalez Cruz [DEV5325]

Other Related Content

Podcast: Are Microservices and APIs Becoming SOA 2.0?

Vibrant and Growing: The Current State of API Management

Video: 2 Minute Integration with Oracle Integration Cloud Service

It's Always Time to Change

Coming Soon

The next program, coming on Sept 5, will feature a discussion of "DevOps to NoOps," featuring panelists Baruch Sadogursky, Davide Fiorentino, Bert Jan Schrijver, and others TBA. Stay tuned!

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

Oracle Database on OpenShift

Yann Neuhaus - Tue, 2018-08-14 15:39
By Franck Pachot

In a previous post I described the setup of MiniShift on my laptop in order to run OpenShift for test purpose. I even pulled the Oracle Database image from the Docker Store. But the goal is to import it into OpenShift to deploy it from the Image Stream.

I start MiniShift on my laptop, specifying a larger disk (default is 20GB)

C:\Users\Franck>minishift start --disk-size 40g
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 40 GB
-- Starting Minishift VM .................................................................... OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
Importing 'openshift/origin:v3.9.0' ............. OK
Importing 'openshift/origin-docker-registry:v3.9.0' ... OK
Importing 'openshift/origin-haproxy-router:v3.9.0' ...... OK
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........................................................
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.99.105 as the host IP
Using 192.168.99.105 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
https://192.168.99.105:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin

MiniShift starts a VirtualBox VM and gets an IP address from the VirtualBox DHCP – here 192.168.99.105.
I can access the console at https://192.168.99.105:8443 and log in as developer or admin, but for the moment I'm continuing on the command line.

At any moment I can log in to the VM running OpenShift with the minishift command. Here I check the size of the disks:

C:\Users\Franck>minishift ssh
 
[docker@minishift ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/live-rw 9.8G 697M 9.0G 8% /
devtmpfs 974M 0 974M 0% /dev
tmpfs 1000M 0 1000M 0% /dev/shm
tmpfs 1000M 18M 983M 2% /run
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
/dev/sr0 344M 344M 0 100% /run/initramfs/live
/dev/sda1 39G 1.8G 37G 5% /mnt/sda1
tmpfs 200M 0 200M 0% /run/user/0
tmpfs 200M 0 200M 0% /run/user/1000

Build the Docker image

The goal is to run in OpenShift a container from an image that has been built somewhere else. In this example I'll not build one but use one provided on the Docker Store: the Oracle Database 'slim' image. For this example, I'll use the docker engine in the minishift VM, just because it is there.

I have DockerTools installed on my laptop and just want to set the environment to connect to the docker server on the minishift VM. I can get the environment from minishift:

C:\Users\Franck>minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.105:2376
SET DOCKER_CERT_PATH=C:\Users\Franck\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Here is how to directly set the environment from it:

C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Now my docker commands will connect to this docker server. Here is the related info; minishift is already running several containers there for its own usage:

C:\Users\Franck>docker info
Containers: 9
Running: 7
Paused: 0
Stopped: 2
Images: 6
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log:
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: e9c345b3f906d5dc5e8100b05ce37073a811c74a (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 3.10.0-862.6.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953GiB
Name: minishift
ID: U7IQ:TE3X:HSGK:3ES2:IO6G:A7VI:3KUU:YMBC:3ZIR:QYUL:EQUL:VFMS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pachot
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
172.30.0.0/16
127.0.0.0/8
Live Restore Enabled: false

As this example uses the Oracle Database image, I need to log in to the Docker Store to prove that I accept the licensing conditions:

C:\Users\Franck>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded

I pull the image, which takes some time because even ‘slim’ means 2GB with Oracle Database.

C:\Users\Franck>docker pull store/oracle/database-enterprise:12.2.0.1-slim
Trying to pull repository docker.io/store/oracle/database-enterprise ...
12.2.0.1-slim: Pulling from docker.io/store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for docker.io/store/oracle/database-enterprise:12.2.0.1-slim

Here is the image:

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

My minishift VM disk has increased by 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Push the image to OpenShift registry

OpenShift has an integrated container registry from which Docker images are visible to Image Streams.
Here is the address of the registry:

C:\Users\Franck>minishift openshift registry
172.30.1.1:5000

I’ll run some OpenShift commands and the path to the minishift cache for ‘oc’ can be set with:

C:\Users\Franck>minishift oc-env
SET PATH=C:\Users\Franck\.minishift\cache\oc\v3.9.0\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
 
C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

I am still connected as developer to OpenShift:

C:\Users\Franck>oc whoami
developer

and I get the login token:

C:\Users\Franck>oc whoami -t
lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc

I use this token to login to the OpenShift registry with docker in order to be able to push the image:

C:\Users\Franck>docker login -u developer -p lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc 172.30.1.1:5000
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

I create a new project to import the image to:

C:\Users\Franck>oc new-project oracle --display-name=Oracle
Now using project "oracle" on server "https://192.168.99.105:8443".
 
You can add applications to this project with the 'new-app' command. For example, try:
 
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 
to build a new example application in Ruby.

This can also be done from the GUI. Here is the project on the right:
[screenshot: CaptureOpenShiftProject]

I tag the image with the name of the registry (172.30.1.1:5000) and the name of the project (oracle) and add an image name, so that the full name is: 172.30.1.1:5000/oracle/ora122slim

C:\Users\Franck>docker tag store/oracle/database-enterprise:12.2.0.1-slim 172.30.1.1:5000/oracle/ora122slim

We can see this tagged image

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
172.30.1.1:5000/oracle/ora122slim latest 27c9559d36ec 12 months ago 2.08GB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

Note that it is the same IMAGE ID and doesn’t take more space:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Then I’m finally ready to push the image to the OpenShift docker registry:

C:\Users\Franck>docker push 172.30.1.1:5000/oracle/ora122slim
The push refers to a repository [172.30.1.1:5000/oracle/ora122slim]
066e811424fb: Pushed
99d7f2451a1a: Pushed
a2c532d8cc36: Pushed
49c80855196a: Pushed
40c24f62a02f: Pushed
latest: digest: sha256:25b0ec7cc3987f86b1e754fc214e7f06761c57bc11910d4be87b0d42ee12d254 size: 1372

This is a copy, and takes an additional 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 5.4G 33G 14% /mnt/sda1

Deploy the image

Finally, I can deploy the image as it is visible in the GUI:
[screenshot: CaptureOpenShiftImport]

I choose to deploy from the command line:

C:\Users\Franck>oc new-app --image-stream=ora122slim --name=ora122slimdeployment
--> Found image 27c9559 (12 months old) in image stream "oracle/ora122slim" under tag "latest" for "ora122slim"
 
* This image will be deployed in deployment config "ora122slimdeployment"
* Ports 1521/tcp, 5500/tcp will be load balanced by service "ora122slimdeployment"
* Other containers can access this service through the hostname "ora122slimdeployment"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/ora122slimdeployment --add ...'

--> Creating resources ...
imagestreamtag "ora122slimdeployment:latest" created
deploymentconfig "ora122slimdeployment" created
service "ora122slimdeployment" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/ora122slimdeployment'
Run 'oc status' to view your app.

[screenshot: CaptureOpenShiftDeploy]

I expose the service:

C:\Users\Franck>oc expose service ora122slimdeployment
route "ora122slimdeployment" exposed

/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

Here is one little thing to change. From the POD terminal, I can see the following error:
[screenshot: CaptureOpenShiftCrash]

The same can be read from command line:

C:\Users\Franck>oc status
In project Oracle (oracle) on server https://192.168.99.105:8443
 
http://ora122slimdeployment-oracle.192.168.99.105.nip.io to pod port 1521-tcp (svc/ora122slimdeployment)
dc/ora122slimdeployment deploys istag/ora122slim:latest
deployment #1 deployed 7 minutes ago - 0/1 pods (warning: 6 restarts)
 
Errors:
* pod/ora122slimdeployment-1-86prl is crash-looping
 
1 error, 2 infos identified, use 'oc status -v' to see details.
 
C:\Users\Franck>oc logs ora122slimdeployment-1-86prl -c ora122slimdeployment
/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

This is because, by default and for security reasons, OpenShift runs the container with a random user id. But the files are executable only by oracle:

sh-4.2$ ls -l /home/oracle/setup/dockerInit.sh
-rwxr-xr--. 1 oracle oinstall 2165 Aug 17 2017 /home/oracle/setup/dockerInit.sh
sh-4.2$

The solution is quite simple: allow the container to run with its own user id:

C:\Users\Franck>minishift addon apply anyuid
-- Applying addon 'anyuid':.
Add-on 'anyuid' changed the default security context constraints to allow pods to run as any user.
Per default OpenShift runs containers using an arbitrarily assigned user ID.
Refer to https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints and
https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines for more information.
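
For reference, a sketch of the equivalent on a regular OpenShift cluster without the minishift addon would be to grant the anyuid SCC to the service account running the pods (assuming the default service account in the oracle project):

C:\Users\Franck>oc login -u system:admin
C:\Users\Franck>oc adm policy add-scc-to-user anyuid -z default -n oracle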

Then the restart of the pod goes further:
[screenshot: CaptureOpenShiftOracle]

This Oracle Database image from the Docker Store is not really an image of an installed Oracle Database, but just a tar of Oracle Home and database files that have to be untarred.

Now, in addition to the image size I have an additional 2GB layer for the container:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 11G 28G 28% /mnt/sda1
 
C:\Users\Franck>docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 7 6 3.568GB 1.261GB (35%)
Containers 17 9 1.895GB 58.87kB (0%)
Local Volumes 0 0 0B 0B
Build Cache 0B 0B

Of course there is more to customize. The minishift VM should have more memory and the container for Oracle Database as well. We probably want to add an external volume, and export ports outside of the minishift VM.

 

The post Oracle Database on OpenShift appeared first on Blog dbi services.

Federal Court Rules that Oracle Is Entitled to a Permanent Injunction Against Rimini Street and Awards Attorneys' Fees in Copyright Suit

Oracle Press Releases - Tue, 2018-08-14 15:09
Press Release
Federal Court Rules that Oracle Is Entitled to a Permanent Injunction Against Rimini Street and Awards Attorneys' Fees in Copyright Suit

Redwood Shores, Calif.—Aug 14, 2018

Today, for the second time, a Federal Court in Nevada granted Oracle's motion for a permanent injunction against Rimini Street for years of infringement of Oracle’s copyrights. In an opinion notable for its strong language condemning Rimini Street’s actions, the Court made clear that since its inception, Rimini’s business “was built entirely on its infringement of Oracle’s copyrighted software.” The Court also highlighted Rimini's “conscious disregard” for Oracle's copyrights and Rimini's “significant litigation misconduct” in granting Oracle's motion for its attorneys' fees to be paid.

“As the Court's Order today makes clear, Rimini Street's business has been built entirely on unlawful conduct, and Rimini's executives have repeatedly lied to cover up their company's illegal acts. Rimini, which admits that it is the subject of an ongoing federal criminal investigation, has proven itself to be disreputable, and it seems very clear that it will not stop its unlawful conduct until a Court orders it to stop. Oracle is grateful that today the Federal Court in Nevada did just that,” said Dorian Daley, Oracle's Executive Vice President and General Counsel.

The Court noted that it was Rimini's brazen misconduct that enabled it to "rapidly build" its infringing business, while at the same time irreparably damaging Oracle because Rimini's very business model “eroded the bonds and trust that Oracle has with its customers.” It also stressed that for over five years of litigation, “literally up until trial”, Rimini Street denied the allegations of infringement. At trial, however, Rimini CEO Seth Ravin, who was also a defendant, changed his story and admitted for the first time that Rimini Street did in fact engage in all the infringing activities that Oracle had identified.

Finally, the Court declared that over $28M in attorneys’ fees should be awarded to Oracle because of Rimini Street’s significant litigation misconduct in this action. Rimini comments to the market that this award would have to be returned to Rimini have proven to be false and misleading, like so many of its actions and assurances to customers and others.

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

ODA database stuck in deleting status

Yann Neuhaus - Tue, 2018-08-14 15:03

Facing an internal inconsistency in the ODA derby database is very painful (see https://blog.dbi-services.com/oda-lite-what-is-this-odacli-repository/ for more info about the derby database). I recently faced a case where the database deletion was failing and the database then remained in “Deleting” status. Connecting directly to the internal derby database and doing some cleaning yourself is very risky and should be performed at your own risk. So, in most cases, a database inconsistency issue ends with an Oracle Support ticket to get their help for cleaning. Before doing so, I wanted to look closer at the issue, and I was very happy to fix it myself. I wanted to share my experience here.

Issue description

As explained in the introduction, the database deletion failed and the database remained in “Deleting” status.

[root@prod1 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
ea49c5a8-8747-4459-bb99-cd71c8c87d58     testtst1   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Deleting     80a2e501-31d8-4a5d-83db-e04dad34a7fa

Looking at the job activity log, we can see that the deletion is failing while trying to delete the FileSystem.

[root@prod1 ~]# odacli describe-job -i 50a8c1c2-686e-455e-878f-eaa537295c9f

Job details
----------------------------------------------------------------
                     ID:  50a8c1c2-686e-455e-878f-eaa537295c9f
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Failure
                Created:  July 25, 2018 9:40:17 AM CEST
                Message:  DCS-10011:Input parameter 'ACFS Device for delete' cannot be NULL.

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:17 AM CEST       Success
Database Deletion                        July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:18 AM CEST       Success
Unregister Db From Cluster               July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Kill Pmon Process                        July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Database Files Deletion                  July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Deleting FileSystem                      July 25, 2018 9:40:21 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure

I decided to have a look at why it failed on the file system deletion step, and I was very surprised to see that there was no longer any data volume for this database. This can be seen in the volinfo command output below. I am not sure what happened, but it is weird: why fail, and stop processing further, if what you want to delete no longer exists?

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/

Solution

So why not try to give the ODA what it expects to see? I therefore created the ACFS volume with the exact expected naming, and I was very happy to see that this solved the problem. There was no other relation key than the name of the volume. Let’s look at the steps I performed in detail.

Let’s create the data volume expected for the database.

ASMCMD> volcreate -G DATA -s 10G DATTESTTST1

ASMCMD> volinfo -G DATA -a
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265 
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

         Volume Name: DATTESTTST1
         Volume Device: /dev/asm/dattesttst1-265
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

Let’s create the file system for the newly created volume.

grid@prod1:/home/grid/ [+ASM1] mkfs.acfs /dev/asm/dattesttst1-265
mkfs.acfs: version                   = 12.2.0.1.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/dattesttst1-265
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Let’s check the expected mount points needed for the corresponding database.

[root@prod1 ~]# odacli describe-dbstorage -i 31d852f7-bdd0-40f5-9224-2ca139a2c3db
DBStorage details
----------------------------------------------------------------
                     ID: 31d852f7-bdd0-40f5-9224-2ca139a2c3db
                DB Name: testtst1
          DBUnique Name: testtst1_RZ1
         DB Resource ID: ea49c5a8-8747-4459-bb99-cd71c8c87d58
           Storage Type: Acfs
          DATA Location: /u02/app/oracle/oradata/testtst1_RZ1
          RECO Location: /u03/app/oracle/fast_recovery_area/
          REDO Location: /u03/app/oracle/redo/
   FLASH Cache Location:
                  State: ResourceState(status=Configured)
                Created: July 18, 2018 10:28:39 AM CEST
            UpdatedTime: July 18, 2018 10:29:01 AM CEST

We can then add and start the appropriate file system:

[root@prod1 testtst1_RZ1]# cd /u01/app/12.2.0.1/grid/bin/
[root@prod1 bin]# ./srvctl add filesystem -volume DATTESTTST1 -diskgroup DATA -path /u02/app/oracle/oradata/testtst1_RZ1 -fstype ACFS -autostart ALWAYS -mountowner oracle
[root@prod1 bin]# ./srvctl start filesystem -device /dev/asm/dattesttst1-265

Let’s check the mounted file system.

[root@prod1 bin]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
/dev/asm/dattesttst1-265
                       10G  265M  9.8G   3% /u02/app/oracle/oradata/testtst1_RZ1

Let’s now try to delete the database again. Option -fd is mandatory to force deletion.

[root@prod1 bin]# odacli delete-database -i ea49c5a8-8747-4459-bb99-cd71c8c87d58 -fd
{
  "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
  "status" : "Running",
  "message" : null,
  "reports" : [ {
    "taskId" : "TaskZJsonRpcExt_471",
    "taskName" : "Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion",
    "taskResult" : "",
    "startTime" : "July 25, 2018 10:04:24 AM CEST",
    "endTime" : "July 25, 2018 10:04:24 AM CEST",
    "status" : "Success",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_469",
    "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "July 25, 2018 10:04:24 AM CEST"
  } ],
  "createTimestamp" : "July 25, 2018 10:04:23 AM CEST",
  "resourceList" : [ ],
  "description" : "Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58",
  "updatedTime" : "July 25, 2018 10:04:23 AM CEST"
}

The database deletion is now successful.

[root@prod1 bin]# odacli describe-job -i 976c8689-a69d-4e0d-a5e0-e40a30a77d29

Job details
----------------------------------------------------------------
                     ID:  976c8689-a69d-4e0d-a5e0-e40a30a77d29
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Success
                Created:  July 25, 2018 10:04:23 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Deletion                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Unregister Db From Cluster               July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Kill Pmon Process                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Files Deletion                  July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:25 AM CEST      Success
Deleting Volume                          July 25, 2018 10:04:30 AM CEST      July 25, 2018 10:04:32 AM CEST      Success

Let’s check the volume and file system to make sure they have been removed.

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/

grid@prod1:/home/grid/ [+ASM1] df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
grid@prod1:/home/grid/ [+ASM1]

Listing the databases shows that the only database has now been deleted.

[root@prod1 bin]# odacli list-databases
DCS-10032:Resource database is not found.

To complete the test and make sure everything is OK, I created a new database, which I expected to be successful.
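
The creation command itself does not appear in the job output below; on this ODA it would look roughly like the following sketch (the flags are indicative and vary between odacli versions — check odacli create-database -h for the exact syntax — and the db home id is the one shown in the earlier listings):

[root@prod1 bin]# odacli create-database -n testtst2 -cl OLTP -s odb1s -r ACFS -dh 80a2e501-31d8-4a5d-83db-e04dad34a7fa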

[root@prod1 bin]# odacli describe-job -i cf896c7f-0675-4980-a63f-a8a2b09b1352

Job details
----------------------------------------------------------------
                     ID:  cf896c7f-0675-4980-a63f-a8a2b09b1352
            Description:  Database service creation with db name: testtst2
                 Status:  Success
                Created:  July 25, 2018 10:12:24 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:25 AM CEST      Success
Creating volume dattesttst2              July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:36 AM CEST      Success
Creating ACFS filesystem for DATA        July 25, 2018 10:12:36 AM CEST      July 25, 2018 10:12:44 AM CEST      Success
Database Service creation                July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
Database Creation                        July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Change permission for xdb wallet files   July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Place SnapshotCtrlFile in sharedLoc      July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:37 AM CEST      Success
Running DataPatch                        July 25, 2018 10:18:34 AM CEST      July 25, 2018 10:18:47 AM CEST      Success
updating the Database version            July 25, 2018 10:18:47 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
create Users tablespace                  July 25, 2018 10:18:49 AM CEST      July 25, 2018 10:18:51 AM CEST      Success



[root@prod1 bin]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
e0e8163d-dcaa-4692-85c5-24fb9fe17291     testtst2   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Configured   80a2e501-31d8-4a5d-83db-e04dad34a7fa

The post ODA database been stuck in deleting status appeared first on Blog dbi services.

Intel Processor L1TF vulnerabilities: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646

Oracle Security Team - Tue, 2018-08-14 12:00

Today, Intel disclosed a new set of speculative execution side-channel processor vulnerabilities affecting their processors.    These L1 Terminal Fault (L1TF) vulnerabilities affect a number of Intel processors, and they have received three CVE identifiers:

  • CVE-2018-3615 impacts Intel Software Guard Extensions (SGX) and has a CVSS Base Score of 7.9.

  • CVE-2018-3620 impacts operating systems and System Management Mode (SMM) running on Intel processors and has a CVSS Base Score of 7.1.

  • CVE-2018-3646 impacts virtualization software and Virtual Machine Monitors (VMM) running on Intel processors and has a CVSS Base Score of 7.1

These vulnerabilities derive from a flaw in Intel processors, in which operations performed by a processor while using speculative execution can result in a compromise of the confidentiality of data between threads executing on a physical CPU core. 

As with other variants of speculative execution side-channel issues (i.e., Spectre and Meltdown), successful exploitation of L1TF vulnerabilities requires the attacker to have the ability to run malicious code on the targeted systems. Therefore, L1TF vulnerabilities are not directly exploitable against servers which do not allow the execution of untrusted code.

While Oracle has not yet received reports of successful exploitation of this speculative execution side-channel issue “in the wild,” Oracle has worked with Intel and other industry partners to develop technical mitigations against these issues. 

The technical steps Intel recommends to mitigate L1TF vulnerabilities on affected systems include:

  • Ensuring that affected Intel processors are running the latest Intel processor microcode. Intel reports that the microcode update  it has released for the Spectre 3a (CVE-2018-3640) and Spectre 4 (CVE-2018-3639) vulnerabilities also contains the microcode instructions which can be used to mitigate the L1TF vulnerabilities. Updated microcode by itself is not sufficient to protect against L1TF.

  • Applying the necessary OS and virtualization software patches against affected systems. To be effective, OS patches will require the presence of the updated Intel processor microcode.  This is because updated microcode by itself is not sufficient to protect against L1TF.  Corresponding OS and virtualization software updates are also required to mitigate the L1TF vulnerabilities present in Intel processors.

  • Disabling Intel Hyper-Threading technology in some situations. Disabling HT alone is not sufficient for mitigating L1TF vulnerabilities. Disabling HT will result in significant performance degradation.
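
On Linux, kernels that already contain the L1TF fixes expose the current mitigation status through sysfs. A quick check (a minimal sketch; the l1tf entry only exists on kernels that include the corresponding patches):

cat /sys/devices/system/cpu/vulnerabilities/l1tf
grep . /sys/devices/system/cpu/vulnerabilities/*     # status of all known speculative execution issues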

In response to the various L1TF Intel processor vulnerabilities:

Oracle Hardware

  • Oracle recommends that administrators of x86-based systems carefully assess the L1TF threat for their systems and implement the appropriate security mitigations. Oracle will provide specific guidance for Oracle Engineered Systems.

  • Oracle has determined that Oracle SPARC servers are not affected by the L1TF vulnerabilities.

  • Oracle has determined that Oracle Intel x86 Servers are not impacted by vulnerability CVE-2018-3615 because the processors in use with these systems do not make use of Intel Software Guard Extensions (SGX).

Oracle Operating Systems (Linux and Solaris) and Virtualization

  • Oracle has released security patches for Oracle Linux 7, Oracle Linux 6 and Oracle VM Server for X86 products.  In addition to OS patches, customers should run the current version of the Intel microcode to mitigate these issues. 

  • Oracle Linux customers can take advantage of Oracle Ksplice to apply these updates without needing to reboot their systems.

  • Oracle has determined that Oracle Solaris on x86 is not affected by vulnerabilities CVE-2018-3615 and CVE-2018-3620 regardless of the underlying Intel processor on these systems. It is, however, affected by vulnerability CVE-2018-3646 when using Kernel Zones. The necessary patches will be provided at a later date.

  • Oracle Solaris on SPARC is not affected by the L1TF vulnerabilities.

Oracle Cloud

  • The Oracle Cloud Security and DevOps teams continue to work in collaboration with our industry partners on implementing the necessary mitigations to protect customer instances and data across all Oracle Cloud offerings: Oracle Cloud (IaaS, PaaS, SaaS), Oracle NetSuite, Oracle GBU Cloud Services, Oracle Data Cloud, and Oracle Managed Cloud Services.  

  • Oracle’s first priority is to mitigate the risk of tenant-to-tenant attacks.

  • Oracle will notify and coordinate with the affected customers for any required maintenance activities as additional mitigating controls continue to be implemented.

  • Oracle has determined that a number of Oracle's cloud services are not affected by the L1TF vulnerabilities.  They include Autonomous Data Warehouse service, which provides a fully managed database optimized for running data warehouse workloads, and Oracle Autonomous Transaction Processing service, which provides a fully managed database service optimized for running online transaction processing and mixed database workloads.  No further action is required by customers of these services as both were found to require no additional mitigating controls based on service design and are not affected by the L1TF vulnerabilities (CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646).   

  • Bare metal instances in Oracle Cloud Infrastructure (OCI) Compute offer full control of a physical server and require no additional Oracle code to run.  By design, the bare metal instances are isolated from other customer instances on the OCI network whether they be virtual machines or bare metal.  However, for customers running their own virtualization stack on bare metal instances, the L1TF vulnerability could allow a virtual machine to access privileged information from the underlying hypervisor or other VMs on the same bare metal instance.  These customers should review the Intel recommendations about vulnerabilities CVE-2018-3615, CVE-2018-3620, CVE-2018-3646 and make changes to their configurations as they deem appropriate.

Note that many industry experts anticipate that new techniques leveraging these processor flaws will continue to be disclosed for the foreseeable future.  Future speculative side-channel processor vulnerabilities are likely to continue to impact primarily operating systems and virtualization platforms, as addressing them will likely require software update and microcode update.  Oracle therefore recommends that customers remain on current security release levels, including firmware, and applicable microcode updates (delivered as Firmware or OS patches), as well as software upgrades. 

For more information:

JRE 1.8.0_181 Certified with Oracle EBS 12.1 and 12.2

Steven Chan - Tue, 2018-08-14 11:29

Java logo

Java Runtime Environment 1.8.0_181 (a.k.a. JRE 8u181-b13) and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows clients.

Java Web Start is available

This JRE release may be run with either the Java plug-in or Java Web Start.

Java Web Start is certified with EBS 12.1 and 12.2 for Windows clients.  

Considerations if you're also running JRE 1.6 or 1.7

JRE 1.7 and JRE 1.6 updates included an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available for those Java releases. It is expected that Java deployment technology will not be packaged in later Java 6 or 7 updates.

JRE 1.7.0_161 (and later 1.7 updates) and 1.6.0_171 (and later 1.6 updates) can still run Java content.  They cannot launch Java.

End-users who only have JRE 1.7 or JRE 1.6 -- and not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.7 or 1.6 for compatibility with other third-party Java applications must also install the JRE 1.8.0_152 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_152 or later JRE 1.8 updates are installed on a Windows desktop, it can be used to launch JRE 1.7 and JRE 1.6. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in October 2017.
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

  • You can continue to do so until the end of Java SE 8 Deployment Technology Premier Support in March 2019
  • You can obtain JRE 8 updates from the Java SE download site or from My Oracle Support. See:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that there are different impacts of enabling JRE Auto-Update depending on your current JRE release installed, despite the availability of ongoing support for JRE 6 and 7 for EBS customers; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January 2015 Critical Patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

JRE 8 is certified for Mac OS X 10.8 (Mountain Lion), 10.9 (Mavericks), 10.10 (Yosemite), and 10.11 (El Capitan) desktops.  For details, see:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer will have a delay of around 20 seconds before the applet starts to load (Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74. Internet Explorer users are recommended to upgrade to this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8 and can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

References

Related Articles
Categories: APPS Blogs

JRE 1.7.0_191 Certified with Oracle E-Business Suite 12.1 and 12.2

Steven Chan - Tue, 2018-08-14 11:24

Java logo

Java Runtime Environment 1.7.0_191 (a.k.a. JRE 7u191-b09) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

What's new in this update?

This update includes an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available as of this Java release. It is expected that Java deployment technology will not be packaged in later Java 7 updates.

JRE 1.7.0_161  and later JRE 1.7 updates can still run Java content.  These releases cannot launch Java.

End-users who only have JRE 1.7.0_161 and later JRE 1.7 updates -- but not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.7 for compatibility with other third-party Java applications must also install the October 2017 CPU release JRE 1.8.0_151 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_151 or a later JRE 1.8 update is installed on a Windows desktop, it can be used to launch JRE 1.7.0_161 and later updates on the JRE 1.7 codeline. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

How can EBS customers obtain Java 7?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

Java Auto-Update Mechanism

With the release of the January 2015 Critical patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

What do Mac users need?

Mac users running Mac OS X 10.7 (Lion), 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE ("Deployment Technology") is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 (excluding Deployment Technology) is covered by Extended Support until December 2018.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 (excluding Deployment Technology) by December 2018. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

Known Issues

When using Internet Explorer, JRE 1.7.0_01 had a delay of around 20 seconds before the applet started to load. This issue is fixed in JRE 1.7.0_95.

References

Related Articles
Categories: APPS Blogs

JRE 1.6.0_201 Certified with Oracle E-Business Suite

Steven Chan - Tue, 2018-08-14 11:20

Java logo

The latest Java Runtime Environment 1.6.0_201 (a.k.a. JRE 6u201-b07) and later updates on the JRE 6 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

What's new in this update?

This update includes an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available as of the Java 1.6.0_171 release. It is expected that Java deployment technology will not be packaged in later Java 6 updates.

JRE 1.6.0_171 and later JRE 1.6 updates can still run Java content.  These releases cannot launch Java.

End-users who only have JRE 1.6.0_171 and later JRE 1.6 updates -- but not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.6 for compatibility with other third-party Java applications must also install the October 2017 PSU release JRE 1.8.0_152 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_152 or a later JRE 1.8 update is installed on a Windows desktop, it can be used to launch JRE 1.6.0_171 and later updates on the JRE 1.6 codeline. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?
32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Deploying JRE documentation for your EBS release for details.

How can EBS customers obtain Java 6 updates?

Java 6 is now available only via My Oracle Support for E-Business Suite users.  You can find links to this release, including Release Notes, documentation, and the actual Java downloads here:

Both JDK and JRE packages are contained in a single combined download after 6u45.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

Mac users running Mac OS X 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE ("Deployment Technology") is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 (excluding Deployment Technology) is covered by Extended Support until December 2018.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by December 2018. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is the latest Java release certified with E-Business Suite 12 servers.  See:

References

Related Articles
Categories: APPS Blogs

Upgrade EM 13.2 to EM 13.3

Yann Neuhaus - Tue, 2018-08-14 10:05

As Enterprise Manager Cloud Control 13.3 has been out for a few days, I decided to test the upgrade procedure from Enterprise Manager Cloud Control 13.2.

There are a few prerequisites to go through first:

First, copy the emkey:

oracle@localhost:/home/oracle/ [oms13c] emctl config emkey 
-copy_to_repos_from_file -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=em13c)(PORT=1521)))(CONNECT_DATA=
(SERVICE_NAME=EMREP13C)))"' -repos_user sysman -repos_pwd manager1 
-emkey_file /home/oracle/oms13c/sysman/config/emkey.ora
Oracle Enterprise Manager Cloud Control 13c Release 2  
Copyright (c) 1996, 2016 Oracle Corporation.  All rights reserved.
Enter Admin User's Password : 
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running 
"emctl config emkey -remove_from_repos".

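As the output above reminds us, copying the key leaves the EMKey unsecured in the repository. Once the upgrade has completed successfully, it should be secured again — run from the new OMS home, and it will prompt for the repository admin password, just like the copy command did:

emctl config emkey -remove_from_repos
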
Check that the “_allow_insert_with_update_check” parameter is set to TRUE in the repository database:

SQL> show parameter _allow

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
_allow_insert_with_update_check      boolean	 TRUE
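
If it is not, you can set it before starting the upgrade — a sketch, assuming the repository instance uses an spfile (the double quotes around the underscore parameter are required):

SQL> alter system set "_allow_insert_with_update_check"=TRUE scope=both;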

Just before running the upgrade procedure, you have to stop the OMS with the command emctl stop oms -all and stop the agent with the classic emctl stop agent command.
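
Note that two different emctl binaries are involved: the one from the OMS home stops the OMS and the one from the agent home stops the agent. A minimal sketch (the OMS home is the one used throughout this post; the agent home path is only an example, so adjust both to your installation):

/home/oracle/oms13c/bin/emctl stop oms -all
/u01/app/oracle/agent13c/agent_inst/bin/emctl stop agent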

I would also recommend running a full RMAN backup of the repository database.
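
A minimal sketch of such a backup, assuming the environment points to the EMREP13C repository database and a backup destination is already configured:

oracle@localhost:/home/oracle/ [EMREP13C] rman target /
RMAN> backup database plus archivelog;
RMAN> exit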

Then once you have unzipped the binaries you have downloaded, you simply run the command:

oracle@localhost:/home/oracle/software/ [oms13c] ./em13300_linux64.bin 
0%...........................................................................100%
Launcher log file is /tmp/OraInstall2018-08-13_10-45-07AM/
launcher2018-08-13_10-45-07AM.log.
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2591.940 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 5567 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed 
Preparing to launch the Oracle Universal Installer from 
/tmp/OraInstall2018-08-13_10-45-07AM
====Prereq Config Location main=== 
/tmp/OraInstall2018-08-13_10-45-07AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2018-08-13_10-45-07AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2018-08-13_10-45-07AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_WIQ10z/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/home/oracle/software
EMFileLoc:/tmp/OraInstall2018-08-13_10-45-07AM/oui/em/
ScratchPathValue :/tmp/OraInstall2018-08-13_10-45-07AM

Picture1

Picture2

I skipped the Updates

Picture3png

The checks are successful.

Picture4

We upgrade an existing Enterprise Manager system and enter the existing Middleware home.

Picture5

We enter the new Middleware home.

Picture6

We enter the sys and sysman passwords.

Picture7

We can select additional plug-ins

Picture8

We enter the Weblogic password

Picture9

We do not share a location for Oracle BI Publisher, but we enable BI Publisher.

Picture11

We choose the default configuration ports.

Picture12

Picture13

At this point you can go and get a few coffees, because the upgrade procedure takes a long time…

Picture14

Just before the end of the upgrade process, you have to run the allroot.sh script:

[root@localhost oms133]# ./allroot.sh

Starting to execute allroot.sh ......... 

Starting to execute /u00/app/oracle/oms133/root.sh ......
/etc exist
/u00/app/oracle/oms133
Finished execution of  /u00/app/oracle/oms133/root.sh ......

Picture15

The upgrade is successful :=)

Picture16

But the upgrade is not finished yet: you have to restart and upgrade the management agent and delete the old OMS installation.

In order to upgrade the agent, you select the Upgrade agent from the tool menu:

Picture17

But I had a problem with my agent in version 13.2. The agent was in a non-upgradable state, and Oracle recommended running emctl control agent runCollection <target>:oracle_home oracle_home_config, but the command did not work, returning: EMD runCollection error:no target collection

So I decided to delete the agent and to manually install a new one following the classic GUI method.

The agent in version 13.3 is now up and running:

Picture18

As in previous Enterprise Manager versions, the deinstallation is very easy. You only have to check that no old processes are still running:

oracle@localhost:/home/oracle/oms13c/oui/bin/ [oms13c] ps -ef | grep /home/oracle/oms13c
oracle    9565 15114  0 11:51 pts/0    00:00:00 grep --color=auto /home/oracle/oms13c

Then we simply delete the old OMS HOME:

oracle@localhost:/home/oracle/ [oms13c] rm -rf oms13c

There are not that many new features in Enterprise Manager 13.3. They concern the framework and infrastructure, Middleware management, Cloud management and Database management. You can have a look at those new features here:

https://docs.oracle.com/cd/cloud-control-13.3/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Even if the upgrade procedure took a long time, I did not encounter any blocking errors. The upgrade procedure is much the same as before.

Furthermore, with Enterprise Manager 13.3 we have support for monitoring and management of Oracle Database 18c:

Picture19

The post Upgrade EM 13.2 to EM 13.3 appeared first on Blog dbi services.

Start-up Uses Oracle Cloud to Launch AI-Powered Hub for Social Networks

Oracle Press Releases - Tue, 2018-08-14 10:00
Press Release
Start-up Uses Oracle Cloud to Launch AI-Powered Hub for Social Networks Virtual Artifacts’ new Hibe hub connects diverse mobile apps, enabling consumers to stick with their social platform of choice

Redwood Shores, Calif.—Aug 14, 2018

Oracle today announced Virtual Artifacts has launched its mobile application network, Hibe, with Oracle Cloud. The company has developed Hibe as a new social network for mobile applications that lets consumers communicate with each other from their social platform of choice. To prepare for rapid growth, Virtual Artifacts invested in Oracle Cloud, including Oracle Autonomous Database, Oracle Cloud Platform, and Oracle Cloud Infrastructure.

Hibe helps different mobile applications to seamlessly communicate with each other without fear of data or privacy spillage. Built on an AI-driven proprietary privacy engine, users are able to easily connect, interact, and transact together, each from their own favorite communication tools. The new network removes the inconvenience of having to switch between applications by keeping communications synched and in one place.

"With 4.7 billion users on 6 million mobile apps, our ability to grow our database exponentially and globally was critical,” said Stéphane Lamoureux, COO, Virtual Artifacts. “We wanted a cloud provider that shares our values on privacy, is agile, and can work closely with us to build specific solutions. Oracle was the only vendor that could meet all of our requirements. With Oracle Autonomous Database, we no longer need to worry about manual, time-intensive tasks, such as database patching, upgrading, and tuning.”

With Oracle Autonomous Database, the industry’s first self-driving, self-securing, and self-repairing database, Virtual Artifacts can avoid the complexities of database configuration, tuning, and administration and instead focus on innovation. Oracle Autonomous Data Warehouse Cloud provides an easy-to-set-up and use data store for Hibe data, enabling Virtual Artifacts to better understand customer behavior.

The Hibe platform uses a number of Oracle Cloud Platform and Infrastructure services to support its current operations and anticipated growth. Using Oracle Mobile Cloud’s AI-driven intelligent bot on Hibe, Virtual Artifacts will be able to answer developers’ questions and make it easy for app providers to access the platform. The integration of Oracle’s mobile applications and chatbot technology on Hibe will also enable customers to directly engage with consumers, making it possible to quickly understand key audiences and improve the mobile experience.

Additionally, Virtual Artifacts will work with Oracle to develop the Hibe Marketplace, including an advertising engine to match items with interested consumers connected to Hibe through their respective mobile applications. The Hibe platform will also host and distribute content. By using Oracle’s blockchain platform, Virtual Artifacts and other content contributors on the platform will be able to track usage, distribution, and copyright attribution to help ensure it is being used and credited properly.

“The launch of the Hibe platform showcases the capabilities of Oracle’s Autonomous Cloud,” said Christopher Chelliah, group vice president and chief architect, Oracle Asia Pacific. “We essentially become the back end globally, leaving Virtual Artifacts to concentrate on the Hibe platform. This means that as a native cloud startup, we are supporting Virtual Artifacts from zero-to-production, and beyond. The combination of the strength of Oracle Cloud and Virtual Artifacts’ innovative privacy engine has the potential to revolutionize the app-to-app ecosystem.”

Contact Info
Dan Muñoz
Oracle
650.506.2904
dan.munoz@oracle.com
Jesse Caputo
Oracle
650.506.5967
jesse.caputo@oracle.com
About Virtual Artifacts

Based in Montreal, Canada, Virtual Artifacts is committed to delivering privacy-centric technologies that bridge the gap between online and real-life interactions and fundamentally transform how individuals, brands, and organizations meet, connect, communicate and transact online. These proprietary technologies both protect individual privacy and disrupt many online industries, including ecommerce, advertising, shipping, mobile and the manner in which brands manage key relationships with customers. Over the last 10 years, the company has developed decision-making AI engines and a centralized blockchain-like data structure to host, contextualize, distribute and ensure confidentiality of online activities powered by the company’s technologies. For more information about Virtual Artifacts, please visit us at www.virtualartifacts.com and www.hibe.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Dan Muñoz

  • 650.506.2904

Jesse Caputo

  • 650.506.5967

Licensable targets and Management Packs with EM13c

Yann Neuhaus - Tue, 2018-08-14 09:42

When you add a new target in Enterprise Manager 13c, the management packs are enabled by default. This could be a problem in case of an LMS audit, and to avoid any issue you have to manually disable those management packs.

If, like me, you have recently moved your database infrastructure to a new one and have to add a hundred targets, you will have to click some hundreds of times on the management packs page in EM13c.

As you can see I added a new Oracle Database target and all the management packs are enabled:

mgmt1

It is possible to define the way EM13c manages licensable targets. On the Management Packs page, you select Auto Licensing, then select the packs you want to disable, and the next time you add a new target (Oracle database, host or WebLogic), only the packs you have left enabled will be set for the targets you add:

mgmt2

I decided to keep enabled only the Database Tuning Pack and Database Diagnostics Pack.

mgmt3

And now the management packs are correct when I add a new database target:

mgmt4

I had missed this feature for a long time; it would have saved me a lot of time otherwise spent deactivating the packs in Enterprise Manager Cloud Control :=)

The post Licensable targets and Management Packs with EM13c appeared first on Blog dbi services.

Oracle Recognized as a Leader in the 2018 Gartner Magic Quadrant for Web Content Management

Oracle Press Releases - Tue, 2018-08-14 07:00
Press Release
Oracle Recognized as a Leader in the 2018 Gartner Magic Quadrant for Web Content Management For the second consecutive year, Oracle positioned as a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Aug 14, 2018

Oracle today announced that it has been named a Leader in Gartner’s 2018 “Magic Quadrant for Web Content Management” report1 for the second consecutive year. The recognition is further validation of Oracle’s continued leadership in this category and unique approach to helping enterprises streamline content management and build immersive digital experiences for users.

“As the world increasingly adopts internet and mobile-first strategies, the ability to create a cohesive, seamless experience across an enterprise’s properties is critical,” said Amit Zavery, executive vice president, Oracle Cloud Platform. “Customers recognize the importance of their content and the benefits of Oracle Content and Experience Cloud, including its expansive features and ability to easily manage content and deliver exceptional user experiences across all platforms.”

According to Gartner, “Leaders should drive market transformation. They have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of digital business. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can:

  • Demonstrate enterprise deployments
  • Offer integration with other business applications and content repositories
  • Support multiple vertical and horizontal contexts”

Oracle Content and Experience Cloud is the modern platform for driving omni-channel content and workflow management as well as rich experiences to optimize customer journeys, and partner and employee engagement. It is the only solution of its kind available with out-of-the-box integrations with Oracle on-premises and SaaS solutions. With Oracle Content and Experience Cloud, organizations can enable content collaboration, deliver consistent experiences across online properties, and manage media-centric content storage all within one central content hub. In addition, Oracle’s capabilities extend beyond the typical role of content management, providing low-code development tools for building digital experiences that leverage a service catalog of data connections.

Access a complimentary copy of Gartner’s 2018 “Magic Quadrant for Web Content Management” here.

1. Gartner, “Magic Quadrant for Web Content Management,” by Mick MacComascaigh, Jim Murphy, July 30, 2018

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Jesse Caputo
Oracle
+1.650.506.5697
jesse.caputo@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Jesse Caputo

  • +1.650.506.5697

Oracle Helps Organizations Manage International Trade More Efficiently

Oracle Press Releases - Tue, 2018-08-14 06:30
Press Release
Oracle Helps Organizations Manage International Trade More Efficiently New releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud help customers reduce costs, streamline trade regulatory compliance, and accelerate customer fulfillment

Redwood Shores, Calif.—Aug 14, 2018

To help customers succeed in an increasingly complex global economy, Oracle today introduced new releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud. The new releases enable customers who seek to reduce costs, streamline compliance with global trade regulations, and accelerate customer fulfillment, to manage these significant needs on a single, integrated platform.

The latest releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud provide real-time, data-driven insights into shipment routes and automated event handling. They also offer expanded regulatory support for fast, accurate screening and customs declarations. For example, Nahdi Medical Company has been able to improve truck utilization by five to ten percent since implementing the latest release of Oracle Transportation Management.

“We now have greater visibility into shipment demands and delivery status, which means we have the flexibility to quickly re-do plans as new requirements arise,” said Sayed Al-Sayed, supply chain applications manager, Nahdi Medical Company. “The accuracy of Oracle Transportation Management Cloud’s recommended plans and route optimizations has enabled us to increase truck utilization rates, which has allowed us to cut time on shipments, while also saving money.”

Enhancements to Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud include:

  • Enhanced routing: Enables customers to make better decisions when routing shipments by accounting for factors such as historic traffic patterns, hazardous materials, and tolls when planning shipments

  • Expanded regulatory support: Enables customers to better meet their global trade needs in today’s dynamically changing, regulatory environment by supporting expanded regulatory content, allowing more accurate screening, and providing enhancements for summary customs declarations

  • Advanced planning: Enables customers to better map out and improve transportation planning in end-to-end, outbound order fulfillment by providing enhancements to sample integrated flows with Oracle Order Management Cloud and Oracle Inventory Management Cloud

  • Driver-oriented features: Enables customers to improve the handling of assignments for shift-based drivers by providing enhanced support for planning and execution of private and dedicated fleets, including driver-oriented workflow in the OTM Mobile App

  • IoT Fleet Monitoring integration: Enables customers to have real-time visibility into a shipment’s location through automatic event handling

  • UX and collaboration: Enables customers to have a simpler and more configurable user interface, including integration with Oracle Content and Experience Cloud, which streamlines collaboration with peers

  • Graphical diagnostics tool: Enables customers to easily research and resolve questions in real time that may arise from the shipment planning processes by providing them with a visual diagnostic tool, which highlights areas that warrant further investigation

“The global trade and logistics landscape is dramatically changing and organizations need to be able to accommodate a complex web of regulations and customer expectations for each shipment in order to be competitive,” said Derek Gittoes, vice president, SCM Product Strategy, Oracle. “The combination of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud gives our customers an innovative platform to reduce complexity and effectively improve the efficiency of their global trade compliance and transport operations.”

The enhancements further extend Oracle’s leadership position in the transportation management category. Oracle Transportation Management Cloud provides a single platform for transportation orchestration across the entire supply chain. Oracle Global Trade Management Cloud offers unparalleled visibility and control over orders and shipments to reduce costs, automate processes, streamline compliance, and improve service delivery.

Contact Info
Vanessa Johnson
Oracle
+1650.607.1692
vanessa.n.johnson@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle. 

Using the managed PostgreSQL service in Azure

Yann Neuhaus - Tue, 2018-08-14 00:59

In the last post we had a look at how you can bring up a customized PostgreSQL instance in the Azure cloud. Now I want to check what you can do with the managed service. From a managed service I expect that I can bring up a PostgreSQL instance quickly and easily and that I can add replicas on demand. Let's see what is there and how you can use it.

Of course we need to log in again:

dwe@dwe:~$ cd /var/tmp
dwe@dwe:/var/tmp$ az login

The az command for working with PostgreSQL is simply “postgres”:

dwe@dwe:~$ az postgres --help

Group
    az postgres : Manage Azure Database for PostgreSQL servers.

Subgroups:
    db          : Manage PostgreSQL databases on a server.
    server      : Manage PostgreSQL servers.
    server-logs : Manage server logs.

It does not look like we can do much, but you never know, so let's bring up an instance. As before, we need a resource group first:

dwe@dwe:~$ az group create --name PGTEST --location "westeurope"
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST",
  "location": "westeurope",
  "managedBy": null,
  "name": "PGTEST",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}
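
If you want to double check that the group really is there before creating anything in it, az group show returns the same JSON again (a trivial sketch, not part of the original run):

# verify the resource group exists
az group show --name PGTEST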

Let's try to bring up an instance with a little storage (512 MB), SSL enabled, and the standard postgres user:

dwe@dwe:~$ az postgres server create --name PG1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 512 --admin-user postgres --admin-password xxxxx --location westeurope
Deployment failed. Correlation ID: e3cd6d04-3557-4c2a-b70f-7c11a61c395d. Server name 'PG1' cannot be empty or null. It can only be made up of lowercase letters 'a'-'z', the numbers 0-9 and the hyphen. The hyphen may not lead or trail in the name.

OK, it seems upper-case letters are not allowed, so let's try again:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 512 --admin-user postgres --admin-password postgres --location westeurope
Deployment failed. Correlation ID: e50ca5d6-0e38-48b8-8015-786233c0d103. The storage size of 512 MB does not meet the minimum required storage of 5120 MB.

OK, we need a minimum of 5120 MB of storage, so once more:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 5120 --admin-user postgres --admin-password postgres --location westeurope
Deployment failed. Correlation ID: 470975ce-1ee1-4531-8703-55947772fb51. Password validation failed. The password does not meet policy requirements because it is not complex enough.

This check is good, as it at least rejects the postgres/postgres combination. One more try with a better password:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 5120 --admin-user postgres --admin-password "xxx" --location westeurope
{
  "administratorLogin": "postgres",
  "earliestRestoreDate": "2018-08-13T12:30:10.763000+00:00",
  "fullyQualifiedDomainName": "mymanagedpg1.postgres.database.azure.com",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1",
  "location": "westeurope",
  "name": "mymanagedpg1",
  "resourceGroup": "PGTEST",
  "sku": {
    "capacity": 2,
    "family": "Gen4",
    "name": "B_Gen4_2",
    "size": null,
    "tier": "Basic"
  },
  "sslEnforcement": "Enabled",
  "storageProfile": {
    "backupRetentionDays": 7,
    "geoRedundantBackup": "Disabled",
    "storageMb": 5120
  },
  "tags": null,
  "type": "Microsoft.DBforPostgreSQL/servers",
  "userVisibleState": "Ready",
  "version": "9.6"
}

Better. What I am not happy with is that the default seems to be PostgreSQL 9.6; PostgreSQL 10 has been out for around a year now and should definitely be the default. In the portal the instance looks like this, and there you also find the information required for connecting to it:
(Screenshot: overview of the mymanagedpg1 server in the Azure portal, including the connection details)
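
As a side note, the default major version can be overridden at creation time with the --version flag. This is only a sketch (I did not run it here, and the availability of version 10 may depend on the region); mymanagedpg2 is just a placeholder name:

# request PostgreSQL 10 instead of the 9.6 default (sketch, not run here)
az postgres server create --name mymanagedpg2 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 5120 --admin-user postgres --admin-password "xxx" --location westeurope --version 10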

So let's try to connect:

dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1
psql: FATAL:  no pg_hba.conf entry for host "x.x.x.xx", user "postgres", database "postgres@mymanagedpg1", SSL on
FATAL:  SSL connection is required. Please specify SSL options and retry.
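
The second FATAL line is psql retrying without SSL after the first, SSL-encrypted attempt was rejected. Forcing SSL on the client side avoids that second round trip; a minimal sketch (the firewall rule below is still required for the connection to actually succeed):

# force SSL from the client instead of relying on the default "prefer" mode
psql "host=mymanagedpg1.postgres.database.azure.com user=postgres@mymanagedpg1 dbname=postgres sslmode=require"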

How do we manage that with the managed PostgreSQL service? There is no az command to modify pg_hba.conf; what we need to do instead is create a firewall rule:

dwe@dwe:~$ az postgres server firewall-rule create -g PGTEST -s mymanagedpg1 -n allowall --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255
{
  "endIpAddress": "255.255.255.255",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/firewallRules/allowall",
  "name": "allowall",
  "resourceGroup": "PGTEST",
  "startIpAddress": "0.0.0.0",
  "type": "Microsoft.DBforPostgreSQL/servers/firewallRules"
}

Of course you should not open the instance to the whole world as I am doing here (a more restrictive sketch follows after the connection test). Once the rule is in place, connections do work:

dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1 postgres
Password for user postgres@mymanagedpg1: 
psql (9.5.13, server 9.6.9)
WARNING: psql major version 9.5, server major version 9.6.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> 
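
For reference, the better practice is a rule that only covers your own public address; a sketch, assuming 203.0.113.10 is the client's IP (replace it with your real address):

# restrict access to a single client IP instead of the whole IPv4 range
az postgres server firewall-rule create -g PGTEST -s mymanagedpg1 -n allowclient --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10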

There is an additional database called “azure_maintenance” and we are not allowed to connect to it:

postgres=> \l
                                                               List of databases
       Name        |      Owner      | Encoding |          Collate           |           Ctype            |          Access privileges          
-------------------+-----------------+----------+----------------------------+----------------------------+-------------------------------------
 azure_maintenance | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | azure_superuser=CTc/azure_superuser
 postgres          | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | 
 template0         | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | =c/azure_superuser                 +
                   |                 |          |                            |                            | azure_superuser=CTc/azure_superuser
 template1         | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | =c/azure_superuser                 +
                   |                 |          |                            |                            | azure_superuser=CTc/azure_superuser
(4 rows)
postgres=> \c azure_maintenance
FATAL:  permission denied for database "azure_maintenance"
DETAIL:  User does not have CONNECT privilege.
Previous connection kept

The minor release is one behind, but as the latest minor release only came out this week, that seems fine:

postgres=> select version();
                           version                           
-------------------------------------------------------------
 PostgreSQL 9.6.9, compiled by Visual C++ build 1800, 64-bit
(1 row)

postgres=> 

I would probably not compile PostgreSQL with “Visual C++”, but given that we are using a Microsoft product it is no surprise that we are running on Windows:

postgres=> select name,setting from pg_settings where name = 'archive_azure_location';
          name          |          setting          
------------------------+---------------------------
 archive_azure_location | c:\BackupShareDir\Archive
(1 row)

… and the PostgreSQL source code was modified as this parameter does not exist in the community version.

Access to the server logs is quite easy:

dwe@dwe:~$ az postgres server-logs list --resource-group PGTEST --server-name mymanagedpg1
[
  {
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/logFiles/postgresql-2018-08-13_122334.log",
    "lastModifiedTime": "2018-08-13T12:59:26+00:00",
    "logFileType": "text",
    "name": "postgresql-2018-08-13_122334.log",
    "resourceGroup": "PGTEST",
    "sizeInKb": 6,
    "type": "Microsoft.DBforPostgreSQL/servers/logFiles",
    "url": "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_122334.log?sv=2015-04-05&sr=f&sig=ojGG2km5NFrfQ8dJ0btz8bhmwNMe0F7oq0iTRum%2FjJ4%3D&se=2018-08-13T14%3A06%3A16Z&sp=r"
  },
  {
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/logFiles/postgresql-2018-08-13_130000.log",
    "lastModifiedTime": "2018-08-13T13:00:00+00:00",
    "logFileType": "text",
    "name": "postgresql-2018-08-13_130000.log",
    "resourceGroup": "PGTEST",
    "sizeInKb": 0,
    "type": "Microsoft.DBforPostgreSQL/servers/logFiles",
    "url": "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_130000.log?sv=2015-04-05&sr=f&sig=k8avZ62KyLN8RW0ZcIigyPZa40EKNBJvNvneViHjyeI%3D&se=2018-08-13T14%3A06%3A16Z&sp=r"
  }
]

We can just download the logs and have a look at them:

dwe@dwe:~$ wget "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_122334.log?sv=2015-04-05&sr=f&sig=Mzy2dQ%2BgRPY8lfkUAP5X%2FkXSxoxWSwrphy7BphaTjLk%3D&se=2018-08-13T14%3A07%3A29Z&sp=r" 
dwe@dwe:~$ more postgresql-2018-08-13_122334.log\?sv\=2015-04-05\&sr\=f\&sig\=Mzy2dQ%2BgRPY8lfkUAP5X%2FkXSxoxWSwrphy7BphaTjLk%3D\&se\=2018-08-13T14%3A07%3A29Z\&sp\=r
2018-08-13 12:23:34 UTC-5b717845.6c-LOG:  could not bind IPv6 socket: A socket operation was attempted to an unreachable host.
	
2018-08-13 12:23:34 UTC-5b717845.6c-HINT:  Is another postmaster already running on port 20686? If not, wait a few seconds and retry.
2018-08-13 12:23:34 UTC-5b717846.78-LOG:  database system was shut down at 2018-08-13 12:23:32 UTC
2018-08-13 12:23:35 UTC-5b717846.78-LOG:  database startup complete in 1 seconds, startup began 2 seconds after last stop
...
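
Instead of copying the signed URL manually, the CLI can also fetch a log file directly by name; a sketch using the file name from the list output above (I did not test this variant here):

# download a server log file by name (sketch, not run here)
az postgres server-logs download --resource-group PGTEST --server-name mymanagedpg1 --name postgresql-2018-08-13_122334.log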

The PostgreSQL configuration is accessible quite easily:

dwe@dwe:~$ az postgres server configuration list --resource-group PGTEST --server-name mymanagedpg1 | head -20
[
  {
    "allowedValues": "on,off",
    "dataType": "Boolean",
    "defaultValue": "on",
    "description": "Enable input of NULL elements in arrays.",
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/array_nulls",
    "name": "array_nulls",
    "resourceGroup": "PGTEST",
    "source": "system-default",
    "type": "Microsoft.DBforPostgreSQL/servers/configurations",
    "value": "on"
  },
  {
    "allowedValues": "safe_encoding,on,off",
    "dataType": "Enumeration",
    "defaultValue": "safe_encoding",
    "description": "Sets whether \"\\'\" is allowed in string literals.",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/backslash_quote",
    "name": "backslash_quote",

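If you are only interested in a single parameter, az postgres server configuration show returns just that one entry; a sketch using the same server as above:

# show a single configuration entry instead of the full list
az postgres server configuration show --name work_mem --resource-group PGTEST --server-name mymanagedpg1
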
Setting a parameter is easy as well:

dwe@dwe:~$ az postgres server configuration set --name work_mem --value=32 --resource-group PGTEST --server-name mymanagedpg1
Deployment failed. Correlation ID: 634fd473-0c28-43a7-946e-ecbb26faf961. The value '32' for configuration 'work_mem' is not valid. The allowed values are '4096-2097151'.
dwe@dwe:~$ az postgres server configuration set --name work_mem --value=4096 --resource-group PGTEST --server-name mymanagedpg1
{
  "allowedValues": "4096-2097151",
  "dataType": "Integer",
  "defaultValue": "4096",
  "description": "Sets the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files.",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/work_mem",
  "name": "work_mem",
  "resourceGroup": "PGTEST",
  "source": "system-default",
  "type": "Microsoft.DBforPostgreSQL/servers/configurations",
  "value": "4096"
}
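
To verify the change from the client side, and to revert it later, something like the following should work (a sketch; according to the CLI help, omitting --value resets the parameter to its default):

# check the value from a client session
psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1 -d postgres -c "show work_mem;"
# reset work_mem back to the server default (sketch, not run here)
az postgres server configuration set --name work_mem --resource-group PGTEST --server-name mymanagedpg1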

The interesting point is what happens when we change a parameter that requires a restart:

dwe@dwe:~$ az postgres server configuration set --name shared_buffers --value=4096 --resource-group PGTEST --server-name mymanagedpg1
Deployment failed. Correlation ID: d849b302-1c41-4b13-a2d5-6b24f144be89. The configuration 'shared_buffers' does not exist for PostgreSQL server version 9.6.
dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1 postgres
Password for user postgres@mymanagedpg1: 
psql (9.5.13, server 9.6.9)
WARNING: psql major version 9.5, server major version 9.6.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> show shared_buffers ;
 shared_buffers 
----------------
 512MB
(1 row)
postgres=> 

So the memory configuration depends on the pricing tier (more details are in the Azure pricing documentation for the service). If you want to scale up or down, “you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period”.
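
Scaling within these limits maps to a single call against the existing server; a sketch that stays within the Basic tier (remember that changing to or from Basic is not possible) and only changes the vCores and the storage (not run here):

# scale the existing instance: 1 vCore instead of 2, 10 GB of storage
az postgres server update --resource-group PGTEST --name mymanagedpg1 --sku-name B_Gen4_1 --storage-size 10240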

Some final thoughts: Bringing an instance up is quick and simple. The default PostgreSQL version is 9.6.x, which is not a good choice in my opinion; version 10 has already received its fifth minor release, is stable, and is the most recent major version. Scaling up and down is a matter of changing basic settings such as cores, memory, storage, and the pricing model. For many workloads this is probably fine; if you want more control, you are better off provisioning VMs and doing the PostgreSQL setup on your own. High availability is not implemented by adding replicas but by creating a new node, attaching the storage to it, and bringing it up. This might be sufficient or it might not, depending on your requirements.

In the next post we will build our own PostgreSQL HA solution on Azure.

 

This article, Using the managed PostgreSQL service in Azure, first appeared on Blog dbi services.
