Feed aggregator

New Oracle Autonomous Database Dedicated Deployment Eliminates Roadblocks to Moving Enterprise Databases to the Autonomous Cloud

Oracle Press Releases - Wed, 2019-06-26 07:00
Press Release
New Oracle Autonomous Database Dedicated Deployment Eliminates Roadblocks to Moving Enterprise Databases to the Autonomous Cloud

Redwood Shores, Calif.—Jun 26, 2019

Driven by strong customer demand including more than 5,000 new Autonomous Database trials in Q4FY19 alone, Oracle has expanded its Autonomous Database capabilities to help meet the needs of enterprise customers who want to move their most mission-critical workloads to the cloud. Today, Oracle announced the availability of the Oracle Autonomous Database Dedicated service, which provides customers with the highest levels of security, reliability, and control for any class of database workload.

“Autonomous Database Dedicated enables customers to easily transform from manually-managed independent databases on premises, to a fully-autonomous and isolated private database cloud within the Oracle Public Cloud,” said Juan Loaiza, executive vice president, Mission-Critical Database Technologies, Oracle. “Our Autonomous Database Dedicated service eliminates the concerns enterprise customers previously had about security, isolation, and operational policies when moving to cloud.”

The Oracle Autonomous Database Dedicated service provides customers with a customizable private database cloud running on dedicated Exadata Infrastructure in the Oracle Cloud. It provides an ideal database as a service platform, enabling customers to run databases of any size, scale and criticality. This unique architecture delivers the highest degree of workload isolation, helping protect each database from both external threats and malicious internal users. The level of security and performance isolation can be easily tailored to the needs of each database. The Oracle Autonomous Database Dedicated service also features customizable operational policies, giving customers greater control over database provisioning, software updates, and availability.

The Oracle Autonomous Database Dedicated service is the latest offering within Oracle's Autonomous Database portfolio. Oracle Autonomous Database builds on 40 years of experience supporting the majority of the world's most demanding applications. The first of its kind, Oracle Autonomous Database uses groundbreaking machine learning to provide self-driving, self-repairing, and self-securing capabilities that automate key database management and security processes, such as patching, tuning, and upgrading, all while keeping the critical infrastructure constantly running for a modern cloud experience. Running on Oracle Cloud Infrastructure, Oracle Autonomous Database delivers significantly lower costs than alternatives.

“In e-commerce, today’s greatest challenge is meeting customer demands for order fulfillment. Speed is no longer a luxury–it is a requirement,” said Craig Wilensky, CEO, Jasci. “With Oracle Autonomous Database, we have seen our performance increase by as much as 75x. Combine that with the elasticity and security offered by Oracle Cloud, and the possibilities are endless. With this database, Jasci is actively reshaping a new status-quo for our industry.”

Low Code Meets Autonomous

Today, Oracle is also announcing availability of a rich set of built-in Autonomous Database developer capabilities, including Oracle Application Express (APEX), Oracle SQL Developer Web, and Oracle REST Data Services so developers can quickly develop and deploy new data-driven applications.

Oracle’s industry-leading low-code application development platform, Oracle APEX, enables developers to quickly build scalable and secure enterprise apps with world-class features. Oracle APEX can be used to import spreadsheets and develop a single source of truth web application in minutes, create compelling reports and data visualizations, or build mission-critical data management applications. With Oracle APEX preinstalled and preconfigured in Oracle Autonomous Database, developers can start building applications within minutes.

Oracle also announced availability of Oracle SQL Developer Web, a web interface for working with the Oracle Autonomous Database, enabling developers to easily run queries, create tables, and generate schema diagrams. With native Oracle REST Data Services support, developers can now develop and deploy RESTful services for Oracle Autonomous Database, making it easy to develop modern REST interfaces for relational data.

Industry Analysts Validate Market Leadership

Multiple independent industry analyst reports recently recognized Oracle Autonomous Database for its innovative capabilities, such as continuous and autonomous optimization for any workload.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Oracle Developer Tools - Do They Still Exist?

Andrejus Baranovski - Wed, 2019-06-26 01:55
People are frustrated about @OracleADF @JDeveloper on social media - "ADF boat has no captain", etc. I agree @Oracle is to blame big time for such lame handling of its own Developer Tools stack. @Oracle please wake up and spend some budget on @OracleADF. Read more:

Oracle VBCS - right now this tool gets the most of Oracle focus. Supposed to offer declarative #JavaScript development experience in the Cloud. Not well received by the community. Are there any VBCS customers, please respond if yes?

Oracle APEX - comes with a very strong community (mostly backed by DB folks). But is not strategic for Oracle. More likely to be used by PL/SQL guys than by Java or Web developers.

Oracle JET - highly promoted by Oracle. Set of open-source #JavaScript libs, glued together by an Oracle layer. Nice, but can't be used as a direct replacement for @OracleADF; JET is the UI layer only. Oracle folks often confuse the community by saying Oracle JET is a great option to replace ADF.

Oracle Forms - still alive, but obviously can't be a strategic Oracle platform. A few years ago, Oracle was promoting Forms modernization to @OracleADF.

Summary - the Oracle Developer Tools offering is weak. Lack of Oracle investment in development tools makes the Oracle developer community shrink.

opt_estimate 2

Jonathan Lewis - Tue, 2019-06-25 14:22

This is a note that was supposed to be a follow-up to an initial example of using the opt_estimate() hint to manipulate the optimizer’s statistical understanding of how much data it would access and (implicitly) how much difference that would make to the resource usage. Instead, two years later, here’s part two – on using opt_estimate() with nested loop joins. As usual I’ll start with a little data set:


rem
rem     Script:         opt_est_nlj.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2017
rem

create table t1
as
select 
        trunc((rownum-1)/15)    n1,
        trunc((rownum-1)/15)    n2,
        rpad(rownum,180)        v1
from    dual
connect by
        level <= 3000 --> hint to avoid wordpress format issue
;

create table t2
pctfree 75
as
select 
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from    dual
connect by
        level <= 3000 --> hint to avoid wordpress format issue
;

create index t1_i1 on t1(n1);
create index t2_i1 on t2(n1);

There are 3,000 rows in each table, with 200 distinct values for each of columns n1 and n2. There is an important difference between the tables, though, as the rows for a given value are well clustered in t1 and widely scattered in t2. I’m going to execute a join query between the two tables, ultimately forcing a very bad access path so that I can show some opt_estimate() hints making a difference to cost and cardinality calculations. Here’s my starting query, with execution plan, unhinted (apart from the query block name hint):

select
        /*+ qb_name(main) */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |   225 | 83700 |    44   (3)| 00:00:01 |
|*  1 |  HASH JOIN                           |       |   225 | 83700 |    44   (3)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS FULL                  | T2    |  3000 |   541K|    42   (3)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N1"="T1"."N2")
   3 - access("T1"."N1"=15)

You’ll notice the tablescan and hash join with t2 as the probe (2nd) table and a total cost of 44, which is largely due to the tablescan cost of t2 (which I had deliberately defined with pctfree 75 to make the tablescan a little expensive). Let’s hint the query to do a nested loop from t1 to t2 to see why the hash join is preferred over the nested loop:


alter session set "_nlj_batching_enabled"=0;

select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |   225 | 83700 |   242   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |   225 | 83700 |   242   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    15 |  2775 |    16   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |    15 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

I’ve done two slightly odd things here – I’ve set a hidden parameter to disable nlj batching and I’ve used a hint to block nlj prefetching. This doesn’t affect the arithmetic but it does mean the appearance of the nested loop goes back to the original pre-9i form that happens to make it a little easier to see costs and cardinalities adding and multiplying their way through the plan.

As you can see, the total cost is 242 with this plan and most of the cost is due to the indexed access into t2: the optimizer has correctly estimated that each probe of t2 will acquire 15 rows and that those 15 rows will be scattered across 15 blocks, so the join cardinality comes to 15*15 = 225 and the cost comes to 15 (t1 rows) * 16 (t2 unit cost) + 2 (t1 cost) = 242.
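The cost model above can be sanity-checked with a short sketch. This is pure illustration: the figures come straight from the execution plan, and the function name is mine, not anything Oracle exposes.

```python
# Classic (pre-9i style) nested-loop join cost model, using the
# figures from the execution plan above.

def nlj_cost(outer_rows, inner_unit_cost, outer_cost):
    """Cost = rows from outer table * cost per probe of inner + cost of outer."""
    return outer_rows * inner_unit_cost + outer_cost

t1_rows = 15        # rows expected from t1 where n1 = 15
t2_unit_cost = 16   # one indexed probe into t2: 15 scattered table blocks + index
t1_cost = 2         # cost of the t1 access

print(t1_rows * 15)                              # join cardinality: 15 * 15 = 225
print(nlj_cost(t1_rows, t2_unit_cost, t1_cost))  # total cost: 15 * 16 + 2 = 242
```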

So let’s tell the optimizer that its estimated cardinality for the index range scan is wrong.


select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |   225 | 83700 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |   225 | 83700 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    15 |  2775 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

I’ve used the hint opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06).

The form is: (@qb_name nlj_index_scan, table_alias (list of possible driving tables), target_index, numeric_adjustment).

The numeric_adjustment could be rows=nnn or, as I have here, scale_rows=nnn; the target_index has to be specified by name rather than list of columns, and the list of possible driving tables should be a comma-separated list of fully-qualified table aliases. There’s a similar nlj_index_filter option which I can’t demonstrate in this post because it probably needs an index of at least two-columns before it can be used.

The things to note in this plan are: the index range scan at operation 5 now has a cardinality (Rows) estimate of 1 (that’s 0.06 * the original 15). This hasn’t changed the cost of the range scan (because that cost was already one before we applied the opt_estimate() hint) but, because the cost of the table access depends on the index selectivity, the cost of the table access is down to 2 (from 16). On the other hand the table cardinality hasn’t dropped, so now it’s not consistent with the number of rowids predicted by the index range scan. The total cost of the query has dropped to 32, though, which is 15 (t1 rows) * 2 (t2 unit cost) + 2 (t1 cost).
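The effect of scale_rows=0.06 on those numbers can be sketched as follows. This is illustrative arithmetic only; exactly how the optimizer rounds the displayed Rows figure is an assumption on my part.

```python
# scale_rows=0.06 applied to the index range scan estimate.
scaled = 15 * 0.06                 # 0.9 rows expected per probe
reported = max(1, round(scaled))   # the plan displays 1 (a floor of 1 is assumed)
print(reported)                    # 1, as in operation 5

# The cheaper probe (unit cost 16 -> 2) feeds the same cost formula:
print(15 * 2 + 2)                  # 32, the new total cost
```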

Let’s try to adjust the predication that the optimizer makes about the number of rows we fetch from the table. Rather than going all the way to being consistent with the index range scan I’ll dictate a scaling factor that will make it easy to see the effect – let’s tell the optimizer that we will get one-fifth of the originally expected rows (i.e. 3).


select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
                opt_estimate(@main table         , t2@main     ,        scale_rows=0.20)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    47 | 17484 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |    47 | 17484 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     3 |   555 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

By adding the hint opt_estimate(@main table, t2@main, scale_rows=0.20) we’ve told the optimizer that it should scale the estimated row count down by a factor of 5 from whatever it calculates. Bear in mind that in a more complex query the optimizer might decide not to follow the path we expected, but that factor of 0.2 will still be applied whenever t2 is accessed. Notice in this plan that the join cardinality in operation 1 has also dropped from 225 to 47 – if the optimizer is told that its cardinality (or selectivity) calculation is wrong for the table, the numbers involved in the selectivity will carry on through the plan, producing a different “adjusted NDV” for the join cardinality calculation.
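A quick check of the scaling (again just arithmetic): the interesting point is that the join estimate of 47 is not a simple rescaling of the old 225.

```python
# scale_rows=0.20 on the table access.
print(round(15 * 0.20))    # 3 rows per probe, matching operation 4

# A naive rescale of the old join cardinality would give 45, not the 47
# shown in the plan -- the factor feeds the join selectivity
# ("adjusted NDV") rather than multiplying the old join estimate directly.
print(round(225 * 0.20))   # 45
```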

Notice, though, that the total cost of the query has not changed. The cost was dictated by the optimizer’s estimate of the number of table blocks to be visited after the index range scan. The estimated number of table blocks hasn’t changed, it’s just the number of rows we will find there that we’re now hacking.

Just for completion, let’s make one final change (again, something that might be necessary in a more complex query), let’s fix the join cardinality:


select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
                opt_estimate(@main table         , t2@main     ,        scale_rows=0.20)
                opt_estimate(@main join(t2 t1)   ,                      scale_rows=0.5)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     2 |   370 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

I’ve used the hint opt_estimate(@main join(t2 t1), scale_rows=0.5) to tell the optimizer to halve its estimate of the join cardinality between t1 and t2 (whatever order they appear in). With the previous hints in place the estimate had dropped to 47 (which must have been 46 and a large bit), with this final hint it has now dropped to 23. Interestingly the cardinality estimate for the table access to t2 has dropped at the same time (almost as if the optimizer has “rationalised” the join cardinality by adjusting the selectivity of the second table in the join – that’s something I may play around with in the future, but it may require reading a 10053 trace, which I tend to avoid doing).
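The halving can be checked the same way. The internal value of roughly 46.x is an assumption on my part, inferred from the rounding visible in the two plans.

```python
# The previous plan displayed 47, but internally the estimate was
# "46 and a large bit"; halving that internal value rounds down to
# the 23 shown in the final plan.
internal_estimate = 46.9               # assumed, for illustration only
print(round(internal_estimate * 0.5))  # 23
```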

Side note: If you have access to MOS you’ll find that Doc ID 2402821.1, “How To Use Optimizer Hints To Specify Cardinality For Join Operation”, seems to suggest that the cardinality() hint is something to use for single table cardinalities, and implies that the opt_estimate(join) option is for two-table joins. In fact both hints can be used to set the cardinality of multi-table joins.

Finally, then, let’s eliminate the hints that force the join order and join method and see what happens to our query plan if all we include is the opt_estimate() hints (and the qb_name() and no_nlj_prefetch hints).

select
        /*+
                qb_name(main)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
                opt_estimate(@main table         , t2@main     ,        scale_rows=0.20)
                opt_estimate(@main join(t2 t1)   ,                      scale_rows=0.5)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     2 |   370 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

Note
-----
   - this is an adaptive plan

With a little engineering on the optimizer estimates we’ve managed to con Oracle into using a different path from the default choice. Do notice, though, the closing Note section (which didn’t appear in any of the other examples): I’ve left Oracle with the option of checking the actual stats as the query runs, so if I run the query twice Oracle might spot that the arithmetic is all wrong and throw in some SQL Plan Directives – which are just another load of opt_estimate() hints.

In fact, in this example, the plan we wanted became desirable as soon as we applied the nlj_index_scan fix-up, as this made the estimated cost of the index probe into t2 sufficiently low (even though it left an inconsistent cardinality figure for the table rows) that Oracle would have switched from the default hash join to the nested loop on that basis alone.

Closing Comment

As I pointed out in the previous article, this is just scratching the surface of how the opt_estimate() hint works, and even with very simple queries it can be hard to tell whether any behaviour we’ve seen is actually doing what we think it’s doing. In a third article I’ll be looking at something prompted by the most recent email I’ve had about opt_estimate() – how it might (or might not) behave in the presence of inline views and transformations like merging or pushing predicates. I’ll try not to take 2 years to publish it.

 

EZConnect Oracle GoldenGate to Oracle Database – On-Premise or Cloud (DBaaS)

DBASolved - Tue, 2019-06-25 12:53

With the release of Oracle GoldenGate 19c, it has become easier to connect Oracle GoldenGate to the Oracle Database – source or target. Gone are the days when you needed to update your local tnsnames.ora file to point to the desired database. The only thing you have to do now is ensure that your database is reachable via an easy connect (EZConnect) string.

Within Oracle GoldenGate 19c Microservices, you simply set up the connection in the Credential Store within the Administration Service of the deployment where your GoldenGate processes will be running. To do this, follow these steps:

1. Log in to the Administration Service as an authorized user
2. Navigate to the Configuration option

3. Click on the plus ( + ) sign to add a new Credential. Fill in all the needed information. When filling in the information for User Id make sure you use the format for EZConnect:

<username>@<host>:<port>/<service_name>

4. Then click Submit

5. This will result in a useridalias connection being created and stored for the database. At this point, you can test the connection by clicking on the database icon for the connection.

If you provided the correct password for the user id, you should easily login to the Oracle Database you are pointed to.
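The EZConnect format shown above is easy to assemble programmatically. Here is a hypothetical helper (the function is mine, not part of any Oracle tooling) that produces the same connection string used in the GGSCI example later in this post:

```python
# Build an EZConnect string of the form <username>@<host>:<port>/<service_name>.
def ezconnect(username, host, port, service_name):
    return f"{username}@{host}:{port}/{service_name}"

# Same connection details as the GGSCI credential store example:
print(ezconnect("c##ggate", "oggdbaas", 1521, "orclogg"))
# c##ggate@oggdbaas:1521/orclogg
```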

This connection feature has been carried over to Oracle GoldenGate 19c Classic as well; after all, connections are part of the core product, which enables both architectures to connect in the same manner. The only difference is that you would be working from the GGSCI command line, building the credential store information with a little typing. To use EZConnect from GGSCI, follow these steps:

1. Log in to GGSCI

$ cd $OGG_HOME
$ ./ggsci

2. Add the credential store (if you don’t already have one)

GGSCI (ogg19cca) 1> add credentialstore

3. Add the user with ezconnect string to the credential store

GGSCI (ogg19cca) 2> alter credentialstore add user c##ggate@oggdbaas:1521/orclogg password ************ alias SGGATE domain OracleGoldenGate

4. Test the connection using the DBLOGIN option

GGSCI (ogg19cca) 3> dblogin useridalias SGGATE domain OracleGoldenGate
GGSCI (ogg19cca as c##ggate@orclogg/CDB$ROOT) 4>

Enjoy!!!

Categories: DBA Blogs

Oracle Ushers in New Era of Analytics

Oracle Press Releases - Tue, 2019-06-25 12:15
Press Release
Oracle Ushers in New Era of Analytics
Oracle announces new vision, new experience and new era of augmented analytics to automate insights

Redwood Shores, Calif.—Jun 25, 2019

Today, Oracle unveiled a new, customer-centric vision for Oracle Analytics at the company’s Analytics Summit. With Oracle’s industry-leading data platform and business applications, Oracle Analytics is uniquely positioned to marry data, analytics and applications, and address the needs of business users, analysts and IT. Oracle Analytics empowers customers with industry-leading AI-powered self-service analytic capabilities for data preparation, visualization, enterprise reporting, augmented analysis, and natural language processing (NLP). 

Key Highlights
  • One Offering: Oracle Analytics. Simplified product offering and clarity of direction by rationalizing 18+ products down to a single brand.
  • Powered by the Autonomous Data Warehouse and Machine Learning: Demonstrating the industry’s leading application analytics built on the Autonomous Data Warehouse and powered by Oracle Analytics Cloud.
  • Enabling Broad Enterprise Adoption: Affordable per user pricing for departmental business users plus per-CPU pricing for broad enterprise scale.
 

“We are committed to helping our customers get the most value from their data and to delivering the best analytics experience,” said T.K. Anand, senior vice president, AI, Data Analytics and Cloud, Oracle. “Today, we are announcing a new vision, product experience, and commitment to customer success that will enable us to collaborate with our entire ecosystem and deliver a new era of enterprise analytics.”

“Our clients are seeking next generation analytical solutions that are built with the enterprise in mind. Today, executives have access to more volumes of data than ever before, but what they really need are industrial strength platforms that can turn all that data into information to drive insights across their organization at different levels,” said Richard Solari, managing director, Deloitte Consulting LLP, and global Oracle analytics and cognitive leader. “Deloitte is committed to creating value for organizations enabled by the Oracle Analytics Cloud. Together, we bridge the gap between data and information and help leaders reach impactful business decisions using Oracle’s next generation analytics platforms and applications.”

Oracle’s analytic capabilities are available in the cloud via Oracle Analytics Cloud, on premises via Oracle Analytics Server, and within applications via Oracle Analytics for Oracle Cloud Applications. These solutions leverage Oracle’s existing analytics capabilities and add new features, including augmented analytics and NLP, which are embedded throughout the platform. In addition, Oracle Analytics now offers an integrated user experience across self-service data discovery and reporting and dashboards, delivering effortless access to insights that can be consumed in the cloud, on the desktop, and on mobile.

Oracle Analytics Cloud

Built first for the cloud, Oracle Analytics Cloud is the centerpiece of Oracle Analytics. Oracle Analytics Cloud empowers business users with governed self-service analytic capabilities for data preparation, visualization, augmented analysis, and natural language processing. Oracle Analytics Cloud’s governed self-service experience enables Oracle Analytics users at enterprises around the world to drive faster insights and optimize business results.

“We love analytics, we love BI, and we love the fact that Oracle is putting all of this R&D into the cloud, and we want to benefit from that,” said Bill Roy, senior director, EPM and BI, Western Digital. “We see the cloud as enabling our internal customers to develop their own content and to be self-serving. That’s really where we see the benefit of using Oracle Analytics Cloud.”

“In business today, disruption is constant, causing organizations an array of unprecedented challenges. To succeed and potentially excel in this environment, leaders must exploit data to unlock valuable insights and drive better decisions”, said Todd Randolph, principal, Technology Enablement Practice, KPMG and US Oracle Analytics Leader. “With these new, simplified and powerful Oracle analytics offerings, we believe our clients will continue to adopt our Oracle Analytics Cloud-enabled solutions to support sustainable change through performance insights to create lasting value.”

Oracle Analytics Server

Oracle Analytics Server will comprise all of Oracle’s on-premises BI offerings, delivering competitive value to thousands of existing customers, as well as enabling customers in highly regulated industries or with multi-cloud architectures to experience the latest analytic capabilities on their own terms while ensuring an easy path to the cloud.

“We needed a solution. We went out to the marketplace and the best solution was chosen,” said John Cronin, group CIO, An Post. “Oracle Analytics for An Post has made a huge impact not only for ourselves and our ease of access to information but for our common customers as well. The future is all about analytics, artificial intelligence around analytics, and advanced analytics.”

“Our clients across all industries have realized the importance of data and analytics for decades. What is different now is their expectations on how analytics will be a key enabler to guide their business strategies. With advancements in technical capabilities such as artificial intelligence, machine learning, big data platforms and visualizations, our clients are demanding more out of their analytics investments,” said Hema Kadali, partner, Data and Analytics Leader, PwC. “Leveraging Oracle Analytics, we are helping our clients execute on industry-specific use cases that allow them to innovate, automate and transform their business operations with actionable insights that drive real business outcomes.”

Oracle Analytics for Oracle Cloud Applications

Oracle Analytics for Oracle Cloud Applications will be built on Oracle Analytics Cloud and powered by Oracle Autonomous Data Warehouse, bringing personalized application analytics, benchmarks and machine learning-powered predictive insights to business users, functions and processes.

Contact Info
Carolin Bachmann
Oracle
+1.650.506.1352
carolin.bachmann@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.


Oracle Recognized as a Leader in Gartner Magic Quadrant for Warehouse Management Systems

Oracle Press Releases - Tue, 2019-06-25 09:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Warehouse Management Systems Oracle named a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Jun 25, 2019

Oracle has been named a Leader in Gartner’s 2019 "Magic Quadrant for Warehouse Management Systems1" report for the fourth consecutive year. Of the 14 products evaluated, Oracle Warehouse Management (WMS) Cloud was positioned as a Leader based on its ability to execute and completeness of vision.

According to Gartner, “Leaders combine the uppermost characteristics of vision and thought leadership with a strong consistent Ability to Execute. Leaders in the WMS market are present in a high percentage of new WMS deals, and they win a significant number of them. They have robust core WMSs and offer reasonable — although not necessarily leading-edge — capabilities in extended WMS areas, such as labor management, work planning and optimization, slotting, returns management, yard management and dock scheduling, and value-added services. To be a Leader, a vendor doesn’t necessarily need to have the absolute broadest or deepest WMS application. Its offerings must meet most mainstream warehousing requirements in complex warehouses without significant modifications, and a substantial number of high-quality implementations must be available to validate this. Leaders must anticipate where customer demands, markets and technology are moving, and must have strategies to support these emerging requirements ahead of actual customer demand. Leading vendors should have coherent strategies to support SCE convergence, and must invest in and have processes to exploit innovation. Leaders also have robust market momentum, market penetration and market awareness as well as strong client satisfaction — in the vendor’s local markets as well as internationally. Because Leaders are often well-established in leading-edge and complex user environments, they benefit from a user community that helps them remain in the forefront of emerging needs. 
Key characteristics:
  • Reasonably broad and deep WMS offerings
  • Proven success in moderate- to high-complexity warehouse environments
  • Participation in a high percentage of new deals
  • Large customer installed base
  • A strong and consistent track record
  • Consistent performance, and vigorous new client growth and retention
  • Enduring visibility in the marketplace from both sales and marketing perspectives
  • Compelling SCE convergence strategy and capabilities
  • A proven ecosystem of partners
  • Global scale.”

“Supply chains have changed dramatically in the last five years as businesses have evolved to meet more demanding customer expectations. We now expect to be able to buy on multiple channels, have our orders delivered faster, and receive or return products from anywhere,” said Diego Pantoja-Navajas, vice president, WMS Cloud Development, Oracle. “The leading warehouse management solution built on a modern cloud architecture, Oracle WMS Cloud enables customers to benefit from new innovations in machine learning, blockchain and IoT to meet and exceed customer expectations. We believe this report is a validation of our product strengths, investment in innovation, and customer successes.”

Oracle’s suite of supply chain cloud applications has garnered industry recognition. Oracle was named a Leader in Gartner’s recent “Magic Quadrant for Supply Chain Planning System of Record2,” and Oracle was recognized in the “Magic Quadrant for Transportation Management Systems.3”

1Gartner, Magic Quadrant for Warehouse Management Systems, C. Klappich, Simon Tunstall, 8 May 2019
2Gartner, Magic Quadrant for Supply Chain Planning System of Record, Amber Salley, Tim Payne, Alex Pradhan, 21 August 2018
3Gartner, Magic Quadrant for Transportation Management Systems, Bart De Muynck, Brock Johns, Oscar Sanchez Duran, 27 March 2019

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Additional Information

For additional information on Oracle Supply Chain Management (SCM) Cloud, visit Facebook, Twitter or the Oracle SCM blog.

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com


SQLcl ALIAS – because you can’t remember everything.

The Anti-Kyte - Tue, 2019-06-25 08:47

I want to find out which file is going to hold any trace information generated by my database session. Unfortunately, I keep forgetting the query that I need to run to find out.
Fortunately I’m using SQLcl, which includes the ALIAS command.
What follows is a quick run-through of this command including :

  • listing the aliases that are already set up in SQLcl
  • displaying the code that an alias will execute
  • creating your own alias interactively
  • deleting an alias
  • using files to manage custom aliases

Whilst I’m at it, I’ll create the alias for the code to find that pesky trace file too.

In the examples that follow, I’m connected to an Oracle 18c XE PDB using SQLcl 18.4 from my Ubuntu 16.04 LTS laptop via the Oracle Thin Client.

Meet the ALIAS command

As so often in SQLcl, it’s probably a good idea to start with the help :

help alias

…which explains that :

“Alias is a command which allows you to save a sql, plsql or sqlplus script and assign it a shortcut command.”

A number of aliases are already included in SQLcl. To get a list of them simply type :

alias

…which returns :

locks
sessions
tables
tables2

If we want to see the code that will be run when an alias is invoked, we simply need to list the alias :

alias list tables

tables - tables <schema> - show tables from schema
--------------------------------------------------

select table_name "TABLES" from user_tables

Connected as HR, I can run the alias to return a list of tables that I own in the database :

Creating an ALIAS

To create an alias of my own, I simply need to specify the alias name and the statement I want to associate it with. For example, to create an alias called whoami :

alias whoami =
select sys_context('userenv', 'session_user')
from dual;

I can now confirm that the alias has been created :

alias list whoami
whoami
------

select sys_context('userenv', 'session_user')
from dual

…and run it…

I think I want to tidy up that column heading. I could do this by adding a column alias in the query itself. However, alias does support the use of SQL*Plus commands…

alias whoami =
column session_user format a30
select sys_context('userenv', 'session_user') session_user
from dual;

…which can make the output look slightly more elegant :

A point to note here is that, whilst it is possible to include SQL*Plus statements in an alias for a PL/SQL block (well, sort of)…

alias whoami=set serverout on
exec dbms_output.put_line(sys_context('userenv', 'session_user'));

…when the alias starts with a SQL*Plus statement, it will terminate at the first semi-colon…

Where you do have a PL/SQL alias that contains multiple statement terminators (‘;’) you will need to run any SQL*Plus commands required prior to invoking it.
Of course, if you find setting output on to be a bit onerous, you can save valuable typing molecules by simply running :

alias output_on = set serverout on size unlimited

I can also add a description to my alias so that there is some documentation when it’s listed :

alias desc whoami The current session user

When I now list the alias, the description is included…more-or-less…

I’m not sure if the inclusion of the text desc whoami is simply a quirk of the version and OS that I’m running on. In any case, we’ll come to a workaround for this minor annoyance in due course.

In the meantime, I’ve decided that I don’t need this alias anymore. To remove it, I simply need to run the alias drop command :

alias drop whoami


At this point, I know enough about the alias command to implement my first version of the session tracefile alias that started all this.
The query, that I keep forgetting, is :

select value
from v$diag_info
where name = 'Default Trace File'
/

To create the new alias :

alias tracefile =
select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File';

I’ll also add a comment at this point :

alias desc tracefile The full path and filename on the database server of the tracefile for this session

My new alias looks like this :

The aliases.xml file

Unlike the pre-supplied aliases, the code for any alias you create will be held in a file called aliases.xml.

On Windows, this file will probably be somewhere under your OS user’s AppData directory.
On Ubuntu, it’s in $HOME/.sqlcl

With no custom aliases defined the file looks like this :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases/>

Note that, even though I have now defined a custom alias, it won’t be included in this file until I end the SQLcl session in which it was created.

Once I disconnect from this session, the file includes the new alias definition :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases>
<alias name="tracefile">
<description><![CDATA[desc tracefile The full path and filename on the database server of the tracefile for this session
]]></description>
<queries>
<query>
<sql><![CDATA[select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File']]></sql>
</query>
</queries>
</alias>
</aliases>

Incidentally, if you’ve played around with SQLDeveloper extensions, you may find this file structure rather familiar.
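Because aliases.xml is plain XML, it is also easy to inspect from a script. Here's a minimal sketch using Python's standard library (the function name is my own invention; SQLcl itself provides no such helper) that lists each alias's name, description and SQL:

```python
import xml.etree.ElementTree as ET

def list_aliases(xml_text):
    """Return (name, description, sql) tuples from an SQLcl
    aliases.xml document. CDATA sections are read as plain text."""
    root = ET.fromstring(xml_text)
    entries = []
    for alias in root.findall('alias'):
        desc = (alias.findtext('description') or '').strip()
        # An alias may hold several queries; join their SQL together.
        sql = '\n'.join(
            (q.findtext('sql') or '').strip()
            for q in alias.findall('./queries/query')
        )
        entries.append((alias.get('name'), desc, sql))
    return entries
```

Pointed at the file shown above, this would yield a single tracefile entry with its v$diag_info query.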

The file appears to be read by SQLcl once on startup. Therefore, before I run SQLcl again, I can tweak the description of my alias to remove the extraneous text…

<description><![CDATA[The full path and filename on the database server of the tracefile for this session]]></description>

Sure enough, next time I start an SQLcl session, this change is now reflected in the alias definition :
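Hand-editing works for a single alias; if several descriptions carry the stray prefix, the clean-up can be scripted. A sketch (Python standard library; it assumes the description always begins with "desc <alias-name> " as seen above, and note that ElementTree writes escaped text rather than CDATA sections on output, which is equivalent XML but worth testing against your SQLcl version):

```python
import xml.etree.ElementTree as ET

def clean_descriptions(xml_text):
    """Strip the redundant 'desc <name> ' prefix from the start of
    each alias description, returning the cleaned XML as a string."""
    root = ET.fromstring(xml_text)
    for alias in root.findall('alias'):
        desc = alias.find('description')
        if desc is None or not desc.text:
            continue
        text = desc.text.strip()
        prefix = 'desc %s ' % alias.get('name')
        if text.startswith(prefix):
            text = text[len(prefix):]
        desc.text = text
    return ET.tostring(root, encoding='unicode')
```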

Loading an alias from a file

The structure of the aliases.xml file gives us a template we can use to define an alias in the comfort of a text editor rather than on the command line. For example, we have the following PL/SQL block, which reads a bind variable :

declare
v_msg varchar2(100);
begin
if upper(:mood) = 'BAD' then
if trim(to_char(sysdate, 'DAY')) != 'MONDAY' then -- 'DAY' is blank-padded, so trim before comparing
v_msg := q'[At least it's not Monday!]';
elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
v_msg := q'[At least it's not Monday morning!]';
else
v_msg := q'[I'm not surprised. It's Monday morning !]';
end if;
elsif upper(:mood) = 'GOOD' then
v_msg := q'[Don't tell me West Ham actually won ?!]';
else
v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
end if;
dbms_output.new_line;
dbms_output.put_line(v_msg);
end;
/

Rather than typing this in on the command line, we can create a file ( called pep_talk.xml) which looks like this :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases>
<alias name="pep_talk">
<description><![CDATA[How are you feeling ? Usage is pep_talk <emotion>]]></description>
<queries>
<query>
<sql><![CDATA[
declare
v_msg varchar2(100);
begin
if upper(:mood) = 'BAD' then
if trim(to_char(sysdate, 'DAY')) != 'MONDAY' then -- 'DAY' is blank-padded, so trim before comparing
v_msg := q'[At least it's not Monday!]';
elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
v_msg := q'[At least it's not Monday morning!]';
else
v_msg := q'[I'm not surprised. It's Monday morning !]';
end if;
elsif upper(:mood) = 'GOOD' then
v_msg := q'[Don't tell me West Ham actually won ?!]';
else
v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
end if;
dbms_output.new_line;
dbms_output.put_line(v_msg);
end;
]]></sql>
</query>
</queries>
</alias>
</aliases>

Now, we can load this alias from the file as follows :

alias load pep_talk.xml
Aliases loaded

We can now execute our new alias. First though, we need to remember to turn serveroutput on before we invoke it :

Once you’ve terminated your SQLcl session, the new alias will be written to aliases.xml.

Exporting custom aliases

There may come a time when you want to share your custom aliases with your colleagues. After all, it’s always useful to know where the trace file is and who doesn’t need a pep talk from time-to-time ?

To “export” your aliases, you can issue the following command from SQLcl :

alias save mike_aliases.xml

This writes the file to the same location as your aliases.xml :

You can then import these aliases to another SQLcl installation simply by sharing the file and then using the alias load command.
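Since alias save and alias load round-trip the same XML format, a script can also combine several people's exports into one file before loading. A sketch (Python standard library; the merge policy of keeping your own definition when names clash is my choice, not anything SQLcl prescribes):

```python
import xml.etree.ElementTree as ET

def merge_alias_files(own_xml, shared_xml):
    """Merge alias definitions from a colleague's exported file into
    your own document. Aliases you already have are left untouched;
    only new names are copied. Returns the merged XML as a string."""
    own = ET.fromstring(own_xml)
    shared = ET.fromstring(shared_xml)
    existing = {a.get('name') for a in own.findall('alias')}
    for alias in shared.findall('alias'):
        if alias.get('name') not in existing:
            own.append(alias)
    return ET.tostring(own, encoding='unicode')
```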

References

As you can imagine, there are a wide variety of possible uses for the ALIAS command.

As the original author of this feature, this post by Kris Rice is probably worth a read.
Jeff Smith has written on this topic several times.

Menno Hoogendijk has an example which employs some Javascript wizardry which he has published on GitHub.

Right, back to my trace files.

New Study: “Digital Natives” Value Brick and Mortar Stores More Than their Parents or Grandparents

Oracle Press Releases - Tue, 2019-06-25 08:00
Press Release
New Study: “Digital Natives” Value Brick and Mortar Stores More Than their Parents or Grandparents Global Study Highlights the Varying Shopping Expectations of Different Generations and the Role of Technology in Personalizing Retail

Redwood City, CA.—Jun 25, 2019

Despite clear differences in expectations among shoppers of different generations, almost half of retailers (44 percent) have made no progress in tailoring the in-store shopping experience, according to a recent study conducted by Oracle NetSuite, Wakefield Research and The Retail Doctor. The global study of 1,200 consumers and 400 retail executives across the U.S., U.K. and Australia dispelled stereotypes around generations and found big differences in generational expectations across baby boomers, Gen X, millennials and Gen Z.

“We have seen decades of diminishing experiences in brick and mortar stores, and the differences identified in these results point to its impact on consumers over the years,” said Bob Phibbs, CEO, The Retail Doctor. “Retailers have fallen behind in offering in-store experiences that balance personalization and customer service but there’s an opportunity to take the reins back. The expectation from consumers is clear and it’s up to retailers to offer engaging and custom experiences that will cater to shoppers across a diverse group of generations.”

Beauty is in the eye of the beholder: Retailers struggle to keep stride with generational shoppers

The in-store shopping experience remains an important part of the retail environment for all generations, but the progress retailers are making to improve the in-store experience is being viewed differently by different generations.

  • Despite the stereotypes of “digital natives”, Gen Z and millennials (43 percent) are most likely to do more in-store shopping this year followed by Gen X (29 percent) and baby boomers (13 percent).
  • Gen Z and millennials (57 percent) had the most positive view of the current retail environment feeling it was more inviting, followed by Gen X (40 percent). Baby boomers (27 percent) were more likely to find the current retail environment less inviting than consumers overall.
  • Gen Z valued in-store interaction the least, with 42 percent saying they would feel more annoyed by increased interaction with retail associates. In contrast, millennials (56 percent), Gen X (44 percent) and baby boomer (43 percent) generations all noted they would feel more welcomed by more in-store interactions.

Retailers view emerging technologies through rose-colored glasses

While more than three quarters of retail executives (79 percent) believe having AI and VR in stores will increase sales, the study found that these technologies are not yet widely accepted by any generation.

  • Overall, only 14 percent of consumers believe that emerging technologies like AI and VR will have a significant impact on their purchase decisions.
  • Emerging tech in retail stores is most attractive to millennials (50 percent) followed by Gen Z (38 percent), Gen X (35 percent) and baby boomers (20 percent).
  • Perceptions of VR varied widely across different generations. Fifty-eight percent of Gen Z said VR would have some influence on their purchase decisions, while 59 percent of baby boomers said VR would have no influence on their purchase decision.

Insta-famous brands reach Gen Z and millennial consumers, but not as much as retailers think

While almost all retail executives (98 percent) think that engaging customers on social media is important to building stronger relationships with them, the study found a big disconnect with consumers across all generations.

  • Overall, only 12 percent of consumers think their engagement with brands on social media has a significant impact on the way they think or feel about a brand.
  • Among those who engage with brands on social media, Gen Z (38 percent) consumers are much more likely than other generations to engage with retailers on social to get to know the brand compared to millennials (25 percent) and baby boomers (21 percent).
  • Gen Z (65 percent) consumers and millennials (63 percent) believe their engagement with brands on social media platforms has an impact on their relationship with brands.
  • More than half of baby boomers (53 percent) and 29 percent of Gen X consumers do not engage with brands on social media.

“After all the talk about brick and mortar stores being dead, it’s interesting to see that ‘digital natives’ are more likely to increase their shopping in physical stores this year than any other generation,” said Greg Zakowicz, senior commerce marketing analyst, Oracle NetSuite. “Stepping back, these findings fit with broader trends we have been seeing around the importance of immediacy and underlines why retailers cannot afford to make assumptions about the needs and expectations of different generations. It really is a complex puzzle and as this study clearly shows, retailers need to think carefully about how they meet the needs of different generations.”

To read more about NetSuite’s insights into the report’s finding visit NetSuite’s cloud blog.

Methodology

For this survey, 1,200 consumers and 400 retail executives were surveyed around the overall retail environment, in-store and online shopping experiences and advanced technologies. Both retailers and consumers were surveyed from three global markets including the U.S., U.K. and Australia with retail executives representing organizations between $10-100 million in annual sales.

Contact Info
Danielle Tarp
Oracle
650-506-2904
danielle.tarp@oracle.com
About Wakefield Research

Wakefield is a full-service market research firm that uncovers insights for brands to help them solve problems and grow their business. Wakefield Research is a partner to the world’s leading consumer and B2B brands, including 50 of the Fortune 100. Wakefield Research conducts qualitative and quantitative research in 70 countries. For more information, please visit https://www.wakefieldresearch.com

About The Retail Doctor

The Retail Doctor is a New York-based retail consulting firm created by expert retail consultant and leading business mentor Bob Phibbs. With over 30 years of experience in retail, Bob has worked as a consultant, speaker, and entrepreneur, helping businesses revolutionize their brand and grow their success. Bob is also the author of three highly-praised books, including The Retail Doctor's Guide to Growing Your Business (WILEY). His clients include some of the largest retail brands in the world including Bernina, Brother, Caesars Palace, Hunter Douglas, Lego, Omega and Yamaha. For more information, please visit www.retaildoc.com

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.



Belgian Telecom Provider Speeds Delivery of Customer Services with Oracle

Oracle Press Releases - Tue, 2019-06-25 07:00
Press Release
Belgian Telecom Provider Speeds Delivery of Customer Services with Oracle Proximus taps virtualized Oracle SBC Solution to boost deployment versatility, cut costs, and speed deployment

Redwood Shores, Calif.—Jun 25, 2019

Proximus, a leading international communications service provider, has chosen Oracle Communications virtualized Oracle Session Border Controller as a core network component to enable the delivery of its residential and enterprise communications cloud-based solutions for voice. As such, Proximus will be able to deploy its internet communications offerings faster, while decreasing operational expenses and increasing services flexibility.

Oracle’s virtualized SBC platform will be running on Proximus’ telco cloud and used for residential VoIP and SIP trunking for enterprise customers. This will enable them to deliver trusted and first-class, real-time communications services across the Internet. The virtualization of Oracle’s SBC is an important step in Proximus’s overall network strategy to virtualize the majority of its telco and service applications on a multitenant and open telco cloud. In addition, the automated and orchestrated core network will allow for adaptable capacity planning.

“As a digital service provider, we want to deliver the latest technologies to our customers in a way that simplifies and improves their lives and work environments,” said Laurent Claus, director service platforms & cloud, Proximus. “This is why our choice of Oracle was on target. Oracle Communications’ SBC delivers unparalleled operational efficiency and flexibility, which are essential as we continue to scale our offerings and customer base.”

“Given the scale and complexity of Proximus’ network needs, Oracle Communications is a strong fit,” said Greg Collins, founder & principal analyst, Exact Ventures. “As a tier-one communications service provider, Proximus requires the speed, trust and innovation that Oracle can deliver.” 

“Proximus has been a long-time customer of Oracle Communications and this deployment is an exciting next step in their digital transformation journey,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “Matching Proximus’ ambition to deliver innovative services in an easy-to-consume way, we are confident that Oracle’s virtualized Session Border Controller will provide them the security, comprehensive control and scalability needed to bring their customers into the next generation of communications services.”

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Haroun Fenaux
Proximus
+32 476 60 03 33
press@proximus.com
About Proximus

Proximus Group is a telecommunications & ICT company operating in the Belgian and international markets, servicing residential, enterprise and public customers. Proximus’ ambition is to become a digital service provider, opening up a world of digital opportunities so people live better and work smarter. Through its best-quality integrated fixed and mobile networks, Proximus provides access anywhere and anytime to digital services and easy-to-use solutions, as well as to a broad offering of multimedia content. Proximus transforms technologies like the Internet of Things (IoT), Big Data, Cloud and Security into solutions with positive impact on people and society. With 13,391 employees, all engaged to offer customers a superior experience, the Group realized an underlying Group revenue of EUR 5,778 million at end-2017.

Proximus (Euronext Brussels: PROX) is also active in Luxembourg through its affiliates Proximus Luxembourg and in the Netherlands through Telindus Netherlands. BICS is a leading international communications enabler, one of the key global voice carriers and the leading provider of mobile data services worldwide.

About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

To learn more about Oracle Communications industry solutions, visit: Oracle Communications LinkedIn, or join the conversation at Twitter @OracleComms.



Small- and Mid-sized Banks Fight Money Laundering with Oracle

Oracle Press Releases - Tue, 2019-06-25 07:00
Press Release
Small- and Mid-sized Banks Fight Money Laundering with Oracle

Redwood Shores, Calif.—Jun 25, 2019

Oracle announced the availability of Oracle Financial Services Anti Money Laundering (AML) Express Edition targeted at small- and mid-sized banks. It provides a single, unified platform to efficiently detect, investigate, and report suspected money laundering and terrorist financing activity to comply with evolving regulations and guidelines.

Smaller banks need to address regulations and compliance the same as global top-tier banks but must do so with significantly smaller IT budgets and limited resources. AML Express uses new architecture principles to offer a choice of deployment and includes all the core functionality needed to fight financial crime.

“The largest financial institutions in the world have been using Oracle Anti Money Laundering solutions for decades. Today, the same comprehensive financial crime technology is now accessible for small- and mid-sized financial institutions. Lowering the total cost of ownership without compromising on the core functional capabilities is an engineering breakthrough made possible with the use of modern, cloud-compatible architectures,” said Sonny Singh, senior vice president and general manager, Oracle Financial Services.

To address the unique challenges of smaller banks, Oracle Financial Services created this scalable, out-of-the-box AML solution. Key features of AML Express include:

  • Architecture designed for rapid deployment on premises or on cloud infrastructure, allowing firms to transition to their future states faster and at reduced implementation cost
  • Built-in library of scenarios that detect the most common money laundering behaviors, coupled with built-in case management capabilities that reduce the time and resources needed for scenario configuration and case investigation
  • Modern solution design that allows visual scenario configuration, reducing coding overhead and enabling easy adaptation to ever-changing compliance demands

For more information about AML Express, please click here.

Contact Info
Judi Palmer
Oracle
+1 650 784 7901
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
Katie McCracken
CMG
+44 20 7861 0736
kmccracken@cmgrp.com
About Oracle Financial Services

Oracle Financial Services Global Business Unit provides clients in more than 140 countries with an integrated, best-in-class, end-to-end solution of intelligent software and powerful hardware designed to meet every financial service need. Our market-leading platforms provide the foundation for banks’ and insurers’ digital and core transformations, and we deliver a modern suite of Analytical Applications for Risk, Finance, Compliance and Customer Insight. For more information, visit our website at https://www.oracle.com/industries/financial-services/index.html.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


During Extract Upgrade “extract not ready to be upgraded because recovery SCN” returned

VitalSoftTech - Mon, 2019-06-24 23:59

During the upgrade of a Classic Extract process to an Integrated Extract, I get the error "extract not ready to be upgraded because recovery SCN". How do I work around this?

The post During Extract Upgrade “extract not ready to be upgraded because recovery SCN” returned appeared first on VitalSoftTech.

Categories: DBA Blogs

Disable scheduler jobs during deployment

Jeff Kemp - Mon, 2019-06-24 19:54

Like most active sites our applications have a healthy pipeline of change requests and bug fixes, and we manage this pipeline by maintaining a steady pace of small releases.

Each release is built, tested and deployed within a 3-4 week timeframe. Probably once or twice a month, on a Thursday evening, one or more deployments will be run, and each deployment is fully scripted with as few steps as possible. My standard deployment script has evolved over time to handle a number of cases where failures have happened in the past; failed deployments are rare now.

One issue we encountered some time ago was when a deployment script happened to be run at the same time as a database scheduler job; the job started halfway during the deployment when some objects were in the process of being modified. This led to some temporary compilation failures that caused the job to fail. Ultimately the deployment was successful, and the next time the job ran it was able to recover; but we couldn’t be sure that another failure of this sort wouldn’t cause issues in future. So I added a step to each deployment to temporarily stop all the jobs and re-start them after the deployment completes, with a script like this:

prompt disable_all_jobs.sql

begin
  for r in (
    select job_name
    from   user_scheduler_jobs
    where  schedule_type = 'CALENDAR'
    and    enabled = 'TRUE'
    order by 1
  ) loop
    dbms_scheduler.disable
      (name  => r.job_name
      ,force => true);
  end loop;
end;
/

This script simply marks all the jobs as “disabled” so they don’t start during the deployment. A very similar script is run at the end of the deployment to re-enable all the scheduler jobs. This works fine, except on the odd occasion when a job happens to start running just before the script does, and is still running concurrently with the deployment. The force => true parameter means that my script allows those jobs to continue running.
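The matching re-enable step isn’t shown in the post; a minimal sketch, mirroring the disable script, might look like this. Note that the filter on enabled = 'FALSE' is an assumption — a more careful script would record which jobs it disabled, so it doesn’t accidentally re-enable jobs that were deliberately left disabled before the deployment:

```sql
prompt enable_all_jobs.sql

begin
  for r in (
    select job_name
    from   user_scheduler_jobs
    where  schedule_type = 'CALENDAR'
    and    enabled = 'FALSE'
    order by 1
  ) loop
    -- re-enable each job so its next scheduled run fires normally
    dbms_scheduler.enable (name => r.job_name);
  end loop;
end;
/
```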

To solve this problem, I’ve added the following:

prompt Waiting for any running jobs to finish...

whenever sqlerror exit sql.sqlcode;

declare
  max_wait_seconds constant number := 60;
  start_time       date := sysdate;
  job_running      varchar2(100);
begin
  loop

    begin
      select job_name
      into   job_running
      from   user_scheduler_jobs
      where  state = 'RUNNING'
      and    rownum = 1;
    exception
      when no_data_found then
        job_running := null;
    end;

    exit when job_running is null;

    if sysdate - start_time > max_wait_seconds/24/60/60 then

      raise_application_error(-20000,
           'WARNING: waited for '
        || max_wait_seconds
        || ' seconds but job is still running ('
        || job_running
        || ').');

    else
      dbms_lock.sleep(2);
    end if;

  end loop;
end;
/

When the DBA runs the above script, it pauses to allow any running jobs to finish. Our jobs almost always finish in less than 30 seconds, usually sooner. The loop checks for any running jobs; if there are no jobs running it exits straight away – otherwise, it waits for a few seconds then checks again. If a job is still running after a minute, the script fails (stopping the deployment) and the DBA can investigate further to see what’s going on; once the job has finished, they can re-start the deployment.

Stein Mart Boosts Omni-Channel Growth with Oracle Cloud

Oracle Press Releases - Mon, 2019-06-24 07:30
Press Release
Stein Mart Boosts Omni-Channel Growth with Oracle Cloud
Merchandise Financial Planning helps national retailer leverage data to optimize inventory management

Redwood Shores, Calif. and Jacksonville, Fla.—Jun 24, 2019

Stein Mart, a national specialty off-price retailer, has gained a holistic view of its inventory and a more streamlined approach to merchandise planning with Oracle Cloud.

By consolidating the planning and forecasting process for its physical stores, online store and warehouses into one solution, Stein Mart will be better equipped to manage its inventory to support the needs of its customers, regardless of how they choose to shop. With Oracle Retail Cloud Services, Stein Mart has the tools to keep its merchandise assortments fresh and relevant for buyers.

“We have been focused on simplifying our merchandising processes while expanding our omni-channel capabilities and new business initiatives. The enhanced functionality of Oracle’s Merchandise Financial Planning solution will help us analyze data faster to create better plans up front so we can buy smarter and manage inventory more effectively,” said Nick Swetonic, Stein Mart’s senior vice president of planning and allocation.

“Today, retailers sell whatever they buy, often at the expense of the bottom line. Tomorrow, they will be able to more accurately predict placement, price, and sizes across every store and market. This is the promise of the Oracle Retail Cloud,” noted Mike Webster, senior vice president and general manager, Oracle Retail. “We are helping companies like Stein Mart refine their approach to inventory and purchasing, so they can continually delight customers while improving results with merchandise that turns quickly.”

Stein Mart partnered with Cognira, experts in analytics, configuration and integration, and retail consulting firm The Parker Avery Group to re-engineer business processes and implement Oracle Retail Merchandise Financial Planning Cloud Service. Both Cognira and Parker Avery are members of the Oracle PartnerNetwork (OPN). Previously, Stein Mart also implemented Oracle Retail Merchandising, Oracle Retail Store Inventory Management, Oracle GoldenGate, Oracle JD Edwards, and Oracle Retail Point of Sale.

Contact Info
Kris Reeves
Oracle PR
+1.925.787.6744
kris.reeves@oracle.com
Linda Tasseff
Stein Mart Investor Relations
+1.904.858.2639
ltasseff@steinmart.com
About Stein Mart

Stein Mart, Inc. is a national specialty off-price retailer offering designer and name-brand fashion apparel, home décor, accessories and shoes at everyday discount prices. Stein Mart provides real value that customers love every day both in stores and online. The Company currently operates 283 stores across 30 states. For more information, please visit www.steinmart.com.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website, www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


We moved to @Medium

Marcelo Ochoa - Sat, 2019-06-22 15:35
Since August 2017 we have moved to Medium; some of the reasons are well described in the blog post "3 reasons we moved our startup blog to Medium". You are invited to visit the new channel. Greetings.

ANSI bug

Jonathan Lewis - Sat, 2019-06-22 07:01

The following note is about a script that I found on my laptop while searching for details about a bug that appears when you write SQL using the ANSI style format rather than traditional Oracle style. The script is clearly one that I must have cut and pasted from somewhere (possibly the OTN/ODC database forum) many years ago without making any notes about its source or resolution. All I can say about it is that the file has a creation date of July 2012 and I can’t find any reference to the problem through Google searches – though the tables, and even a set of specific insert statements, appear in a number of pages that look like coursework for computer studies, and MOS has a similar-looking bug “fixed in 11.2”.

Here’s the entire script:

rem
rem     Script:         ansi_bug.sql
rem     Author:         ???
rem     Dated:          July 2012
rem

CREATE TABLE Student (
  sid INT PRIMARY KEY,
  name VARCHAR(20) NOT NULL,
  address VARCHAR(20) NOT NULL,
  major CHAR(2)
);

CREATE TABLE Professor (
  pid INT PRIMARY KEY,
  name VARCHAR(20) NOT NULL,
  department VARCHAR(10) NOT NULL
);

CREATE TABLE Course (
  cid INT PRIMARY KEY,
  title VARCHAR(20) NOT NULL UNIQUE,
  credits INT NOT NULL,
  area VARCHAR(5) NOT NULL
);

CREATE TABLE Transcript (
  sid INT,
  cid INT,
  pid INT,
  semester VARCHAR(9),
  year CHAR(4),
  grade CHAR(1) NOT NULL,
  PRIMARY KEY (sid, cid, pid, semester, year),
  FOREIGN KEY (sid) REFERENCES Student (sid),
  FOREIGN KEY (cid) REFERENCES Course (cid),
  FOREIGN KEY (pid) REFERENCES Professor (pid)
);

INSERT INTO Student (sid, name, address, major) VALUES (101, 'Nathan', 'Edinburg', 'CS');
INSERT INTO Student (sid, name, address, major) VALUES (105, 'Hussein', 'Edinburg', 'IT');
INSERT INTO Student (sid, name, address, major) VALUES (103, 'Jose', 'McAllen', 'CE');
INSERT INTO Student (sid, name, address, major) VALUES (102, 'Wendy', 'Mission', 'CS');
INSERT INTO Student (sid, name, address, major) VALUES (104, 'Maria', 'Pharr', 'CS');
INSERT INTO Student (sid, name, address, major) VALUES (106, 'Mike', 'Edinburg', 'CE');
INSERT INTO Student (sid, name, address, major) VALUES (107, 'Lily', 'McAllen', NULL);

INSERT INTO Professor (pid, name, department) VALUES (201, 'Artem', 'CS');
INSERT INTO Professor (pid, name, department) VALUES (203, 'John', 'CS');
INSERT INTO Professor (pid, name, department) VALUES (202, 'Virgil', 'MATH');
INSERT INTO Professor (pid, name, department) VALUES (204, 'Pearl', 'CS');
INSERT INTO Professor (pid, name, department) VALUES (205, 'Christine', 'CS');

INSERT INTO Course (cid, title, credits, area) VALUES (4333, 'Database', 3, 'DB');
INSERT INTO Course (cid, title, credits, area) VALUES (1201, 'Comp literacy', 2, 'INTRO');
INSERT INTO Course (cid, title, credits, area) VALUES (6333, 'Advanced Database', 3, 'DB');
INSERT INTO Course (cid, title, credits, area) VALUES (6315, 'Applied Database', 3, 'DB');
INSERT INTO Course (cid, title, credits, area) VALUES (3326, 'Java', 3, 'PL');
INSERT INTO Course (cid, title, credits, area) VALUES (1370, 'CS I', 4, 'INTRO');

INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (101, 4333, 201, 'Spring', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (101, 6333, 201, 'Fall', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (101, 6315, 201, 'Fall', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (103, 4333, 203, 'Summer I', '2010', 'B');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (102, 4333, 201, 'Fall', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (103, 3326, 204, 'Spring', '2008', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (104, 1201, 205, 'Fall', '2009', 'B');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (104, 1370, 203, 'Summer II', '2010', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (106, 1201, 205, 'Fall', '2009', 'C');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (106, 1370, 203, 'Summer II', '2010', 'C');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (105, 3326, 204, 'Spring', '2001', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (105, 6315, 203, 'Fall', '2008', 'A');

SELECT 
        pid, 
        name, title
FROM 
        Professor 
NATURAL LEFT OUTER JOIN 
        (
                Transcript 
        NATURAL JOIN 
                Course
        )
;

SELECT 
        name, title
FROM 
        Professor 
NATURAL LEFT OUTER JOIN 
        (
                Transcript 
        NATURAL JOIN 
                Course
        )
;

SELECT 
        name, title
FROM 
        Professor 
NATURAL LEFT OUTER JOIN 
        (
                Transcript 
        NATURAL JOIN 
                Course
        )
order by pid
;

I’ve run three minor variations of the same query – the one in the middle selects two columns from a three-table join using natural joins. The first query does the same but includes an extra column in the select list, while the third query selects only the original columns but orders the result set by the extra column.

The middle query returns 60 rows – the first and third, with the “extra” column projected somewhere in the execution plan, return 13 rows.

I didn’t even have a note of the then-current version of Oracle when I copied this script, but I’ve just run it on 12.2.0.1, 18.3.0.0, and 19.2.0.0 (using LiveSQL), and the bug reproduces on all three versions.
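For what it’s worth, a defensive rewrite (not part of the original script) that replaces the natural joins with explicit ON clauses sidesteps the implicit column matching the anomaly depends on. This is a sketch, assuming pid and cid are the intended join columns:

```sql
select
        p.name, c.title
from
        Professor p
left outer join
        (
                Transcript t
        join
                Course c
        on      c.cid = t.cid
        )
on      t.pid = p.pid
;
```

With the join columns spelled out, adding or removing columns from the select list can no longer change which rows the outer join preserves.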

Ubuntu Server: How to activate kernel dumps

Dietrich Schroff - Fri, 2019-06-21 14:25
If you are running ubuntu server, you can add kdump on your system to write kernel dumps in case of sudden reboots etc.

Installing is very easy:
root@ubuntuserver:/etc# apt install linux-crashdump
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu crash kdump-tools kexec-tools libbinutils libdw1 libsnappy1v5 makedumpfile
Suggested packages:
  binutils-doc
The following NEW packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu crash kdump-tools kexec-tools libbinutils libdw1 libsnappy1v5 linux-crashdump makedumpfile
0 upgraded, 11 newly installed, 0 to remove and 43 not upgraded.
Need to get 2,636 B/5,774 kB of archives.
After this operation, 26.0 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 linux-crashdump amd64 4.15.0.46.48 [2,636 B]
Fetched 2,636 B in 0s (28.1 kB/s)    
Preconfiguring packages ...
Selecting previously unselected package binutils-common:amd64.
(Reading database ... 66831 files and directories currently installed.)
Preparing to unpack .../00-binutils-common_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking binutils-common:amd64 (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package libbinutils:amd64.
Preparing to unpack .../01-libbinutils_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking libbinutils:amd64 (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package binutils-x86-64-linux-gnu.
Preparing to unpack .../02-binutils-x86-64-linux-gnu_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package binutils.
Preparing to unpack .../03-binutils_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking binutils (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package libsnappy1v5:amd64.
Preparing to unpack .../04-libsnappy1v5_1.1.7-1_amd64.deb ...
Unpacking libsnappy1v5:amd64 (1.1.7-1) ...
Selecting previously unselected package crash.
Preparing to unpack .../05-crash_7.2.1-1ubuntu2_amd64.deb ...
Unpacking crash (7.2.1-1ubuntu2) ...
Selecting previously unselected package kexec-tools.
Preparing to unpack .../06-kexec-tools_1%3a2.0.16-1ubuntu1_amd64.deb ...
Unpacking kexec-tools (1:2.0.16-1ubuntu1) ...
Selecting previously unselected package libdw1:amd64.
Preparing to unpack .../07-libdw1_0.170-0.4_amd64.deb ...
Unpacking libdw1:amd64 (0.170-0.4) ...
Selecting previously unselected package makedumpfile.
Preparing to unpack .../08-makedumpfile_1%3a1.6.3-2_amd64.deb ...
Unpacking makedumpfile (1:1.6.3-2) ...
Selecting previously unselected package kdump-tools.
Preparing to unpack .../09-kdump-tools_1%3a1.6.3-2_amd64.deb ...
Unpacking kdump-tools (1:1.6.3-2) ...
Selecting previously unselected package linux-crashdump.
Preparing to unpack .../10-linux-crashdump_4.15.0.46.48_amd64.deb ...
Unpacking linux-crashdump (4.15.0.46.48) ...
Processing triggers for ureadahead (0.100.0-20) ...
Setting up libdw1:amd64 (0.170-0.4) ...
Setting up kexec-tools (1:2.0.16-1ubuntu1) ...
Generating /etc/default/kexec...
Setting up binutils-common:amd64 (2.30-21ubuntu1~18.04) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up makedumpfile (1:1.6.3-2) ...
Setting up libsnappy1v5:amd64 (1.1.7-1) ...
Processing triggers for systemd (237-3ubuntu10.12) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up libbinutils:amd64 (2.30-21ubuntu1~18.04) ...
Setting up kdump-tools (1:1.6.3-2) ...

Creating config file /etc/default/kdump-tools with new version
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/50-curtin-settings.cfg'
Sourcing file `/etc/default/grub.d/kdump-tools.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-45-generic
Found initrd image: /boot/initrd.img-4.15.0-45-generic
done
Created symlink /etc/systemd/system/multi-user.target.wants/kdump-tools.service → /lib/systemd/system/kdump-tools.service.
Setting up linux-crashdump (4.15.0.46.48) ...
Setting up binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04) ...
Setting up binutils (2.30-21ubuntu1~18.04) ...
Setting up crash (7.2.1-1ubuntu2) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.12) ...
Within the installation you have to answer a couple of debconf configuration questions (shown as screenshots in the original post, omitted here).
After the installation the following parameter is added to the kernel cmdline:
grep -r crash /boot* |grep cfg
/boot/grub/grub.cfg:        linux    /boot/vmlinuz-4.15.0-46-generic root=UUID=a83c2a94-91c4-461a-b6a4-c7a81422a857 ro  maybe-ubiquity crashkernel=384M-:128M
/boot/grub/grub.cfg:            linux    /boot/vmlinuz-4.15.0-46-generic root=UUID=a83c2a94-91c4-461a-b6a4-c7a81422a857 ro  maybe-ubiquity crashkernel=384M-:128M
with
crashkernel=range1:size1[,range2:size2,...][@offset]
range=start-[end], where 'start' is inclusive and 'end' is exclusive

The configuration is done via /etc/default/kdump-tools. Here the parameter to control the directory to dump the core into:

cat /etc/default/kdump-tools  |grep DIR
# KDUMP_COREDIR - local path to save the vmcore to.
KDUMP_COREDIR="/var/crash"
The next step is to reboot and verify the kernel cmdline.

#cat /proc/cmdline 
BOOT_IMAGE=/boot/vmlinuz-4.15.0-46-generic root=UUID=a83c2a94-91c4-461a-b6a4-c7a81422a857 ro maybe-ubiquity crashkernel=384M-:128M


To get a core dump, use the following commands (note: the second command deliberately crashes the kernel; the dump is written under /var/crash while the capture kernel runs):
root@ubuntuserver:/etc# sysctl -w kernel.sysrq=1
kernel.sysrq = 1
root@ubuntuserver:/etc# echo c > /proc/sysrq-trigger
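Before deliberately crashing anything, it can be worth checking that the capture kernel is actually armed. A small sketch using standard Linux interfaces (these checks are not from the original post):

```shell
# Is the crashkernel= reservation on the running kernel's command line?
if grep -q crashkernel /proc/cmdline 2>/dev/null; then
    cmdline_ok=yes
else
    cmdline_ok=no
fi
echo "crashkernel on cmdline: $cmdline_ok"

# A value of 1 here means the capture (crash) kernel is loaded and ready.
if [ -r /sys/kernel/kexec_crash_loaded ]; then
    echo "kexec_crash_loaded: $(cat /sys/kernel/kexec_crash_loaded)"
fi

# Any dumps collected so far? (KDUMP_COREDIR defaults to /var/crash)
ls /var/crash 2>/dev/null || echo "no dumps yet"
```

If the cmdline check fails, the machine has not been rebooted since installing linux-crashdump and kdump cannot capture anything yet.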

Oracle ERP Cloud Recognized as a Leader in the Gartner Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises

Oracle Press Releases - Thu, 2019-06-20 07:30
Press Release
Oracle ERP Cloud Recognized as a Leader in the Gartner Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises
Oracle named a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Jun 20, 2019

Oracle (NYSE: ORCL) has been named a Leader in Gartner’s 2019 “Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises” report1. Oracle ERP Cloud is positioned as a Leader based on its ability to execute and completeness of vision. A complimentary copy of the report is available here.

This is the third consecutive year that Oracle ERP Cloud has been recognized as a Leader in Gartner’s report, and out of 10 products evaluated, Oracle ERP Cloud is positioned highest for ability to execute as well as furthest to the right for completeness of vision.

According to the report, “Leaders demonstrate a market-defining vision of how core financial management systems and processes can be supported and improved by moving them to the cloud. They couple this with a clear ability to execute this vision through products, services and go-to-market strategies. They have a strong presence in the market and are growing their revenue and market share. In this market, Leaders show a consistent ability to secure deals with enterprises of different sizes, and have a good depth of functionality across all areas of core financial management. They have multiple proofs of successful deployments by customers, both in their home region and elsewhere. Their offerings are often used by system integrator partners to support financial transformation initiatives. Leaders typically address a wide market audience by supporting broad market requirements. However, they may fail to meet the specific needs of vertical markets or other, more specialized segments, which might be better addressed by Niche Players in particular.”

“Oracle remains laser-focused on our customers’ success. We are committed to continued significant investments in innovation that can help our 6,000+ ERP Cloud customers drive operational excellence in finance,” said Rondy Ng, Senior Vice President, Applications Development, Oracle. “We are ecstatic to be acknowledged once again as a Leader by Gartner. We believe this report is a validation of our product strengths, investment focus, and customer successes.”

Oracle ERP Cloud includes complete ERP capabilities across Financials, Procurement, and Project Portfolio Management (PPM), as well as Enterprise Performance Management (EPM) and Governance Risk and Compliance (GRC). Together with Supply Chain Management (SCM) and native integration with the broader Oracle Cloud Applications suite, which includes Human Capital Management (HCM) and Customer Experience (CX) SaaS applications, Oracle helps customers to stay ahead of changing expectations, build adaptable organizations, and realize the potential of the latest innovations.

Oracle’s portfolio of financial management and planning cloud offerings has garnered industry recognition. Oracle ERP Cloud was named the sole Leader in Gartner’s 2018 Magic Quadrant for Cloud ERP for Product-Centric Midsize Enterprises.2 Oracle was also named the Leader in the Gartner 2018 Magic Quadrant for Cloud Financial Planning and Analysis Solutions3 (with the highest position for its ability to execute) and was named a Leader in the 2018 Magic Quadrant for Cloud Financial Close Solutions.4

1Gartner Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises, John Van Decker, Robert Anderson, Greg Leiter, 13 May 2019
2 Gartner Magic Quadrant for Cloud ERP for Product-Centric Midsize Enterprises, Mike Guay, John Van Decker, Christian Hestermann, Nigel Montgomery, Duy Nguyen, Denis Torii, Paul Saunders, Paul Schenck, Tim Faith, 31 October 2018
3 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions, Christopher Iervolino, John Van Decker, 24 July 2018
4 Gartner Magic Quadrant for Cloud Financial Close Solutions, John Van Decker, Christopher Iervolino, 26 July 2018

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Additional Information

For additional information on Oracle ERP Cloud applications, visit Oracle Enterprise Resource Planning (ERP) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Contact Info
Bill Rundle
Oracle PR
650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Baylor University Selects Oracle Cloud Applications to Gain Competitive Advantage

Oracle Press Releases - Thu, 2019-06-20 07:00
Press Release
Baylor University Selects Oracle Cloud Applications to Gain Competitive Advantage
Pioneering Texas university shifts business applications to the cloud to enhance user experience, gain real-time insights and improve organizational agility

Redwood Shores, Calif.—Jun 20, 2019

To compete more aggressively at the pinnacle of higher education, Baylor University—the oldest continuously operating university in Texas—has adopted Oracle Cloud Applications. With cloud-based applications for finance, planning and human resources, Baylor will be able to improve productivity and business insights by transforming administrative operations and employee experience and gaining real-time access to data from across its growing operations.

From its beginning as a small Baptist college in 1845, Baylor has grown to serve more than 16,000 students annually and has become a world-class brand in higher education. Oracle Cloud Applications play a supportive role in Baylor’s aspiration to become a preeminent research university as outlined in the institution’s academic strategic plan, Illuminate.

To stay at the forefront of higher education as it continues to evolve, Baylor is replacing its manual systems with an integrated suite of applications that can provide real-time insights into key business processes. To meet these needs and gain a competitive edge over peer institutions, Baylor selected Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Enterprise Performance Management (EPM) Cloud, and Oracle Human Capital Management (HCM) Cloud.

“Education is evolving and the technology that drives our organization forward needs to reflect modern education best practices,” said Becky King, associate vice president of IT, Baylor University. “Shifting to Oracle Cloud Applications will help us introduce modern best practices that will make our organization more efficient and reach our goal of becoming a top-tier, Christian research institution. Moving core finance, planning and HR systems to one cloud-based platform will also improve business insight and enhance our ability to respond to changing dynamics in education.”

With Oracle ERP Cloud, Oracle EPM Cloud and Oracle HCM Cloud, Baylor will be able to take advantage of the cloud to break down organizational silos, standardize processes and manage financial, planning and workforce data on a single integrated cloud platform. Oracle Cloud Applications’ common and intuitive interface enables rapid user adoption, delivers enhanced employee experience and improves productivity.

“To compete at the leading edge of higher education, institutions need real-time visibility across the entire organization in order to respond to rapidly changing educational needs and expectations,” said Hari Sankar, Group Vice President, Product Management. “With Oracle Cloud Applications, Baylor will be able to make smarter decisions about the direction of the organization while delivering better experiences to end users, improving its agility and enabling it to better compete in higher education.”

For additional information on Oracle Cloud Applications visit oracle.com/cloud/applications.

Contact Info
Bill Rundle
Oracle PR
650.506.1891
bill.rundle@oracle.com
About Baylor University

Baylor University is a private Christian University and a nationally ranked research institution. The University provides a vibrant campus community for more than 17,000 students by blending interdisciplinary research with an international reputation for educational excellence and a faculty commitment to teaching and scholarship. Chartered in 1845 by the Republic of Texas through the efforts of Baptist pioneers, Baylor is the oldest continually operating University in Texas. Located in Waco, Baylor welcomes students from all 50 states and more than 90 countries to study a broad range of degrees among its 12 nationally recognized academic divisions.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


Q4 FY19 GAAP EPS Up 36% to $1.07 and NON-GAAP EPS Up 23% to $1.16

Oracle Press Releases - Wed, 2019-06-19 15:00
Press Release
Q4 FY19 GAAP EPS Up 36% to $1.07 and NON-GAAP EPS Up 23% to $1.16
Operating Income Up 3% in USD and 7% in Constant Currency

Redwood Shores, Calif.—Jun 19, 2019

Oracle Corporation (NYSE: ORCL) today announced fiscal 2019 Q4 results and fiscal 2019 full year results. Total Quarterly Revenues were $11.1 billion, up 1% in USD and up 4% in constant currency compared to Q4 last year. Cloud Services and License Support revenues were $6.8 billion, while Cloud License and On-Premise License revenues were $2.5 billion. Total Cloud Services and License Support plus Cloud License and On-Premise License revenues were $9.3 billion, up 3% in USD and 6% in constant currency.

Q4 GAAP Operating Income was up 2% to $4.3 billion and GAAP operating margin was 38%. Non-GAAP Operating Income was up 4% to $5.3 billion and non-GAAP operating margin was 47%. GAAP Net Income was up 14% to $3.7 billion and non-GAAP Net Income was up 3% to $4.1 billion. GAAP Earnings Per Share was $1.07, while non-GAAP Earnings Per Share was $1.16.

Short-term deferred revenues were $8.4 billion. Operating cash flow for fiscal 2019 was $14.6 billion.

For fiscal 2019, Total Revenues were $39.5 billion, slightly higher in USD and up 3% in constant currency. Cloud Services and License Support revenues were $26.7 billion, while Cloud License and On-Premise License revenues were $5.9 billion. Total Cloud Services and License Support plus Cloud License and On-Premise revenues were $32.6 billion, up 2% in USD and 4% in constant currency.

Fiscal 2019 GAAP Operating Income was $13.5 billion, and GAAP operating margin was 34%. Non-GAAP Operating Income was $17.4 billion, and non-GAAP operating margin was 44%. GAAP Net Income was $11.1 billion, while non-GAAP Net Income was $13.1 billion. GAAP Earnings Per Share increased 251% to $2.97, while non-GAAP Earnings Per Share was up 16% to $3.52.

“In Q4, our non-GAAP operating income grew 7% in constant currency—which drove EPS well above the high end of my guidance,” said Oracle CEO, Safra Catz. “Our high-margin Fusion and NetSuite cloud applications businesses are growing rapidly, while we downsize our low-margin legacy hardware business. The net result of this shift away from commodity hardware to cloud applications was a Q4 non-GAAP operating margin of 47%, the highest we’ve seen in five years.”

“Our Fusion ERP and HCM cloud applications suite revenues grew 32% in FY19,” said Oracle CEO, Mark Hurd. “Our NetSuite ERP cloud applications revenues also grew 32% this year. These strong results extend Oracle’s already commanding lead in worldwide Cloud ERP. Our cloud applications businesses are growing faster than our competitors. That said, let me call your attention to the following approved statement from industry analyst IDC.”

Per IDC’s latest annual market share results, Oracle gained the most market share globally out of all Enterprise Applications SaaS vendors three years running—in CY16, CY17 and CY18.

“We added over five thousand new Autonomous Database trials in Q4,” said Oracle Chairman and CTO, Larry Ellison. “Our new Gen2 Cloud Infrastructure offers those customers a compelling array of advanced technology features including our self-driving database that automatically encrypts all your data, backs itself up, tunes itself, upgrades itself, and patches itself when a security threat is detected. It does all of this autonomously—while running—without the need for any human intervention, and without the need for any downtime. No other cloud infrastructure provides anything close to these autonomous features.”

The Board of Directors also declared a quarterly cash dividend of $0.24 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 17, 2019, with a payment date of July 31, 2019.

Q4 Fiscal 2019 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q4 results and fiscal 2019 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 9955119.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), visit us at www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our high-margin cloud applications businesses, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud strategy, including our Oracle Software as a Service and Infrastructure as a Service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, integrate acquired products and services, or enhance and improve our existing products and support services in a timely manner, or price our products and services to meet market demand, customers may not purchase or subscribe to our software, hardware or cloud offerings or renew software support, hardware support or cloud subscriptions contracts. (3) Enterprise customers rely on our cloud, license and hardware offerings and related services to run their businesses and significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings and related services could expose us to product liability, performance and warranty claims, as well as cause significant harm to our brand and reputation, which could impact our future sales. (4) If the security measures for our products and services are compromised and as a result, our customers' data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged and we may experience legal claims and reduced sales. 
(5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. (6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) We have a selective and active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of June 19, 2019. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Contextual Targeting vs Behavioral Targeting

VitalSoftTech - Tue, 2019-06-18 12:19

Let’s suppose these are the olden times and you have to advertise for a new circus in town. Do you paste the posters on the walls of places of entertainment like a movie theater, a bar, horse racing tracks, or a casino? Or do you spend a little time about town and look around for […]

The post Contextual Targeting vs Behavioral Targeting appeared first on VitalSoftTech.

Categories: DBA Blogs
