Feed aggregator

Oracle, ITA Announce Wild Card Linkages Between Major College Championships and Oracle Pro Series Events

Oracle Press Releases - Mon, 2019-11-18 07:00
Press Release
Oracle, ITA Announce Wild Card Linkages Between Major College Championships and Oracle Pro Series Events

TEMPE, Ariz.—Nov 18, 2019

Oracle and the Intercollegiate Tennis Association (ITA) jointly announced today the creation of wild card linkages between major college tennis championships and the Oracle Pro Series. The champions and finalists from the Oracle ITA Masters, the ITA All-American Championships and the Oracle ITA National Fall Championships will be awarded wild card entries into Oracle Pro Series events beginning with the 2020 season.

The opportunity to earn wild card entries into Oracle Pro Series tournaments is available to college players from all five divisions (NCAA DI, DII, DIII, NAIA and Junior College). Singles and doubles champions from the ITA All-American Championships and the Oracle ITA National Fall Championships as well as the Oracle ITA Masters singles champions will earn wild cards into Oracle Challenger level events. Singles and doubles finalists from the All-American Championships and the Oracle ITA National Fall Championships will earn wild cards into Oracle $25K tournaments. ITA Cup singles champions (from NCAA DII, DIII, NAIA and Junior College) will also earn wild card entries into Oracle $25K tournaments.

Eighteen individuals and eight doubles teams have already secured wild cards for Oracle Pro Series tournaments in 2020 following their play at the 2019 Oracle ITA Masters, 2019 ITA All-American Championships, 2019 ITA Cup, and 2019 Oracle ITA National Fall Championships. The list includes:

Oracle ITA Masters

  • Men’s Singles Champion – Daniel Cukierman (USC)
  • Men’s Singles Finalist – Keegan Smith (UCLA)
  • Women’s Singles Champion – Ashley Lahey (Pepperdine)
  • Women’s Singles Finalist – Jada Hart (UCLA)

Oracle ITA National Fall Championships

  • Men’s Singles Champion – Yuya Ito (Texas)
  • Men’s Singles Finalist – Damon Kesaris (Saint Mary’s)
  • Women’s Singles Champion – Sara Daavettila (North Carolina)
  • Women’s Singles Finalist – Anna Turati (Texas)
  • Men’s Doubles Champions – Dominik Kellovsky/Matej Vocel (Oklahoma State)
  • Men’s Doubles Finalists – Robert Cash/John McNally (Ohio State)
  • Women’s Doubles Champions – Elysia Bolton/Jada Hart (UCLA)
  • Women’s Doubles Finalists – Anna Rogers/Alana Smith (NC State)

ITA All-American Championships

  • Men’s Singles Champion – Yuya Ito (Texas)
  • Men’s Singles Finalist – Sam Riffice (Florida)
  • Men’s Doubles Champions – Jack Lin/Jackie Tang (Columbia)
  • Men’s Doubles Finalists – Gabriel Decamps/Juan Pablo Mazzuchi (UCF)
  • Women’s Singles Champion – Ashley Lahey (Pepperdine)
  • Women’s Singles Finalist – Alexa Graham (North Carolina)
  • Women’s Doubles Champions – Jessie Gong/Samantha Martinelli (Yale)
  • Women’s Doubles Finalists – Tenika McGiffin/Kaitlin Staines (Tennessee)

ITA Cup

  • Men’s Division II Singles Champion – Alejandro Gallego (Barry)
  • Men’s Division III Singles Champion – Boris Sorkin (Tufts)
  • Men’s NAIA Singles Champion – Jose Dugo (Georgia Gwinnett)
  • Men’s Junior College Singles Champion – Oscar Gabriel Ortiz (Seward County)
  • Women’s Division II Singles Champion – Berta Bonardi (West Florida)
  • Women’s Division III Singles Champion – Justine Leong (Claremont-Mudd-Scripps)
  • Women’s NAIA Singles Champion – Elyse Lavender (Brenau)
  • Women’s Junior College Singles Champion – Tatiana Simova (ASA Miami)

“This is yet another exciting step forward for all of college tennis as we build upon our ever-growing partnership with Oracle,” said ITA Chief Executive Officer Timothy Russell. “We are forever grateful to our colleagues at Oracle for both their vision and execution of these fabulous opportunities.”

Oracle is partnering with InsideOut Sports & Entertainment, led by former World No. 1 and Hall of Famer Jim Courier and his business partner Jon Venison, to manage the Oracle Pro Series. InsideOut will work with the college players and their respective coaches to coordinate scheduling with respect to their participation in the Pro Series events.

The final schedule for the 2020 Oracle Pro Series will include more than 35 tournaments, most of which will be combined men’s and women’s events. Dates and locations are listed at https://oracleproseries.com/. Follow on social media through #OracleProSeries.

The expanding partnership between Oracle and the ITA builds upon their collaborative efforts to provide playing opportunities and their goal of raising the profile of college tennis and the sport in general. Oracle supports collegiate tennis through sponsorship of the ITA, including hosting marquee events throughout the year such as the Oracle ITA Masters and the Oracle ITA Fall Championships.

Through that partnership, the ITA has been able to showcase its top events to a national audience as the Oracle ITA Masters, ITA All-American Championships and Oracle ITA National Fall Championships singles finals have been broadcast live with rebroadcasts on the ESPN family of networks.

Contact Info
Mindi Bach
Oracle
650.506.3221
mindi.bach@oracle.com
Al Barba
ITA
602-687-6379
abarba@itatennis.com
About the Intercollegiate Tennis Association

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees men’s and women’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men’s and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

About Oracle Tennis

Oracle is committed to supporting American tennis for all players across the collegiate and professional levels. Through sponsorship of tournaments, players, ranking, organizations and more, Oracle has infused the sport with vital resources and increased opportunities for players to further their careers. For more information, visit www.oracle.com/corporate/tennis/. Follow @OracleTennis on Twitter and Instagram.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


New Study: Only 11% of Brands Can Effectively Use Customer Data

Oracle Press Releases - Mon, 2019-11-18 07:00
Press Release
New Study: Only 11% of Brands Can Effectively Use Customer Data
Independent study highlights the challenges of bringing together different data types to create a unified customer profile

Redwood Shores, Calif.—Nov 18, 2019

Despite all the hype around customer data platforms (CDPs), a new study conducted by Forrester Consulting and commissioned by Oracle found that brands are struggling to create a unified view of customers. The November 2019 study, “Getting Customer Data Management Right,” which includes insights from 337 marketing and advertising professionals in North America and Europe, found that brands want to unify customer data but face significant challenges in bringing different data types together. 

Brands Want to Centralize Customer Data

As consumers expect more and more personalized experiences, the ability to effectively leverage customer data is shifting from a “nice-to-have” to table stakes:

  • 75% of marketing and advertising professionals believe the ability to “improve the experience of our customers” is a critical or important objective when it comes to the use of customer engagement data.
  • 69% believe it is important to create a unified customer profile across channels and devices.
  • 64% stated that they adopted a CDP to develop a single source of truth so they could understand customers better.

Unified Customer Profiles Lead to Better Business Results

Brands that effectively leverage unified customer profiles are more likely to experience revenue growth, increased profitability and higher customer lifetime values:

  • Brands that use CDPs effectively are 2.5 times more likely to increase customer lifetime value.
  • When asked about the benefits of unified data management, the top two benefits were increased specific functional effectiveness (e.g., advertising, marketing, or sales) and increased channel effectiveness (e.g., email, mobile, web, social media).

The Marketing and Advertising Opportunity

While marketing and advertising professionals understand the critical role unified customer profiles play in personalizing the customer experience, the majority of brands are not able to effectively use a wide variety of data types:

  • 71% of marketing and advertising professionals say a unified customer profile is important or critical to personalization.
  • Only 11% of brands can effectively use a wide variety of data types in a unified customer profile to personalize experiences, provide a consistent experience across channels, and generally improve customer lifetime value and other business outcomes.
  • 69% expect to increase CDP investments at their organization over the next two years.

“A solid data foundation is the most fundamental ingredient to success in today’s Experience Economy, where consumers expect relevant, timely and consistent experiences,” said Rob Tarkoff, executive vice president and general manager, Oracle CX. “At Oracle we have been helping customers manage, secure and protect their data assets for more than 40 years, and this unique experience puts us in the perfect position to help brands leverage all their customer data – digital, marketing, sales, service, commerce, financial and supply chain – to make every customer interaction matter.” 

Read the full study here.

Contact Info
Kim Guillon
Oracle
+1.209-601-9152
kim.guillon@oracle.com
Methodology

Forrester Consulting conducted an online survey of 337 professionals in North America and Europe who are responsible for customer data, marketing analytics, or marketing/advertising technology. Survey participants included decision makers director level and above in marketing or advertising roles. Respondents were offered a small incentive as a thank you for time spent on the survey. The study began in August 2019 and was completed in September 2019.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Fun with arrays in PostgreSQL

Yann Neuhaus - Mon, 2019-11-18 00:30

As you might already know, PostgreSQL comes with many, many data types. What you might not know is that you can create arrays over all of these data types quite easily. Is that important? Well, as always it depends on your requirements, but there are use cases where arrays are great. As always, let's do some simple tests.

The following will create a very simple table with one column, which is a one-dimensional array of integers.

postgres=# create table t1 ( a int[] );
CREATE TABLE
postgres=# \d t1
                  Table "public.t1"
 Column |   Type    | Collation | Nullable | Default 
--------+-----------+-----------+----------+---------
 a      | integer[] |           |          | 

To insert data into that table you would either do it like this:

postgres=# insert into t1 (a) values ( '{1,2,3,4,5,6}' );
INSERT 0 1
postgres=# select * from t1;
       a       
---------------
 {1,2,3,4,5,6}
(1 row)

… or you can do it like this as well:

postgres=# insert into t1 (a) values ( ARRAY[1,2,3,4,5,6] );
INSERT 0 1
postgres=# select * from t1;
       a       
---------------
 {1,2,3,4,5,6}
 {1,2,3,4,5,6}
(2 rows)

Notice that I did not specify any size of the array. Although you can do that:

postgres=# create table t2 ( a int[6] );
CREATE TABLE

… the limit is not enforced by PostgreSQL:

postgres=# insert into t2 (a) values ( '{1,2,3,4,5,6,7,8}' );
INSERT 0 1
postgres=# select * from t2;
         a         
-------------------
 {1,2,3,4,5,6,7,8}
(1 row)

PostgreSQL does not limit you to one-dimensional arrays; you can go ahead and create more dimensions:

postgres=# create table t3 ( a int[], b int[][], c int[][][] );
CREATE TABLE
postgres=# \d t3
                  Table "public.t3"
 Column |   Type    | Collation | Nullable | Default 
--------+-----------+-----------+----------+---------
 a      | integer[] |           |          | 
 b      | integer[] |           |          | 
 c      | integer[] |           |          | 

Although it looks like all of the columns are one-dimensional, they are actually not:

postgres=# insert into t3 (a,b,c) values ( '{1,2,3}', '{{1,2,3},{1,2,3}}','{{{1,2,3},{1,2,3},{1,2,3}}}' );
INSERT 0 1
postgres=# select * from t3;
    a    |         b         |              c              
---------+-------------------+-----------------------------
 {1,2,3} | {{1,2,3},{1,2,3}} | {{{1,2,3},{1,2,3},{1,2,3}}}
(1 row)

In reality those array columns are not fixed to one dimension: you can create as many dimensions as you like, even when you think you declared only one:

postgres=# create table t4 ( a int[] );
CREATE TABLE
postgres=# insert into t4 (a) values ( '{1}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{1,2}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{{1,2},{1,2}}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{{{1,2},{1,2},{1,2}}}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{{{{1,2},{1,2},{1,2},{1,2}}}}' );
INSERT 0 1
postgres=# select * from t4;
               a               
-------------------------------
 {1}
 {1,2}
 {{1,2},{1,2}}
 {{{1,2},{1,2},{1,2}}}
 {{{{1,2},{1,2},{1,2},{1,2}}}}
(5 rows)

Now that there are some rows, how can we query them? This matches the first two rows of the table:

postgres=# select ctid,* from t4 where a[1] = 1;
 ctid  |   a   
-------+-------
 (0,1) | {1}
 (0,2) | {1,2}
(2 rows)

This matches the second row only:

postgres=# select ctid,* from t4 where a = '{1,2}';
 ctid  |   a   
-------+-------
 (0,2) | {1,2}
(1 row)

This matches row three only:

postgres=# select ctid, * from t4 where a[1:2][1:3] = '{{1,2},{1,2}}';
 ctid  |       a       
-------+---------------
 (0,3) | {{1,2},{1,2}}
(1 row)

You can even index array data types by using a GIN index:

postgres=# create index i1 ON t4 using gin (a);
CREATE INDEX
postgres=# \d t4
                  Table "public.t4"
 Column |   Type    | Collation | Nullable | Default 
--------+-----------+-----------+----------+---------
 a      | integer[] |           |          | 
Indexes:
    "i1" gin (a)

This does not make much sense right now, as we do not have sufficient data for PostgreSQL to consider the index, but as soon as we have more data the index will be helpful:

postgres=# insert into t4 select '{{1,2},{1,2}}' from generate_series(1,1000000);
INSERT 0 1000000
postgres=# explain select ctid,* from t4 where a = '{1,2}';
                            QUERY PLAN                            
------------------------------------------------------------------
 Bitmap Heap Scan on t4  (cost=28.00..32.01 rows=1 width=51)
   Recheck Cond: (a = '{1,2}'::integer[])
   ->  Bitmap Index Scan on i1  (cost=0.00..28.00 rows=1 width=0)
         Index Cond: (a = '{1,2}'::integer[])
(4 rows)
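The GIN index is not only useful for equality: the default array operator class also supports the containment and overlap operators, so queries like the following (a small sketch, not taken from the original session) can make use of the index i1 as well:

select count(*) from t4 where a @> '{1,2}';  -- rows whose array contains both 1 and 2
select count(*) from t4 where a && '{5,6}';  -- rows whose array overlaps with {5,6}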

In addition to that PostgreSQL comes with many support functions for working with arrays, e.g. to get the length of an array:

postgres=# select array_length(a,1) from t4 limit 2;
 array_length 
--------------
            1
            2
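array_length is just one of many such functions. A few more examples (a sketch, not part of the original session), run against the t1 table from above which holds {1,2,3,4,5,6}:

select cardinality(a) from t1 limit 1;           -- total number of elements (6 here)
select array_append(a, 7) from t1 limit 1;       -- {1,2,3,4,5,6,7}
select array_to_string(a, ',') from t1 limit 1;  -- '1,2,3,4,5,6'
select unnest(a) from t1 limit 6;                -- one row per element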

As I mentioned at the beginning of this post you can create arrays of all kinds of data types, not only integers:

postgres=# create table t5 ( a date[], b timestamp[], c text[], d point[], e boolean[] );
CREATE TABLE
postgres=# \d t5
                            Table "public.t5"
 Column |             Type              | Collation | Nullable | Default 
--------+-------------------------------+-----------+----------+---------
 a      | date[]                        |           |          | 
 b      | timestamp without time zone[] |           |          | 
 c      | text[]                        |           |          | 
 d      | point[]                       |           |          | 
 e      | boolean[]                     |           |          | 

Whatever you want. You can even create arrays over user-defined types:

postgres=# create type type1 as ( a int, b text );
CREATE TYPE
postgres=# create table t6 ( a type1[] );
CREATE TABLE
postgres=# \d t6
                 Table "public.t6"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | type1[] |           |          | 

Quite powerful.
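As a quick illustration (a sketch that is not part of the original post), such an array of composite values can be filled and unpacked like this:

insert into t6 (a) values ( array[ row(1,'aaa')::type1, row(2,'bbb')::type1 ] );
select (a[1]).a, (a[1]).b from t6;   -- access the fields of the first element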

The article Fun with arrays in PostgreSQL appeared first on the dbi services blog.

EBS 12.2 ADOP Cycle Errors During Validation Cannot open XML file for load

Senthil Rajendran - Mon, 2019-11-18 00:17
EBS 12.2 ADOP Cycle Errors During Validation  Cannot open XML file for load

ADOP cycle will have validation errors in some cases.

*******FATAL ERROR*******
PROGRAM :
(/test/apps/CLONE/fs1/EBSapps/appl/ad/12.0.0/patch/115/bin/txkADOPEvalSrvStatus.pl)
TIME    : Wed Nov 13 15:50:36 2019
FUNCTION: TXK::XML::load_doc [ Level 1 ]
MESSAGES:
error = Cannot open XML file for load
errorno = No such file or directory
file =
/test/apps/CLONE/fs_ne/EBSapps/log/adop/6/fs_clone_20191113_153822/CLONE_test/TXK_EVAL_fs_clone_Wed_Nov_13_15_49_52_2014/ctx_files/CLONE_test.xml

*******FATAL ERROR*******
PROGRAM :
(/test/apps/CLONE/fs1/EBSapps/appl/ad/12.0.0/patch/115/bin/txkADOPPreparePhaseSynchronize.pl)
TIME    : Wed Nov 13 15:50:36 2019
FUNCTION: main::validatePatchContextFile [ Level 1 ]
MESSAGES:
message = Access permission error on test
File CLONE_test.xml not readable

If you see the above stack, apply the fix suggested below:

- Validate FND_NODES table for valid hostnames
- Validate FND_OAM_CONTEXT_FILES table for run and patch context file
- If a valid node does not have a valid run and patch context file in FND_OAM_CONTEXT_FILES, it has to be loaded, either by running autoconfig from the respective file system or, if you do not want to run autoconfig, by loading the context file using the API shown below

$ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile=$CONTEXT_FILE
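For the first two checks, queries along these lines can be used (just a sketch to illustrate the idea; run as the APPS user, and note that the exact column lists of FND_NODES and FND_OAM_CONTEXT_FILES may differ between EBS releases):

select node_name, status from fnd_nodes order by node_name;
select node_name, name, status from fnd_oam_context_files order by node_name, name;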

Rerun ADOP Cycle.

Have fun with Oracle EBS 12.2

Parse Time

Jonathan Lewis - Sun, 2019-11-17 13:37

This is a note I started drafting in October 2012. It's a case study based on an optimizer (10053) trace file someone emailed to me, and it describes some of the high-level steps I went through to see if I could pinpoint what had fooled the optimizer into spending a huge amount of time optimising a statement that ultimately executed very quickly.

Unfortunately I never finished my notes and I can no longer find the trace file that the article was based on, so I don’t really know what I was planning to say to complete the last observation I had recorded.

I was prompted a couple of days ago to publish the notes so far because I was reminded, in a conversation with members of the Oak Table Network, of an article that Franck Pachot wrote a couple of years ago. In 12c Oracle Corp. introduced a time-reporting mechanism for the optimizer trace: if some optimisation step takes "too long" (1 second, by default) then the optimizer will write a "TIMER:" line into the trace file telling you what the operation was, how long it took to complete and how much CPU time it used. The default for "too long" can be adjusted by setting a "fix control". This makes it a lot easier to find out where the time went if you see a very long parse time.

But let's get back to the original trace file and drafted blog note. It started with a question on OTN and an extract from a tkprof output to back up a nasty performance issue.

=============================================================================================

 

What do you do about a parse time of 46 seconds ? That was the question that came up on OTN a few days ago – and here’s the tkprof output to demonstrate it.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     46.27      46.53          0          5          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.33       0.63        129      30331          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     46.60      47.17        129      30336          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144  
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=30331 pr=129 pw=0 time=637272 us)
       863        863        863   VIEW  VM_NWVW_1 (cr=30331 pr=129 pw=0 time=637378 us cost=1331 size=10 card=1)
       ... and lots more lines of plan

According to tkprof, it takes 46 seconds – virtually all CPU time – to optimise this statement, then 0.63 seconds to run it. You might spot that this is 11gR2 (in fact it’s 11.2.0.3) from the fact that the second line of the “Row Source Operation” includes a report of the estimated cost of the query, which is only 1,331.

Things were actually worse than they seem at first sight; when we saw more of the tkprof output the following also showed up:

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE 
  NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') 
  NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_00"), 
  NVL(SUM(C2),:"SYS_B_01") 
FROM
 (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("VAL_000002") FULL("VAL_000002") 
  NO_PARALLEL_INDEX("VAL_000002") */ :"SYS_B_02" AS C1, 
  CASE WHEN
    ...
  END AS C2 FROM "BISWEBB"."RECORDTEXTVALUE" 
  SAMPLE BLOCK (:"SYS_B_21" , :"SYS_B_22") SEED (:"SYS_B_23") "VAL_000002" 
  WHERE ... 
 ) SAMPLESUB
 
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch        5     21.41      24.14      11108      37331          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15     21.41      24.15      11108      37331          0           5
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144     (recursive depth: 1)
Number of plan statistics captured: 3
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=7466 pr=3703 pw=0 time=5230126 us)
   3137126    3137126    3137126   PARTITION HASH ALL PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2547843 us cost=18758 size=131597088 card=3133264)
   3137126    3137126    3137126    TABLE ACCESS SAMPLE RECORDTEXTVALUE PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2372509 us cost=18758 size=131597088 card=3133264)

This piece of SQL executed five times as the query was optimised, adding a further 24 seconds elapsed time and 21 CPU seconds which, surprisingly, weren’t included in the headline 46 seconds. The total time spent in optimising the statement was around 70 seconds, of which about 68 seconds were spent on (or waiting for) the CPU.

This is unusual – I don’t often see SQL statements taking more than a few seconds to parse – not since 8i, and not without complex partition views – and I certainly don’t expect to see a low cost query in 11.2.0.3 taking anything like 70 (or even 46) seconds to optimise.

The OP had enabled the 10046 and the 10053 traces at the same time – and since the parse time was sufficiently unusual I asked him to email me the raw trace file – all 200MB of it.

Since it’s not easy to process 200MB of trace the first thing to do is extract a few headline details, and I thought you might be interested to hear about some of the methods I use on the rare occasions when I decide to look at a 10053.

My aim is to investigate a very long parse time and the tkprof output had already shown me that there were a lot of tables in the query, so I had the feeling that the problem would relate to the amount of work done testing possible join orders; I’ve also noticed that the dynamic sampling code ran five times – so I’m expecting to see some critical stage of the optimisation run 5 times (although I don’t know why it should).

Step 1: Use grep (or find if you’re on Windows) to do a quick check for the number of join orders considered. I’m just searching for the text “Join order[” appearing at the start of line and then counting how many times I find it:

[jonathan@linux01 big_trace]$ grep "^Join order\[" orcl_ora_25306.trc  | wc -l
6266

That’s 6,266 join orders considered – let’s take a slightly closer look:

[jonathan@linux01 big_trace]$ grep -n "^Join order\[" orcl_ora_25306.trc >temp.txt
[jonathan@linux01 big_trace]$ tail -2 temp.txt
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...... from$_subquery$_008[TBL_000020]#2
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

The line of dots represents another 11 tables (or similar objects) in the join order. But there are only 581 join orders (apparently) before the last one in the file (which is a single view transformation). I’ve used the “-n” option with grep, so if I wanted to look at the right bit of the file I could tail the last few thousand lines, but my machine is happy to use vi on a 200MB file, and a quick search (backwards) through the file finds the number 581 in the following text (which does not appear in all versions of the trace file):

Number of join permutations tried: 581

So a quick grep for "join permutations" might be a good idea. (In the absence of this line I'd have got to the same result by directing the earlier grep for "^Join order\[" to a file and playing around with the contents of the file.)

[jonathan@linux01 big_trace]$ grep -n "join permutations" orcl_ora_25306.trc
11495:Number of join permutations tried: 2
11849:Number of join permutations tried: 1
12439:Number of join permutations tried: 2
13826:Number of join permutations tried: 2
14180:Number of join permutations tried: 1
14552:Number of join permutations tried: 2
15938:Number of join permutations tried: 2
16292:Number of join permutations tried: 1
16665:Number of join permutations tried: 2
18141:Number of join permutations tried: 2
18550:Number of join permutations tried: 2
18959:Number of join permutations tried: 2
622799:Number of join permutations tried: 374
624183:Number of join permutations tried: 2
624592:Number of join permutations tried: 2
624919:Number of join permutations tried: 1
625211:Number of join permutations tried: 2
1759817:Number of join permutations tried: 673
1760302:Number of join permutations tried: 1
1760593:Number of join permutations tried: 2
1760910:Number of join permutations tried: 1
1761202:Number of join permutations tried: 2
2750475:Number of join permutations tried: 674
2751325:Number of join permutations tried: 2
2751642:Number of join permutations tried: 1
2751933:Number of join permutations tried: 2
2752250:Number of join permutations tried: 1
2752542:Number of join permutations tried: 2
3586276:Number of join permutations tried: 571
3587133:Number of join permutations tried: 2
3587461:Number of join permutations tried: 1
3587755:Number of join permutations tried: 2
3588079:Number of join permutations tried: 1
3588374:Number of join permutations tried: 2
4458608:Number of join permutations tried: 581
4458832:Number of join permutations tried: 1

The key thing we see here is that there are five sections of long searches, and a few very small searches. Examination of the small search lists shows that they relate to some inline views which simply join a couple of tables. For each of the long searches we can see that the first join order in each set is for 14 “tables”. This is where the work is going. But if you add up the number of permutations in the long searches you get a total of 2,873, which is a long way off the 6,266 that we found with our grep for “^Join order[“ – so where do the extra join orders come from ? Let’s take a closer look at the file where we dumped all the Join order lines – the last 10 lines look like this:

4452004:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4452086:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453254:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453382:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454573:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454655:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455823:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455905:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4457051:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

Every single join order seems to have appeared twice, and doubling the counts we got for the sum of the permutations gets us close to the total we got for the join order search. Again, we could zoom in a little closer: does the text near the start of the two occurrences of join order 581 give us any clues? We see the following just before the second one:

****** Recost for ORDER BY (using join row order) *******

The optimizer has tried to find a way of eliminating some of the cost by letting the table join order affect the order of the final output. Let’s do another grep to see how many join orders have been recosted:

[jonathan@linux01 big_trace]$ grep "Recost for ORDER BY" orcl_ora_25306.trc | sort | uniq -c
    452 ****** Recost for ORDER BY (using index) ************
   2896 ****** Recost for ORDER BY (using join row order) *******

So we've done a huge amount of recosting. Let's check the arithmetic: 452 + 2,896 + 2,873 = 6,221, which is remarkably close to the 6,266 we needed (and we have ignored a few dozen join orders that were needed for the inline views, and the final error is too small for me to worry about).

We can conclude, therefore, that we did a huge amount of work costing a 14 table join a little over 6,000 times. It’s possible, of course, that we discarded lots of join orders very early on in the cost stage, so we could count the number of times we see a “Now joining” message – to complete a single pass on a 14 table join the optimizer will have to report “Now joining” 13 times.

[jonathan@linux01 big_trace]$ grep -n "Now joining" orcl_ora_25306.trc | wc -l
43989

Since the message appeared 44,000 times from 6,200 join orders we have an average of 7 steps evaluated per join order. Because of the way that the optimizer takes short-cuts I think this is a fairly strong clue that most of the join order calculations actually completed, or get very close to completing, over the whole 14 tables. (The optimizer remembers “partial results” from previous join order calculations, so doesn’t have to do 13 “Now joining” steps on every single join order.)

We still need to know why the optimizer tried so hard before supplying a plan – so let’s look for the “Best so far” lines, which the trace file reports each time the optimizer finds a better plan than the previous best. Here’s an example of what we’re looking for:

       Cost: 206984.61  Degree: 1  Resp: 206984.61  Card: 0.00 Bytes: 632
***********************
Best so far:  Table#: 0  cost: 56.9744  card: 1.0000  bytes: 30
              Table#: 3  cost: 59.9853  card: 0.0000  bytes: 83
              Table#: 6  cost: 60.9869  card: 0.0000  bytes: 151
              Table#:10  cost: 61.9909  card: 0.0000  bytes: 185
              Table#: 5  cost: 62.9928  card: 0.0000  bytes: 253
              Table#: 2  cost: 65.0004  card: 0.0000  bytes: 306
              Table#: 1  cost: 122.4741  card: 0.0000  bytes: 336
              Table#: 8  cost: 123.4760  card: 0.0000  bytes: 387
              Table#: 4  cost: 125.4836  card: 0.0000  bytes: 440
              Table#: 7  cost: 343.2625  card: 0.0000  bytes: 470
              Table#: 9  cost: 345.2659  card: 0.0000  bytes: 530
              Table#:11  cost: 206981.5979  card: 0.0000  bytes: 564
              Table#:12  cost: 206982.6017  card: 0.0000  bytes: 598
              Table#:13  cost: 206984.6055  card: 0.0000  bytes: 632
***********************

As you can see, we get a list of the tables (identified by their position in the first join order examined) with details of accumulated cost. But just above this tabular display there’s a repeat of the cost that we end up with. So let’s write, and apply, a little awk script to find all the “Best so far” lines and then print the line two above. Here’s a suitable script, followed by a call to use it:

{
        if (index($0,"Best so far") != 0) {print NR m2}
        m2 = m1; m1 = $0;
}

awk -f cost.awk orcl_ora_25306.trc >temp.txt

There was a bit of a mess in the output – there are a couple of special cases (relating, in our trace file, to the inline views and the appearance of a “group by placement”) that cause irregular patterns to appear, but the script was effective for the critical 14 table join. And looking through the list of costs for the various permutations we find that almost all the options show a cost of about 206,000 – except for the last few in two of the five “permutation sets” that suddenly drop to costs of around 1,500 and 1,300. The very high starting cost explains why the optimizer was prepared to spend so much time trying to find a good path and why it kept working so hard until the cost dropped very sharply.

Side bar: I have an old note from OCIS (the precursor of the precursor of the precursor of MOS) that the optimizer will stop searching when the number of join orders tested * the number of "non-single-row" tables (according to the single table access path) * 0.3 is greater than the best cost so far. I even have a test script (run against 8.1.7.4, dated September 2002) that seems to demonstrate the formula. The formula may be terribly out of date by now and the rules of exactly how and when it applies may have changed – the model didn't seem to work when I ran it against 19.3 – but the principle probably still holds true.
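As a rough sketch of how that formula might have applied here (the numbers are purely illustrative): with, say, 10 "non-single-row" tables and a best cost stuck around 206,000, the cut-off would be roughly 206,000 / (10 * 0.3), i.e. about 68,000 join orders, so the search would never stop early; once the best cost dropped to around 1,300 the same calculation allows only about 430 join orders, which is much more in line with the few hundred permutations per set that we actually saw.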

At this point we might decide that we ought to look at the initial join order and at the join order where the cost dropped dramatically, and try to work out why Oracle picked such a bad starting join order, and what it was about the better join order that the optimizer had missed. This might allow us to recognise some error in the statistics for either the “bad” starting order or the “good” starting order and allow us to solve the problem by (e.g.) creating a column group or gather some specific statistics. We might simply decide that we’ll take a good join order and pass it to the optimizer through a /*+ leading() */ hint, or simply take the entire outline and attach it to the query through a faked SQL Profile (or embedded set of hints).

However, for the purposes of this exercise (and because sometimes you have to find a strategic solution rather than a “single statement” solution) I’m going to carry on working through mechanisms for dissecting the trace file without looking too closely at any of the fine detail.

The final “high-level” target I picked was to pin down why there were 5 sets of join orders. I had noticed something particular about the execution plan supplied – it showed several occurrences of the operation “VIEW PUSHED PREDICATE” so I wondered if this might be relevant. So I did a quick check near the start of the main body of the trace file for anything that might be a clue, and found the following just after the “QUERY BLOCK SIGNATURE”.

QUERY BLOCK SIGNATURE
---------------------
  signature(): NULL
***********************************
Cost-Based Join Predicate Push-down
***********************************
JPPD: Checking validity of push-down in query block SEL$6E5D879B (#4)
JPPD:   Checking validity of push-down from query block SEL$6E5D879B (#4) to query block SEL$C20BB4FE (#6)
Check Basic Validity for Non-Union View for query block SEL$C20BB4FE (#6)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$799AD133 (#3)
JPPD:   Checking validity of push-down from query block SEL$799AD133 (#3) to query block SEL$EFE55ECA (#7)
Check Basic Validity for Non-Union View for query block SEL$EFE55ECA (#7)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$C2AA4F6A (#2)
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$799AD133 (#3)
Check Basic Validity for Non-Union View for query block SEL$799AD133 (#3)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$6E5D879B (#4)
Check Basic Validity for Non-Union View for query block SEL$6E5D879B (#4)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$FC56C448 (#5)
Check Basic Validity for Non-Union View for query block SEL$FC56C448 (#5)
JPPD:     Passed validity checks
JPPD: JPPD:   Pushdown from query block SEL$C2AA4F6A (#2) passed validity checks.
Join-Predicate push-down on query block SEL$C2AA4F6A (#2)
JPPD: Using search type: linear
JPPD: Considering join predicate push-down
JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)

As you can see we are doing cost-based join-predicate pushdown, and there are three targets which are valid for the operation. Notice the line that says “using search type: linear”, and the suggestive “starting iteration 1” – let’s look for more lines with “Starting iteration”

[jonathan@linux01 big_trace]$ grep -n "Starting iteration" orcl_ora_25306.trc
9934:GBP: Starting iteration 1, state space = (20,21) : (0,0)
11529:GBP: Starting iteration 2, state space = (20,21) : (0,C)
11562:GBP: Starting iteration 3, state space = (20,21) : (F,0)
12479:GBP: Starting iteration 4, state space = (20,21) : (F,C)
12517:GBP: Starting iteration 1, state space = (18,19) : (0,0)
13860:GBP: Starting iteration 2, state space = (18,19) : (0,C)
13893:GBP: Starting iteration 3, state space = (18,19) : (F,0)
14587:GBP: Starting iteration 4, state space = (18,19) : (F,C)
14628:GBP: Starting iteration 1, state space = (16,17) : (0,0)
15972:GBP: Starting iteration 2, state space = (16,17) : (0,C)
16005:GBP: Starting iteration 3, state space = (16,17) : (F,0)
16700:GBP: Starting iteration 4, state space = (16,17) : (F,C)
16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)

There are four iterations for state space (3,4,5) – and look at the huge gaps between their “Starting iteration” lines. In fact, let’s go a little closer and combine their starting lines with the lines above where I searched for “Number of join permutations tried:”


16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622799:Number of join permutations tried: 374

622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759817:Number of join permutations tried: 673

1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750475:Number of join permutations tried: 674

2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
3586276:Number of join permutations tried: 571

4458608:Number of join permutations tried: 581

At this point my notes end and I don’t know where I was going with the investigation. I know that I suggested to the OP that the cost-based join predicate pushdown was having a huge impact on the optimization time and suggested he experiment with disabling the feature. (Parse time dropped dramatically, but query run-time went through the roof – so that proved a point, but wasn’t a useful strategy). I don’t know, however, what the fifth long series of permutations was for, so if I could find the trace file one of the things I’d do next would be to look at the detail a few lines before line 4,458,608 to see what triggered that part of the re-optimization. I’d also want to know whether the final execution plan came from the fifth series and could be reached without involving all the join predicate pushdown work, or whether it was a plan that was only going to appear after the optimizer had worked through all 4 iterations.

The final plan did involve all 3 pushed predicates (which looks like it might have been from iteration 4), so it might have been possible to find a generic strategy for forcing unconditional predicate pushing without doing all the expensive intermediate work.

Version 12c and beyond

That was then, and this is now. And something completely different might have appeared in 12c (or 19c) – but the one thing that is particularly helpful is that you can bet that every iteration of the JPPD state spaces would have produced a “TIMER:” line in the trace file, making it very easy to run grep -n “TIMER:” (or -nT as I recently discovered) against the trace file to pinpoint the issue very quickly.

Here’s an example from my “killer_parse.sql” query after setting “_fix_control”=’16923858:4′ (1e4 microseconds = 1/100th second) in an instance of 19c:


$ grep -nT TIMER or19_ora_21051.trc

16426  :TIMER:      bitmap access paths cpu: 0.104006 sec elapsed: 0.105076 sec
252758 :TIMER:     costing general plans cpu: 0.040666 sec elapsed: 0.040471 sec
309460 :TIMER:      bitmap access paths cpu: 0.079509 sec elapsed: 0.079074 sec
312584 :TIMER: CBQT OR expansion SEL$765CDFAA cpu: 10.474142 sec elapsed: 10.508788 sec
313974 :TIMER: Complex View Merging SEL$765CDFAA cpu: 1.475173 sec elapsed: 1.475418 sec
315716 :TIMER: Table Expansion SEL$765CDFAA cpu: 0.046262 sec elapsed: 0.046647 sec
316036 :TIMER: Star Transformation SEL$765CDFAA cpu: 0.029077 sec elapsed: 0.026912 sec
318207 :TIMER: Type Checking after CBQT SEL$765CDFAA cpu: 0.220506 sec elapsed: 0.219273 sec
318208 :TIMER: Cost-Based Transformations (Overall) SEL$765CDFAA cpu: 13.632516 sec elapsed: 13.666360 sec
328948 :TIMER:      bitmap access paths cpu: 0.093973 sec elapsed: 0.095008 sec
632935 :TIMER: Access Path Analysis (Final) SEL$765CDFAA cpu: 7.703016 sec elapsed: 7.755957 sec
633092 :TIMER: SQL Optimization (Overall) SEL$765CDFAA cpu: 21.539010 sec elapsed: 21.632012 sec

The closing 21.63 seconds of SQL Optimization (line 633,092) is largely the 7.756 seconds of final Access Path Analysis (line 632,935) plus the 13.666 seconds of Cost-Based Transformation time (line 318,208), and that 13.666 seconds is mostly the 1.475 seconds of Complex View Merging (line 313,974) plus the 10.509 seconds of CBQT OR expansion (line 312,584) – so let's try disabling OR expansion (alter session set "_no_or_expansion"=true;) and try again:


$ grep -nT TIMER or19_ora_22205.trc
14884  :TIMER:      bitmap access paths cpu: 0.062453 sec elapsed: 0.064501 sec
15228  :TIMER: Access Path Analysis (Final) SEL$1 cpu: 0.256751 sec elapsed: 0.262467 sec
15234  :TIMER: SQL Optimization (Overall) SEL$1 cpu: 0.264099 sec elapsed: 0.268183 sec

Not only was optimisation faster, the runtime was quicker too.

Warning – it’s not always that easy.

 

A day of conferences with the Swiss Oracle User Group

Yann Neuhaus - Sun, 2019-11-17 10:00
Introduction

I'm not usually that excited by all these events around Oracle technologies (and beyond), but they are always a good place to learn new things and, maybe most importantly, to discover new ways of thinking. And on that point, I was not disappointed.

Franck Pachot: serverless and distributed database

Franck talked about scaling out, which means avoiding monoliths. Most database servers today are exactly this kind of monolith, and he advised us to think in terms of microservices. That's not so easy for the database component, but it could certainly simplify the management of different modules across different developer teams. Scaling out also means getting rid of the old cluster technologies (think RAC) and instead adopting a "shared nothing" approach: no storage sharing, no network sharing, etc.
It also means database replication is needed, and the writes have to scale too, which is the more complicated part. Sharding is a key point for scaling out (put the associated data where the users reside).

I discovered the CAP theorem, a very interesting result that shows there is actually no ultimate solution. You need to choose your priority: Consistency and Availability, Availability and Partition Tolerance, or Consistency and Partition Tolerance. Just remember to keep your database infrastructure adapted to your needs: a Google-like infrastructure is probably nice, but do you really need the same?

Kamran Aghayer: Transition from DBA to data engineer

Times are changing. I've known that for several years, but now it's obvious: as a traditional DBA, I will soon be deprecated. Old-school DBA jobs will be replaced by a lot of new jobs: data architect, data engineer, data analyst, data scientist, machine learning engineer, AI engineer, …

Kamran focused on the Hadoop ecosystem and Spark, especially a case where he needed to archive data from Exadata to Hadoop (and he explained how Hadoop manages data through the HDFS filesystem and datanodes – a sort of ASM). He used a dedicated connector, a kind of wrapper based on external tables. Actually this is also what's inside Oracle's Big Data Appliance. This task was out of scope for a traditional DBA, as good knowledge of the data was needed. So, the traditional DBA is dead.

Stefan Oehrli – PDB isolation and security

Since Oracle announced the availability of 3 free PDBs with each container database, the interest in Multitenant has increased.

We had an overview of the top 10 security risks, all about privileges: privilege abuse, unauthorized privilege elevation, platform vulnerabilities, SQL injection, etc. If you're already in the cloud with PaaS or DBaaS, the risks are the same.

We were shown several ways to mitigate these risks:
– path_prefix: some kind of chroot for the PDB
– PDB_os_credential (still buggy, but…): concerns OS credentials and dbms_scheduler
– lockdown profiles: a tool for restricting database features like queuing, partitioning, Java OS access or altering the database. Restrictions work by inclusion or exclusion.

Paolo Kreth and Thomas Bauman: The role of the DBA in the era of Cloud, Multicloud and Autonomous Database

We had already heard today that the classic DBA will soon be dead, and now came the second bullet. The fact is that Oracle has worked hard on autonomous features over the last 20 years, and the way it was presented makes you realize it's clearly true. Who cares about extent management now?

But there is still hope. The DBA of tomorrow starts today. As the DBA role actually sits between the infrastructure team and the data scientists, there is a way to architect your career: keep a foot in the technical stuff, but become a champion in data analysis and machine learning.

Or focus on development with open source and the cloud. The DBA job can shift; don't miss this opportunity.

Nikitas Xenakis – MAA with 19c and GoldenGate 19c: a real-world case study

Hey! Finally, the DBA is not dead yet! Some projects still need technical skills and complex architectures. The presented project was driven by downtime costs, and for some kinds of business a serious downtime can kill the company. The customer concerned by this project cannot afford more than 1 hour of global downtime.

We had an introduction to MAA (standing for Maximum Availability Architecture – see the Oracle documentation for that).

You first need to estimate:
– the RPO: how much data you can afford to lose
– the RTO: how quickly you'll be up again
– the performance you expect after the downtime: because it matters

The presented infrastructure was composed of RHEL, RAC with Multitenant (1 PDB only), Active Data Guard and GoldenGate. The middleware was not from Oracle but was configured to work with Transparent Application Failover.

For sure, you still need several old-school DBAs to set up and manage this kind of infrastructure.

Luiza Nowak: Error when presenting your data

You can refer to the blog from Elisa USAI for more information.

For me, it was very surprising to discover how a presentation can be boring, confusing or miss the point just because of inappropriate slides. Be precise, be captivating, use graphics instead of sentences, and use those graphics well if you want your presentation to have the expected impact.

Julian Frey: Database cloning in a multitenant environment

Back to pure DBA stuff. A quick reminder of why we need to clone, and what we need to clone (data, metadata, partial data, refreshed data only, anonymised data, etc.). And nowadays, always with GDPR compliance in mind!

Before 12c, cloning was mainly done with these well-known tools: RMAN duplicate, Data Pump, GoldenGate, database links, storage cloning, or the embedded clone.pl script (I hadn't heard about that one before).

Starting from 12c, and only if you’re using multitenant, new convenient tools are available for cloning: PDB snapshot copy, snapshot carousel, refreshable copy, …

I discovered that you can duplicate a PDB without actually putting the source PDB in read only mode: you just need to put your source PDB in begin backup mode, copy the files, generate the metadata file and create the database with resetlogs. Nice feature.

You have to know that cloning a PDB is native to multitenant: a PDB is always a clone of something (at the very least, an empty PDB is created from PDB$SEED).

Note that snapshot copy of a PDB is limited to certain kinds of filesystems, the best known being NFS and ACFS. If you decide to go for multitenant without actually owning the option, don't forget to limit the maximum number of PDBs in your CDB settings; there is a parameter for that: MAX_PDBS. Another interesting feature is the possibility to create a PDB from a source PDB without the data (but the tablespaces and tables are created).
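As a hedged illustration of the cloning options mentioned above (the PDB names are made up, and depending on your storage setup you may also need FILE_NAME_CONVERT or OMF):

create pluggable database PDB2 from PDB1;                 -- full clone of an existing PDB
create pluggable database PDB3 from PDB1 snapshot copy;   -- snapshot copy, filesystem permitting
create pluggable database PDB4 from PDB1 no data;         -- tablespaces and tables, but no rows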

Finally, and against all odds, Data Pump is still a great tool for most cases, so you should keep considering it too.

Conclusion

This was a great event from great organizers, and while pure Oracle DBA is probably not a job that makes younger people dream, jobs dealing with data are not going to disappear in the near future.

The article A day of conferences with the Swiss Oracle User Group appeared first on the dbi services blog.

Alpine Linux, Oracle Java JDK and musl?! - why it does not work...

Dietrich Schroff - Sun, 2019-11-17 06:45
Some time ago I did some work with Alpine Linux (see here) and I was impressed by how tiny this Linux distro is and how fast it runs.


So, after nearly 6 years of running an aircraft noise measuring station (for dfld.de) on Ubuntu, I decided to switch to Alpine Linux.

This station runs Java software and needs RXTX, because the microphone is connected via USB and is read over /dev/ttyUSB0.

What is the problem with this setup?
  • RXTX needs a Java build that runs on glibc
  • Alpine Linux does not use glibc
If you are not aware of this problem, you get errors like

./java
ash: java: command not found

and this happens even if you are in the right directory and the java binary has the execute bit set.
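A quick way to see what is going on (a sketch; the paths are only illustrative) is to check which dynamic loader the binary expects and what Alpine actually provides:

$ file ./java
# a glibc-built binary typically reports an interpreter such as /lib64/ld-linux-x86-64.so.2
$ ls /lib/ld-musl-*
# Alpine only ships the musl loader (e.g. /lib/ld-musl-x86_64.so.1), so the glibc
# interpreter is missing and the shell reports the binary itself as "not found"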

Alpine Linux changed to musl:
There are several other libc implementations (take a look here). The musl homepage is https://www.musl-libc.org/, and a comparison with other libc implementations can be found at http://www.etalabs.net/compare_libcs.html.

There are some workarounds to get applications built with glibc running on Alpine Linux, but I did not manage to get my aircraft noise measuring station running. I switched to Debian instead, because I needed a 32-bit Linux for my very old UMPC...


Library Cache Stats

Jonathan Lewis - Sun, 2019-11-17 03:36

In response to a comment that one of my notes references a call to a package "snap_libcache", I've posted this version of the SQL that can be run by SYS to create the package, with a public synonym, and privileges granted to public to execute it. The package doesn't report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode of an index value to convert indx numbers into names).

rem
rem     Script:         snap_11_libcache.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2001 (updated for 11g)
rem     Purpose:        Package to get snapshot start and delta of library cache stats
rem
rem     Notes
rem             Lots of changes needed by 11.2.x.x where x$kglst holds
rem             two types - TYPE (107) and NAMESPACE (84) - but no
rem             longer needs a complex decode.
rem
rem             Has to be run by SYS to create the package
rem
rem     Usage:
rem             set serveroutput on size 1000000 format wrapped
rem             set linesize 144
rem             set trimspool on
rem             execute snap_libcache.start_snap
rem             -- do something
rem             execute snap_libcache.end_snap
rem

create or replace package snap_libcache as
        procedure start_snap;
        procedure end_snap;
end;
/

create or replace package body snap_libcache as

cursor c1 is
        select
                indx,
                kglsttyp        lib_type,
                kglstdsc        name,
                kglstget        gets,
                kglstght        get_hits,
                kglstpin        pins,
                kglstpht        pin_hits,
                kglstrld        reloads,
                kglstinv        invalidations,
                kglstlrq        dlm_lock_requests,
                kglstprq        dlm_pin_requests,
--              kglstprl        dlm_pin_releases,
--              kglstirq        dlm_invalidation_requests,
                kglstmiv        dlm_invalidations
        from x$kglst
        ;

type w_type1 is table of c1%rowtype index by binary_integer;
w_list1         w_type1;
w_empty_list    w_type1;

m_start_time    date;
m_start_flag    char(1);
m_end_time      date;

procedure start_snap is
begin

        m_start_time := sysdate;
        m_start_flag := 'U';
        w_list1 := w_empty_list;

        for r in c1 loop
                w_list1(r.indx).gets := r.gets;
                w_list1(r.indx).get_hits := r.get_hits;
                w_list1(r.indx).pins := r.pins;
                w_list1(r.indx).pin_hits := r.pin_hits;
                w_list1(r.indx).reloads := r.reloads;
                w_list1(r.indx).invalidations := r.invalidations;
        end loop;

end start_snap;

procedure end_snap is
begin

        m_end_time := sysdate;

        dbms_output.put_line('---------------------------------');
        dbms_output.put_line('Library Cache - ' ||
                to_char(m_end_time,'dd-Mon hh24:mi:ss')
        );

        if m_start_flag = 'U' then
                dbms_output.put_line('Interval:- ' ||
                        trunc(86400 * (m_end_time - m_start_time)) ||
                        ' seconds'
                );
        else
                dbms_output.put_line('Since Startup:- ' ||
                        to_char(m_start_time,'dd-Mon hh24:mi:ss')
                );
        end if;

        dbms_output.put_line('---------------------------------');

        dbms_output.put_line(
                rpad('Type',10) ||
                rpad('Description',41) ||
                lpad('Gets',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Pins',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Invalidations',14) ||
                lpad('Reloads',10)
        );

        dbms_output.put_line(
                rpad('-----',10) ||
                rpad('-----',41) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('-------------',14) ||
                lpad('------',10)
        );

        for r in c1 loop
                if (not w_list1.exists(r.indx)) then
                        w_list1(r.indx).gets := 0;
                        w_list1(r.indx).get_hits := 0;
                        w_list1(r.indx).pins := 0;
                        w_list1(r.indx).pin_hits := 0;
                        w_list1(r.indx).invalidations := 0;
                        w_list1(r.indx).reloads := 0;
                end if;

                if (
                           (w_list1(r.indx).gets != r.gets)
                        or (w_list1(r.indx).get_hits != r.get_hits)
                        or (w_list1(r.indx).pins != r.pins)
                        or (w_list1(r.indx).pin_hits != r.pin_hits)
                        or (w_list1(r.indx).invalidations != r.invalidations)
                        or (w_list1(r.indx).reloads != r.reloads)
                ) then

                        dbms_output.put(rpad(substr(r.lib_type,1,10),10));
                        dbms_output.put(rpad(substr(r.name,1,41),41));
                        dbms_output.put(to_char(
                                r.gets - w_list1(r.indx).gets,
                                '999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.get_hits - w_list1(r.indx).get_hits,
                                '999,999,990'));
                        dbms_output.put(to_char(
                                (r.get_hits - w_list1(r.indx).get_hits)/
                                greatest(
                                        r.gets - w_list1(r.indx).gets,
                                        1
                                ),
                                '999.0'));
                        dbms_output.put(to_char(
                                r.pins - w_list1(r.indx).pins,
                                '999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.pin_hits - w_list1(r.indx).pin_hits,
                                '999,999,990'));
                        dbms_output.put(to_char(
                                (r.pin_hits - w_list1(r.indx).pin_hits)/
                                greatest(
                                        r.pins - w_list1(r.indx).pins,
                                        1
                                ),
                                '999.0'));
                        dbms_output.put(to_char(
                                r.invalidations - w_list1(r.indx).invalidations,
                                '9,999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.reloads - w_list1(r.indx).reloads,
                                '9,999,990')
                        );
                        dbms_output.new_line;
                end if;

        end loop;

end end_snap;

begin
        select
                startup_time, 'S'
        into
                m_start_time, m_start_flag
        from
                v$instance;

end snap_libcache;
/

drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;

You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.
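If you want to see the two classes for yourself, a quick check as SYS is the query below (the row counts mentioned in the script header are for 11.2 and will vary by version):

select kglsttyp stat_class, count(*) stat_rows
from x$kglst
group by kglsttyp
order by kglsttyp
;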

PostgreSQL 12 : Setting Up Streaming Replication

Yann Neuhaus - Sat, 2019-11-16 11:29

PostgreSQL 12 was released a few months ago. When setting up replication, there is no longer a recovery.conf file in the PGDATA. Indeed, all the parameters that used to live in recovery.conf should now be in the postgresql.conf file, and in the cluster data directory of the standby server there should be a file named standby.signal to trigger the standby mode.
In this blog I am simply building streaming replication between two servers to show these changes. The configuration we are using is:
Primary server dbi-pg-essentials : 192.168.56.101
Standby server dbi-pg-essentials-2 : 192.168.56.102

The primary server is up and running on dbi-pg-essentials server.

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12] pg12

********* dbi services Ltd. *********
                  STATUS  : OPEN
         ARCHIVE_COMMAND  : test ! -f /u99/pgdata/12/archived_wal/%f && cp %p /u99/pgdata/12/archived_wal/%f
            ARCHIVE_MODE  : on
    EFFECTIVE_CACHE_SIZE  : 4096MB
                   FSYNC  : on
          SHARED_BUFFERS  : 128MB
      SYNCHRONOUS_COMMIT  : on
                WORK_MEM  : 4MB
              IS_STANDBY  : false
*************************************

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]

Step 1 : Prepare the user for the replication on the primary server
For streaming replication we need a user to read the WAL stream. We can do this with a superuser, but it is not required; we will create a user with the REPLICATION and LOGIN privileges. Contrary to the SUPERUSER privilege, the REPLICATION privilege gives very high permissions but does not allow modifying any data.
Here we will create a user named repliuser

postgres=# create user repliuser with password 'postgres'  replication ;
CREATE ROLE
postgres=#

Step 2 : Prepare the authentication on the primary server
The user used for the replication should be allowed to connect for replication. We therefore need to adjust the pg_hba.conf file on the two servers.

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12] grep repliuser pg_hba.conf
host    replication     repliuser        192.168.56.101/32        md5
host    replication     repliuser        192.168.56.102/32        md5
postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]
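A change to pg_hba.conf does not need a restart; a reload is enough for it to take effect. For example, on the primary:

postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)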

Step 3 : Create a replication slot on the primary server
Replication slots provide an automated way to ensure that the master does not remove WAL segments until they have been received by all standbys, and that the master does not remove rows which could cause a recovery conflict even when the standby is disconnected.

psql (12.1 dbi services build)
Type "help" for help.

postgres=# SELECT * FROM pg_create_physical_replication_slot('pg_slot_1');
 slot_name | lsn
-----------+-----
 pg_slot_1 |
(1 row)

postgres=#
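The slot can be checked at any time from pg_replication_slots; it shows as inactive until the standby connects to it. For example:

postgres=# select slot_name, slot_type, active from pg_replication_slots;
 slot_name | slot_type | active
-----------+-----------+--------
 pg_slot_1 | physical  | f
(1 row)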

Step 4 : Do a backup of the primary database and restore it on the standby
From the standby server launch the following command

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pg_basebackup -h 192.168.56.101 -D /u02/pgdata/12/PG1 --wal-method=fetch -U repliuser

Step 5 : set the primary connection info for the streaming on standby side
The host name and port number of the primary, the connection user name, and the password are specified in primary_conninfo. Here there is a little change: as there is no longer a recovery.conf file, primary_conninfo should now be specified in postgresql.conf.

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] grep primary postgresql.conf
primary_conninfo = 'host=192.168.56.101 port=5432 user=repliuser password=postgres'
primary_slot_name = 'pg_slot_1'                 # replication slot on sending server

Step 6 : Create the standby.signal file on standby server
In the cluster data directory of the standby, create a file standby.signal

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pwd
/u02/pgdata/12/PG1
postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] touch standby.signal

Step 7 : Then start the standby cluster

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pg_ctl start

If everything is fine, you should find lines like the following in the server log:

2019-11-16 17:41:21.552 CET [1590] LOG:  database system is ready to accept read only connections
2019-11-16 17:41:21.612 CET [1596] LOG:  started streaming WAL from primary at 0/5000000 on timeline 1

As confirmed by the dbi DMK tool, the master is now streaming to the standby server:

********* dbi services Ltd. *********
                  STATUS  : OPEN
         ARCHIVE_COMMAND  : test ! -f /u99/pgdata/12/archived_wal/%f && cp %p /u99/pgdata/12/archived_wal/%f
            ARCHIVE_MODE  : on
    EFFECTIVE_CACHE_SIZE  : 4096MB
                   FSYNC  : on
          SHARED_BUFFERS  : 128MB
      SYNCHRONOUS_COMMIT  : on
                WORK_MEM  : 4MB
              IS_STANDBY  : false
               IS_MASTER  : YES, streaming to 192.168.56.102/32
*************************************

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]
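Independently of DMK, the streaming can also be checked on the primary by querying pg_stat_replication; in this setup the output should look similar to this:

postgres=# select client_addr, state, sync_state from pg_stat_replication;
  client_addr   |   state   | sync_state
----------------+-----------+------------
 192.168.56.102 | streaming | async
(1 row)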

The post PostgreSQL 12 : Setting Up Streaming Replication appeared first on Blog dbi services.

Elapsed time of Oracle Parallel Executions are not shown correctly in AWR

Yann Neuhaus - Fri, 2019-11-15 10:08

As the elapsed time (the time a task takes from start to end, often called wall-clock time) per execution of parallel queries is not shown correctly in AWR reports, I set up a test case to find a way to get an elapsed time closer to reality.

REMARK: To use AWR (Automatic Workload Repository) and ASH (Active Session History) as described in this Blog you need to have the Oracle Diagnostics Pack licensed.

I created a table t5 with 213K blocks:

SQL> select blocks from tabs where table_name='T5';
 
    BLOCKS
----------
    213064

In addition, I enabled Linux IO throttling with 300 IOs/sec through a cgroup on my device sdb to ensure the parallel statements take a couple of seconds to run:

[root@19c ~]# CONFIG_BLK_CGROUP=y
[root@19c ~]# CONFIG_BLK_DEV_THROTTLING=y
[root@19c ~]# echo "8:16 300" > /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device

After that I ran my test:

SQL> select sysdate from dual;
 
SYSDATE
-------------------
14.11.2019 14:03:51
 
SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.
 
SQL> set timing on
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.63
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.62
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.84
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.73
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.63
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.74
SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.

Please note the elapsed time of about 5.7 seconds per execution.

The AWR-report shows the following in the “SQL ordered by Elapsed time”-section:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
            67.3              6         11.22   94.5   37.4   61.3 04r3647p2g7qu
Module: SQL*Plus
select /*+ parallel(t5 2) full(t5) */ count(*) from t5

I.e. 11.22 seconds on average per execution. However, as we can see above, the real average execution time is around 5.7 seconds. The reason for the inflated elapsed time per execution is that the elapsed time of the parallel slaves is added to the total even though the processes worked in parallel. Thanks to the (very useful) column SQL_EXEC_ID we can get the sum of the elapsed times per execution from ASH:

SQL> break on report
SQL> compute avg of secs_db_time on report
SQL> select sql_exec_id, qc_session_id, qc_session_serial#, count(*) secs_db_time from v$active_session_history
  2  where sql_id='04r3647p2g7qu' and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
  3  group by sql_exec_id, qc_session_id, qc_session_serial#
  4  order by 1;
 
SQL_EXEC_ID QC_SESSION_ID QC_SESSION_SERIAL# SECS_DB_TIME
----------- ------------- ------------------ ------------
   16777216	      237                  16626           12
   16777217	      237                  16626           12
   16777218	      237                  16626           10
   16777219	      237                  16626           12
   16777220	      237                  16626           10
   16777221	      237                  16626           10
                                             ------------
avg                                                    11
 
6 rows selected.

I.e. the 11 secs correspond to the 11.22 secs in the AWR-report.

How do we get the real elapsed time for the parallel queries? If the queries take a couple of seconds we can get the approximate time from ASH as well by subtracting the sample_time at the beginning from the sample_time at the end of each execution (SQL_EXEC_ID):

SQL> select sql_exec_id, extract (second from (max(sample_time)-min(sample_time))) secs_elapsed 
  2  from v$active_session_history
  3  where sql_id='04r3647p2g7qu'
  4  and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
  5  group by sql_exec_id
  6  order by 1;
 
SQL_EXEC_ID SECS_ELAPSED
----------- ------------
   16777216         5.12
   16777217        5.104
   16777218         4.16
   16777219        5.118
   16777220        4.104
   16777221        4.171
 
6 rows selected.

I.e. those numbers reflect the real execution time much better.

REMARK: If the queries take minutes (or hours) to run then you have to extract the minutes (and hours) as well, of course. See also the example at the end of this blog.
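For completeness, here is a variant of the ASH query used above that converts the whole interval to seconds by extracting hours, minutes and seconds and summing them up:

select sql_exec_id,
       extract(hour   from (max(sample_time)-min(sample_time)))*3600 +
       extract(minute from (max(sample_time)-min(sample_time)))*60   +
       extract(second from (max(sample_time)-min(sample_time))) secs_elapsed
from v$active_session_history
where sql_id='04r3647p2g7qu'
and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
group by sql_exec_id
order by 1;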

The info in V$SQL is also not very helpful:

SQL> set lines 200 pages 999
SQL> select child_number, plan_hash_value, elapsed_time/1000000 elapsed_secs, 
  2  executions, px_servers_executions, last_active_time 
  3  from v$sql where sql_id='04r3647p2g7qu';
 
CHILD_NUMBER PLAN_HASH_VALUE ELAPSED_SECS EXECUTIONS PX_SERVERS_EXECUTIONS LAST_ACTIVE_TIME
------------ --------------- ------------ ---------- --------------------- -------------------
           0      2747857355    67.346941          6                    12 14.11.2019 14:05:17

I.e. for the QC we have the column executions > 0 and for the parallel slaves we have px_servers_executions > 0. You may actually get different child cursors for the Query Coordinator and the slaves.

So in theory we should be able to do something like:

SQL> select child_number, (sum(elapsed_time)/sum(executions))/1000000 elapsed_time_per_exec_secs 
  2  from v$sql where sql_id='04r3647p2g7qu' group by child_number;
 
CHILD_NUMBER ELAPSED_TIME_PER_EXEC_SECS
------------ --------------------------
           0                 11.2244902

Here we do see the number from the AWR again.

So in future, be careful when checking the elapsed time per execution of statements that ran with parallel slaves. The number will be too high in AWR and V$SQL, and further analysis is necessary to get the real elapsed time per execution.

REMARK: As the numbers in AWR come from e.g. dba_hist_sqlstat, the following query provides “wrong” output for parallel executions as well:

SQL> column begin_interval_time format a32
SQL> column end_interval_time format a32
SQL> select begin_interval_time, end_interval_time, ELAPSED_TIME_DELTA/1000000 elapsed_time_secs, 
  2  (ELAPSED_TIME_DELTA/EXECUTIONS_DELTA)/1000000 elapsed_per_exec_secs
  3  from dba_hist_snapshot snap, dba_hist_sqlstat sql 
  4  where snap.snap_id=sql.snap_id and sql_id='04r3647p2g7qu' 
  5  and snap.BEGIN_INTERVAL_TIME > to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss');
 
BEGIN_INTERVAL_TIME              END_INTERVAL_TIME                ELAPSED_TIME_SECS ELAPSED_PER_EXEC_SECS
-------------------------------- -------------------------------- ----------------- ---------------------
14-NOV-19 02.04.00.176 PM        14-NOV-19 02.05.25.327 PM                67.346941            11.2244902

To take another example, I ran a query from Jonathan Lewis, taken from
https://jonathanlewis.wordpress.com/category/oracle/parallel-execution:

SQL> @jonathan
 
19348 rows selected.
 
Elapsed: 00:06:42.11

I.e. 402.11 seconds

AWR shows 500.79 seconds:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
           500.8              1        500.79   97.9   59.6   38.6 44v4ws3nzbnsd
Module: SQL*Plus
select /*+ parallel(t1 2) parallel(t2 2)
 leading(t1 t2) use_hash(t2) swa
p_join_inputs(t2) pq_distribute(t2 hash hash) ca
rdinality(t1,50000) */ t1.owner, t1.name, t1.typ

Let’s check ASH with the query I used above (this time including minutes):

select sql_exec_id, extract (minute from (max(sample_time)-min(sample_time))) minutes_elapsed,
extract (second from (max(sample_time)-min(sample_time))) secs_elapsed 
from v$active_session_history
where sql_id='44v4ws3nzbnsd'
group by sql_exec_id
order by 1;
 
SQL_EXEC_ID MINUTES_ELAPSED SECS_ELAPSED
----------- --------------- ------------
   16777216	              6       40.717

I.e. 06:40.72 which is close to the real elapsed time of 06:42.11

The post Elapsed time of Oracle Parallel Executions are not shown correctly in AWR appeared first on Blog dbi services.

Iconic South African Retailer Boosts Agility with Oracle

Oracle Press Releases - Thu, 2019-11-14 08:00
Press Release
Iconic South African Retailer Boosts Agility with Oracle
Retail powerhouse Cape Union Mart International goes to the cloud to accelerate growth

REDWOOD SHORES, Calif. and CAPE TOWN, South Africa—Nov 14, 2019

Outdoor and Fashion retailer and manufacturer, Cape Union Mart International Pty Ltd, Inc. has selected Oracle to modernize its retail operations. With the Oracle Retail Cloud, the company plans to fuel growth across all sales channels with better inventory visibility and more sophisticated merchandise assortments that keep shoppers coming back for more.

“This is a complex project, touching virtually every part of our business. The Oracle team has partnered with us from start to finish; building our trust and giving us an insight into what we can expect in the implementation of the transformational project – we look forward to working with them and rebuilding our retail IT landscape into a world class environment, taking Cape Union Mart to the next level,” said Grant De Waal-Dubla, Group IT Executive, Cape Union Mart.

Cape Union Mart strives to deliver what their customers need with the right product in the right store at the right time. Until now, the brand has managed its retail assortments with a talented team and a well-defined process in Excel spreadsheets. As Cape Union Mart continued to grow, they needed a better way to manage their operations. With Oracle Retail, the brand can fully embrace automated, systemized workflows driven by dashboards and end-to-end reporting with a common user interface. This will lead to more seamless fulfillment and accurate demand forecasts.

“By choosing Oracle, Cape Union Mart can focus on business objectives and results, not technology. As a cloud provider, we take great pride in building appropriate real-time integration across the Oracle portfolio so our customers can get the information and results they need quickly, whether that’s moving existing inventory or anticipating next season’s fashion trends and ensuring they are available for customers,” said Mike Webster, senior vice president and general manager, Oracle Retail.

Cape Union Mart International Pty Ltd will implement several solutions in the Oracle Retail modern platform including Oracle Retail Merchandising Cloud Service, Oracle Retail Allocation Cloud Service, Oracle Retail Pricing Cloud Services, Oracle Retail Invoice Matching Cloud Service, Oracle Retail Integration Cloud Services, Oracle Retail Merchandise Financial Planning Cloud Services, Oracle Retail Assortment and Item Planning Cloud Service, Oracle Retail Science Platform Cloud Services, Oracle Retail Demand Forecasting Cloud Service, Oracle Retail Store Inventory Operations Cloud Service, Oracle Middleware Cloud Services, Oracle Warehouse Management Cloud and Oracle Financials Cloud. Cape Union Mart has partnered with Oracle Retail Consulting for the implementation.

Contact Info
Kaitlin Ambrogio
Oracle PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Oracle Cloud Applications Achieves Department of Defense Impact Level 4 Provisional Authorization

Oracle Press Releases - Thu, 2019-11-14 07:00
Press Release
Oracle Cloud Applications Achieves Department of Defense Impact Level 4 Provisional Authorization

Redwood Shores, Calif.—Nov 14, 2019

Oracle today announced that Oracle Cloud Applications has achieved Impact Level 4 (IL4) Provisional Authorization from the Defense Information Systems Agency (DISA) and the U.S. Department of Defense (DoD). With IL4, Oracle can now offer its software-as-a-service (SaaS) cloud suite to additional government agencies within the DoD community. Since the authorization was granted, the DoD has selected Oracle Human Capital Management (HCM) Cloud to help transform its HR operations in support of 900,000 civilian employees.

All organizations need comprehensive and adaptable technology to stay ahead of changing business and technology demands. For federal government agencies in particular, it’s even more critical to have a reliable, highly secure solution to navigate time-sensitive workflows and make strategic mission decisions. To meet these demands, Oracle Cloud Applications enables customers to benefit from best-in-class functionality, robust security, high-end scalability, mission-critical performance, and strong integration capabilities.

“At Oracle, our focus is centered on our customers’ needs. For U.S. Federal and Department of Defense customers, they need best in class, agile, and secure software to run their operations – and we can deliver that,” said Mark Johnson, SVP, Oracle Public Sector. “With built-in support for Impact Level 4, the DoD community can now take advantage of Oracle Cloud Applications to break down silos, quickly and easily embrace the latest innovations, and improve user engagement, collaboration, and performance.”

“The Department of Defense awarded a contract to Oracle HCM Cloud to support its enterprise human resource portfolio. The award modernizes its existing civilian personnel business process functions to enable improved streamlined approaches in support of the workforce. The DoD's Defense Manpower Data Center is leading the implementation of the HCM Cloud, which replaces numerous legacy systems and is targeted for full deployment in mid 2020,” according to the DMDC Director, Michael Sorrento.

Oracle has been a long-standing strategic technology partner of the US government, including the Central Intelligence Agency (CIA), the first customer to use Oracle’s flagship database software 35 years ago. Today, more than 500 government organizations take advantage of Oracle’s industry-leading technologies and superior performance.

Contact Info
Celina Bertallee
Oracle
559-283-2425
celina.bertallee@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Celina Bertallee

  • 559-283-2425

Excel Average Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-11-13 10:33

Calculating average when you only have a few entries in your data is one thing but having to do the same with hundreds of data entries is a whole another story. Even using a calculator for finding the average of this many numbers can be highly time-consuming and to be honest, quite frustrating. After all, […]

The post Excel Average Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

The World Bee Project Works to Sustain Buzz with Oracle Cloud and AI

Oracle Press Releases - Wed, 2019-11-13 08:00
Blog
The World Bee Project Works to Sustain Buzz with Oracle Cloud and AI

By Guest Author, Oracle—Nov 13, 2019

The declining bee population is not just a problem for honey lovers; it’s a threat to the global food supply.

Oracle announced a partnership with The World Bee Project CIC in 2018, offering the use of its cloud storage and AI analytics tools to support the organization’s goals and innovations such as its BeeMark honey certification.

The World Bee Project is the first private organization to launch a global honeybee monitoring initiative. By establishing a globally coordinated monitoring program for honeybees, and eventually for key pollinator groups, it aims to inform and implement actions that improve pollinator habitats, create more sustainable ecosystems, and improve food security, nutrition, and livelihoods.

The World Bee Project Hive Network remotely collects data from varying environments through interconnected hives equipped with commercially available IoT sensors. The sensors combine colony-acoustics monitoring with other parameters such as brood temperature, humidity, hive weight, and apiary weather conditions. They also monitor and interpret the sound of a bee colony to assess colony behavior, strength, and health.

The World Bee Project Hive Network’s multiple local data sources provide a far richer view than any single data source to harness and enable global-scale computation to generate new insights into declining pollinator populations.

After the data has been validated by The World Bee Project database it can be fed into Oracle Cloud, which uses analytics tools including AI and data visualization to provide The World Bee Project with new insights into the relationship between bees and their varying environments. These new insights can be shared with smallholder farmers, scientists, researchers, governments, and other stakeholders.

“The partnership with Oracle will absolutely transform the scene as we can link AI with pollination and agricultural biodiversity,” said Sabiha Malik, founder and executive president of The World Bee Project CIC. “We have the potential to help transform the way the world grows food and to protect the livelihoods of hundreds of millions of smallholder farmers, but we depend entirely on stakeholders such as banks, agritech, insurance companies, and governments to sponsor and invest in our work so that we can begin to step toward fulfilling our mission.”

Oracle will be offering cloud computing technology and analytics tools to The World Bee Project to enable it to process data in collaboration with its science partner, the University of Reading, to enable science-based evidence to emerge.

Oracle is currently looking at funding models to support the expansion of The World Bee Project Hive Network to ensure a truly global view of the health of bee populations.

Watch The World Bee Project Video to Learn More


 

Read More Stories from Oracle Cloud

The World Bee Project is one of the thousands of innovative customers succeeding in the cloud. Read about others in Stories from Oracle Cloud: Business Successes

Oracle Apex Social Sign in

Kubilay Çilkara - Wed, 2019-11-13 03:27
In this post I want to show you how I used the Oracle Apex Social Sign in feature for my Oracle Apex app. Try it by visiting my web app beachmap.info.




Oracle Apex Social Sign in gives you the ability to use OAuth2 to authenticate and sign in users to your Oracle Apex apps using social media accounts such as Google and Facebook.

Google and Facebook are the prominent authentication methods currently available; others will probably follow. Social sign in is easy to use and you don't need to write code: all you have to do is obtain project credentials from, say, Google, pass them to the Oracle Apex framework, and put the sign-in button on the page which requires authentication, and the flow will kick in. I would say it is at most a three-step operation. Step-by-step instructions are available in the blog posts below.


Further reading:




Categories: DBA Blogs

nVision Bug in PeopleTools 8.55/8.56 Impacts Performance

David Kurtz - Tue, 2019-11-12 13:12
I have recently come across an interesting bug in nVision that has a significant performance impact on nVision reports in particular and can impact the database as a whole.
Problem nVision SQL
This is an example of the problematic SQL generated by nVision. The problem is that all of the SQL looks like this: there is never any group by clause, nor any grouping columns in the select clause in front of the SUM().
SELECT SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L2, PSTREESELECT10 L1
WHERE A.LEDGER='ACTUAL' AND A.FISCAL_YEAR=2018 AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 8
AND L2.SELECTOR_NUM=159077 AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.BUSINESS_UNIT='10000')
AND L1.SELECTOR_NUM=159075 AND A.DEPTID=L1.RANGE_FROM_10
AND A.CURRENCY_CD='GBP' AND A.STATISTICS_CODE=' '
Each query returns just a single row that populates a single cell in the report, and therefore a different SQL statement is generated and executed for every cell in the report. Therefore, more statements are parsed and executed, and more scans of the ledger indexes and look-ups of the ledger table are performed. This consumes more CPU and more logical I/O.
Normal nVision SQL
This is how I would expect normal nVision SQL to look. This example, although obfuscated, came from a real customer system. Note how the query is grouped by TREE_NODE_NUM from two of the tree selector tables, so this one query populates a whole block of cells.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_UKMGT'
AND A.FISCAL_YEAR=2018
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
OR A.DEPTID='B9150' OR A.DEPTID=' ')
AND L2.SELECTOR_NUM=10228
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231
AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345'
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
The Bug
This Oracle note details an nVision bug:
"UPTO SET2A-C Fixes - Details-only nPlosion not happening for Single Chart-field nPlosion Criteria.
And also encountered a performance issue when enabled details-only nPlosion for most of the row criteria in the same layout
Issue was introduced on build 8.55.19.
Condition: When most of the row filter criteria enabled Details-only nPlosion. This is solved in 8.55.22 & 8.56.07.
UPTO SET3 Fixes - Performance issue due to the SET2A-C fixes has solved but encountered new one. Performance issue when first chart-field is same for most of the row criteria in the same layout.
Issue was introduced on builds 8.55.22 & 8.56.07.
Condition: When most of the filter criteria’s first chart-field is same. The issue is solved in 8.55.25 & 8.56.10."
In summary:
  • Bug introduced in PeopleTools 8.55.19, fully resolved in 8.55.25.
  • Bug introduced in PeopleTools 8.56.07, fully resolved in 8.56.10.
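The PeopleTools release of an environment can be checked with a simple query; the release is recorded in the PSSTATUS table, for example:

select toolsrel from psstatus;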

Basic Replication -- 11 : Indexes on a Materialized View

Hemant K Chitale - Tue, 2019-11-12 08:46
A Materialized View is actually also a physical Table (by the same name) that is created and maintained to store the rows that the MV query is supposed to present.

Since it is also a Table, you can build custom Indexes on it.

Here, my Source Table has an Index on OBJECT_ID:

SQL> create table source_table_1
2 as select object_id, owner, object_name
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_1
2 add constraint source_table_1_pk
3 primary key (object_id)
4 /

Table altered.

SQL> create materialized view log on source_table_1;

Materialized view log created.

SQL>


I then build a Materialized View with an additional Index on it:

SQL> create materialized view mv_1
2 refresh fast on demand
3 as select object_id as obj_id, owner as obj_owner, object_name as obj_name
4 from source_table_1
5 /

Materialized view created.

SQL> create index mv_1_ndx_on_owner
2 on mv_1 (obj_owner)
3 /

Index created.

SQL>


Let's see if this Index is usable.

SQL> exec  dbms_stats.gather_table_stats('','MV_1');

PL/SQL procedure successfully completed.

SQL> explain plan for
2 select obj_owner, count(*)
3 from mv_1
4 where obj_owner like 'H%'
5 group by obj_owner
6 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2523122927

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 10 | 15 (0)| 00:00:01 |
| 1 | SORT GROUP BY NOSORT| | 2 | 10 | 15 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | MV_1_NDX_ON_OWNER | 5943 | 29715 | 15 (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - access("OBJ_OWNER" LIKE 'H%')
filter("OBJ_OWNER" LIKE 'H%')



Note how this Materialized View has a column called "OBJ_OWNER" (while the Source Table column is called "OWNER") and the Index ("MV_1_NDX_ON_OWNER") on this column is used.


You would also have noted that you can run DBMS_STATS.GATHER_TABLE_STATS on a Materialized View and its Indexes.
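If you want to be explicit about gathering the index statistics along with the table, the CASCADE parameter can be specified (the default, DBMS_STATS.AUTO_CASCADE, normally takes care of this anyway), for example:

SQL> exec dbms_stats.gather_table_stats('','MV_1', cascade=>true);

PL/SQL procedure successfully completed.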

However, it is NOT a good idea to define your own Unique Indexes (including Primary Key) on a Materialized View.  During the course of a Refresh, the MV may not be consistent and the Unique constraint may be violated.   See Oracle Support Document # 67424.1



Categories: DBA Blogs

Oracle Introduces Cloud Native Modern Monetization

Oracle Press Releases - Tue, 2019-11-12 07:00
Press Release
Oracle Introduces Cloud Native Modern Monetization
Cloud native deployment option gives market leaders the agility to embrace 5G, IoT and future digital business models

Redwood Shores, Calif.—Nov 12, 2019

Digital service providers are transforming their monetization systems to prepare for the upcoming demands of 5G and future digital services. Oracle Communications’ new cloud native deployment option for Billing and Revenue Management (BRM) addresses these demands by combining the features and extensibility of a proven, convergent charging system with the efficiency of cloud and DevOps agility.

Oracle Communications’ cloud native BRM deployment option provides a modern monetization solution to capitalize on the opportunities presented by today’s mobile, fixed and cable digital services. It supports any service, industry or partner-enabled business model and provides a foundation for 5G network slicing and edge monetization.

“As the telecommunications industry prepares itself to take advantage of 5G, architectural agility will be essential to monetize next-generation services quickly and efficiently,” added John Abraham, principal analyst, Analysys Mason. “With its cloud native compliant, microservices-based architecture framework, the latest version of Oracle’s Billing and Revenue Management solution is well positioned to accelerate CSPs’ ability to support emerging 5G-enabled use cases.”

Cloud native BRM enables internal IT teams to incorporate DevOps practices to more quickly design, test and deploy new services. Organizations can optimize their operations by seamlessly managing business growth with efficient scaling and simplified updates, and by taking advantage of deployment in any public or private cloud infrastructure environment. BRM further increases IT agility when deployed on Oracle’s next generation Cloud Infrastructure, which features autonomous capabilities, adaptive intelligence and machine learning cyber security.

“Service providers and enterprises are looking for agile solutions to quickly monetize 5G and IoT services,” said Jason Rutherford, senior vice president and general manager, Oracle Communications. “Cloud native BRM deployed on Oracle Cloud Infrastructure allows our customers to operate more efficiently, react quickly to competition and to pioneer new price plans and business models that capitalize on the digital revolution.”

Find out more about Oracle Communications Billing and Revenue Management, with modern monetization capabilities for 5G and the connected digital world. 

To learn more about Oracle Communications industry solutions, visit: Oracle Communications, LinkedIn, or join the conversation at Twitter @OracleComms.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Spain’s New York Burger Delivers Sizzling Service with Oracle

Oracle Press Releases - Tue, 2019-11-12 07:00
Press Release
Spain’s New York Burger Delivers Sizzling Service with Oracle
Restaurant sees 50 percent decrease in customer wait times with Oracle MICROS

Redwood Shores, Calif.—Nov 12, 2019

New York Burger set out to shake up the local food scene by bringing American style burgers and barbeque dishes to Madrid. Today, the fast growing chain is doing exactly that. To keep up with the pace of expansion while keeping customers happy, New York Burger has added Oracle MICROS Simphony Point of Sale (POS) System to its technology menu to seamlessly connect servers and the kitchen. With real-time order sharing, cooks can immediately start an order, reducing the time it takes orders to arrive to hungry diners. Since deploying the Oracle Cloud solution, the chain has realized a 50 percent decrease in customer wait-time across its five restaurants.

“As the business grew, we found our existing solution was not up to the challenge, and inefficiencies meant our customers were kept waiting,” said Pablo Colmenares, founder, New York Burger. “Oracle has definitely helped us to streamline our operations. It is simple and fast to use, and utilizing the product helped us become a smarter business. Oracle has a great global reputation, there’s a reason why the biggest brands in the world trust Oracle. Every strong tree needs strong roots and Oracle is our roots.”

Along with improving service efficiencies, Oracle MICROS Simphony POS System has helped New York Burger streamline menu management, gaining immediate data and reporting on their customers’ favorite menu items. These insights have been especially helpful as the restaurant chain has revamped its menu to better match customers’ preferences, removing items that were not popular and reducing food waste.  

New York Burger has also relied on Oracle’s solutions to further its green-friendly approach to operating its restaurants, enabling them to reduce waste and more closely align with its goal of being an environmentally-friendly restaurant. Oracle’s solution specifically helps management minimize excess costs, by reducing any unnecessary ingredient surplus.

“This innovative chain took a chance on bringing a new kind of cuisine to Madrid – to rave reviews. But today, the quality of the experience customers have at a restaurant must be in parallel with the quality of the food,” said Simon de Montfort Walker, senior vice president and general manager for Oracle Food and Beverage. “With Oracle, New York Burger is able to speed service and give servers more time with customers - delivering an unforgettable meal on both sides of the equation. And with better insights into tastes, trends and what’s selling well, New York Burger can reduce waste and conserve revenue while giving customers a menu that will keep them coming back again and again.”

Please view New York Burger’s video: New York Burger Delivers Joyful Food Sustainably with Oracle

Contact Info
Katie Barron
Oracle
+1-202-904-1138
katie.barron@oracle.com
Scott Porter
Oracle
+1-650-274-9519
scott.c.porter@oracle.com
About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1-202-904-1138

Scott Porter

  • +1-650-274-9519

Oracle Cloud's Competitive Advantage

Oracle Press Releases - Tue, 2019-11-12 06:00
Blog
Oracle Cloud's Competitive Advantage

By Steve Daheb, Senior Vice President, Oracle Cloud—Nov 12, 2019

I just got back from a press tour in New York, where the most common question I heard was: What's Oracle's competitive advantage in the cloud? I believe it's the completeness of our offering. Here's why.

The three main components of the cloud are the application, platform, and infrastructure layers. But most enterprises don't think about the cloud in terms of these silos. They take a single, holistic view of their problems and how to solve them.

Because we play in all layers of the cloud—and are continually adding integrations between the layers—we are in a unique position to help.

The application layer refers to software such as enterprise resource planning, human capital management, supply chain management, and customer engagement. These are core applications that enterprises rely on to run their businesses. Oracle is the established leader in this area, and we're continuing to innovate and differentiate by integrating artificial intelligence, blockchain, and other important new technologies into these applications.

These applications sit on the platform layer, which is powered by the Oracle Autonomous Database. We've taken our 40-plus years of expertise and combined it with advanced machine learning technologies to create the market's only self-driving and self-repairing database.

The platform layer is also where our analytics, security, and integration capabilities live. Analytics are helping businesses answer questions they couldn't answer before—and ask new questions they never would have thought of. And security, which used to be seen as an inhibitor to cloud adoption, is actually now a driver. Enterprises are saying, "Oracle's data center is going to be more secure than what we can manage on our own."

The application and platform layers rest upon Oracle's Generation 2 Cloud Infrastructure. Our compute, storage, and networking capabilities are purpose-built to run new types of workloads in a more secure and performant way than our competitors. We plan to open 20 Oracle Cloud data centers by the end of next year, which works out to one new data center every 23 days. And we're hiring 2,000 new people to support this infrastructure business.

Another differentiator for Oracle is our commitment to openness and interoperability in the cloud. As an example, we have a very strategic relationship with Microsoft. Joint customers can migrate to the cloud, build net new applications and even do things like run Microsoft analytics on top of an Oracle Database. We've also announced a collaboration with VMware to help customers run vSphere workloads in Oracle Cloud and to support Oracle software running on VMware.

We live in a hybrid and multicloud world. Oracle's comprehensive cloud offering, combined with our interoperability and multicloud support, helps customers achieve outcomes they simply couldn't with other vendors.

Watch Steve Daheb discuss the Oracle Cloud advantage on Cheddar and Yahoo Finance.
