Feed aggregator

Windocks – a different way to use SQL Server on Docker

Yann Neuhaus - Thu, 2018-05-24 13:14

I have spent some time over the past few months writing blog posts about SQL Server on Docker, and you likely noticed they mainly concerned SQL Server on Linux. But what about SQL Server on Docker and Windows? If you take a look at the Docker images provided by Microsoft, only the 2017 version is available and it is only intended for development and testing use. Versions prior to 2017 do not seem to be supported so far (but I don’t have the Microsoft plans on this topic in mind) and if you want to use containers with older SQL Server versions you have to start from scratch by yourself. This is not a hard task, but I remember a discussion with one of my customers about the maintenance of SQL Server images prior to 2017, and he told me he didn’t allocate resources to create and maintain such images by himself.

In addition, I recently presented, at different events including our dbi services internal event for customers, the internal project we are managing around our DMK maintenance module tool for SQL Server. To cut a long story short, this tool aims to provide smart maintenance capabilities for customer databases, including backups, indexes, statistics and so on.

Over time, the code grew and we had to support different versions of SQL Server, from 2008 to SQL Server 2017. At the same time, the number of scenarios we have to manage and test, driven by the new features shipped with service packs and cumulative updates, increased drastically. So we decided to move to Docker containers for several obvious reasons:

  • Containers are easy to provision for unit testing
  • Lower footprint on our local machine
  • We can easily share SQL images between members of our “development” team in a private registry
  • We will be able to integrate containers into a potential “future” CI pipeline

In our context, each developer has a local Docker engine installed on a Windows 10 Pro laptop, but for medium/large companies, internal rules may prevent installing such a product on each individual developer laptop. They will therefore likely prefer to provide a dedicated environment for developers that meets the internal rules while keeping the focus on flexibility.

In such a shared infrastructure, a main concern will probably be disk space, because of the potential size of the testing database(s) and the number of containers provisioned at the same time. Let’s say that several developers would each like to provision one SQL Server instance with a testing database attached to it, coming from production through a sanitized database pipeline. The total size of this database is big enough to raise a potential disk space issue, because each instantiated container will include its own image size (roughly 10GB with SQL Server containers on Windows) plus the size of the testing database – let’s say more than 50GB, and likely more in usual cases.

If you’re in one of the aforementioned scenarios, Windocks may be a good alternative.

During this month I had the opportunity to test the product in the context of our internal project, and after some experimentation I admit I was pleasantly surprised by some built-in capabilities such as:

  • Windows authentication mode supported
  • SQL Server database cloning capabilities
  • The ability to use both the Docker CLI and a Web UI (even if I’m in favor of the Docker CLI)
  • The ability to refresh a cloned database image from a differential backup

Before digging further into the above features, let’s take a few seconds to look at the Windocks architecture design.

[Image: Windocks architecture]

We may notice some differences here. With a traditional Docker architecture, containers run on a shared operating system kernel (either Windows or Linux), whereas Windocks is strictly an application-level solution and requires installing a SQL Server instance on the host that acts as the base instance for SQL Server containers. This is a fundamental change that provides some interesting advantages:

  • We don’t have to rebuild the containers to get OS & Framework patches through base image updates because containers rely only on the SQL Server base instance
  • We don’t break compatibility with Windows storage functionalities such as VSS and the SQL Writer service, if you rely on them with third-party backup tools
  • We benefit from the underlying security and configuration of the SQL Server base instance, meaning we may use Windows authentication for instance

Let’s now talk about one interesting feature shipped with Windocks: cloned databases. Here is a picture (from Windocks) that shows a high-level workflow of using SQL Server containers with cloned databases.

[Image: Windocks cloned database architecture]

The starting point is a full database backup or a snapshot, and Windocks will generate the corresponding VHD parent image from the backup media. Each SQL Server container generated afterwards will use a writable clone of this parent image, drastically reducing the disk footprint of each container (at least when the SQL Server container is generated). This is a common workflow that customers may implement for unit testing or for a CI/CD pipeline to refresh development environments. The production database may be big, and in this context the interest of using cloned databases becomes obvious. Another option would be to rely on mounted snapshot-based volumes from the storage provider, but at this stage I didn’t test it. Maybe next time!

To give a better picture of what is possible with cloned databases, let me show you one way to implement them in the context of our DMK maintenance tool project. The development workflow is as follows:

[Image: development workflow without Windocks]

We use SSDT and GitLab to manage our DMK maintenance tool sources, and we perform unit testing by provisioning one or several SQL Server containers depending on the version we want to validate at that moment. Furthermore, we developed testing scenarios based on the tSQLt framework that we run after provisioning the SQL Server containers. With such an architecture, we initially have to maintain images of the different SQL Server versions, and each time we want to create a container we have to attach one copy of the customized AdventureWorks_dbi database. Let’s say we want to work with 4 containers (SQL Server 2016) at a time. We must ensure we have sufficient disk space for 4 copies of this database (5GB x 4 = 20GB) plus the space required for each container (10GB x 4 = 40GB), so 60GB in total. The dbi_tools database is intended to stay very small (512MB up to 1GB) compared to the other testing components, which is why I didn’t include it in the math.

Let’s now apply Windocks to the above context; the new architecture becomes as follows:

[Image: development workflow with Windocks]

In this second scenario, we may include both the AdventureWorks_dbi and dbi_tools databases (including the tSQLt framework objects) in a VHD parent image. For the demo, I used a SQL Server 2016 instance installed on the host, which will be controlled by the Windocks service during container generation.

As I said previously, we may rely on the configuration and security of the base instance. Thus, I configured my security with both Windows authentication (the dbi-services\clustadmin domain account) for SQL Server sysadmins and SQL authentication for development purposes (the windock user).

In terms of configuration, the tSQLt framework requires enabling CLR at the server level, so I changed the configuration directly on the SQL Server base instance to allow all my containers to inherit this configuration change.
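For reference, here is a minimal sketch of that change as applied on the base instance (standard sp_configure syntax; the post does not show the exact statement used):

-- enable CLR integration on the SQL Server base instance (inherited by all containers)
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;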

SELECT 
	[name],
	principal_id,
	[type_desc]
FROM 
	sys.server_principals 
WHERE 
	[type] IN ('U', 'S')
	AND [name] NOT LIKE '##MS_%##'
	AND [name] NOT LIKE 'NT SERVICE%'

 

[Image: base instance security]

Here is the content of my dockerfile.

FROM mssql-2016
SETUPCLONING FULL AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.bak
SETUPCLONING FULL dbi_tools \\Win2012sql16ls\windock\dbi_tools.bak

 

You may notice some new commands here:

  • FROM mssql-2016 indicates we will use the SQL Server 2016 base instance as the base image.
  • SETUPCLONING FULL instructs Windocks to generate the VHD parent image that will include both the AdventureWorks_dbi and dbi_tools databases from the full backups located on a remote network path

Let’s go ahead and generate the corresponding SQL Server cloned Docker image with the -t flag:

docker build -t 2016withdmk C:\DMK\WithClone\BasedImage\

 

[Image: building the cloned database image]

The process may take some time to generate the VHD parent image, depending on the database sizes and the location (local fast disk, network share bandwidth, etc.).

[Image: cloned database image size]

As expected, because the VHD parent image is a full byte copy of the data, the size is basically the sum of both AdventureWorks_dbi and dbi_tools database sizes.

Compared to a traditional approach, the provisioning of the associated containers will be faster irrespective of the database size. Let’s create 2 containers from the image generated previously with the following docker commands:

PS C:\DMK\WithClone> docker run --name sql20161 -d 2016withdmk
PS C:\DMK\WithClone> docker run --name sql20162 -d 2016withdmk

 

Note that by default, Windocks will choose a random port within a range configured in node.conf (START_PORT / PORTS_PER_USER), unless you override the default behavior with the -p parameter.
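For example, a hypothetical third container pinned to a specific port could look like the line below (the single-port form of -p is an assumption on my side; check the Windocks documentation for the exact syntax):

PS C:\DMK\WithClone> docker run --name sql20163 -p 14433 -d 2016withdmk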

[Image: creating the containers]

Let’s get a picture of the existing containers. My 2 containers have been generated correctly from the 2016withdmk base image.

PS C:\DMK\WithClone> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
89344d99758b        2016withdmk         ""                  5 minutes ago       Started             1433/
4dfd1a198626        2016withdmk         ""                  4 minutes ago       Started             1434/

 

Let’s take another look at the storage side:

[Image: cloned database differential disks]

The interesting point here is that the two differential disks created during the creation of my 2 previous containers are very small (~70MB per container).

Let’s try to connect from SSMS to the new fresh containers:

[Image: SSMS connection to the containers]

It works and did you notice I was connected with my domain account? :)

The last topic I wanted to share with you is the Windocks capability to update a base image from differential backups. In a real context, you may need to refresh your environment frequently with recent data for developers, and depending on your full backup size it might be advantageous to deal with differential backups.

Let’s consider this process with my internal project environment. During the development process, we already had to extend the schema of our testing database for the features shipped with new versions, service packs or cumulative updates over time. Let’s say we have to add a new dbo.bigTransactionHistory_cci table to test columnstore index scenarios. We first add the concerned table, then we perform a differential backup, and finally we update the 2016withdmk base image with it (as sketched below). Obviously, in the context of my demo, the database size is likely not big enough to take full advantage of this feature, but I trust you to draw the comparison with a more realistic scenario.
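As an illustration, here is a minimal sketch of those two steps. The column list of the table is purely an assumption on my side, and the backup paths simply reuse those from the dockerfile shown later in this post:

-- hypothetical columnstore test table added to the testing database
USE AdventureWorks_dbi;
CREATE TABLE dbo.bigTransactionHistory_cci
(
    TransactionID   BIGINT   NOT NULL,
    ProductID       INT      NOT NULL,
    TransactionDate DATETIME NOT NULL,
    Quantity        INT      NOT NULL,
    ActualCost      MONEY    NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX cci_bigTransactionHistory_cci
    ON dbo.bigTransactionHistory_cci;

-- differential backups used afterwards to refresh the VHD parent image
BACKUP DATABASE AdventureWorks_dbi
    TO DISK = '\\Win2012sql16ls\windock\AdventureWorks_dbi_2008.DIFF'
    WITH DIFFERENTIAL, INIT;
BACKUP DATABASE dbi_tools
    TO DISK = '\\Win2012sql16ls\windock\dbi_tools.DIFF'
    WITH DIFFERENTIAL, INIT;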

The image I want to update is named 2016withdmk. Note the mssql-xxx images that are in fact images from SQL Server base instances installed on my host server.

PS C:\DMK\WithClone> docker images
REPOSITORY            TAG                 IMAGE ID            CREATED                  VIRTUAL SIZE
2016withdmk           none                b3a249ba-2cf        Less than a second ago   0 B
agitated_heisenberg   none                bbd0ce26-4bb        Less than a second ago   0 B
dotnet-4.5            none                dotnet-4.5          3 years ago              0 B
windows               none                windows             3 years ago              0 B
mssql-2008r2          none                mssql-2008r2        3 years ago              0 B
mssql-2016            none                mssql-2016          3 years ago              0 B
mssql-2014            none                mssql-2014          3 years ago              0 B

 

My new dockerfile content to update the 2016withdmk base image is as follows.

FROM 2016withdmk

SETUPCLONING DIFF AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.DIFF
SETUPCLONING DIFF dbi_tools \\Win2012sql16ls\windock\dbi_tools.DIFF

 

I used the differential backup of the AdventureWorks_dbi database with the SETUPCLONING DIFF command.

Let’s start updating the 2016withdmk base image with the following docker command (I tagged my new image with a 1.2 suffix):

docker build -t 2016withdmk1.2 C:\DMK\WithClone\DiffImage\

 

[Image: building the image from the differential backup]

Although using a differential backup may provide substantial productivity gains, keep in mind that updating a VHD parent image requires creating an additional VHD parent image, which becomes another full byte copy of the environment, as shown below:

[Image: parent VHD sizes after the differential update]

After running the same docker run command shown before, let’s get a picture of the containers running on my server. The container id 789ce49562d0 is the new fresh container with updated data (2016withdmk1.2 image).

PS C:\DMK\WithClone> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
89344d99758b        2016withdmk         ""                  About an hour ago   Started             1433/
4dfd1a198626        2016withdmk         ""                  About an hour ago   Started             1434/
789ce49562d0        2016withdmk1.2      ""                  2 minutes ago       Started             1436/

 

I can retrieve the new dbo.bigTransactionHistory_cci table after connecting to the concerned SQL Server instance. I’m now able to update and execute tSQLt scenarios to test columnstore index maintenance!

[Image: SSMS connection to the updated container]

 

In this blog post we’ve only scratched the surface of the capabilities and possibilities provided by Windocks. There are other interesting features, such as encrypted passwords in the dockerfile (as we may use with Docker Swarm for instance), the support of TDE databases, and lastly the ability to manage SQL Server on Linux. I will probably blog about them in the near future. Stay tuned!

 

 

 

 

The article Windocks – a different way to use SQL Server on Docker first appeared on the dbi services blog.

Postgres, the fsync() issue, and ‘pgio’ (the SLOB method for PostgreSQL)

Yann Neuhaus - Thu, 2018-05-24 08:05

That’s a long blog post title, which is actually just a good pretext to play with Kevin Closson’s SLOB method for PostgreSQL: pgio.
I use the beta version of pgio here. If you want to read more about it, you can start with https://kevinclosson.net/2018/05/22/sneak-preview-of-pgio-the-slob-method-for-postgressql-part-i-the-beta-pgio-readme-file/. If you are used to SLOB for Oracle (https://kevinclosson.net/slob/) you will quickly understand the ‘why’ and ‘how’ of pgio.

PostgreSQL’s fsync() surprise

You may have read about the fsync() issue. Postgres, from the beginning, has relied a lot on filesystem buffering to optimize I/O. It write()s to the data files but fsync()s them only at checkpoints. This is ok when everything goes well, because the writes since the last checkpoint are protected by Write Ahead Logging, where fsync() occurs for each write at commit (if you didn’t change the default parameters for WAL). But when a problem occurs, such as a power outage, some writes may be lost, or partially lost, and that’s not easy to detect at checkpoint time with fsync().
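If you want to have a look at the relevant defaults on your own instance, they are visible in pg_settings (just a quick query, nothing pgio-specific):

select name, setting from pg_settings
 where name in ('fsync','synchronous_commit','wal_sync_method','checkpoint_timeout');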

So, basically, there’s a risk of corruption and there are no easy ways to detect it.

You can read the details from https://lwn.net/Articles/752063/ and that’s not the subject of this post.

Most of the other databases open files with O_DSYNC, which means that the write() call will detect the error immediately. And the major ones do direct I/O anyway, because they have their own buffer cache and do not need the performance overhead and corruption risk of double buffering.

Why is this so hard to fix?

So why is it so hard to do the same with Postgres? Just because it was not initially designed to optimize I/O, and Postgres relied heavily on the OS filesystem for that. The database systems which sync at each write, and which can do direct I/O, have implemented many optimizations to reduce the overhead of disk latency at each write. They have their own buffer cache, with a background database writer which re-orders the writes in the optimal way. And they have multiblock writes for large contiguous writes which bypass the buffer cache.

However, you may have a storage system where write latency is minimal, and you may have an application where this overhead is not a big problem. This means that you should measure it in order to balance performance and prevention of corruption. And this is where the SLOB method is awesome: reliable and predictable metrics to measure IOPS.

pgio to the rescue

This is my first trial of pgio, in beta version. It couldn’t be easier. I just untarred it:

tar -xvf pgio-0.9.tar
cd pgio

I’ve set up pgio.conf with 4 schemas and 2 threads per schema:

UPDATE_PCT=10
RUN_TIME=300
NUM_SCHEMAS=4
NUM_THREADS=2
WORK_UNIT=255
UPDATE_WORK_UNIT=8
SCALE=200M
DBNAME=pgio
CONNECT_STRING="pgio"
CREATE_BASE_TABLE=TRUE

Because I want to test writes, I’ve set UPDATE_PCT so that 10% of calls will do an UPDATE. And I kept the default work unit to read 255 blocks and, for those 10% of updates, update 8 blocks only. I’ll run that with 2 threads per schema, which means 8 concurrent sessions. And they will run for 300 seconds.

In this test I didn’t want to set different values. I just want to see what happens to IOPS for a common workload of lots of reads and small changes. The scale is 200M here. My workload sessions will find their buffers in memory.

On each test, I’ve created the pgio database:

create tablespace pgio location '/u01/pgdata';
CREATE TABLESPACE
create database pgio tablespace pgio;
CREATE DATABASE

Then run the setup.sh to load data in those schemas:

Job info: Loading 200M scale into 4 schemas as per pgio.conf->NUM_SCHEMAS.
Batching info: Loading 2 schemas per batch as per pgio.conf->NUM_THREADS.
Base table loading time: 15 seconds.
Waiting for batch. Global schema count: 2. Elapsed: 0 seconds.
Waiting for batch. Global schema count: 3. Elapsed: 103 seconds.
Waiting for batch. Global schema count: 4. Elapsed: 177 seconds.
Waiting for batch. Global schema count: 4. Elapsed: 249 seconds.
 
Group data loading phase complete. Elapsed: 249 seconds.
List of relations
Schema | Name | Type | Owner | Size | Description
--------+-----------+-------+----------+--------+-------------
public | pgio1 | table | postgres | 200 MB |
public | pgio2 | table | postgres | 200 MB |
public | pgio3 | table | postgres | 200 MB |
public | pgio4 | table | postgres | 200 MB |
public | pgio_base | table | postgres | 29 MB |
(5 rows)

And then I’m ready to run the runit.sh

ext4 mount option

My tablespace is on an ext4 filesystem:

-bash-4.2$ df -HT /u01/pgdata
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb ext4 32G 1.2G 29G 4% /u01

I’ll run the same workload several times, changing only one mount option:

async All I/O to the filesystem should be done asynchronously. (See also the sync option.)
sync All I/O to the filesystem should be done synchronously. In case of media with limited number of write cycles (e.g. some flash drives) "sync" may cause life-cycle shortening.

Which means that some runs will run with /u01 mounted as:

/dev/sdb on /u01 type ext4 (rw,nosuid,nodev,relatime,sync,seclabel,data=ordered)

and some others will run with the default (async):

/dev/sdb on /u01 type ext4 (rw,nosuid,nodev,relatime,seclabel,data=ordered)

I did multiple runs and checked that the result is consistent among them. I’ll show only one result for each configuration.

Run it with async

Here is the output of one ‘runit.sh’ when /u01 was in async:

Date: Thu May 24 10:56:57 CEST 2018
Database connect string: "pgio".
Shared buffers: 128MB.
Testing 4 schemas with 2 thread(s) accessing 200M (25600 blocks) of each schema.
Running iostat, vmstat and mpstat on current host--in background.
Launching sessions. 4 schema(s) will be accessed by 2 thread(s) each.
pg_stat_database stats:
datname| blks_hit| blks_read|tup_returned|tup_fetched|tup_updated
BEFORE: pgio | 252209 | 118112 | 110420 | 6788 | 18
AFTER: pgio | 25189171 | 136972696 | 159128092 | 147250205 | 573216
DBNAME: pgio. 4 schemas, 2 threads(each). Run time: 300 seconds. RIOPS >456181< CACHE_HITS/s >83123<

This shows that, within those 5 minutes, I’ve fetched 147243417 tuples and updated 573198 ones.
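For the curious, those two figures are simply the pg_stat_database deltas divided by the 300-second run time:

RIOPS (physical reads/s) = (136972696 - 118112) / 300 ≈ 456181
CACHE_HITS/s             = (25189171 - 252209)  / 300 ≈ 83123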

pgio takes snapshots of iostat, vmstat and mpstat. Here is a sample after 1 minute of the run, showing that all CPUs are busy in user or kernel mode, but not waiting on I/O latency:

10:57:51 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:57:54 AM all 52.82 0.00 42.22 0.09 0.00 1.11 0.00 0.00 0.00 3.76
10:57:54 AM 0 54.11 0.00 40.75 0.00 0.00 1.37 0.00 0.00 0.00 3.77
10:57:54 AM 1 54.42 0.00 40.14 0.34 0.00 1.02 0.00 0.00 0.00 4.08
10:57:54 AM 2 51.19 0.00 43.34 0.34 0.00 0.68 0.00 0.00 0.00 4.44
10:57:54 AM 3 51.02 0.00 44.22 0.34 0.00 1.36 0.00 0.00 0.00 3.06
10:57:54 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:57:57 AM all 53.33 0.00 42.15 0.00 0.00 1.02 0.00 0.00 0.00 3.50
10:57:57 AM 0 53.95 0.00 42.27 0.00 0.00 0.69 0.00 0.00 0.00 3.09
10:57:57 AM 1 52.56 0.00 42.66 0.00 0.00 0.68 0.00 0.00 0.00 4.10
10:57:57 AM 2 54.27 0.00 40.27 0.00 0.00 1.37 0.00 0.00 0.00 4.10
10:57:57 AM 3 52.72 0.00 43.54 0.00 0.00 1.36 0.00 0.00 0.00 2.38
10:57:57 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:58:00 AM all 54.10 0.00 41.54 0.09 0.00 0.77 0.00 0.00 0.00 3.50
10:58:00 AM 0 55.14 0.00 39.38 0.34 0.00 1.03 0.00 0.00 0.00 4.11
10:58:00 AM 1 54.95 0.00 40.96 0.00 0.00 0.68 0.00 0.00 0.00 3.41
10:58:00 AM 2 54.11 0.00 41.10 0.00 0.00 0.68 0.00 0.00 0.00 4.11
10:58:00 AM 3 52.05 0.00 44.86 0.00 0.00 0.68 0.00 0.00 0.00 2.40
10:58:00 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle

Run it with sync

Here is the output of one ‘runit.sh’ when /u01 was in sync:

Date: Thu May 24 12:18:54 CEST 2018
Database connect string: "pgio".
Shared buffers: 128MB.
Testing 4 schemas with 2 thread(s) accessing 200M (25600 blocks) of each schema.
Running iostat, vmstat and mpstat on current host--in background.
Launching sessions. 4 schema(s) will be accessed by 2 thread(s) each.
pg_stat_database stats:
datname| blks_hit| blks_read|tup_returned|tup_fetched|tup_updated
BEFORE: pgio | 255169 | 119163 | 112734 | 6945 | 18
AFTER: pgio | 15040938 | 74598977 | 87775490 | 86742056 | 337889
DBNAME: pgio. 4 schemas, 2 threads(each). Run time: 300 seconds. RIOPS >248266< CACHE_HITS/s >49285<

This shows that, within those 5 minutes, I’ve fetched 86735111 tuples and updated 337871. So, basically, the IOPS have been divided by two in this example when waiting for each write to be synced to disk.

pgio takes snapshots of iostat, vmstat and mpstat. Here is a sample after 1 minute of the run, showing that the CPUs are now about 30% idle and spend roughly 11% of their time waiting on I/O completion:

12:19:51 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:19:54 PM all 29.95 0.00 27.79 10.88 0.00 1.26 0.00 0.00 0.00 30.13
12:19:54 PM 0 30.63 0.00 27.46 11.27 0.00 0.70 0.00 0.00 0.00 29.93
12:19:54 PM 1 30.07 0.00 27.62 12.24 0.00 0.70 0.00 0.00 0.00 29.37
12:19:54 PM 2 30.28 0.00 27.82 10.92 0.00 0.35 0.00 0.00 0.00 30.63
12:19:54 PM 3 28.02 0.00 28.02 8.56 0.39 3.89 0.00 0.00 0.00 31.13
12:19:54 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:19:57 PM all 30.10 0.00 27.92 11.24 0.00 1.00 0.00 0.00 0.00 29.74
12:19:57 PM 0 29.29 0.00 28.57 10.71 0.00 0.36 0.00 0.00 0.00 31.07
12:19:57 PM 1 30.88 0.00 28.07 11.93 0.00 0.35 0.00 0.00 0.00 28.77
12:19:57 PM 2 30.31 0.00 27.18 12.54 0.00 0.70 0.00 0.00 0.00 29.27
12:19:57 PM 3 30.43 0.00 27.67 9.88 0.00 2.77 0.00 0.00 0.00 29.25
12:19:57 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:20:00 PM all 29.51 0.00 27.00 10.76 0.00 1.08 0.00 0.00 0.00 31.66
12:20:00 PM 0 29.58 0.00 28.17 10.56 0.00 0.35 0.00 0.00 0.00 31.34
12:20:00 PM 1 29.72 0.00 26.22 12.24 0.00 0.70 0.00 0.00 0.00 31.12
12:20:00 PM 2 29.12 0.00 26.32 10.88 0.00 0.35 0.00 0.00 0.00 33.33
12:20:00 PM 3 29.34 0.00 27.80 8.88 0.00 3.09 0.00 0.00 0.00 30.89
12:20:00 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle

So what?

Currently, Postgres relies on the filesystem to optimize I/O, but there’s a risk of corruption in case of failure. We can force waiting for I/O completion with the ‘sync’ mount option of the filesystem, or even with file attributes (chattr -R +S) for ext4 or xfs, but there’s a performance penalty. The important thing is to measure this penalty, and this is where pgio is great: it measures the performance penalty with a workload that is customizable (amount of changes, amount of data, …) but also predictable (it does not depend on other parameters like an application benchmark does). When you know how being in ‘sync’ impacts your system, you can choose. And we can bet that future versions of Postgres will improve and offer ways to stay efficient without compromising the data at the first power outage.
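As a reminder, here is a minimal sketch of the two ways to force synchronous writes mentioned above, assuming the same /u01 ext4 mount and /u01/pgdata tablespace directory used in this post:

# remount the whole filesystem with synchronous I/O
mount -o remount,sync /u01
# or set the synchronous attribute on the tablespace directory only
chattr -R +S /u01/pgdata
lsattr -d /u01/pgdata   # the 'S' flag confirms the attribute is set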

 

The article Postgres, the fsync() issue, and ‘pgio’ (the SLOB method for PostgreSQL) first appeared on the dbi services blog.

XMLQuery

Tom Kyte - Thu, 2018-05-24 07:46
Hi Tom, I am trying to learn XQuery use with SQL but it looks very complicated. Can you please advise with some simple cases (I am not interested in XML generation but using XML and XQuery functions with relational data tables). Where to start? Wh...
Categories: DBA Blogs

Oracle Database - Grant/Revoke High Concurrency

Tom Kyte - Thu, 2018-05-24 07:46
We have an Oracle 10g release 2 database running on a production environment. It's experiencing a lot of concurrency, as Sql Developer 17.4 "Waits for past 1 hour" graph shows. When the database is behaving slow, we take a look at that graph, and ...
Categories: DBA Blogs

Can I use automatic List in subpartitions?

Tom Kyte - Thu, 2018-05-24 07:46
Dears, I have a table that contains two columns one for year and the other for month. I need to partition this table based on year and month, where year represent the partitions and under that the month represent that sub-partitions. What I need i...
Categories: DBA Blogs

Multiple block allocation to small table

Tom Kyte - Thu, 2018-05-24 07:46
Hi, I executed below query on my database and found given output: <code>select a.table_name, a.NUM_ROWS, a.AVG_ROW_LEN, a.LAST_ANALYZED, a.SAMPLE_SIZE, a.blocks from user_tables a where num_rows <10;</code> Output: <code>TABLE_NA...
Categories: DBA Blogs

Nested loop and hash join.

Tom Kyte - Thu, 2018-05-24 07:46
Hi Tom, Can you help me in understanding how optimizer decides which join ( hash or nested loop) it will use for joining. Also which is the driving table in nested loop. There are lot of confusing answers on this on internet, which one to rely...
Categories: DBA Blogs

Create a physical standby for 12c RAC

Tom Kyte - Thu, 2018-05-24 07:46
hi - this weekend we have a project that is going live. we will be importing data (about 1TB) into the database. after that we want to create the physical standby. what is the best, efficient and most proven way to create a physical standby database ...
Categories: DBA Blogs

How to gather statistics on a standard edition database

Tom Kyte - Thu, 2018-05-24 07:46
Hi, I'll like to gather some statistics on long running statements on a standard edition database. Can you please suggest the best way to gather stats on this statement? <code> BANNER ...
Categories: DBA Blogs

Limit and conversion very long IN list : WHERE x IN ( ,,, ...)

Tom Kyte - Thu, 2018-05-24 07:46
How many elements may be in the WHERE x IN (,,,) list ? I see 2 ways to overcome IN list limitation: 1) use x=el_1 OR x=el_2 OR x=el_3 OR ... 2) create temporary table , but another question arise here: why create table A( X INTEGER, Y...
Categories: DBA Blogs

ADWC – System and session settings (DWCS lockdown profile)

Yann Neuhaus - Thu, 2018-05-24 05:04

The Autonomous Data Warehouse Cloud service is a PaaS managed service where we have a PDB and an ADMIN user which has most of the system privileges. For example, we have the privilege to change initialization parameters:
SQL> select * from dba_sys_privs where grantee=user and privilege like 'ALTER S%';
 
GRANTEE PRIVILEGE ADMIN_OPTION COMMON INHERITED
------- --------- ------------ ------ ---------
ADMIN ALTER SESSION YES NO NO
ADMIN ALTER SYSTEM YES NO NO

Still, not everything is allowed, for several reasons: to ensure that we cannot break the Oracle-managed CDB, and to force us to use only the features allowed in the ‘autonomous’ service. This is enforced with a lockdown profile:
SQL> show parameter pdb_lockdown
 
NAME TYPE VALUE
------------ ------ -----
pdb_lockdown string DWCS

DWCS means Data Warehouse Cloud Service, which was the name of the Autonomous Data Warehouse Cloud service until Larry Ellison announced this self-driven-no-human trend under the marketing umbrella of ‘autonomous’.

The limitations are all documented, but I like to see them by myself, and in 18c we have a means to see the lockdown rules from the PDB itself, through V$LOCKDOWN_RULES.

ALTER SYSTEM

Basically, in this ADWC all ALTER SYSTEM statements are disallowed, and then a few exceptions are added for what we are allowed to do:

SQL> select * from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause_option is null;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SYSTEM DISABLE ALL 73
STATEMENT ALTER SYSTEM SET ENABLE COMMON 73
STATEMENT ALTER SYSTEM KILL SESSION ENABLE ALL 73

You can ignore what is enabled for COMMON users because we have no common user to connect to our PDB. We will see below which ALTER SYSTEM SET clauses are allowed. In addition to those, only KILL SESSION is allowed for ALTER SYSTEM.

Here is the detail about the parameters we can set:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='ENABLE';
 
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
APPROX_FOR_AGGREGATION STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace exact aggregation with approximate aggregation
APPROX_FOR_COUNT_DISTINCT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace count distinct with approx_count_distinct
APPROX_FOR_PERCENTILE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 none Replace percentile_* with approx_percentile
AWR_PDB_AUTOFLUSH_ENABLED STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE Enable/Disable AWR automatic PDB flushing
NLS_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICAN NLS language name
NLS_SORT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS linguistic definition name
NLS_TERRITORY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICA NLS territory name
NLS_CALENDAR STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS calendar system name
NLS_COMP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BINARY NLS comparison
NLS_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS local currency symbol
NLS_DATE_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DD-MON-YYYY HH24:MI:ss NLS Oracle date format
NLS_DATE_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS date language name
NLS_DUAL_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 Dual currency symbol
NLS_ISO_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS ISO currency territory name
NLS_LENGTH_SEMANTICS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BYTE create columns using byte or char semantics by default
NLS_NCHAR_CONV_EXCP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE NLS raise an exception instead of allowing implicit conversion
NLS_NUMERIC_CHARACTERS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS numeric characters
NLS_TIMESTAMP_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time stamp format
NLS_TIMESTAMP_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 timestamp with timezone format
NLS_TIME_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time format
NLS_TIME_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time with timezone format
OPTIMIZER_IGNORE_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables the embedded hints to be ignored
OPTIMIZER_IGNORE_PARALLEL_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables embedded parallel hints to be ignored
PLSCOPE_SETTINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 identifiers:all plscope_settings controls the compile time collection, cross reference, and storage of PL/SQL source code identifier and SQL statement data
PLSQL_CCFLAGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 PL/SQL ccflags
PLSQL_DEBUG STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE PL/SQL debug
PLSQL_OPTIMIZE_LEVEL STATEMENT ALTER SYSTEM SET ENABLE ALL 73 1 PL/SQL optimize level
PLSQL_WARNINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DISABLE:ALL PL/SQL compiler warnings settings

The APPROX_ ones, disabled by default, can be used to transparently use approximations for faster results.
The NLS_ ones can be used to set NLS defaults for our sessions.
The OPTIMIZER_IGNORE_ ones are new in 18c and are set by default here to ignore embedded hints. However, we can set them to false.
The PLSQL_ ones are the defaults for sessions, and I don’t understand why warnings are not enabled by default. Fortunately, we are able to change that at PDB level.
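As a quick sketch, here is the kind of change the lockdown profile lets the ADMIN user make at PDB level (the values are just examples, taken from the allowed parameters listed above):

ALTER SYSTEM SET plsql_warnings='ENABLE:ALL';
ALTER SYSTEM SET approx_for_aggregation=TRUE;
ALTER SYSTEM SET optimizer_ignore_hints=FALSE;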

There are also rules that disable some ALTER SYSTEM SET clauses. They are there for the common users only (who have ALTER SYSTEM SET enabled), but it is interesting to see what Oracle chose to set in the ADWC service, which cannot be changed in the PDB even by their common users:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='DISABLE';
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
DB_FILES STATEMENT ALTER SYSTEM SET DISABLE ALL 73 25 max allowable # db files
"_PDB_INHERIT_CFD" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_AUDIT_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_DIAG_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
MAX_IDLE_TIME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 60 maximum session idle time in minutes
PARALLEL_DEGREE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 AUTO policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO/ADAPTIVE)
_PARALLEL_CLUSTER_CACHE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 ADAPTIVE policy used for parallel execution on cluster(ADAPTIVE/CACHED)
_ENABLE_PARALLEL_DML STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enables or disables parallel dml
RESULT_CACHE_MODE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE result cache operator usage mode
RESULT_CACHE_MAX_RESULT STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 maximum result size as percent of cache size
RESOURCE_MANAGER_PLAN STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE:DWCS_PLAN resource mgr top plan
_CELL_OFFLOAD_VECTOR_GROUPBY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE enable SQL processing offload of vector group by
PARALLEL_MIN_DEGREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 CPU controls the minimum DOP computed by Auto DOP
_MAX_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 Maximum I/O size in bytes for sequential file accesses
_LDR_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation
_LDR_IO_SIZE2 STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation of EHCC with HWMB
_OPTIMIZER_GATHER_STATS_ON_LOAD_ALL STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online statistics gathering for nonempty segments
_OPTIMIZER_GATHER_STATS_ON_LOAD_HIST STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online histogram gathering for loads
_DATAPUMP_GATHER_STATS_ON_LOAD STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Gather table statistics during Data Pump load rather thanimporting statistics from the dump file. This should be set to TRUE in the lockdown profile in a DWCS environment.
_OPTIMIZER_ANSWERING_QUERY_USING_STATS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable statistics-based query transformation
_PX_XTGRANULE_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 128000 default size of a external table granule (in KB)
_OPTIMIZER_ALLOW_ALL_ACCESS_PATHS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE allow all access paths
_DATAPUMP_INHERIT_SVCNAME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Inherit and propagate service name throughout job
_DEFAULT_PCT_FREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 Default value of PCT_FREE enforced by DWCS lockdown

So, among the interesting ones: the result cache is forced for all results (RESULT_CACHE_MODE=FORCE), parallel DML is enabled for all sessions (but, as sketched below, we can disable it at session level), PCTFREE will always be 1 (_DEFAULT_PCT_FREE=1), and statistics are gathered during load (this is a 18c feature). And we cannot change that.
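For example, a session that does not want the forced parallel DML behavior can simply opt out, since this clause is enabled in the lockdown profile (see the ALTER SESSION rules below):

ALTER SESSION DISABLE PARALLEL DML;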

There are only a few additional clauses allowed for ALTER SESSION SET on top of those allowed for ALTER SYSTEM SET:

SQL> select * from v$lockdown_rules where rule in ('ALTER SESSION') and clause is not null and clause_option is not null
and (clause_option,status,users) not in (select clause_option,status,users from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause is not null and clause_option is not null)
;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION SET CONTAINER ENABLE ALL 73
STATEMENT ALTER SESSION SET CURRENT_SCHEMA ENABLE ALL 73
STATEMENT ALTER SESSION SET EDITION ENABLE ALL 73
STATEMENT ALTER SESSION SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES ENABLE ALL 73
STATEMENT ALTER SESSION SET DEFAULT_COLLATION ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL ENABLE ALL 73
STATEMENT ALTER SESSION SET TIME_ZONE ENABLE ALL 73

Besides the parameters, here is what we can do with ALTER SESSION:

SQL> select * from v$lockdown_rules where rule='ALTER SESSION' and clause_option is null;
 
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION DISABLE ALL 73
STATEMENT ALTER SESSION SET ENABLE COMMON 73
STATEMENT ALTER SESSION ADVISE COMMIT ENABLE ALL 73
STATEMENT ALTER SESSION CLOSE DATABASE LINK ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL QUERY ENABLE ALL 73

I’ll show the other rules (beyond the ALTER SYSTEM and ALTER SESSION statements) in a future post. Lockdown profiles are a great feature because they have very fine granularity and make it easy to document what is allowed or not. Oracle introduced them for their own usage in the public cloud. You can use the same on-premises for your private cloud. This requires the multitenant architecture, but the option is not mandatory.

 

The article ADWC – System and session settings (DWCS lockdown profile) first appeared on the dbi services blog.

EMEA Edge Conference 2018

Anthony Shorten - Wed, 2018-05-23 19:41

I will be attending the EMEA Oracle Utilities Edge Conference on 26-27 June 2018 in the Oracle London office. This year we are running an extended set of technical sessions around on-premise implementations and the Oracle Utilities Cloud Services. This forum is open to Oracle Utilities customers and Oracle Utilities partners.

The sessions mirror the technical sessions for the conference in the USA held earlier this year with the following topics:

  • Reducing Your Storage Costs Using Information Life-cycle Management – The increasing costs of maintaining storage and satisfying business data retention rules can be challenging. The Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
  • Integration using Inbound Web Services and REST with Oracle Utilities – Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used.
  • Optimizing Your Implementation – Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost Of Ownership.
  • Testing Your On-Premise and Cloud Implementations – Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on premise and in the cloud.
  • Securing Your Implementations – With the increase in cybersecurity and privacy concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
  • Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option – The Oracle Database In-Memory option allows both OLTP and analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.
  • Developing Extensions using Groovy – Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the way that Groovy can be used in building extensions. Note: This session will be very technical in nature.
  • Ask Us Anything Session – Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

Note: These sessions are not recorded, and materials are not distributed outside this forum.

This year we have decided to not only discuss capabilities but also give an idea of how we use those facilities in our own cloud implementations to reduce operating costs, which you can use as a template for on-premise and hybrid implementations.

See you there if you are attending.

If you wish to attend, contact your Oracle Utilities local sales representative for details of the forum and the registration process.

Running Code as SYS From Another User not SYSDBA

Pete Finnigan - Wed, 2018-05-23 13:06
I have been embroiled in a twitter thread today about the post I made in this blog yesterday around granting privileges to a user and who should do the granting. Patrick today asked a further question: How do you make....[Read More]

Posted by Pete On 22/05/18 At 08:42 PM

Categories: Security Blogs

Who Should Grant Object Rights?

Pete Finnigan - Wed, 2018-05-23 13:06
Patrick Jolliffe posted a question via a tweet back in April but due to personal health pressures with a close relative of mine I have not had the time to deal with much over the last few months. I did....[Read More]

Posted by Pete On 21/05/18 At 07:08 PM

Categories: Security Blogs

Plaintiffs' Law Firms to Pay Oracle $270,000 to Settle Sanctions

Oracle Press Releases - Wed, 2018-05-23 12:43
Press Release
Plaintiffs' Law Firms to Pay Oracle $270,000 to Settle Sanctions

Redwood Shores, Calif.—May 23, 2018

Today, four plaintiffs’ law firms agreed to pay Oracle Corporation $270,000 to avoid Oracle’s motion for sanctions over their misconduct in a lawsuit related to Oracle’s acquisition of Micros Systems, Inc. in Case No. 13-C-14-099672, in the Circuit Court for Howard County, Maryland.

In 2014, Brower Piven, Robbins Arroyo LLC, Weisslaw LLC, and Pomerantz, LLP, along with other firms, sued Micros and its directors, claiming that Micros’s shareholders were not adequately informed about the transaction and that Oracle’s offer price was too low. The firms also sued Oracle for aiding and abetting the Micros board, despite the fact that Oracle simply engaged in arm’s-length negotiations to obtain the best price possible. The trial court dismissed all claims with prejudice against all defendants, including Oracle, and then courts at every level of the state system rejected five subsequent motions for reconsideration and appeals brought by these firms, affirming the trial judge’s finding that “with respect to Oracle, the Plaintiffs have failed to allege any acts, alleged acts, by agents or employees or Oracle that were impermissible under the law. The mere fact that Oracle pursued the merger is insufficient.” The outrageous litigation conduct drew the sanctions motion by Oracle, which the four firms identified above agreed to settle rather than defend.

“This substantial monetary settlement reflects the strength of our sanctions motion for what Oracle believes was clearly a “strike” suit brought against Oracle. For more than a decade, plaintiffs’ lawyers have brought these suits, challenging legitimate public mergers, in order to line their own pockets at the expense of shareholders. We are grateful that the Maryland courts recognized that the claims had no merit, and we urge other public companies to challenge these baseless suits. Shareholders’ attorneys cannot simply claim and collect what is effectively an unwarranted tax on mergers,” said Dorian Daley, Oracle’s General Counsel.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, SCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Four Enhancements for Concurrent Processing in EBS 12.1 Now Available

Steven Chan - Wed, 2018-05-23 12:15

We have just released four new enhancements for Concurrent Processing in E-Business Suite 12.1.3:

  1. Storage Strategies for Log and Output File Locations: Create custom storage strategies for management of large numbers of concurrent processing log and output files. Customers can specify the strategy that best suits their particular needs. These strategies are called schemes.
  2. Output File Naming Strategies: The output file naming conventions are now based on USER.REQID and REQID.OUT. The old USER format is desupported.
  3. Timed Shutdown: Submit a normal, graceful shutdown and also specify a number of minutes after which an Abort command will be executed. If Concurrent Processing has not yet shut down after this number of minutes has passed, the graceful shutdown will be converted to an Abort, and all remaining Concurrent Processing processes will be aborted.
  4. 64-bit Java Support for the Output Post-Processor Service: A 64-bit Java Virtual Machine (JVM) is now supported for the Output Post Processor (OPP). This support allows for a larger heap size to be set, compared to the 2G heap size that 32-bit Java allows. This larger heap size will decrease out-of-memory errors. Note that the 64-bit JVM can be run for the OPP service only.

Details about these four new enhancements are published here:

You can download these enhancements here:


 

Categories: APPS Blogs

Users, schemas & privileges in #Exasol

The Oracle Instructor - Wed, 2018-05-23 09:18


In Exasol, a database user may own multiple schemas – or even none at all. I connect to my Community Edition to show that:

C:\Users\uh>cd \Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus

C:\Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus>exaplusx64 -c 192.168.56.101:8563 -u sys -p exasol -lang EN

EXAplus 6.0.8 (c) EXASOL AG

Wednesday, May 23, 2018 3:28:29 PM CEST
Connected to database EXAone as user sys.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> create user adam identified by adam;
EXA: create user adam identified by adam;

Rows affected: 0

SQL_EXA> grant dba to adam;
EXA: grant dba to adam;

Rows affected: 0

SQL_EXA> select user_name from exa_dba_users;
EXA: select user_name from exa_dba_users;

USER_NAME
------------------------------------------------------------
SYS
ADAM

2 rows in resultset.

SQL_EXA> select schema_owner,schema_name from exa_schemas;
EXA: select schema_owner,schema_name from exa_schemas;

SCHEMA_OWNER
-------------------------------------------------------------
SCHEMA_NAME
-------------------------------------------------------------
SYS
RETAIL

1 row in resultset.

SQL_EXA> connect adam/ADAM;

Wednesday, May 23, 2018 3:34:42 PM CEST
Connected to database EXAone as user adam.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> create table t1 (n number);
EXA: create table t1 (n number);
Error: [42000] no schema specified or opened or current schema has been dropped [line 1, column 27] (Session: 1601269589413551548)
SQL_EXA> open schema adam;
EXA: open schema adam;
Error: [42000] schema ADAM not found [line 1, column 13] (Session: 1601269589413551548)

Demo user adam has the DBA role granted but there is no adam schema yet. I need to create it first:

EXA: create schema adam;

Rows affected: 0

SQL_EXA> open schema adam;
EXA: open schema adam;

Rows affected: 0

SQL_EXA> create table t1 (n number);
EXA: create table t1 (n number);

Rows affected: 0

SQL_EXA> create schema adam2;
EXA: create schema adam2;

Rows affected: 0

SQL_EXA> create table adam2.t2 (n number);
EXA: create table adam2.t2 (n number);

Rows affected: 0

SQL_EXA> select table_schema,table_name from exa_user_tables;
EXA: select table_schema,table_name from exa_user_tables;

TABLE_SCHEMA
--------------------------------------------------------
TABLE_NAME
--------------------------------------------------------
ADAM
T1
ADAM2
T2

2 rows in resultset.

As you see, user adam now has two schemas with different tables in them. Now briefly to privileges:

SQL_EXA> create user fred identified by fred;
EXA: create user fred identified by fred;

Rows affected: 0

SQL_EXA> grant create session to fred;
EXA: grant create session to fred;

Rows affected: 0

SQL_EXA> grant select on adam.t1 to fred;
EXA: grant select on adam.t1 to fred;

Rows affected: 0

SQL_EXA> connect fred/FRED;

Wednesday, May 23, 2018 3:53:34 PM CEST
Connected to database EXAone as user fred.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> select * from adam.t1;
EXA: select * from adam.t1;

N
-----------------

0 rows in resultset.

SQL_EXA> select * from adam2.t2;
EXA: select * from adam2.t2;
Error: [42500] insufficient privileges: SELECT on table T2 (Session: 1601270776421928841)
SQL_EXA> connect adam/ADAM;

Wednesday, May 23, 2018 3:54:33 PM CEST
Connected to database EXAone as user adam.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> create role allonadam2;
EXA: create role allonadam2;

Rows affected: 0

SQL_EXA> grant all on adam2 to allonadam2;
EXA: grant all on adam2 to allonadam2;

Rows affected: 0

SQL_EXA> grant allonadam2 to fred;
EXA: grant allonadam2 to fred;

Rows affected: 0

SQL_EXA> connect fred/FRED;

Wednesday, May 23, 2018 3:55:54 PM CEST
Connected to database EXAone as user fred.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> select * from adam2.t2;
EXA: select * from adam2.t2;

N
-----------------

0 rows in resultset.

SQL_EXA> drop table adam2.t2;
EXA: drop table adam2.t2;
Error: [42500] insufficient privileges for dropping table (Session: 1601270923042332982)

That’s because ALL contains ALTER, DELETE, EXECUTE, INSERT, SELECT and UPDATE, but not DROP, which can be confirmed using EXA_DBA_OBJ_PRIVS.
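A quick way to verify it from the sys connection (a sketch; I assume EXA_DBA_OBJ_PRIVS exposes a GRANTEE column, so adapt the filter if your version differs):

SQL_EXA> select * from exa_dba_obj_privs where grantee='FRED';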

Categories: DBA Blogs

GDPR or AVG - regain control Part 2: Your Own Mail

Frank van Bortel - Wed, 2018-05-23 09:09
Create your own mail server. Drop Yahoo, Google or Microsoft mail – they are reading your mail. Debian, Postfix, Dovecot, MariaDB, rspamd. This is the second (and last) part of setting up your own internet tools in order to gain back control. The goal is to set up an email server (receive and send), secure it, and filter spam. Hardware considerations: I used an abandoned ASRock ION330, where I …

GDPR or AVG - regain control Part 1: Your Own Cloud

Frank van Bortel - Wed, 2018-05-23 06:08
Create your own Cloud. Replace Google or Dropbox, and gain control over your own data. Encrypt it, protect it, share your data only with who you want. ARM, Ubuntu and secured Nextcloud. This episode will be followed by an entry on email. For now, I settle for a relatively cheap ARM device (an ODroid XU4, to be precise), run it with Ubuntu, and install NextCloud. The choice for Ubuntu has …

Oracle Developer Cloud - New Continuous Integration Engine Deep Dive

OTN TechBlog - Wed, 2018-05-23 02:00

We introduced our new Build Engine in Oracle Developer Cloud in our April release. This new build engine now comes with the capability to define build pipelines visually. Read more about it in my previous blog.

In this blog we will delve deeper into some of the functionality of the Build Pipeline feature of the new CI engine in Oracle Developer Cloud.

Auto Start

Auto Start is an option given to the user while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new pipeline, which has a checkbox that needs to be checked to enable auto start: when one of the build jobs in the pipeline is executed externally, that triggers the execution of the rest of the build jobs in the pipeline.

The screenshot below shows the pipeline for a NodeJS application created with Oracle Developer Cloud Pipelines. The build jobs used in the pipeline are build-microservice, test-microservice and loadtest-microservice. In parallel to the microservice build sequence we have WiremockInstall and WiremockConfigure.

Scenarios When Auto Start is enabled for the Pipeline:

Scenario 1:

If we run the build-microservice build job externally, it will lead to the execution of the test-microservice and loadtest-microservice build jobs, in that order. Note that this does not trigger the execution of the WiremockInstall or WiremockConfigure build jobs, as they are part of a separate sequence. Please refer to the screenshot below, which shows the executed build jobs in green.

Scenario 2:

If we run the test-microservice build job externally, it will lead to the execution of the loadtest-microservice build job only. Please refer to the screenshot below, which shows the executed build jobs in green.

Scenario 3:

If we run the loadtest-microservice build job externally, it will not trigger any other build job execution in the pipeline, in either build sequence.

Exclusive Build

This enables users to prevent the pipeline's jobs from being built externally in parallel to the execution of the build pipeline. It is an option given to the user while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new pipeline, which has a checkbox that needs to be checked to ensure that the build jobs in the pipeline cannot be built in parallel to the pipeline execution.

When you run the pipeline, you will see the build jobs queued for execution in the Build History. In this case you will see two build jobs queued: one is build-microservice and the other is WiremockInstall, as they start the parallel sequences that are part of the same pipeline.

Now, if you try to run any of the build jobs in the pipeline, for example test-microservice, you will be given an error message, as shown in the screenshot below.

 

Pipeline Instances:

If you click the build pipeline name link in the Pipelines tab you will be able to see the pipeline instances. A pipeline instance is a single execution of the pipeline.

The screenshot below shows the pipeline instances with the timestamp of when each was executed. It shows whether the pipeline was auto started (hover on the status icon of the pipeline instance) due to an external execution of a build job, or shows the success status if all the build jobs of the pipeline were built successfully. It also shows, for that particular pipeline instance, the build jobs that executed successfully in green; the build jobs that did not get executed have a white background. You also get an option to cancel the pipeline while it is executing, and you may choose to delete the instance after the pipeline has run.

 

Conditional Build:

The visual build pipeline editor in Oracle Developer Cloud has a feature to support conditional builds. You will have to double-click the link connecting two build jobs and select one of the conditions given below:

Successful: To proceed to the next build job in the sequence if the previous one was a success.

Failed: To proceed to the next build job in the sequence if the previous one failed.

Test Failed: To proceed to the next build job in the sequence if the test failed in the previous build job in the pipeline.

 

Fork and Join:

Scenario 1: Fork

In this scenario, you have a build job such as build-microservice on which three other build jobs depend: “DockerBuild”, which builds a deployable Docker image for the code; “terraformBuild”, which builds the instance on Oracle Cloud Infrastructure and deploys the code artifact; and “ArtifactoryUpload”, which uploads the generated artifact to Artifactory. In this case you will be able to fork the build jobs as shown below.

 

Scenario 2: Join

If you have a build job test-microservice which is dependent on two other build jobs – build-microservice, which builds and deploys the application, and WiremockConfigure, which configures the service stub – then you need to create a join in the pipeline as shown in the screenshot below.

 

You can refer to the Build Pipeline documentation here.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle
