Feed aggregator

Basic Replication -- 11 : Indexes on a Materialized View

Hemant K Chitale - Tue, 2019-11-12 08:46
A Materialized View is actually also a physical Table (by the same name) that is created and maintained to store the rows that the MV query is supposed to present.

Since it is also a Table, you can build custom Indexes on it.

Here, my Source Table has an Index on OBJECT_ID:

SQL> create table source_table_1
2 as select object_id, owner, object_name
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_1
2 add constraint source_table_1_pk
3 primary key (object_id)
4 /

Table altered.

SQL> create materialized view log on source_table_1;

Materialized view log created.

SQL>


I then build a Materialized View with an additional Index on it:

SQL> create materialized view mv_1
2 refresh fast on demand
3 as select object_id as obj_id, owner as obj_owner, object_name as obj_name
4 from source_table_1
5 /

Materialized view created.

SQL> create index mv_1_ndx_on_owner
2 on mv_1 (obj_owner)
3 /

Index created.

SQL>


Let's see if this Index is usable.

SQL> exec  dbms_stats.gather_table_stats('','MV_1');

PL/SQL procedure successfully completed.

SQL> explain plan for
2 select obj_owner, count(*)
3 from mv_1
4 where obj_owner like 'H%'
5 group by obj_owner
6 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2523122927

------------------------------------------------------------------------------------------
| Id  | Operation            | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                   |     2 |    10 |    15   (0)| 00:00:01 |
|   1 | SORT GROUP BY NOSORT |                   |     2 |    10 |    15   (0)| 00:00:01 |
|*  2 |  INDEX RANGE SCAN    | MV_1_NDX_ON_OWNER |  5943 | 29715 |    15   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJ_OWNER" LIKE 'H%')
       filter("OBJ_OWNER" LIKE 'H%')



Note how this Materialized View has a column called "OBJ_OWNER" (while the Source Table column is called "OWNER") and the Index ("MV_1_NDX_ON_OWNER") on this column is used.


You would have also noted that you can run DBMS_STATS.GATHER_TABLE_STATS on a Materialized View and its Indexes.

However, it is NOT a good idea to define your own Unique Indexes (including a Primary Key) on a Materialized View. During the course of a Refresh, the MV may not be consistent and the Unique constraint may be violated. See Oracle Support Document # 67424.1
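
As a quick check (a minimal sketch in the spirit of the listings above; the inserted values are purely illustrative), a fast refresh after DML on the source keeps both the MV and the custom index in step:

insert into source_table_1
values (1000001, 'HEMANT', 'DUMMY_OBJECT');

commit;

exec dbms_mview.refresh('MV_1', method=>'F');

select obj_name from mv_1 where obj_owner = 'HEMANT';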



Categories: DBA Blogs

Joining two tables with time range

Tom Kyte - Mon, 2019-11-11 11:49
Dear AskTom-Team! I wonder whether it is possible to join two tables that have time ranges. E.g a table 'firmname' holds the name of a firm with two columns from_year and to_year that define the years the name is valid. Table 'address' holds the a...
Categories: DBA Blogs

TDE Encryption of local Oracle databases. KEK hosted on cloud service?

Tom Kyte - Mon, 2019-11-11 11:49
Hi, We want to encrypt some on-premise Oracle databases. If possible, we would like to avoid to use a physical HSM or to contract with a third party HSM cloud provider. Is this possible to store the KEK's in GCP or Azure, and to interface our lo...
Categories: DBA Blogs

Migration from oracle 6i and 11g to Oracle APEX

Tom Kyte - Mon, 2019-11-11 11:49
Dear all, i have oracle forms that are built on 6i and 11g and i want to migrate them to oracle apex, is it possible to create an application based on the oracle 6i form migrated, if so can you provide me with a document or a video showing the pro...
Categories: DBA Blogs

One way Encryption where no one can decrypt

Tom Kyte - Mon, 2019-11-11 11:49
Hi Tom, Kindly suggest some one way encryption for the data in table,where no one can decrypt it? Read few articles about this and not satisfied. Kindly help.
Categories: DBA Blogs

Connection retry when database services failover

Tom Kyte - Mon, 2019-11-11 11:49
Hello, Ask Tom, Team. I have a two-node RAC running a database service with one preferred instance and one available instance. It is supposed that RAC automatically failovers the services to available when preferred instance crashes. 1. Will ...
Categories: DBA Blogs

Best Practice using database services

Tom Kyte - Mon, 2019-11-11 11:49
Hello, Ask Tom Team. We have a two-node RAC running a database. The app connects to the database using scan name (best practice), so we are not using vip to connect to database directly. The tnsnames.ora is: DBPROD = (DESCRIPTION = (ADD...
Categories: DBA Blogs

Impact of Index Monitoring in production server

Tom Kyte - Mon, 2019-11-11 11:49
Hi, we are planning to change global index to local index for partitioned table in production. Since we need to drop an index and create it again and we don't know the time it will take, we decided to monitor the index which are being used by the app...
Categories: DBA Blogs

Generating DDL in Oracle SQL Developer

Tom Kyte - Mon, 2019-11-11 11:49
Dear AskTom-Team! Is there a possibility in the Oracle Developer to suppress duplicate DDL code? For example when generating the DDL from my relational model the foreign key constraint is generated twice, i.e. for both tables that are involved in th...
Categories: DBA Blogs

A simple Terraform script to create an AWS EC2 playground

Yann Neuhaus - Mon, 2019-11-11 02:39

When I started to write the blog about AWS SSM I quickly realized that I needed a way to bring up and destroy my AWS EC2 playground in an easy and reproducible way. There are several options for this: I could have used the AWS command line interface or AWS CloudFormation. Both work pretty well and would have been more than enough for my simple requirements. In the end I decided to use Terraform for one simple reason: with Terraform you can provision not only on AWS but also on Azure, GCP and many others. So, instead of using a tool which is limited to AWS, using a vendor-independent tool makes much more sense.

For the AWS SSM blog I had several requirements I wanted to address:

  • The playground should run in my own VPC so I will not affect any other people doing tests at the same time
  • Because I had Windows and Linux EC2 instances, two security groups should get created: one allowing SSH and one allowing RDP into the EC2 machines
  • Both security groups should allow outbound connections to the internet by using an internet gateway
  • Finally two Windows, two Ubuntu, two Amazon Linux, two Red Hat, two SLES and two CentOS instances should get provisioned

Using Terraform all of this is actually quite simple. The first information you’ll need to tell Terraform is the provider you want to use. In my case this is AWS and I am using my “test” AWS profile which is configured for my AWS command line interface, the AWS region I want to use is Frankfurt:

// set the provider to AWS and the AWS region to eu-central-1
provider "aws" {
  profile    = "test"
  region     = "eu-central-1"
}

Because I want to limit direct access to the EC2 instances to my own IP address for security reasons, I define a local variable which I change each time I set this up from a different network. Input variables could also be used for that but in my case a local variable is just fine:

locals {
  my_ip        = ["37.201.6.8/32"]
}

The first component that actually gets provisioned on AWS is the VPC, using the aws_vpc resource:

// create the virtual private network
resource "aws_vpc" "dwe-vpc" {
  cidr_block = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support = true

  tags = {
    Name = "dwe-vpc"
  }
}

This defines the IP range I’ll be using, switches the DNS parameters on and adds a Name tag. As I want my EC2 instances to be able to connect to the internet, an internet gateway gets provisioned in the next step, which is attached to the VPC that was created in the step before:

// create the internet gateway
resource "aws_internet_gateway" "dwe-igw" {
  vpc_id = "${aws_vpc.dwe-vpc.id}"

  tags = {
    Name = "dwe-igw"
  }
}

Next we need a subnet, again attached to the VPC:

// create a dedicated subnet
resource "aws_subnet" "dwe-subnet" {
  vpc_id            = "${aws_vpc.dwe-vpc.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-central-1a"

  tags = {
    Name = "dwe-subnet"
  }
}

Routing tables are an important concept in AWS and define how traffic is routed in the VPC. The routing table below enables traffic to the internet through the internet gateway:

// create routing table which points to the internet gateway
resource "aws_route_table" "dwe-route" {
  vpc_id = "${aws_vpc.dwe-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.dwe-igw.id}"
  }

  tags = {
    Name = "dwe-igw"
  }
}

Once the routing table is defined it needs to be attached to the subnet:

// associate the routing table with the subnet
resource "aws_route_table_association" "subnet-association" {
  subnet_id      = "${aws_subnet.dwe-subnet.id}"
  route_table_id = "${aws_route_table.dwe-route.id}"
}

For ssh and rdp access to the EC2 instances two security groups get provisioned:

// create a security group for ssh access to the linux systems
resource "aws_security_group" "dwe-sg-ssh" {
  name        = "dwe-sg-ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = "${aws_vpc.dwe-vpc.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = local.my_ip
  }

  // allow access to the internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "dwe-sg-ssh"
  }
}

// create a security group for rdp access to the windows systems
resource "aws_security_group" "dwe-sg-rdp" {
  name        = "dwe-sg-rdp"
  vpc_id      = "${aws_vpc.dwe-vpc.id}"
  description = "Allow RDP inbound traffic"

  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = local.my_ip
  }

  // allow access to the internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "dwe-sg-rdp"
  }
}

Both define an inbound rule (ssh or rdp) and an outbound rule for being able to connect to the internet. Now, as all the basics are there, the EC2 instances themselves get provisioned based on the building blocks from above:

// create two Ubuntu instances
resource "aws_instance" "i-ubuntu-linux-prod" {
  ami                         = "ami-0cc0a36f626a4fdf5"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = {
    Name = "i-ubuntu-linux-prod"
  }  
}

resource "aws_instance" "i-ubuntu-linux-test" {
  ami                         = "ami-0cc0a36f626a4fdf5"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-ubuntu-linux-test"
  } 
}

// create two Amazon linux instances
resource "aws_instance" "i-amazon-linux-prod" {
  ami                         = "ami-0f3a43fbf2d3899f7"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-amazon-linux-prod"
  } 
}

resource "aws_instance" "i-amazon-linux-test" {
  ami                         = "ami-0f3a43fbf2d3899f7"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-amazon-linux-test"
  } 
}

// create two Red Hat linux instances
resource "aws_instance" "i-redhat-linux-prod" {
  ami                         = "ami-0badcc5b522737046"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-redhat-linux-prod"
  } 
}

resource "aws_instance" "i-redhat-linux-test" {
  ami                         = "ami-0badcc5b522737046"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"
  
  root_block_device {     
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = { 
    Name = "i-redhat-linux-test"
  }
}

// create two SUSE linux instances
resource "aws_instance" "i-suse-linux-prod" {
  ami                         = "ami-03b86a97a8f02d44e"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = { 
    Name = "i-suse-linux-prod"
  }
}

resource "aws_instance" "i-suse-linux-test" {
  ami                         = "ami-03b86a97a8f02d44e"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"
  
  root_block_device {     
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = { 
    Name = "i-suse-linux-test"
  }
}

// create two CentOS linux instances
resource "aws_instance" "i-centos-linux-prod" {
  ami                         = "ami-04cf43aca3e6f3de3"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = { 
    Name = "i-centos-linux-prod"
  }
}

resource "aws_instance" "i-centos-linux-test" {
  ami                         = "ami-04cf43aca3e6f3de3"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"
  
  root_block_device {     
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = { 
    Name = "i-centos-linux-test"
  }
}

// create two Windows instances
resource "aws_instance" "i-windows-prod" {
  ami           = "ami-034937fd7f621ba85"
  instance_type = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-rdp.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "30"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = { 
    Name = "i-windows-prod"
  }
}

resource "aws_instance" "i-windows-test" {
  ami           = "ami-034937fd7f621ba85"
  instance_type = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-rdp.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "30"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-windows-test"
  }
}
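
A side note on the repetition: the ten Linux aws_instance blocks differ only in their AMI and Name tag. On Terraform 0.12.6 or later (an assumption; that release introduced for_each on resources), an untested sketch could drive them all from a single map:

// illustrative only: map instance names to their AMIs
locals {
  linux_instances = {
    "i-ubuntu-linux-prod" = "ami-0cc0a36f626a4fdf5"
    "i-ubuntu-linux-test" = "ami-0cc0a36f626a4fdf5"
    "i-amazon-linux-prod" = "ami-0f3a43fbf2d3899f7"
    "i-amazon-linux-test" = "ami-0f3a43fbf2d3899f7"
    // ... and so on for Red Hat, SUSE and CentOS
  }
}

resource "aws_instance" "linux" {
  for_each                    = local.linux_instances
  ami                         = each.value
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = [aws_security_group.dwe-sg-ssh.id]
  subnet_id                   = aws_subnet.dwe-subnet.id
  associate_public_ip_address = true

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  // the map key becomes the Name tag
  tags = {
    Name = each.key
  }
}

I kept the explicit per-instance resources above because they are easier to follow in a blog post, but for a larger playground the map approach scales much better.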

And that’s it. To check what Terraform will actually do there is the “plan” command. As the output is quite long I’ll skip most of it and just present the last few lines:

dwe@dwe:~/Documents/aws/ssm_demo$ terraform plan

...

Plan: 19 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Alternatively, once you are happy with the plan you can “apply” it directly; everything gets provisioned after you confirm with “yes”:

dwe@dwe:~/Documents/aws/ssm_demo$ terraform apply

Plan: 19 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Sit back, relax and one or two minutes later your AWS playground is ready:

aws_vpc.dwe-vpc: Creating...
aws_vpc.dwe-vpc: Creation complete after 2s [id=vpc-026cdc481d5365074]
aws_internet_gateway.dwe-igw: Creating...
aws_subnet.dwe-subnet: Creating...
aws_security_group.dwe-sg-ssh: Creating...
aws_security_group.dwe-sg-rdp: Creating...
aws_subnet.dwe-subnet: Creation complete after 0s [id=subnet-028e27fef8df3b963]
aws_internet_gateway.dwe-igw: Creation complete after 0s [id=igw-0656108a04d5ea0a5]
aws_route_table.dwe-route: Creating...
aws_security_group.dwe-sg-rdp: Creation complete after 1s [id=sg-0764508e3a5234393]
aws_route_table.dwe-route: Creation complete after 1s [id=rtb-07691bc54b40af0ae]
aws_route_table_association.subnet-association: Creating...
aws_instance.i-windows-prod: Creating...
aws_instance.i-windows-test: Creating...
aws_security_group.dwe-sg-ssh: Creation complete after 1s [id=sg-053995952f558a4ff]
aws_instance.i-centos-linux-test: Creating...
aws_instance.i-amazon-linux-prod: Creating...
aws_instance.i-amazon-linux-test: Creating...
aws_instance.i-redhat-linux-prod: Creating...
aws_instance.i-centos-linux-prod: Creating...
aws_instance.i-redhat-linux-test: Creating...
aws_instance.i-ubuntu-linux-test: Creating...
aws_route_table_association.subnet-association: Creation complete after 0s [id=rtbassoc-07c7d4282033c4a71]
aws_instance.i-ubuntu-linux-prod: Creating...
aws_instance.i-windows-prod: Still creating... [10s elapsed]
aws_instance.i-windows-test: Still creating... [10s elapsed]
aws_instance.i-centos-linux-test: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-test: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-prod: Still creating... [10s elapsed]
aws_instance.i-redhat-linux-prod: Still creating... [10s elapsed]
aws_instance.i-centos-linux-prod: Still creating... [10s elapsed]
aws_instance.i-redhat-linux-test: Still creating... [10s elapsed]
aws_instance.i-ubuntu-linux-test: Still creating... [10s elapsed]
aws_instance.i-ubuntu-linux-prod: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-test: Creation complete after 13s [id=i-02706717c3440723a]
aws_instance.i-suse-linux-test: Creating...
aws_instance.i-ubuntu-linux-test: Creation complete after 16s [id=i-0d2999aa319a90a3d]
aws_instance.i-centos-linux-test: Creation complete after 16s [id=i-03923fcf9d5881421]
aws_instance.i-suse-linux-prod: Creating...
aws_instance.i-ubuntu-linux-prod: Creation complete after 16s [id=i-00967725bc758f3ef]
aws_instance.i-redhat-linux-test: Creation complete after 16s [id=i-02a705327fb0acb61]
aws_instance.i-windows-prod: Creation complete after 16s [id=i-09c3fcc90491ef2cf]
aws_instance.i-redhat-linux-prod: Creation complete after 16s [id=i-0161726dfe1ed890b]
aws_instance.i-windows-test: Creation complete after 16s [id=i-02f567d11d32444fd]
aws_instance.i-amazon-linux-prod: Still creating... [20s elapsed]
aws_instance.i-centos-linux-prod: Still creating... [20s elapsed]
aws_instance.i-suse-linux-test: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-prod: Creation complete after 23s [id=i-0b799879e77ce8b33]
aws_instance.i-suse-linux-prod: Still creating... [10s elapsed]
aws_instance.i-centos-linux-prod: Still creating... [30s elapsed]
aws_instance.i-centos-linux-prod: Creation complete after 32s [id=i-0482d958849f86483]
aws_instance.i-suse-linux-test: Still creating... [20s elapsed]
aws_instance.i-suse-linux-prod: Still creating... [20s elapsed]
aws_instance.i-suse-linux-test: Still creating... [30s elapsed]
aws_instance.i-suse-linux-test: Creation complete after 32s [id=i-0b35d559853f9f0d6]
aws_instance.i-suse-linux-prod: Still creating... [30s elapsed]
aws_instance.i-suse-linux-prod: Creation complete after 33s [id=i-062df970b894a23da]

Apply complete! Resources: 19 added, 0 changed, 0 destroyed.

Once you’re done with your tests, simply destroy everything by using the “destroy” command:

dwe@dwe:~/Documents/aws/ssm_demo$ terraform destroy
...
Plan: 0 to add, 0 to change, 19 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
...
aws_security_group.dwe-sg-rdp: Destroying... [id=sg-0764508e3a5234393]
aws_security_group.dwe-sg-rdp: Destruction complete after 1s
aws_security_group.dwe-sg-ssh: Destruction complete after 1s
aws_subnet.dwe-subnet: Destruction complete after 1s
aws_vpc.dwe-vpc: Destroying... [id=vpc-026cdc481d5365074]
aws_vpc.dwe-vpc: Destruction complete after 0s

Destroy complete! Resources: 19 destroyed.

Quite easy, always reproducible and fast.

The article A simple Terraform script to create an AWS EC2 playground appeared first on Blog dbi services.

odacli create-database error DCS-10802: Insufficient disk space on file system: database

Yann Neuhaus - Mon, 2019-11-11 02:27
Introduction

I was reimaging an X6-2M ODA after various patching troubles, and everything was fine. After several databases had been created, the next ones could no longer be created.

DCS-10802: Insufficient disk space on file system: database. Expected free space (MB): {1}, available space (MB): {2}

I’ve spent some time on it, and finally found the cause of the problem. And the solution.

Context

After successfully reimaging an X6-2M ODA with 18.5 and applying the patch for the firmware, ILOM and disks, I created all the databases with odacli using the following commands:


odacli create-database -hm XyXyXyXyXyXy --dbstorage ACFS --characterset WE8MSWIN1252 --databaseUniqueName HPMVRN --dbhomeid '0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb' --dblanguage AMERICAN --dbname HPMVRN --dbshape odb1s --dbterritory AMERICA --no-cdb --no-dbconsole --json
odacli create-database ...

Each database is created with the smallest shape odb1s, as I later fine-tune each instance according to my needs.

After the 8th or 9th database created, the next ones ended with a failure:

odacli describe-job -i "2ad9f7e8-331d-4a82-bb7b-c88ddad1cdf8"

Job details
----------------------------------------------------------------
ID: 2ad9f7e8-331d-4a82-bb7b-c88ddad1cdf8
Description: Database service creation with db name: HPMVRN
Status: Failure
Created: November 8, 2019 1:33:23 PM CET
Message: DCS-10802:Insufficient disk space on file system: database. Expected free space (MB): {1}, available space (MB): {2}

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database Service creation November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:58 PM CET Failure
Database Service creation November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:58 PM CET Failure
Setting up ssh equivalance November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:23 PM CET Success
Creating volume datHPMVRN November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:42 PM CET Success
Creating ACFS filesystem for DATA November 8, 2019 1:33:42 PM CET November 8, 2019 1:33:57 PM CET Success
Database Service creation November 8, 2019 1:33:57 PM CET November 8, 2019 1:33:58 PM CET Failure
Database Creation November 8, 2019 1:33:57 PM CET November 8, 2019 1:33:58 PM CET Failure

Analysis

The error seems obvious: insufficient disk space. Let's check the disk space:


df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
30G 5.6G 23G 20% /
tmpfs 126G 1.1G 125G 1% /dev/shm
/dev/sda1 477M 41M 411M 9% /boot
/dev/mapper/VolGroupSys-LogVolOpt
109G 85G 19G 82% /opt
/dev/mapper/VolGroupSys-LogVolU01
148G 31G 110G 22% /u01
/dev/asm/datdbtest-8 100G 2.2G 98G 3% /u02/app/oracle/oradata/DBTEST
/dev/asm/datgotst-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/GOTST
/dev/asm/datgeval-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/GEVAL
/dev/asm/commonstore-8
5.0G 391M 4.7G 8% /opt/oracle/dcs/commonstore
/dev/asm/datsmval-8
100G 44G 57G 44% /u02/app/oracle/oradata/SMVAL
/dev/asm/datvival-8
100G 34G 67G 34% /u02/app/oracle/oradata/VIVAL
/dev/asm/datvjval-8
100G 56G 45G 56% /u02/app/oracle/oradata/VJVAL
/dev/asm/dump-8 200G 132G 69G 66% /dpdumplocal
/dev/asm/dataoval-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/AOVAL
/dev/asm/datgrtst-8 100G 2.2G 98G 3% /u02/app/oracle/oradata/GRTST
/dev/asm/datgival-8
100G 7.8G 93G 8% /u02/app/oracle/oradata/GIVAL
/dev/asm/reco-329 74G 56G 19G 76% /u03/app/oracle
/dev/asm/datgetst-8 100G 2.2G 98G 3% /u02/app/oracle/oradata/GETST
/dev/asm/datgftst-8
100G 30G 71G 30% /u02/app/oracle/oradata/GFTST
/dev/asm/datgctst-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/GCTST
/dev/asm/dathpmvrn-8 100G 448M 100G 1% /u02/app/oracle/oradata/HPMVRN

No filesystem is full. And create-database managed to create the acfs volume for data successfully. Let’s try to put something in it:


cp /the_path/the_big_file /u02/app/oracle/oradata/HPMVRN/

No problem with this acfs volume.

Let’s try to create the database into ASM:


odacli list-databases | tail -n 1

bed7f9a0-e108-4423-8b2c-d7c33c795e87 HPMVRN Si 12.1.0.2 false Oltp Odb1s Acfs Failed 0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb

odacli delete-database -i "bed7f9a0-e108-4423-8b2c-d7c33c795e87"

odacli create-database -hm XyXyXyXyXyXy --dbstorage ASM --characterset WE8MSWIN1252 --databaseUniqueName HPMVRN --dbhomeid '0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb' --dblanguage AMERICAN --dbname HPMVRN --dbshape odb1s --dbterritory AMERICA --no-cdb --no-dbconsole --json

No problem here; it really seems to be related to acfs.

Where else does create-database need free space? For sure, in the RECOvery area filesystem, created with the very first database:


Filesystem Size Used Avail Use% Mounted on
/dev/asm/reco-329 74G 56G 19G 76% /u03/app/oracle

Let’s create a file in this filesystem:

cp /the_path/the_big_file /u03/app/oracle/

No problem.

Quite strange, and as the database is not yet created, there is no alert_HPMVRN.log to look for the error…

Maybe the RECOvery area filesystem is not big enough for odacli. The acfs concept means autoextensible filesystems, but as the Fast Recovery Areas of all my databases probably won't fit in the allocated 74GB, odacli may fail. How to extend the acfs RECOvery filesystem? Just put enough dummy files in it, and then remove them.


cd /u03/app/oracle
cp /the_path/the_big_file tmpfile1
cp tmpfile1 tmpfile2
cp tmpfile1 tmpfile3
cp tmpfile1 tmpfile4
cp tmpfile1 tmpfile5
cp tmpfile1 tmpfile6
rm -rf tmp*

df -h /u03/app/oracle
Filesystem Size Used Avail Use% Mounted on
/dev/asm/reco-329 88G 59G 30G 67% /u03/app/oracle

The RECOvery filesystem is now slightly bigger, so let's retry the database creation:


odacli delete-database -i "985b1d37-6f84-4d64-884f-3a429c195a5d"

odacli create-database -hm XyXyXyXyXyXy --dbstorage ACFS --characterset WE8MSWIN1252 --databaseUniqueName HPMVRN --dbhomeid '0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb' --dblanguage AMERICAN --dbname HPMVRN --dbshape odb1s --dbterritory AMERICA --no-cdb --no-dbconsole --json

odacli describe-job -i "d5022d8b-9ddb-4f93-a84a-36477657794f"

Job details
----------------------------------------------------------------
ID: d5022d8b-9ddb-4f93-a84a-36477657794f
Description: Database service creation with db name: HPMVRN
Status: Success
Created: November 8, 2019 1:47:28 PM CET
Message:

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance November 8, 2019 1:47:30 PM CET November 8, 2019 1:47:40 PM CET Success
Creating volume datHPMVRN November 8, 2019 1:47:40 PM CET November 8, 2019 1:47:59 PM CET Success
Creating ACFS filesystem for DATA November 8, 2019 1:47:59 PM CET November 8, 2019 1:48:14 PM CET Success
Database Service creation November 8, 2019 1:48:14 PM CET November 8, 2019 1:54:38 PM CET Success
Database Creation November 8, 2019 1:48:14 PM CET November 8, 2019 1:53:02 PM CET Success
Change permission for xdb wallet files November 8, 2019 1:53:02 PM CET November 8, 2019 1:53:02 PM CET Success
Place SnapshotCtrlFile in sharedLoc November 8, 2019 1:53:02 PM CET November 8, 2019 1:53:04 PM CET Success
SqlPatch upgrade November 8, 2019 1:54:02 PM CET November 8, 2019 1:54:35 PM CET Success
updating the Database version November 8, 2019 1:54:35 PM CET November 8, 2019 1:54:38 PM CET Success
create Users tablespace November 8, 2019 1:54:38 PM CET November 8, 2019 1:54:40 PM CET Success

This is it. I was then able to create the next databases without any problem.

Final words

Keep in mind that all your Fast Recovery Areas should fit in your dedicated filesystem. To make sure that you will not encounter problems, resize your RECOvery filesystem to the sum of all your target FRAs with acfsutil:

acfsutil size 500G -d /dev/asm/reco-329 /u03/app/oracle
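
To work out that target size, each database's configured FRA limit can be read from v$recovery_file_dest (a minimal sketch; run it in every instance and add up the results):

select name,
       space_limit/1024/1024/1024 size_gb,
       space_used/1024/1024/1024  used_gb
from   v$recovery_file_dest;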

An autoextensible filesystem never guarantees that an extension will succeed.

Or simply go for ASM instead of acfs on your ODA if you do not need acfs features like snapshots. ASM is simpler and more efficient because it does not provision disk space for each database like acfs does.

The article odacli create-database error DCS-10802: Insufficient disk space on file system: database appeared first on Blog dbi services.

Exadata X8M - World's Fastest Database Machine

Syed Jaffar - Sun, 2019-11-10 08:11
Exadata X8M was launched during OOW 2019 and is billed as the world's fastest Database Machine. Let's walk through the new additions that earn X8M that title.

An Exadata X8M is the industry's first database machine integrated with Intel Optane DC persistent memory and 100 gigabit RDMA over Converged Ethernet (RoCE). This dramatically improves performance for all sorts of workloads, such as OLTP, analytics, IoT and high-frequency trading, by eliminating storage access bottlenecks. Persistent memory with RoCE networking reduces IO latency significantly and can boost performance by 2.5X.

It uses RDMA directly from the database to access persistent memory in the smart storage servers, bypassing the entire OS, IO and network software stacks. This delivers higher throughput with lower latency, and it also frees CPU resources on the storage servers to execute more Smart Scan queries for analytic workloads.

In-memory performance combined with all the advantages of shared storage benefits both analytics and OLTP workloads. Direct database access to shared persistent memory is the real game changer for applications that demand fast access to large amounts of data.

For more details, read the link below:

https://www.oracle.com/corporate/pressrelease/oow19-oracle-unleashes-worlds-fastest-database-machine-091619.html




    Nested Tables

    Jonathan Lewis - Sat, 2019-11-09 05:28

    This note is a little side effect of answering a question about the total space associated with a table, including its indexes, LOB columns and nested tables. The first suggestion I published failed to take account of the fact that nested tables can contain their own nested tables so I had to modify the code.

    The basic change was easy – in the original code I had joined to the view dba_nested_tables to translate between a table_name and its parent_table_name. The only change I needed to make was to replace the reference to the view with a different view that joined a table_name to its “oldest ancestor”. To achieve this I had to do two things: first, create a table with multiple levels of nesting, then create a suitable view definition.  For reference – because it may help somebody – I’ve published the two stages here.

    A revolting nested table:

    The following code creates three table types, but the second and third table types are tables of object types that themselves embed a nested table, which is what produces multiple levels of nesting under the base table:

    
    rem
    rem     Script:         nested_tables.sql
    rem     Author:         Jonathan Lewis
    rem     Dated:          Nov 2019
    rem
    rem     Last tested 
    rem             19.3.0.0
    rem             12.2.0.1
    rem
    
    create or replace type jpl_item3 as object (v1 varchar2(10), n3 number);
    /
    
    create or replace type jpl_tab3_type as table of jpl_item3;
    /
    
    create or replace type jpl_item2 as object(v2 varchar2(10), jpl3 jpl_tab3_type);
    /
    
    create or replace type jpl_tab2_type as table of jpl_item2;
    /
    
    create or replace type jpl_item1 as object(v3 varchar2(10), jpl2 jpl_tab2_type);
    /
    
    create or replace type jpl_tab1_type as table of jpl_item1;
    /
    
    create table demo_nest_2 (
            id      number  constraint d2_pk primary key,
            jpl1    jpl_tab1_type
    )
    segment creation immediate
    nested table jpl1 store as t_jpl1
            (
            nested table jpl2  store as t_jpl2
                    (
                    nested table jpl3 store as t_jpl3 
                    return as locator
                    )
            return as locator
            )
    return as locator
    /
    
    

    I’ve never seen nested tables manually created in a production system though I believe they are likely to appear (along with varrays and LOBs) as a side effect of some XML or JSON mechanisms, but many years ago (in Practical Oracle 8i, possibly) I discussed the pros and cons of returning them “by value” or (as I have here) “by reference”. As you can see, you need to exercise some care with brackets and locating the text as you try to refine multiple levels of nesting.

    The Ancestor View

    We’ll take this in three steps – first, report from user_nested_tables. (The final script for reporting space used dba_nested_tables with a predicate available on the owner column, but I don’t give myself DBA privileges while writing quick and dirty bits of SQL.).

    
    select 
            parent_table_name, parent_table_column, table_name, 
            ltrim(storage_spec) storage_spec,       -- the DBA view definition includes lpad( , 30) !!
            ltrim(return_type ) return_type         -- the DBA view definition includes lpad( , 20) !!
    from 
            user_nested_tables
    order by
            parent_table_name, parent_table_column
    /
    
    PARENT_TABLE_NAME    PARENT_TABLE_COLUMN  TABLE_NAME           STORAGE_SPEC         RETURN_TYPE
    -------------------- -------------------- -------------------- -------------------- --------------------
    DEMO_NEST_2          JPL1                 T_JPL1               USER_SPECIFIED       LOCATOR
    T_JPL1               JPL2                 T_JPL2               USER_SPECIFIED       LOCATOR
    T_JPL2               JPL3                 T_JPL3               DEFAULT              LOCATOR
    
    

    You’ll notice the odd ltrim() – I have no idea why the view defines these columns to be left-padded the way they are; possibly it dates back to the time when something like cast(… as varchar2(30)) wasn’t a possible option.

    Next, a simple “connect by” query that uses the above list in a materialized “with subquery” (CTE):

    
    with my_nested_tables as (
    select
            /*+ materialize */
            parent_table_name, table_name
    from
            user_nested_tables
    )
    select
            parent_table, child_table, pathname
    from    (
            select
                    level,
                    sys_connect_by_path(table_name, '/')    pathname,
                    connect_by_root parent_table_name parent_table,
                    table_name child_table
            from
                    my_nested_tables
            connect by
                    parent_table_name = prior table_name
            )
    order by
            parent_table, child_table, pathname
    /
    
    PARENT_TABLE         CHILD_TABLE          PATHNAME
    -------------------- -------------------- ----------------------------------------
    DEMO_NEST_2          T_JPL1               /T_JPL1
    DEMO_NEST_2          T_JPL2               /T_JPL1/T_JPL2
    DEMO_NEST_2          T_JPL3               /T_JPL1/T_JPL2/T_JPL3
    T_JPL1               T_JPL2               /T_JPL2
    T_JPL1               T_JPL3               /T_JPL2/T_JPL3
    T_JPL2               T_JPL3               /T_JPL3
    
    
    

    As required this shows me demo_nest_2 as the owning ancestor of t_jpl1, t_jpl2 and t_jpl3. Unfortunately it has also produced three rows that we don’t want to see in our final space-summing code. But it’s easy enough to get rid of those – the only rows we want are the rows with a parent_table that doesn’t appear as a child_table:

    
    with my_nested_tables as (
    select
            /*+ materialize */
            parent_table_name, table_name
    from
            user_nested_tables
    )
    select  parent_table, child_table, pathname
    from    (
            select
                    level,
                    sys_connect_by_path(table_name, '/')    pathname,
                    connect_by_root parent_table_name parent_table,
                    table_name child_table
            from
                    my_nested_tables
            connect by
                    parent_table_name = prior table_name
            )
    where   (parent_table) not in (
                    select table_name
                    from   my_nested_tables
            )
    order by
            parent_table, child_table, pathname
    /
    
    PARENT_TABLE         CHILD_TABLE          PATHNAME
    -------------------- -------------------- ----------------------------------------
    DEMO_NEST_2          T_JPL1               /T_JPL1
    DEMO_NEST_2          T_JPL2               /T_JPL1/T_JPL2
    DEMO_NEST_2          T_JPL3               /T_JPL1/T_JPL2/T_JPL3
    
    3 rows selected.
    
    
    

    And there’s the result we wanted to see. A first simple corroboration of the fact is that the output corresponds with the “NESTED TABLE” segments reported by user_segments.

    Of course, having written a query that gets the right result from a table definition we used to help us define the query in the first place we now ought to create a few more tables with different structures to see if the query continues to give the right results in more complex cases.

    What happens, for example, if I create a table with two columns of nested tables, and one of the nested table types also contains two nested tables? What happens if the base table is an index organized table?

    It’s easy to do the second test – just add “organization index” immediately after “segment creation immediate” in the table creation statement. The correct results drop out.

    As for the first test – here’s the SQL to create one example (and the query still gets the right answers – even if you change the table to be index organized).

    
    drop type jpl_tab1_type;
    drop type jpl_item1;
    
    drop type jpl_tab2_type;
    drop type jpl_item2;
    
    drop type jpl_tab3_type;
    drop type jpl_item3;
    
    purge recyclebin;
    
    create or replace type jpl_item3 as object (v1 varchar2(10), n3 number);
    /
    
    create or replace type jpl_tab3_type as table of jpl_item3;
    /
    
    create or replace type jpl_item2 as object(v2 varchar2(10), jpl3 jpl_tab3_type, jpl3x jpl_tab3_type);
    /
    
    create or replace type jpl_tab2_type as table of jpl_item2;
    /
    
    create or replace type jpl_item1 as object(v3 varchar2(10), jpl2 jpl_tab2_type)
    /
    
    create or replace type jpl_tab1_type as table of jpl_item1;
    /
    
    create table demo_nest_3 (
            id      number  constraint d2_pk primary key,
            jpl1    jpl_tab1_type,
            jpl1a   jpl_tab1_type
    )
    segment creation immediate
    -- organization index
    nested table jpl1 store as t_jpl1
            (
            nested table jpl2  store as t_jpl2
                    (
                    nested table jpl3 store as t_jpl3 
                    return as locator
                    nested table jpl3x store as t_jpl3x 
                    return as locator
                    )
            return as locator
            )
    return as locator
    nested table jpl1a store as t_jpl1a
            (
            nested table jpl2  store as t_jpl2a
                    (
                    nested table jpl3 store as t_jpl3a
                    return as locator
                    nested table jpl3x store as t_jpl3xa
                    return as locator
                    )
            return as locator
            )
    return as locator
    /
    
    

    All that remains now is to modify the code to use the equivalent DBA views, with the addition of the owner column, then slot the resulting code into the original query in place of the simple references to dba_nested_tables. If you go to the original posting you’ll see that I’ve done this by wrapping the code into a CTE so that the only changes to the rest of the code appear as two (flagged) changes where the CTE is then used.

    Oracle Database Views and Tables for Oracle GoldenGate

    DBASolved - Fri, 2019-11-08 16:40

    Oracle GoldenGate for Oracle has been moving towards an integrated architecture over the last few releases. This means that there are more views and tables within the Oracle Database that support Oracle GoldenGate. You can quickly find these views and tables by using a where clause with a filter for GoldenGate: select * from all_views […]
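
    A minimal sketch of the kind of query the teaser describes (the filter patterns here are an assumption, not necessarily the author's exact predicate):

    select owner, view_name
    from   all_views
    where  view_name like '%GOLDENGATE%'
       or  view_name like 'GG%';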

    The post Oracle Database Views and Tables for Oracle GoldenGate appeared first on DBASolved.

    Categories: DBA Blogs

    Avoiding patching madness by using AWS SSM

    Yann Neuhaus - Fri, 2019-11-08 13:15

    As we have more and more customers either currently moving to AWS or having already moved much, if not all, of their workloads to AWS, one question pops up quite frequently: now that we have so many EC2 instances up and running, how do we patch them regularly? There are many tools around for patching operating systems, such as Microsoft SCCM, Red Hat Satellite or SUSE Manager, just to name a few. There are many others as well, but most of them have an important drawback: either they can only patch Windows systems or they can only patch Linux systems (and even here usually only a subset of the distributions currently in use). This is where AWS Systems Manager becomes very handy, as you can patch most of the commonly used operating systems using one single tool. You can even patch your on-prem instances if your network setup includes them as well. In this post I'll share what we did for one of our customers to remove the pain of either using multiple tools to patch all the operating systems, or manually patching all the systems and then (manually) documenting what has been done and what the result was.

    Managed Instances

    Once you start with AWS SSM there are a few key concepts you need to understand. The first one is the so-called “Managed Instance”. For an instance to be managed it must fulfill two requirements:

    1. The SSM agent must be deployed on the instance
    2. An IAM role with the correct permissions must be assigned to the instance

    For the SSM agent it is quite easy if you deployed any of these AMIs, because the agent is there by default:

    • Windows Server 2003-2012 R2 AMIs published in November 2016 or later
    • Amazon Linux
    • Amazon Linux 2
    • Ubuntu Server 16.04
    • Ubuntu Server 18.04

    If you are running something else the agent needs to be installed manually, as described here for Windows and here for Linux. For on-prem instances it is straightforward as well (Windows here, Linux here), but you also need to create Managed Instance Activations, which is not in the scope of this post.

    For the IAM role we’ll first look at what I have currently deployed:

    These are two Windows, two Red Hat, two Ubuntu, two Amazon Linux, two CentOS and two SUSE instances, one for Prod and one for Test each. The “Prod” and “Test” naming becomes important later, just ignore it for now. If we take a look at how many managed instances we have right now in Systems Manager we will see zero, even though the Windows, Ubuntu and Amazon Linux instances have the SSM agent deployed automatically (otherwise we would see a list of managed instances here):

    We will use the “Quick Setup” here to setup the IAM role and instance profiles:

    For the scope of this post we will go with defaults suggested by AWS:

    We want all the instances:


    The last screen shows an overview of what is currently happening, basically collecting all the information that will be stored in the AWS SSM inventory. Before you see any instances under “Managed Instances” you will need to wait until the pending inventory actions have completed:

    Once that is done all the managed instances should be there (notice that I manually deployed the AWS SSM agent on the SUSE, CentOS and RedHat instances, otherwise they would not show up here):
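
    The same list is also available from the AWS CLI, which is handy for scripting (a sketch; the query expression just selects a few of the returned fields):

    aws ssm describe-instance-information \
        --query 'InstanceInformationList[].[InstanceId,PlatformName,PingStatus]' \
        --output table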

    Patch baselines

    Once all instances are “managed” there is the next concept we need to understand: patch baselines. Patch baselines basically define which set of patches (classification, severity, …) shall get applied to which product (e.g. Red Hat 7, Red Hat 8, …). As we have six different operating systems we need at least six patch baselines. In reality you'll probably need twelve, because you will have production and test systems. Usually you want to have the same patches applied to production as you applied to test some days or weeks before, and therefore we need two patch baselines per operating system flavor. Patch baselines are listed in SSM under “Patch manager”:

    The following screenshots show how I created the patch baseline for the Red Hat test systems:

    For the Red Hat production systems the patch baseline is exactly the same but I’ve put an approval delay of 14 days:

    Why the delay? Suppose you patch your test systems every first Tuesday a month and then you patch the production systems 14 days later. As you want to have the same patches on production and test, the production patches get an approval delay of 14 days. This is how you can make sure that your systems are on the same level even if production is patched later than test. One point to consider is that this does not apply to the Ubuntu systems, as there is no approval delay for Ubuntu patches:

    Once all the base lines have been created it looks like this:

    The “Patch group” tag

    The next key concept that comes into the game is tags. No matter what you will be doing in the cloud, without tags you're lost. There is just no way around tags if you want to automate processes in the cloud. When it comes to AWS SSM there is the “Patch Group” tag, and this tag needs to be assigned to each instance you want to have patched automatically. Let's assume you want to have the test instances patched the first Saturday of each month and the production instances the third Saturday of each month (6 a.m. for both). In addition all the Linux hosts should be patched before the Windows hosts, because the Linux hosts run the databases and the Windows hosts run the application servers or the application (or, for whatever other reason, there is a dependency between the hosts and you need to follow a defined order for patching).

    What we did to fulfill this requirement is to populate the “Patch Group” tag like this:

    1. All test instances get the value: patch_test_0600_batch_1 or patch_test_0600_batch_2
    2. All production instances get the value: patch_prod_0600_batch_1 or patch_prod_0600_batch_2

    The “*batch” values build the dependency between the Linux and the Windows hosts. Taking one test Linux and one test Windows host as an example, the tag would look like this:

    I've assigned the tags to all the remaining instances accordingly (a CLI shortcut for this follows the list):

    1. i-amazon-linux-test: patch_test_0600_batch_1
    2. i-centos-linux-test: patch_test_0600_batch_1
    3. i-redhat-linux-test: patch_test_0600_batch_1
    4. i-suse-linux-test: patch_test_0600_batch_1
    5. i-ubuntu-linux-test: patch_test_0600_batch_1
    6. i-windows-test: patch_test_0600_batch_2
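
    Tagging a dozen instances is quicker from the AWS CLI than from the console (a sketch; the instance ID is illustrative):

    aws ec2 create-tags \
        --resources i-0123456789abcdef0 \
        --tags 'Key=Patch Group,Value=patch_test_0600_batch_1'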

    The same procedure applies for the production instances, but using the prod_* tags of course. Only assigning these tags to the instances is not enough though. We need to assign them to the patch baselines as well, so the right baseline gets selected when a patch run is supposed to happen on the systems. Assigning a “Patch Group” to a patch baseline basically links the patch baseline to instances with the same tag value per operating system. If you don't do that you will always get the default patch baseline for an operating system and you have no chance of applying the same patches to test and prod, because you cannot have different approval delays.

    This needs to be done for all the patch baselines we created above. The patch_test_0600_batch* tags get attached to the baselines for test and the patch_prod_0600_batch* tags get attached to the baselines for production.

    Maintenance windows

    The last building block are the maintenance windows. They define when an action will executed on which systems. You can not only use them for patching but for many other tasks as well (e.g. running a PowerShell script on Windows hosts, running a shell script on Linux hosts, running an Ansible playbook and much more).

    As we want to schedule patching for all the test instances the first Saturday of a month and all production systems the third Saturday of the month, we'll need two maintenance windows, one for test and one for production. We'll start with the one for test:

    For the scheduling you should go for a cron expression:

    “6F” means the first Saturday (0 is Sunday) each month. Note that if you skip the “Schedule timezone” field you will need to provide UTC time in the cron expression. If you do specify your timezone, AWS automatically calculates the correct UTC time for you, as you can see once the maintenance window is created:

    Now we have the scheduling, but the maintenance window has no idea what to run or where to run it. Where to run is specified by adding targets:

    You should definitely give a name to the target so you can easily identify the target afterwards:

    By specifying the “Patch Group” instance tag with our values for the test systems that should run in the first batch we have all the Linux systems registered with this maintenance window. The second target will be the Windows systems that should be patched once the Linux systems have been done:

    All test systems registered:

    Now that we defined where to run we need to define what to run and this is specified by the “Run command”:

    In our case the document is “AWS-RunPatchBaseline” but it could be any of the documents listed as mentioned earlier:

    The task priority is 1 (because we want this to be executed first) and the target is “patch_test_0600_batch_1” (this is why the name for the target is important: you will probably not remember the unique string that was generated when you registered the target):

    The rate control option allows you to control concurrency and error thresholds; for the IAM role, go with the defaults:

    We highly recommend storing the output in an S3 bucket, otherwise the output of the run command will be truncated after 2000 characters. The parameter should be “Install” (you could also go for “Scan” if you only want to scan for available patches):

    Do exactly the same for the patch_test_0600_batch_2 group, with priority 2:

    Both run commands are now registered:

    And that's it. Repeat the same for the production machines and all your instances are scheduled for patching. Once the maintenance window has executed you can get the logs in the “Run command” section:



    Here you have the link to the log in S3:

    Conclusion

    When you start using AWS SSM it is a bit hard to understand all the relations between patch baselines, the patch group tag, targets, run commands and maintenance windows. Once you have got that, it is quite easy to schedule patching of your whole infrastructure (even on-prem) when you run operating systems AWS has patch baselines for. The logs should definitely go to S3 so you have the full output available.

    The article Avoiding patching madness by using AWS SSM appeared first on Blog dbi services.

    Clustered Indexes

    Tom Kyte - Fri, 2019-11-08 02:48
    Is a clustered Index faster than a Non-clustered Index?
    Categories: DBA Blogs

    Platform list on which the JAVA_JIT_ENABLED parameter can be enabled

    Tom Kyte - Fri, 2019-11-08 02:48
    <b>Question:</b> As my application is using some JAVA function as UDF using <b>loadjava</b>. So, developers are suggesting to set <b>JAVA_JIT_ENABLED</b> parameter as <b>true</b>. I have read (https://docs.oracle.com/cd/B28359_01/server.111/b28320/in...
    Categories: DBA Blogs

    How can i extract data from a URL string in SQL?

    Tom Kyte - Fri, 2019-11-08 02:48
    Hi TOM, i am trying to extract some data from a URL and running into some issues. The URL's will look something like this: http://192.168.201.242:8000/express_get?time=$startDate&from=$phoneNumber&to=$CAMPAIGN&Id_agent=$idExternalUser&spent=$durat...
    Categories: DBA Blogs

    JAVA CLASSPATH Specification

    Tom Kyte - Fri, 2019-11-08 02:48
    Hi, We have an Oracle 10g database running java on the database as java stored procedures. It is hosted on an IBM AIX frame. There are both 64bit and 32bit jvms installed. How do you specify the java classpath for java running in the database? W...
    Categories: DBA Blogs

    Cursors

    Tom Kyte - Fri, 2019-11-08 02:48
    Hi Tom, The Oracle users in the world should be thankful to you for all your help all the way .Many thanks for that. We are learning many things about Oracle that we could not learn by reading several Oracle books (other than your books) though ...
    Categories: DBA Blogs
