Feed aggregator

Real-Time Materialized Views

Tom Kyte - 4 hours 53 min ago
Hi, I "think" I saw Maria shown a new feature that allow query rewrite with stale MV + MV Log. (feature similar to IM store + IM trail query rewrite.) What is the name of this feature and how do I use it? Thanks, P.Huang
Categories: DBA Blogs

Uncompressing externally zipped documents

Tom Kyte - 4 hours 53 min ago
Good morning/afternoon guys. I am loading XML files from Windows into the database as BLOBs via an Apex front-end; however, the performance leaves much to be desired (unsurprising, due to the 1Gb file sizes). I have a cunning plan to first zip t...
Categories: DBA Blogs

Oracle Text Substring Search on Encrypted String

Tom Kyte - 4 hours 53 min ago
Hi, Does Oracle Text have the capability to substring-search an encrypted string? The data resides in a table as an encrypted string (using DBMS_CRYPTO). The substring passed from the UI is not encrypted while the data residing in the DB is encrypted. Thanks & Re...
Categories: DBA Blogs

A New Chapter

Steven Chan - 9 hours 57 min ago

I had no idea what would be in store for me when I started this blog in 2006. Thousands of articles, comments, emails, and customer interactions later, it’s clear that this decision opened up a now-essential way of communicating with you. 

2019 marks my 21st year with Oracle. I’ve had some major life changes in the last year, so it seems like I’ve reached a good juncture to reflect on what’s next for me. Starting in February, I will be taking a hiatus from my current responsibilities in EBS Development.

Over the next few days, we’ll be introducing the team in EBS Development who will be taking on my blogging responsibilities. In the meantime, the following key people will be leading the new blogging team:

  • Kevin Hudson
  • Elke Phelps

They've got a great team and are committed to keeping this blog useful and informative. I look forward to seeing what they come up with next!

Categories: APPS Blogs

ORA-04080: trigger ‘PRICE_HISTORY_TRIGGERV1’ does not exist

Amardeep Sidhu - 16 hours 13 min ago

It is actually a dumb one. I was disabling triggers in a schema and ran this SQL to generate the disable statements. (Example from here)

HR@test> select 'alter trigger '||trigger_name|| ' disable;' from user_triggers where table_name='PRODUCT';

'ALTERTRIGGER'||TRIGGER_NAME||'DISABLE;'
--------------------------------------------------------------------------------
alter trigger PRICE_HISTORY_TRIGGERv1 disable;

HR@test> alter trigger PRICE_HISTORY_TRIGGERv1 disable;
alter trigger PRICE_HISTORY_TRIGGERv1 disable
*
ERROR at line 1:
ORA-04080: trigger 'PRICE_HISTORY_TRIGGERV1' does not exist


HR@test>

WTF? It is there but the disable didn't work. I was in a hurry, tried to connect through SQL Developer and disable it there, and it worked! Double WTF! Then I spotted the problem: someone had created it with one lowercase letter in the name. So to make it work, we need to use double quotes.

HR@test> alter trigger "PRICE_HISTORY_TRIGGERv1" disable;

Trigger altered.

HR@test>

This is one of the reasons why you shouldn't use case-sensitive names in Oracle. It is just stupid.
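
As an aside, the generator query itself can be made safe against mixed-case names by double-quoting every trigger name in the generated statement, along these lines (a sketch, following the same query used above):

select 'alter trigger "'||trigger_name||'" disable;'
  from user_triggers
 where table_name = 'PRODUCT';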

Categories: BI & Warehousing

Oracle OpenWorld Europe : London 2019

Rittman Mead Consulting - 16 hours 19 min ago

Some eleven thousand people descended on Oracle OpenWorld Europe in London last week for two days of business and technical sessions delivered by a mixture of members of Oracle’s product team and end users giving real-world case studies of adoption of Oracle’s Cloud offerings and product roadmaps.


Something that may not surprise anyone is that at OpenWorld, to speak of anything other than Cloud or Autonomous would be somewhat blasphemous.

It's a shrewd move by Oracle to branch out beyond its flagship annual conference held in San Francisco each October, and the attendance backed up the rationale that offering free entry was the right thing to do.

Some of the observations that I made after attending were:

The future is Autonomous

Oracle's Autonomous Database offering is being heavily pushed despite being a relatively immature product with very few real-world examples yet. The concept is certainly valid, and it's worth new and existing Oracle customers seriously considering a trial.

There are two autonomous offerings: the Autonomous Data Warehouse (ADW) and Autonomous Transaction Processing (ATP).

Both are fully cloud-managed by Oracle, are elastic so that they can be scaled up and down on demand, and most importantly, are autonomous. So the marketing spiel goes: they are self-driving, self-securing, self-repairing. You'll see this a lot, but basically it means that the manual tasks a DBA would normally perform are taken care of by Oracle. Think patching, etc.

AI & ML

You can tell that Oracle is really getting behind the latest trends in the technology market. AI will be a feature of all of their Cloud applications, with Mark Hurd (Oracle CEO) predicting that by 2025 all applications on the market will have AI factored in (a fair prediction).

Furthermore, Oracle's 2018 acquisition of DataScience.com shows the strategic vision of the company's board.

Blockchain

Also picking up on the cyber security side of things, Oracle spoke a lot about the role that Blockchain will play in enterprises going forwards. Oracle’s Blockchain cloud platform offering gives enterprises a rapid and simplified deployment of blockchain networks.

Final Thoughts

In summary, this was a really good event for Oracle to run and I really hope they continue to do so. It gave the Oracle community a chance to come together again, and in an increasingly competitive market for Cloud, Oracle needs to keep investing in its community going forward.

Conceptually, Oracle has some very timely cloud offerings in its armoury and it will be interesting to come back in 12 months' time and see how the adoption of these applications and platforms is going.

Categories: BI & Warehousing

Documentum – xPlore – How to Improve the Search results in DA or D2

Yann Neuhaus - 16 hours 37 min ago

When supporting Documentum, we often get complaints from users like "The search is not working" or "I cannot find my document".
The first reflex is to verify that xPlore is working properly. The second is to perform a search test, and usually everything works correctly.
Then we contact the user to get more details about the search.
Sometimes the only problem is that the user does not use the search properly. Indeed, the Documentum search uses its "own" language of wildcards, and the user simply doesn't use the right syntax.
Here I propose a kind of Rosetta stone to bridge human language and xPlore language.

Ready to "talk" with the Full Text? Let's start!

Search comprehension:

A word: a set of alphanumeric characters between space characters.

A wildcard character: a placeholder, such as an asterisk [ * ], that takes the place of any single character, a string of characters, or zero characters.

 

Search with one word and quotation marks [ " " ]

When the user types in the search field: "word1"
Dsearch understands: "word1" - the Fulltext searches for an exact match of the term contained between the quotation marks.
Will match: word1
Will NOT match: word, word11, xword1

 

Search with one word

When the user types in the search field: word1
Dsearch understands: "word1*" - the Fulltext adds an asterisk [ * ] at the end of the word, which matches a single character, a string, or zero characters. Quotation marks are also automatically added at the beginning and the end of the word.
Will match: word1, word1A, word14A, word132
Will NOT match: word, wo, rword, 1word, word4A

 

Search with one word and the wildcard question mark [ ? ]

When the user types in the search field: word1?
Dsearch understands: "word1?" - the Fulltext replaces the question mark [ ? ] with zero or one single character. Quotation marks are also automatically added at the beginning and the end of the word.
Will match: word1, word11, word12, word1a, word1x
Will NOT match: word, word123, xword11

 

Search with one word and the wildcard star [ * ]

When the user types in the search field: word1*
Dsearch understands: "word1*" - the asterisk [ * ] matches a single character, a string, or zero characters. Quotation marks are also automatically added at the beginning and the end of the word.
Will match: word1, word12, word12a, word1x2b
Will NOT match: word

 

Search with several words

When the user types in the search field: word1 word2
Dsearch understands: "word1*" OR "word2*" - the [ space ] between two words is translated as "OR". The rule "search with one word" is applied to each word.
Will match: word1, word2, word1A, word14A, word132, word2A, word24A, word232
Will NOT match: xword1, xword2

 

Search with several words and quotation marks

When the user types in the search field: "word1 word2"
Dsearch understands: "word1 word2" - the Fulltext searches for an exact match of the terms contained within the quotation marks.
Will match: word1 word2
Will NOT match: word1 or word2 alone, word1 AND xword2, xword1 AND word2

 

Search with several words and the [ + ] character

When the user types in the search field: word1 + word2
Dsearch understands: "word1*" AND "word2*" - the combination word[ space ][ + ]word is translated as "AND". The rule "search with one word" is applied to each word. Note: the search can combine several [ + ], e.g. word1 +word2 +word3.
Will match: word1 AND word2, word1 AND word2x, word1x AND word2, word1x AND word2x
Will NOT match: word1 alone, word2 alone, xword1 AND word2, word1 AND xword2
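
If you want to check how a given expression behaves outside of DA or D2, one rough way (assuming a standard docbase and documents of type dm_document; the object names are illustrative) is to run the equivalent full-text query directly in DQL:

SELECT object_name FROM dm_document SEARCH DOCUMENT CONTAINS 'word1*';
SELECT object_name FROM dm_document SEARCH DOCUMENT CONTAINS '"word1 word2"';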

 

These are the basic search "tips", mostly out of the box; be aware that these search behaviors can be customized with some parameter modifications.

My advice is: "abuse" the quotation marks ;-)

I hope this has been helpful!

The article Documentum – xPlore – How to Improve the Search results in DA or D2 appeared first on Blog dbi services.

Automated Cloud Scale Performance Monitoring capabilities with Exadata Software version 19.1

Syed Jaffar - 18 hours 48 min ago
Starting with v12.2, the multiple components of Oracle Autonomous Health Framework (AHF) work together 24x7, autonomously, to keep the database system healthy, reducing human intervention and reaction time by utilizing machine learning technologies.

There is no doubt that Exadata Database Machine delivers extreme performance for all sorts of workloads. However, diagnosing critical performance issues still needs some manual work and human intervention to identify root causes. This blog post highlights a new autonomous performance enhancement introduced with Exadata System Software 19.1.

Exadata System Software release 19.1 comes with automated, cloud-scale performance monitoring for a wide range of sub-systems, such as CPU, memory, file system, I/O and network. This feature was built from a combination of years of real-world performance triaging experience by Oracle Support, industry best practices and artificial intelligence (AI) technologies. It simplifies root cause analysis: it can automatically detect critical runtime performance issues and figure out the root cause without human intervention.

Taking a few real-world scenarios: as DBAs, we have all come across occasions where a spinning process on a database server eats up all system resources, bringing the database to its knees (poor performance). With this enhancement, Exadata System Software automatically identifies the exact process that is spinning and generates an alert with a root cause analysis. Another typical example is automatically detecting a misconfiguration of the huge pages setting on the server and sending an alert; we all know how badly a server can perform if the huge pages setting is not right.

No additional configuration or special skill set is required for this. The Management Server (MS) is responsible for performing these activities. All you need is to have Exadata System Software version 19.1 or higher and to configure your alerts on the servers.
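
As an illustration only, alert notification on an Exadata database server is configured through DBMCLI, roughly along these lines (the addresses are placeholders; check the DBMCLI documentation for the exact attribute names supported by your release):

DBMCLI> ALTER DBSERVER smtpServer='smtp.example.com', smtpFromAddr='exadata@example.com', smtpToAddr='dba-team@example.com', notificationMethod='mail', notificationPolicy='critical,warning,clear'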

For more details, read the Oracle documentation:

https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmso/whats-new-oracle-exadata-database-machine-19.1.0.html#GUID-B3DE3C62-278A-48E3-889A-22B9C84B1413

Stay tuned and stay hungry for more Exadata software 19.1 new features.

Autotrace, statistics and the four major products

Tom Kyte - 23 hours 13 min ago
Hello The Masters of Oracle, Last time I was using AUTOTRACE in order to collect statistics about a SELECT. I used SQLcl as a change from SQL*Plus and... oh, it was a shock! Statistics were very, very different from SQL*Plus. So, I decided to...
Categories: DBA Blogs

dbms_backup_restore.searchfiles returns old opened files

Tom Kyte - 23 hours 13 min ago
Hello, I am using dbms_backup_restore.searchfiles & X$KRBMSFT to get the list of files from a certain path, but it also returns some other files which were opened and closed with an editor in the past. More exactly, I have 2 files in the ...
Categories: DBA Blogs

Moving data file using a procedure

Tom Kyte - 23 hours 13 min ago
We need to move a datafile from one location to another using a procedure. CREATE OR REPLACE PROCEDURE FILE_MOVING_FROM_DISK AS DISK_FILE_NAME VARCHAR2(200); RES_TS VARCHAR2(100):='USERS'; sql_stmt VARCHAR2(500); ...
Categories: DBA Blogs

Export from 11g to 12c using data pump

Tom Kyte - 23 hours 13 min ago
Hi, We have an existing 11g instance on a lab/dev server. We have built a new server with 12c and exported and imported the data using data pump. That was all done by an external DBA; however, he's currently unavailable and I would like to h...
Categories: DBA Blogs

Database Sizing

Tom Kyte - 23 hours 13 min ago
Hi there, I need to come up with the right sizing for our database based on the information below: 36 million records for a total of 1000 characters per record. I have the following estimations but I need to add index size and other calculations as we...
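
As a rough back-of-the-envelope starting point for the raw row data alone (ignoring indexes, block overhead, PCTFREE and growth):

36,000,000 rows x 1,000 bytes/row = 36,000,000,000 bytes, i.e. roughly 36 GB (about 33.5 GiB) of raw data before any index or overhead is added.
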
Categories: DBA Blogs

Sort order in Table Functions and Pipelined Table Functions

Tom Kyte - 23 hours 13 min ago
Hey, suppose there is a function that returns a numeric collection with 100,000 records. Will the rownum pseudocolumn always have the same value as the column_value pseudocolumn? Is there a difference between TF and PTF? What about parallelism? ...
Categories: DBA Blogs

Testing out the new PFS (Pivotal Function Service) alpha release on minikube

Pas Apicella - Mon, 2019-01-21 19:25
I quickly installed PFS on minikube as per the instructions below so I could write my own function service. Below is that function service and how I invoked it using the PFS CLI and Postman.

1. Install PFS on minikube by following the instructions at the URL below:

https://docs.pivotal.io/pfs/install-on-minikube.html

2. Once installed, verify PFS is up and running using some commands as follows:

$ watch -n 1 kubectl get pod --all-namespaces


Various namespaces are created as shown below:

$ kubectl get namespaces
NAME               STATUS    AGE
default            Active    19h
istio-system       Active    18h
knative-build      Active    18h
knative-eventing   Active    18h
knative-serving    Active    18h
kube-public        Active    19h
kube-system        Active    19h

Ensure PFS is installed as shown below:

$ pfs version
Version
  pfs cli: 0.1.0 (e5de84d12d10a060aeb595310decbe7409467c99)

3. Now we are going to deploy this employee function which exists on GitHub as follows

https://github.com/papicella/emp-function-service


The Function code is as follows:

  
package com.example.empfunctionservice;

import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import java.util.function.Function;

@Slf4j
@SpringBootApplication
public class EmpFunctionServiceApplication {

    private static EmployeeService employeeService;

    public EmpFunctionServiceApplication(EmployeeService employeeService) {
        this.employeeService = employeeService;
    }

    @Bean
    public Function<String, String> findEmployee() {
        return id -> {
            String response = employeeService.getEmployee(id);

            return response;
        };
    }

    public static void main(String[] args) {
        SpringApplication.run(EmpFunctionServiceApplication.class, args);
    }

}

4. We are going to deploy a Spring Boot Function as per the REPO above. More information on Java Functions for PFS can be found here

https://docs.pivotal.io/pfs/using-java-functions.html


5. Let's create a function called "emp-function" as shown below

$ pfs function create emp-function --git-repo https://github.com/papicella/emp-function-service --image $REGISTRY/$REGISTRY_USER/emp-function -w -v

Output: (Just showing the last few lines here)

papicella@papicella:~/pivotal/software/minikube$ pfs function create emp-function --git-repo https://github.com/papicella/emp-function-service --image $REGISTRY/$REGISTRY_USER/emp-function -w -v
Waiting for LatestCreatedRevisionName
Waiting on function creation: checkService failed to obtain service status for observedGeneration 1
LatestCreatedRevisionName available: emp-function-00001

...

default/emp-function-00001-gpn7p[build-step-build]: [INFO] BUILD SUCCESS
default/emp-function-00001-gpn7p[build-step-build]: [INFO] ------------------------------------------------------------------------
default/emp-function-00001-gpn7p[build-step-build]: [INFO] Total time: 12.407 s
default/emp-function-00001-gpn7p[build-step-build]: [INFO] Finished at: 2019-01-22T00:12:39Z
default/emp-function-00001-gpn7p[build-step-build]: [INFO] ------------------------------------------------------------------------
default/emp-function-00001-gpn7p[build-step-build]:        Removing source code
default/emp-function-00001-gpn7p[build-step-build]:
default/emp-function-00001-gpn7p[build-step-build]: -----> riff Buildpack 0.1.0
default/emp-function-00001-gpn7p[build-step-build]: -----> riff Java Invoker 0.1.3: Contributing to launch
default/emp-function-00001-gpn7p[build-step-build]:        Reusing cached download from buildpack
default/emp-function-00001-gpn7p[build-step-build]:        Copying to /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar
default/emp-function-00001-gpn7p[build-step-build]: -----> Process types:
default/emp-function-00001-gpn7p[build-step-build]:        web:      java -jar /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar $JAVA_OPTS --function.uri='file:///workspace/app'
default/emp-function-00001-gpn7p[build-step-build]:        function: java -jar /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar $JAVA_OPTS --function.uri='file:///workspace/app'
default/emp-function-00001-gpn7p[build-step-build]:

...

default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.617  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=1, name=pas)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.623  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=2, name=lucia)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.628  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=3, name=lucas)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.632  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=4, name=siena)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.704  INFO 1 --- [       Thread-2] o.s.c.f.d.FunctionCreatorConfiguration   : Located bean: findEmployee of type class com.example.empfunctionservice.EmpFunctionServiceApplication$$Lambda$791/373359604

pfs function create completed successfully

6. Let's invoke our function as shown below by returning each Employee record using its ID.

$ pfs service invoke emp-function --text -- -w '\n' -d '1'
curl http://192.168.64.3:32380/ -H 'Host: emp-function.default.example.com' -H 'Content-Type: text/plain' -w '\n' -d 1
Employee(id=1, name=pas)

$ pfs service invoke emp-function --text -- -w '\n' -d '2'
curl http://192.168.64.3:32380/ -H 'Host: emp-function.default.example.com' -H 'Content-Type: text/plain' -w '\n' -d 2
Employee(id=2, name=lucia)

The "pfs service invoke" will show you what an external command will look like to invoke the function service. The IP address here is just the same IP address returned by "minikube ip" as shown below.

$ minikube ip
192.168.64.3

7. Let's view our services using the "pfs" CLI

$ pfs service list
NAME           STATUS
emp-function   Running
hello          Running

pfs service list completed successfully

8. Invoking from Postman also works, as long as we issue a POST request and pass the same headers as the curl commands above.





More Information

https://docs.pivotal.io/pfs/index.html

Categories: Fusion Middleware

ODPI-C 3.1 is Available

Christopher Jones - Mon, 2019-01-21 16:31

Release 3.1 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Top features: Multi-property Session Tags

The ODPI-C 3.1 release introduces some small enhancements and corrects a number of small issues that were discovered over the past few months. The main change is support for Oracle Call Interface 12.2's multi-property session tagging, allowing connections in a session pool to be assigned a semi-arbitrary string tag that you can use to represent the session state (e.g. ALTER SESSION values) of each connection. With multi-property tagging you can assign a PL/SQL procedure to 'fix-up' the session state, if necessary, before a connection is returned to the application from the pool. This is an efficient way to make sure connections have a required state.
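
To give a feel for the "fix-up" side, here is a hedged PL/SQL sketch (the procedure name and the tag properties are made up; only the two-parameter shape of the callback matters). The pool passes in the tag the application requested and the tag the pooled session currently carries, and the procedure issues whatever ALTER SESSION statements are needed to close the gap:

CREATE OR REPLACE PROCEDURE myapp_session_fixup (
    requested_tag IN VARCHAR2,   -- e.g. 'TIME_ZONE=UTC;LANGUAGE=FRENCH'
    actual_tag    IN VARCHAR2)   -- properties currently set on the pooled session
AS
BEGIN
    -- A real implementation would parse both 'prop=value;prop=value' strings
    -- and apply only the differences; this sketch handles a single property.
    IF INSTR(requested_tag, 'TIME_ZONE=UTC') > 0
       AND INSTR(NVL(actual_tag, ' '), 'TIME_ZONE=UTC') = 0 THEN
        EXECUTE IMMEDIATE q'[ALTER SESSION SET TIME_ZONE = 'UTC']';
    END IF;
END;
/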

See the release notes for all the changes.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: https://oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Upgrading of Oracle APEX 18.x and future

Dimitri Gielis - Mon, 2019-01-21 08:43
In late September 2018, Oracle Application Express (APEX) 18.2 was released.

Since 2018, Oracle has adopted a new versioning scheme for its software, and APEX is following it. The plan of the Oracle APEX development team is to do two releases a year, which carry the year and the release number within that year, e.g. 18.1, 18.2, 19.1.

I like this new way of providing us with new releases. You don't know the exact timing of the releases, but you know the development team will be close to releasing version 1 in the March/April timeframe and version 2 in September/October. In between you can download one-off patches or patch set releases through Oracle Support.

However, there's one important change compared to before that you have to be aware of:
every new version of Oracle APEX will be a complete install, with its own schema.

The biggest benefit of this approach is that the downtime to upgrade an Oracle APEX instance can be very minimal and the install is fast. You can read more about it in Maximizing Uptime During an Application Express Upgrade. I believe the Oracle Cloud is a big driver for this, but it benefits us all.

Another benefit is that you can download and install all APEX releases in Oracle XE more easily. Before, if you had installed APEX 5.1.0 and wanted to upgrade to APEX 5.1.2, 5.1.3 or 5.1.4, you had to download the software from Oracle Support, and you could only do that with a valid support contract. If you wanted a completely free system with Oracle XE, you could download the latest version of APEX, do a new install and migrate over the workspaces and apps. Since APEX 18 you can always install those versions, and APEX itself does the migration for you.

Now, the biggest disadvantage of this approach is that you can't easily export and import your apps and plug-ins across versions. Before, you might have had Oracle APEX 5.1.0 (released Dec. 2016) in one environment and Oracle APEX 5.1.4 (released Dec. 2017) in another (or anything in between), and you could easily export and import between those environments. Oracle APEX exports have always been compatible up to the second dot, so you could export and import from any 5.1.x to another 5.1.x. You can import an APEX 18.1 export into an APEX 18.2 environment, but not the other way around.

We typically move fast to a new version of APEX as we have to support those versions for APEX Office Print. I also blogged about how we upgrade major releases by running multiple PDBs. With APEX 5.0 released in April 2015 and APEX 18.1 released in May 2018, we covered over 3 years with 3 PDBs (1 PDB for every major release: APEX 5.0, 5.1, 18.1). With the new version numbers, 3 PDBs cover a maximum of 1.5 years.

If you are aware of this, you can plan your application releases around it. And these new release numbers will probably make customers want to upgrade faster, or at least once a year, as the numbers go up fast :) As developers, it's not only nice to work with new technology, it's also more productive for everyone.

Oracle APEX is not the only software in our stack, right? ORDS, SQL Developer and the Oracle Database follow the new release numbers too. ORDS 18.4 was released a few days ago. The ORDS/SQL Developer/SQLcl team releases every quarter and typically uses the quarter of development in the release number.

Tim Hall wrote a nice article on his thoughts on the new release numbers for the Oracle Database.

I love getting new versions, playing with the new features and not having to wait long for bug fixes. We release frequently with APEX Office Print (AOP) too; 10 days ago we released AOP 19.1, but that is for another blog post!

Happy upgrading :)
Categories: Development

How to profile with DBMS_HPROF into memory (RAM)?

Tom Kyte - Mon, 2019-01-21 06:26
Hi TOM, How do I profile with DBMS_HPROF into memory (RAM)? For example, into a BLOB that will be stored in memory (RAM, PGA). The problem is that I need to save the result only if a procedure runs for more than 1 minute. Most of the time there is no need...
Categories: DBA Blogs

How to export output of Stored Procedure returned in refcursor into csv file

Tom Kyte - Mon, 2019-01-21 06:26
Dear Team, I have created a stored procedure which returns output in a refcursor. If I run the procedure via "right click => Run", then I can see the output in the tab "Log => Output Variables". But I can't export the output from there. Then I tried the below com...
Categories: DBA Blogs

queries are not running in Sql Developer

Tom Kyte - Mon, 2019-01-21 06:26
Hi Tom/Team, I have installed Oracle 11g Database on my Windows 10 64-bit machine. The installation completed successfully. I have checked connections with SQL*Plus and it's running fine. I am able to run queries there as well. So the database is...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator