Oracle Notepad vi
 
Monday, December 24, 2007
Found a new gem hidden in the newly released 11gR1 inside the DBMS_RESOURCE_MANAGER built-in package. If you are not familiar with resource management, definitely go to the Oracle docs and the Library and check out the capabilities and the demos. But here's what I just found: a new procedure named CALIBRATE_IO.

-- Input arguments:
-- num_physical_disks - Approx. number of physical disk drives

-- max_latency - Maximum tolerable latency in milliseconds for 
-- database-block-sized IO requests (max value 100)
--
-- Output arguments:
-- max_iops - Maximum number of I/O requests per second that can be 
-- sustained. The I/O requests are randomly-distributed, 
-- database-block-sized reads.

-- max_mbps - Maximum throughput of I/O that can be sustained,
-- expressed in megabytes per second. The I/O requests are
-- randomly-distributed, 1 megabyte reads.

-- actual_latency - Average latency of database-block-sized I/O 
-- requests at "max_iops" rate, expressed in milliseconds.

SELECT pname, pval1
FROM aux_stats$
WHERE sname = 'SYSSTATS_MAIN';

set serveroutput on

DECLARE
  iops PLS_INTEGER;
  mbps PLS_INTEGER;
  alat PLS_INTEGER;
BEGIN
  dbms_resource_manager.calibrate_io(1, 100, iops, mbps, alat);
  dbms_output.put_line('Maximum IOPS: ' || TO_CHAR(iops));
  dbms_output.put_line('Maximum MBPS: ' || TO_CHAR(mbps));
  dbms_output.put_line('Actual Latency: ' || TO_CHAR(alat));
END;
/
 
So if you want to know what your I/O subsystem is actually doing ... here's where you can find out.
Sunday, December 16, 2007
Welcome to the planet where dinosaurs still roam freely.

No, I don't mean birds descended from Cretaceous and Jurassic beasts. I am referring to, in most cases, well-meaning DBAs who learned Oracle back in the version 5-7 days and were told something by Oracle, or some book author, that was not true then and isn't true today either. Yes, I am referring to the mistaken belief that there is any diagnostic value in the Buffer Cache Hit Ratio (BCHR).

So in an attempt to add my voice to the chorus of sanity on this, I present two links you should follow to the opinions of two of the most highly respected people in our profession: Jonathan Lewis and Richard Foote.


Wednesday, November 28, 2007
Beer

Makes

Everyone
[Photo: Hans Forbrich with Arup Nanda]
Look

More

Attractive
[Photo: Anjo and Mogens]
Pictures by Jared Still at the OOW 2007 ACE Dinner
Sunday, November 11, 2007
I am sitting in the airport in Seattle right now awaiting my flight to SFO for the biggest Oracle party of 2007.

If you are trying to find me here's where you might want to look:
  • Saturday ... my hotel room recovering from a week of preparation.
  • Sunday ..... Moscone West
  • Monday ..... Moscone South
  • Tuesday .... all over the place
  • Wednesday .. Larry Carpenter's presentations on Data Guard
  • Thursday ... Moscone West and South
  • Friday ..... in my room recovering from five days at OOW

We have to be crazy to be doing this ... the same insanity over and over again ... I can't wait ... see you there.

Tuesday, November 6, 2007
A few days ago I was trying to answer a student question and stumbled on something I couldn't answer. Here is my email exchange with Julian Dyke of juliandyke.com in the UK.

  I've stumbled trying to answer a student question and wonder if you might know the answer. A thorough search of the docs has proven that either the answer is not documented or I am not good at entering search criteria.

I create a deferrable referential constraint such as this:


ALTER TABLE person
ADD CONSTRAINT fk_person_state_zip
FOREIGN KEY (per_state, per_zip)
REFERENCES state_zip (state, zip_code)
INITIALLY DEFERRED DEFERRABLE;


Look in the data dictionary:

SELECT constraint_name, deferrable, deferred
FROM user_constraints
WHERE table_name = 'PERSON';


and I see this:

CONSTRAINT_NAME                DEFERRABLE     DEFERRED
------------------------------ -------------- ---------
FK_PERSON_STATE_ZIP            DEFERRABLE     DEFERRED


all is well.

Then I do this:


SET CONSTRAINTS ALL IMMEDIATE;

and if I again query the data dictionary I still see this:

CONSTRAINT_NAME                DEFERRABLE     DEFERRED
------------------------------ -------------- ---------
FK_PERSON_STATE_ZIP            DEFERRABLE     DEFERRED


which is technically correct because my SET CONSTRAINTS only affects my session and not others. But the question is where in Oracle is it storing the fact that within my specific session I have altered the behaviour of that constraint?

Thanks.
 
 
  The answer is that SET CONSTRAINTS ALL IMMEDIATE is implemented as a transaction callback which is a type of state object.

When you initially create a transaction you allocate a ktcxb structure from a segmented array. You can see these structures in X$KTCXB and, once the transaction has started, in V$TRANSACTION. If you do a state object dump (SYSTEMSTATE level 10), you can see the transactions currently in progress for each session (and process).

When you issue SET CONSTRAINTS ALL IMMEDIATE, a transaction callback state object is created. This is chained from the transaction object.

Both transaction (ktcxb) and transaction callback (ktccts?) are segmented arrays with names "transaction" and "txncallback" respectively. The amount of memory allocated to these structures can be seen in V$SGASTAT (again with names "transaction" and "txncallback" respectively).
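The memory figures mentioned above can be checked with a query along these lines (a sketch; the pool entry names are as given in the email):

```sql
-- Show memory allocated to the transaction and transaction
-- callback segmented arrays in the SGA
SELECT pool, name, bytes
FROM v$sgastat
WHERE name IN ('transaction', 'txncallback');
```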

I can't find any X$ tables which externalise the transaction callback structure. However, it is easy to find in the system state dump.

Following is an extract from a level 10 SYSTEMSTATE dump taken following the execution of SET CONSTRAINTS ALL IMMEDIATE in a transaction:


SO: 0x3dace118, type: 49, owner: 0x3daa4198, flag: -/-/-/0x00 if: 0x1 c: 0x1
proc=0x3f256af0, name=txncallback, file=ktccts.h LINE:336, pg=0 (cmtcbk) 
type: constraint commit callback act: 1
Dump of memory from 0x3C434118 to 0x3C4341A8
3C434110 00000301 00000357 [....W...]
3C434120 3DAA4198 3F256AF0 3DAA41C8 3DACE128 [.A.=.j%?.A.=(..=]
3C434130 00000000 00000000 00000000 00000000 [................]
3C434140 00000000 00000000 3C434148 3C434148 [........HAC<HAC<]
3C434150 200010B4 00001024 00000000 37D204FC [...$...........7]
3C434160 37D20504 00000000 00000000 02010200 [...7............]
3C434170 00000000 00001014 6E617274 74636173 [........transact]
3C434180 206E6F69 20006F63 7FFF7FFF 7FFF7FFF [ionco. .........]
3C434190 00000200 00000000 3C434198 3C434198 [.........AC<.AC<]
3C4341A0 37D20514 00000000 [...7....] 


I don't yet know what happens with SET CONSTRAINTS ALL DEFERRED.

Any use?
 

I will post if we learn more.
Thursday, October 25, 2007
Oracle has released both the Windows and 64bit Linux versions of 11g.

To download a personal copy of 11g go to Oracle's download page at OTN. [Click Here]
Monday, October 22, 2007
Last week I was in Salt Lake City at the UTOUG conference and this week, Thursday, I will be at NoCOUG in Pleasanton, California doing what I enjoy ... Oracle 11g in SQL*Plus without a safety net.

If you can make it there you will see:
  • Oracle's new table compression in action
  • Virtual table columns (with constraints and indexes and the Result Cache)
  • Compound table triggers and table triggers ordered using the FOLLOWS clause
  • 11g's new Native compilation
  • Some brand new PLW warnings
  • And, if time permits, Transaction Backout
Monday, October 15, 2007
I am at 37,000 ft flying to Salt Lake City to provide two break-out sessions to the Utah Oracle Users Group's Fall 2007 conference. I am planning to present both sessions in 11.1.0.6: One for DBAs ... one for developers.

While preparing for the conference I decided to clean up a demo schema I use and found that among the flotsam and jetsam was a flashback archive. I first removed the table from the archive:


ALTER TABLE servers NO FLASHBACK ARCHIVE;

then the archive:

DROP FLASHBACK ARCHIVE uw_archive;

and then saw something I had not seen before.

Two of the three tables created by Flashback Archive were dropped but the third was not. Attempting to drop it produced an ORA-55622 error which is not recognized by metalink.

ERROR at line 1:
ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on
table "UWCLASS"."SYS_FBA_HIST_70439"

Well, I proved "no DML" was true as I could not update or delete rows. I also proved the ALTER restriction true in a few minutes. I could, however, rename it, and did so, renaming it to ZZYZX to make tracing easier.

RENAME sys_fba_hist_70439 TO zzyzx;

All attempts to drop it after the rename also failed:

SQL> drop table zzyzx;
drop table zzyzx
*
ERROR at line 1:
ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on table "UWCLASS"."ZZYZX"


so it was time to get into the data dictionary.

I found nothing about flashback archives in dcore.bsq. Nothing obvious anywhere else even after tracking down the source code of every view I could find related to Flashback Archive. Then I tried a 10053 level 12 trace: Also a dead end.

Finally I decided there had to be something, even if undocumented, that told Oracle this table was special so I tore into obj$ and tab$ looking for anything that was unique about this one table, owned by UWCLASS, and finally found it:

SELECT object_id, object_name
FROM dba_objects
WHERE object_type = 'TABLE'
AND owner = 'UWCLASS';

OBJECT_ID  OBJECT_NAME
---------- -----------
72403      ZZYZX
72479      SERV_INST
72477      SERVERS
71072      AIRPLANES

SELECT obj#, property
FROM tab$
WHERE obj# IN (72403,72479,72477,71072);

OBJ#       PROPERTY
---------- ----------
71072       536870912
72403      9126805504
72477       536870912
72479       536870912
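The difference between the ZZYZX row and the others is 9126805504 - 536870912 = 8589934592, which is 2^33 — a single bit of PROPERTY. That this bit marks a Flashback Archive history table is my observation, not documented behaviour, but a query like the following sketch would show which tables have it set:

```sql
-- Hypothetical decode: bit 2^33 (8589934592) of tab$.property
-- appears to mark a Flashback Archive history table
SELECT obj#, property
FROM tab$
WHERE BITAND(property, 8589934592) = 8589934592;
```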


So should a table created to support Flashback Archive fail to drop when the table is converted to NO FLASHBACK ARCHIVE, the table can be dropped ONLY, it appears, by doing something remarkably dangerous ... updating TAB$.

Here's what I did: The object returned was the one I was trying to drop so I did the dastardly deed:

UPDATE tab$
SET property = 536870912
WHERE obj# = 72403;

COMMIT;


The table then dropped normally and no damage was done.

But please keep in mind that any direct alteration of data dictionary tables in a production database is not just irresponsible but will render the database unsupported by Oracle. This should only be done after opening an SR at metalink and at the direction of Oracle support.

Wednesday, October 3, 2007
Well, I've been kicking around in 11g a lot more and discovering that it is even richer than I originally thought.

When Beta testing I focus on a combination of testing some new functionality while, at the same time, watching for old bugs to make sure they have been fixed and not reintroduced, and also that legacy functionality has not been broken. The early 11g Betas contained a bit of both; the last Beta was solid, and the production release looks to be ready for implementation in development and testing environments immediately.

Needless to say there was a lot of new functionality I never got a chance to touch during the Beta and one of those was something I want to talk about today ... the new INTERVAL PARTITIONING. It is absolutely brilliant and here's what it does.

In 10g and before, one had to manually, or via a scheduled job, create new partitions as days and weeks passed, always approaching the point where everything would have ended up in the LESS THAN MAXVALUE partition. Interval partitions do away with that: Oracle now dynamically creates new partitions when an insert contains values beyond the range covered by the existing partitions.

Here is a demo I've put on the Partitions page that demonstrates this new capability:

Let's start by creating a range partitioned table:


CREATE TABLE interval_part (
person_id   NUMBER(5) NOT NULL,
first_name  VARCHAR2(30),
last_name   VARCHAR2(30))
TABLESPACE uwclass
PARTITION BY RANGE (person_id)
INTERVAL (100) STORE IN (uwdata) (
PARTITION p1 VALUES LESS THAN (101));


Note that I didn't create the usual LESS THAN MAXVALUE partition as it is unnecessary.

First, let's take a look at the segment Oracle created.


desc interval_part

SELECT table_name, tablespace_name, partitioned
FROM user_tables;

col high_value format a20

SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;


And let's insert our first row:

INSERT INTO interval_part
(person_id, first_name, last_name)
VALUES
(100, 'Dan', 'Morgan');


We can look and see that it went into the first partition as it is the only one that exists.


SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;


Let's now insert a second row, this time with a person_id that exceeds the VALUES LESS THAN clause of the existing partition.

INSERT INTO interval_part
(person_id, first_name, last_name)
VALUES
(101, 'Barack', 'Obama');


We look at the partitions and note that Oracle has automatically created a new partition to hold the record and set an appropriate high_value.

SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;


We now insert a record that greatly exceeds the high value of both of our existing partitions.

INSERT INTO interval_part
(person_id, first_name, last_name)
VALUES
(567, 'Anne', 'Sweet');


And again we look at the partitions created and see that Oracle has done the right thing and made our lives just a little bit less complicated.

SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;


Tuesday, September 25, 2007
Have you ever found yourself reading a metalink note and staring at something like this?

ORA-44310 - maximum number of services exceeded.
or
metalink note 194706.1 with ORA-26671 - maximum number of STREAMS processes exceeded.

Well, thanks to Mark Townsend we now have an answer to the first of these questions:

The limit is 115 services per database. Exceed that number and you are toast.
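If you want to see how close you are to that limit, a quick count will do (a sketch, assuming you have access to DBA_SERVICES):

```sql
-- Count the services defined in this database against the 115 limit
SELECT COUNT(*) AS service_count
FROM dba_services;
```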

On another note have you ever done this:


desc gv$parameter

and wondered about a parameter 4000 bytes long?

Well, if so, fear not. Another limit just discovered is that the maximum number of bytes in a parameter value is actually around 270 bytes (sorry, I couldn't get it more accurately). So don't write STREAMS or DATA GUARD parameters that refer to long paths or this will bite you too.

But merge these two items together and the number of services changes substantially. You can have up to 115 services but, combined into a single init.ora parameter:


"service_names=xxx,yyy,zzz..."

They can not exceed roughly 270 bytes. So in order to have 115 services, the names plus the 114 separating commas must fit in that space, which leaves barely more than one byte per service name.
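A quick way to check how much of that budget you are already using (a sketch; LENGTHB counts bytes rather than characters):

```sql
-- Byte length of the current service_names parameter value
SELECT LENGTHB(value) AS service_names_bytes
FROM v$parameter
WHERE name = 'service_names';
```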

If you find doc examples like this please forward them to me at my personal email address and I will try to get the docs changed to define them.


Sunday, September 23, 2007
For what must be the tenth, perhaps the hundredth, time this year I have responded to the very same question. So I've decided to highlight it here, as it seems to be one that many have problems with. The issue is how to query on a values list constructed in the form:

x :=
'10,30';

SELECT *
FROM t
WHERE column IN x;


When Oracle sees this it tries, quite naturally, to match against the literal string '10,30', as there is no instruction telling it that x is intended to represent a comma delimited list.

The solution is to provide Oracle with that missing information so that it knows what you are thinking. The way to tell it is demonstrated below.

Log on as the user scott and try these on your database.


Does Not Work:
set serveroutput on

DECLARE
 i     PLS_INTEGER;
 InStr VARCHAR2(20) := '10,30';
BEGIN
  SELECT COUNT(*)
  INTO i
  FROM emp
  WHERE deptno IN (InStr);

  dbms_output.put_line(i);
END;
/
Works:

CREATE OR REPLACE TYPE InStrTab IS TABLE OF VARCHAR2(20);
/

set serveroutput on

DECLARE
 i PLS_INTEGER;
 x InStrTab := InStrTab('10','30');
BEGIN
  SELECT COUNT(*)
  INTO i
  FROM emp
  WHERE deptno IN (
    SELECT column_value
    FROM TABLE(x)
);

  dbms_output.put_line(i);
END;
/
The TABLE operator tells Oracle that you wish for the values in the collection x to be treated as though they were rows of a table. A full demo of this capability can be found in the library on the CONDITIONS page.

Tuesday, September 04, 2007
Just got back into the office after a week and a half in Alberta, Canada. Three unforgettable days at Chateau Lake Louise. Well somebody had to do it.

As a follow-up on PL/Scope, and no I have not yet heard back from Oracle, the recommendation in Oracle's docs is:

Each DATATYPE is a base type declared in package STANDARD. In order to collect and view these identifiers, package STANDARD must be compiled with PLSCOPE_SETTINGS='IDENTIFIERS:ALL'.


Source: Click Here
If you do this you will totally corrupt an 11g database. I am working on getting an answer from Oracle about this but, for now, I can not warn everyone strongly enough to ignore this recommendation unless you are quite prepared to perform a complete reinstallation.
Tuesday, August 21, 2007
It is a slow day in the lab so not too much has happened. I'd be out painting the boat except that this summer Seattle is truly living up to its reputation and, yet again, it is raining. So here's what I would have posted somewhere if I had had somewhere to post it.

Oracle 11g contains a new capability named PL/Scope that does for PL/SQL what Cscope does for C. I have put together a page in the Library showing its use based on a very simple stored procedure that you can try. The one caution I would make is that there is a SQL statement I have bold-faced because it does not work. I have sent an email to Oracle asking someone to name the unnamed view that contains the data type information (other than all_arguments) but I haven't had a response yet.

So if you've downloaded 11g and are starting to play with it take a look at PL/Scope. It could be a very nice way to learn about code developed by someone else before jumping into it and making an even worse mess. <g>
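For anyone wanting to try PL/Scope without touching package STANDARD (see the warning in the September 4 post above), this sketch enables identifier collection for the current session only, recompiles one procedure, and queries the result; the procedure name demo_proc is a placeholder, not something from the Library:

```sql
-- Collect identifier data for this session only
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL';

-- Recompile an existing procedure (demo_proc is a hypothetical name)
ALTER PROCEDURE demo_proc COMPILE;

-- View the collected identifiers
SELECT name, type, usage, line, col
FROM user_identifiers
WHERE object_name = 'DEMO_PROC'
ORDER BY line, col;
```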


Monday, August 20, 2007
In the Usenet group comp.databases.oracle.server, this morning, I found a post asking:

  In my app, I've been using "OPEN some_cursor FOR some_dyn_sql ;"

Now I'm facing a limitation problem.... I have to generate a VERY LARGE dynamic SQL. (bigger than "VARCHAR2(32767)")

And "OPEN...FOR..." does not seem to take CLOB.

What other options do I have ?
 

... and I realized that while Oracle has been sprinting along adding and extending the database with new functionality ... a lot of legacy functionality is essentially invisible.

The answer to the poster's question is, in fact, functionality that has been in the DBMS_SQL package since it was introduced in version 7.3.4.

While there is no question that Native Dynamic SQL was a substantial improvement over the more complex syntax of DBMS_SQL, it never duplicated, until 11g changed the rules, the raw power of DBMS_SQL. So if you don't know the answer to the above question ... go to the DBMS_SQL page in the library and scroll down until you find "Executing CLOBS Demo Tables." Create the two tables, then create the "EXECUTE_PLSQL_BLOCK" procedure. After that you can run my demo anonymous block containing insert statements or put together your own CLOB, however large, and test it out.
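The core of the technique looks something like this sketch: chunk the large statement into a DBMS_SQL.VARCHAR2A collection and hand the whole collection to the VARCHAR2A overload of DBMS_SQL.PARSE. The procedure name run_big_sql and the 32,000-byte chunk size are my choices for illustration, not Oracle's:

```sql
-- Sketch: execute a statement larger than 32767 bytes by
-- parsing it as a DBMS_SQL.VARCHAR2A collection of chunks
CREATE OR REPLACE PROCEDURE run_big_sql (p_sql IN CLOB) IS
  c      INTEGER;
  a      dbms_sql.varchar2a;
  len    PLS_INTEGER := dbms_lob.getlength(p_sql);
  pos    PLS_INTEGER := 1;
  amount PLS_INTEGER := 32000;  -- chunk size, arbitrary but < 32767
  ret    INTEGER;
BEGIN
  -- split the CLOB into chunks the collection can hold
  WHILE pos <= len LOOP
    a(a.COUNT + 1) := dbms_lob.substr(p_sql, amount, pos);
    pos := pos + amount;
  END LOOP;

  c := dbms_sql.open_cursor;
  -- lb/ub delimit which collection elements form the statement
  dbms_sql.parse(c, a, 1, a.COUNT, FALSE, dbms_sql.native);
  ret := dbms_sql.execute(c);
  dbms_sql.close_cursor(c);
EXCEPTION
  WHEN OTHERS THEN
    IF dbms_sql.is_open(c) THEN
      dbms_sql.close_cursor(c);
    END IF;
    RAISE;
END run_big_sql;
/
```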