Oracle Database Utilities
Version 18.3.0.1

General Information
Utility Usage Note If you are working on a Microsoft Windows operating system, most of these utilities will not function unless you open a command prompt window "AS ADMINISTRATOR" before running the utility. This is definitely true of opatch, which must obtain a lock on the Oracle Inventory.
Executable Purpose
acfsinstall ACFS Installer
acfsroot Start/stop ADVM/ACFS drivers
adrci Automated Diagnostic Repository Command Interpreter
afdboot ASM Filter Driver
afddriverstate Reports the state of the ASM Filter Driver: subcommands include INSTALLED, LOADED, and VERSION
afdroot Front end for the Perl scripts that start and stop the ASM Filter Driver
afdinstall AFD Installer
afdtool ASM Filter Driver
agtctl A multithreaded extproc agent is started, stopped, and configured by an agent control utility called agtctl, which works like lsnrctl. However, unlike lsnrctl, which reads a configuration file (listener.ora), agtctl takes configuration information from the command line and writes it to a control file.
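A minimal sketch of configuring and starting an agent with agtctl; the agent SID (hsagent) and the parameter values are illustrative:

```shell
# Store configuration parameters in the agtctl control file (agent SID: hsagent)
agtctl set max_dispatchers 2 hsagent
agtctl set max_task_threads 4 hsagent
# Display the stored configuration for the agent
agtctl show hsagent
# Start the multithreaded extproc agent, and shut it down when finished
agtctl startup extproc hsagent
agtctl shutdown hsagent
```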
amdu ASM Disk Usage: Creates amdu_<timestamp>/report.txt for ASM disk usage. Get help with amdu help=y
aqxmlctl Starts and stops the OC4J server used by the AQ XML servlet on Windows; deprecated
asmcmd ASM Command Line Utility
asmcmdcore Script that implements asmcmd; the asmcmd wrapper sets up the environment and calls asmcmdcore
asmtool Command-line utility for stamping (labeling) disk partitions for use by ASM on Windows
asmtoolg GUI version of asmtool
chopt Used to change (enable/disable) optional components

options:
dm = Oracle Data Mining RDBMS Files
ode_net = Oracle Database Extensions for .NET
olap = Oracle OLAP
partitioning = Oracle Partitioning
rat = Oracle Real Application Testing
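For example, to toggle an option (on Windows, stop the database services first; chopt relinks the oracle binary, so restart the database afterward):

```shell
# Disable Oracle OLAP in this Oracle home, then re-enable it
chopt disable olap
chopt enable olap
```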
cluvfy Cluster Verify (RAC)
CreatDep Usage: creatdep /s <ServiceName> /d <DependServiceName>
crsdiag Cluster Ready Services Diagnostics
crtsrv ???
ctxhx c:\Users\oracle>ctxhx -h
Error: Invalid output character set on line 1
Valid output character sets are:
unicode
utf8
iso8859-1
iso8859-2
iso8859-3
iso8859-4
iso8859-5
iso8859-6
iso8859-7
iso8859-8
iso8859-9
gb2312
big5
shiftjis
windows1250
windows1251
windows1252
windows1253
windows1254
windows1255
windows1256
windows1257
koreanhangul
windows874
macroman
maclatin2
macgreek
maccyrillic
macromanturkish
koi8r
euc-jp
iso2022-jp
ctxhx - Converts the input file to html file.
Usage: ctxhx InputFile OutputFile
[FallbackCharacterSet
[OutputCharacterSet
[H
[Meta | Nometa
[TimeOut
[Heuristic | Fixed
[Format | NoFormat
[PDFRotate | NoPDFRotate
[TempFile
[Encrypted | Unencrypted | 'key <key value>']]]]]]]]]]
Where: "InputFile" is the file to be converted to HTML or TEXT.
"OutputFile" is where to place the converted file.
"FallbackCharacterSet" is obsolete. Use any string (e.g., ASCII8).
"OutputCharacterSet" is the character set of OutputFile.
Default character set is ISO8859-1.
"H" is to specify the output format as HTML.
"Meta | Nometa" is to enable/disable meta data extraction.
Default is Nometa.
"TimeOut" is the polling interval in integral seconds to determine
whether to terminate by force.
Supported range is 0..42949672.
Default is 0 to disable time-out.
"Heuristic | Fixed" is the type of time-out.
Default is Heuristic.
"Format | NoFormat" specifies whether the output should be formatted.
Default is Format.
Limitation: NoFormat cannot be used with Meta.
"PDFRotate | NoPDFRotate" specifies whether to support rotated text in
PDF.
Default is PDFRotate.
"TempFile" is fully-qualified filename prefix for temporary files.
Specify NoTempFilePrefix to use default.
Default is <OSD temp dir>/drgitmp.
"Encrypted | Unencrypted | 'key <key_value>'" specifies whether
encryption is enabled or not and if encryption key is passed
in as a command line parameter.
Limitation: Encrypted cannot be used with NoFormat.
ctxkbtc The knowledge base is the information source that Oracle Text uses to perform theme analysis, such as theme indexing, ABOUT query processing, and document theme extraction with the CTX_DOC package. A knowledge base is supplied for English and French.

With the ctxkbtc compiler, you can:
  • Extend your knowledge base by compiling one or more thesauri with the Oracle Text knowledge base. The extended information can be application-specific terms and relationships. During theme analysis, the extended portion of the knowledge base overrides any terms and relationships in the knowledge base where there is overlap.
  • Create a new user-defined knowledge base by compiling one or more thesauri. In languages other than English and French, this feature can be used to create a language-specific knowledge base.
Usage: ctxkbtc
-user <dbuser/password>
-name <thesaurus name> [<thesaurus name>...]
[-log <filename>]
[-verbose]
[-revert]
ctxlc The Lexical Compiler (ctxlc) is a command-line utility that enables creation of Chinese and Japanese lexicons (dictionaries). Such a lexicon may either be generated from a user-supplied word list or from the merging of a word list with the system lexicon for that language.

Usage: ctxlc
[-ja | -zht] -ics <input file encoding> [-n] -i <token file name>
[-ja | -zht] -ocs <output file encoding>
ctxload Used to import a thesaurus file into the Oracle Text thesaurus tables

Usage: ctxload
-user <dbuser/password>
-name <table/index/policy/thesaurus name>
-file <filename> | -drop
[-trace]
[-verbose]
[-log <filename>]
[-thes]
[-thescase <Y|N>]
[-thesdump]
[-extract]
[-wait | -nowait]
dbca DataBase Configuration Assistant
dbfs_client Using the DBFS Command Interface
dbgeu_run_action Diagnostic workBench Generic ddE User actions - RUN ACTION, used by DDE User Actions to capture the output of external commands
dbSetup.pl ???
dbua DataBase Upgrade Assistant
dbupgrade Shell wrapper, new in 12.2, for running the parallel upgrade utility (catctl.pl)
dbv DataBase Verify
deploync Deploy NComp
dg4odbc Oracle Database Gateway for ODBC
dg4pwd Constructs the password file for a gateway SID
dgmgrl Data Guard Manager
diagsetup 12c installation Diag Setup
disable_dnfs ???
diskmon.bin C:\Users\oracle>diskmon -help
2014-07-23 14:31:20.229 [81688] Usage:
diskmon [-w dirname] [-l logdir] [-p pipename] [-r msec] [-c msec] [-k type] [-d] [-t] [-x] [-e]
-w dirname test directory, ending with directory separator
-p pipename absolute pipe name to use
-l logdir output log directory (filename extension is .log)
-r msec timeout used in the read_msg loop
-c msec timeout used in the connection listening loop
-k type kgzf test type
-m message send simulation event to diskmon
-d turn tracing on
-t timestamp the output lines
-x ipc tracing on
-o osslib tracing on
-e in test environment
-h usage
2014-07-23 14:31:20.229 [81688] CRITICAL: Command line arguments incorrect
dropjava Utility to drop Java classes previously loaded into the database with loadjava
dsml2ldif LDAP
elements ???
emca Enterprise Manager Configuration Assistant
emdwgrd Script that saves and restores DB Control information across a downgrade. Run this script prior to upgrade to save DB Control information; after downgrading the database, run it again to restore the DB Control information.
enable_dnfs Enable DNFS mount
eusm Enterprise User Security Manager: Runs the enterprise user security admin tool
exp Export
expdp DataPump Export
extjob External job executable used by DBMS_SCHEDULER to run local external jobs
extjobo ???
extproc External Procedure
extract_archive ???
extusrupgrade Upgrade Externally Authenticated SSL Users
genezi Client diagnostic utility. One use on UNIX/Linux: with the basic Instant Client (i.e. no sqlplus) you can execute LD_LIBRARY_PATH=$ORACLE_HOME/lib genezi -v to get the client version. (Ref: Note:818454.1)
getcrshome Returns the CRS home if the Oracle Grid Infrastructure has been installed
hsalloci Heterogeneous Server Agent driver for OCI
hsdepxa Heterogeneous Server Agent with XA Compliant Distributed External Process Driver
hsots Heterogeneous Server Agent with XA Compliant OTS Driver
imp Import
impdp DataPump Import
jssu Note:976049.1: "jssu is called by the dbms_scheduler when credentials are used."
kfed Kernel Files Editor (Read ASM header e.g.: kfed read /dev/oracleasm/disks/VOL1. Type kfed -h for help. Also see below for kfod)
kfod OSM Discovery Utility
launch ???
lbuilder Oracle Locale Builder: Link in 12c to $ORACLE_HOME/nls/lbuilder/lbuilder
lcsscan NLS Language and Character Set File Scanner: Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

Copyright (c) 2003, 2018, Oracle. All rights reserved.


You can control how LCSSCAN runs by entering the LCSSCAN command
followed by the required parameters. To specify parameters, you use
keywords:

Example: LCSSCAN RESULTS=2 END=1000 FORMAT=HTML FILE=index.html

Keyword Description (Default)
--------------------------------------------------------------------
RESULTS number of language and character set pairs to return (1)
BEGIN beginning byte offset of file (1)
END ending byte offset of file (end of file)
FORMAT file format TEXT, HTML or AUTO detect (TEXT)
RATIO show result ratio YES or NO (NO)
FILE name of input file
HELP show help screen (this screen)
ldapadd LDAP
ldapaddmt LDAP
ldapbind LDAP
ldapcompare LDAP
ldapdelete LDAP
ldapmoddn LDAP
ldapmodify LDAP
ldapmodifymt LDAP
ldapsearch LDAP
ldiffmigrator ???
lmsgen NLS Binary Message File Generation

NLS Binary Message File Generation Utility: Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

Copyright (c) 1979, 2018, Oracle. All rights reserved.

Syntax:
LMSGEN <text file> <product> <facility> [language] [-t flag] [-i indir] [-o outdir]

Where <text file> is a message text file
<product> the name of the product
<facility> the name of the facility
[language] optional message language in
<language>_<territory>.<character set> format
This is required if message file is not tagged properly
with language
[-t flag] treat message truncation as error Y|N
[-i indir] optional directory where to locate the text file
[-o outdir] optional directory where to put the generated binary file.
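A sketch of building a binary message file; the product (myapp), facility (img), and message numbers are illustrative, and the message text file follows the documented number, "text" layout:

```shell
# Create a sample message text file; lines beginning with "/" are comments
cat > img.msg <<'EOF'
/ Sample US7ASCII message file for facility IMG
00001, "Invalid input file: %s"
00002, "Conversion failed with status %d"
EOF

# Compile it to a binary .msb file (the language tag is given on the command
# line because the file is not tagged; requires an Oracle home):
# lmsgen img.msg myapp img AMERICAN_AMERICA.US7ASCII
```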
loadjava Utility to load Java classes into the database
loadpsp Loads PL/SQL Server Pages (PSP) into the database as stored procedures
lsnodes List RAC Nodes

Usage: lsnodes [-c] [-l] [-n] [-v]
where:
-c get cluster name
-l get local node name
-n print node numbers also
-v verbose error reporting
lsnrctl Listener Control
lxegen NLS Calendar

NLS Calendar Utility: Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

Copyright (c) 1994, 2018, Oracle. All rights reserved.

LXE-00200: Cannot open calendar external file
lxinst NLS Data Installation

NLS Data Installation Utility: Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

Copyright (c) 1993, 2018, Oracle. All rights reserved.

LXI-ERR-00101: Invalid command-line parameter -h

Usage: lxinst [Parameters]

Optional parameters are (default values given):

OraXMLNLS=<$ORA_NLS10> Specifies where to find the xml-format boot and
object files and where to store the new binary
files.
SysDir=<OraXMLNLS> Specifies where to find the existing system boot
file.
DestDir=<OraXMLNLS> Specifies where to put the new (merged) system
boot file.
Help=NO Show this help information.
WARNING=0|1|2|3 Show different levels of warning messages
Suppress_NLT=0|1 1: Suppress appending NLT to NLB; 0: Not

If the OraXMLNLS path is not specified, the value of $ORA_NLS10 will be
used. If either of the other path parameters is not specified, the
OraXMLNLS path will be used.
mkstore Utility for creating and managing Oracle wallets, including the secure external password store
modifyEMService ???
ncomp Native compiler: compiles Java classes stored in the database into native shared libraries
netca Network Configuration Assistant
netca_deinst.sh Net Configuration Assistant Deinstall
nid Tool for setting a New IDentifier for a database
ocopy Oracle Copy
odisrvreg ???
oerr New in 12.1.0.2: Returns the description, cause, and action of an error from a message file when a facility code and error number are passed to it
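For example, to look up ORA-00942 (the facility code is ora, the error number 942):

```shell
# Prints the message text, cause, and action from the facility's message file
oerr ora 942
```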
oidca OID Configuration Assistant: sets environment variables such as ORACLE_HOME, JLIB_HOME, CLASSPATH, many of which are JAR related
oidprovtool OID Provisioning Tool: Used to create a subscription profile for OID subscribers such as Portal.
oifcfg Oracle Interface Configuration for use with RAC clusters
ojvmjava ???
ojvmtc ???
okdstry Kerberos utility
okinit Kerberos utility
oklist Kerberos utility
olsadmintool ???
olsoidsync ???
omtsreco c:\Users\oracle>omtsreco -?
Entered main()
onsctl ONS control for starting, stopping, and other management activities
opatch Oracle Patch Application/Removal Utility
oputil ???
opwdintg ???
orabase Returns ORACLE_BASE; first appears in 11gR1; checks ORACLE_BASE in $ORACLE_HOME/inventory/ContentsXML/oraclehomeproperties.xml on UNIX/Linux
orabaseconfig ???
orabasehome ???
oracg Oracle Class Generator: XML related. Deprecated in 10g and replaced by orajaxb in 11.2.0.3
oracle Oracle Database Executable Kernel
OracleAdNetConnect OracleAdNetConnect MFC Application
OracleAdNetTest ???
OraClrAgnt The Windows registry entry for the service can be changed to provide parameter values as command line parameters to OraClrAgnt.exe.
oraclrctl Part of the Oracle Database Extensions for .NET

oraclrctl: Release 18.3.0.0.0 Production
C:\Users\oracle\oraclrctl.exe (Oracle Home : OraDB18Home1)

Copyright (c) 2018, Oracle. All rights reserved.

Usage:

oraclrctl -<new|delete|start|stop> [-host <hostname>]

-new Create and start new service.
-delete Delete service.
-start Start service.
-stop Stop service.

-host <hostname> Execute the operation on the specified host.
Local host is used if option is not specified.
oradim Utility for configuring Windows services. Not present in any non-Windows installation
oradnfs_run Undocumented but related to Direct NFS

C:\Users\oracle>oradnfs_run -?

ORADNFS: Release 18.0.0.0.0 - Production on Mon Jan 21 23:43:32 2019

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

ODNFS-65006: Invalid use. Refer to Help for valid use.
Available Commands and Options:
dnfsenable
dnfsdisable [-deleteoranfstab] [-deleteodmdir]
mount <REMOTE_PATH> <LOCAL_PATH> [-f, -u uid, -g gid, -s server, -l local, -p path]
umount <LOCAL_PATH>
isdnfsenabled <ORACLE_HOME>
ispathondnfs -oh <ORACLE_HOME> -path <PATH>
save
restore
showmount [-log <filename>] -e <servername>
copy/cp [-f,-i,-r] <srcfile> <destfile>
move/mv [-f,-i] <srcfile> <destfile>
del/delete/rm [-f,-i,-r] <filename>
cat [-log <filename>] <filename>
stat [-c fmt ,-f -L/-l] <filename>
dd [-if <value>,-of <value>,-obs <value>,-ibs<value>,
-bs <value>,-count <value>,-seek<value>,-skip <value>]
mkdir [-p] <dirname>
rmdir <dirname>
getattr <dirname>
setattr <dirname>
fsstat <dirname>
fsinfo <dirname>
ls/dir [-1/-B,-A,-a,-C,-c,-d,-e,-l,-R,-r,-S,-s,-t,-u] <dirname>
find <path> [-follow,-maxdepth N,-name PATTERN,-path PATTERN,
-type X[f/d],-size N[ck],-print[default],-exec CMD ARG]
networkdiscovery
createoranfstab [-f
-u uid
-g gid
-s server
-lp "<local1,path1>;<local2,path2>;..."
-em "<export1,mount1>;<export2,mount2>;..."
-v version
-t mnt_timeout
-r rdmaport
-c community]
help

Optional tracing flags:

[-trace [filename]]
[-level <kgodm_trace level> <kgnfs_trace level> <skgnfs_trace level> <oradnfs_trace level from 0 to 4>]
orahomeuserctl Oracle Home User Control Tool new in 12.1.0.2

orahomeuserctl updpwd [-user username] [-host hostname1, hostname2, …] [-log logfilename]

C:\Users\oracle>orahomeuserctl list
Current Oracle Home is : C:\app18
Oracle Home User for the current Oracle Home is ora18user
orajaxb Oracle Java Architecture for XML Binding: XML JAXB Class Generator that replaces oracg
orakill Utility for killing Oracle sessions from the operating system command line
oramtsctl  MTS Control
oramts_deinst  MTS Deinstall
orapipe Read Oracle XML Developer's Kit Programmer's Guide, Chapter "Pipeline Definition Language for Java"
orapki Oracle Public Key Infrastructure utility: Read Oracle Database Advanced Security Administrator's Guide, Chapter "Configuring Secure Sockets Layer Authentication"
orapwd Tool for creating and managing password files
orastack Oracle Stack Modification: Used to resize the stack of an Oracle executable file such as oracle.exe or tnslsnr.exe (one way of preventing ORA-12500). Run the command without arguments at a command prompt to see an excellent description. (Not equivalent to pstack, available on some UNIXes to dump the stack trace of a running process; closer to the Solaris command ppgsz -o stack=..., except that it works on a file, not a process.)
oraversion This program prints release version information.
oravssw ???
oraxml New 12.1.0.2
oraxsl New 12.1.0.2
ORE ???
orion Oracle IO Numbers for benchmarking
ott Object Type Translator. The Windows ott.bat script sets the JRE classpath and runs java.
outline ???
patchgen New 12.1.0.2
plshprof PL/SQL Hotspot Profiler
proc Proc*C/C++ compiler
procob Proc*Cobol compiler
racgimon ???
racgmdb ???
rawutl Raw Disk Utility: rawutl -s /dev/xxx to view size
rconfig RAC Configuration Tool
registerDll New 12.1.0.2
RemoteExecService Related to ACL and remote pipes
renamedg Rename ASM Disk Group
rman Recovery Manager for backing up, recovery, and restoration of Oracle Databases
roohctl Read Only Oracle Home Control
sbttest System Backup to Tape (SBT) test: exercises the media management layer's SBT interface
schagent Oracle Scheduler agent used to execute remote external jobs
schema ???
schemasync ???
sclsspawn ???
selecthome Rather than running this directly, launch OUI to select the Home; OUI runs selecthome.bat internally.
sql SQLcl command line interface (SQL Developer command line)
sqldeveloper Oracle SQL Developer GUI tool
sqlldr SQL*Loader used to load files from the file system into the database and by External Tables
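A minimal sketch of a conventional load; the table emp, its columns, and the credentials are illustrative:

```shell
# Control file describing how to map the data file into table EMP
cat > emp.ctl <<'EOF'
LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal)
EOF

# Sample comma-delimited data file
cat > emp.dat <<'EOF'
7369,"SMITH",800
7499,"ALLEN",1600
EOF

# Run the load (requires a reachable database):
# sqlldr scott/tiger control=emp.ctl log=emp.log
```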
sqlplus Command line SQL*Plus user interface
sqlplusw Windows GUI version of SQL*Plus. No longer shipped after 10.2.0.5.
srvconfig Initializes, imports, and exports the Cluster Registry: See MyOracleSupport doc 953769.1
srvctl Server Control utility used for managing RAC and Services
statusnc Status of Native Compiler: Probably checks sys.jaccelerator$status. Ref: Note:4889370
textexport ???
tkprof Formats 10046 traces making them human readable
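A typical invocation; the trace file name is illustrative:

```shell
# Sort statements by elapsed execution time and omit recursive SYS SQL
tkprof orcl_ora_12345.trc orcl_ora_12345.txt sort=exeela sys=no
```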
tnslsnr TNS Listener: This program is run from lsnrctl. Run it directly, as root, only when starting the listener on a port below 1024 on UNIX; even in that case, the listener can be stopped with lsnrctl
tnsping Pings connections via SQL*Net: verifies that SQL*Net can reach a host and service
transx XML Translator: See XML Developer's Kit Programmer's Guide, Chapter "TransX Utility". Ref: Note:394803.1.
trcasst Trace Assistant: traces network connections to provide a detailed description of the operations performed by Oracle's internal components. The trace data is stored in an output trace file that can then be analyzed.
trcsess Combines multiple trace files into a single file for submission to TKPROF
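A sketch of merging traces for one service and then formatting the result; the service name is illustrative:

```shell
# Merge every trace file for service ORCL into one file, then format it
trcsess output=combined.trc service=orcl *.trc
tkprof combined.trc combined.txt sort=exeela
```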
uidrvci ADR related. E.g., uidrvci /u01/app/oracle/diag/rdbms/db_name/instance_name/trace getSchema INC_METER_SUMMARY, where INC_METER_SUMMARY is an .ams file under the trace/metadata subdir of the directory in the second arg. This getSchema command shows the table definition of v$incmeter_summary (summary of incident meters). Not sure what use this command really has.
umu User migration utility

https://docs.oracle.com/en/database/oracle/oracle-database/18/dbimi/using-the-user-migration-utility.html#GUID-9629B11D-D05B-4332-A7CC-EAD8B57D90C7
wrap Tool for encrypting PL/SQL source code prior to loading into the database
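A typical invocation; file names are illustrative:

```shell
# Obfuscate the source; run the resulting .plb in SQL*Plus as usual
wrap iname=my_pkg.sql oname=my_pkg.plb
```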
wrc A replay client is a multithreaded program (an executable named wrc located in the $ORACLE_HOME/bin directory) in which each thread submits a workload from a captured session. Before replay begins, the database waits for replay clients to connect. At this point, set up and start the replay clients, which connect to the replay system and send requests based on what was captured in the workload.

Oracle Workload Replay Client: See Oracle Support doc 445116.1 and 748897.1
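A sketch of the two common modes; the replay directory and credentials are illustrative:

```shell
# Estimate how many replay clients and hosts the captured workload needs
wrc mode=calibrate replaydir=/u01/replay
# Start a replay client; it connects and waits until replay is initiated
wrc system/password mode=replay replaydir=/u01/replay
```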
xml C XML Processor Command Line Utility
xmlcg XML C++ Class Generator: http://docs.oracle.com/cd/B28359_01/appdev.111/b28394/adx_cp_classgen.htm#sthref625
xmldiff Compares the trees that represent the two input documents to determine differences. Both input documents must use the same character-set encoding. The Xdiff (output) instance document has the same encoding as the data encoding (DOM encoding) of the input documents.
xmlpatch https://docs.oracle.com/en/database/oracle/oracle-database/18/sqlrf/XMLPATCH.html#GUID-C52DA494-2840-475B-871F-1EA071299894
xmlwf Checks whether an XML document is well-formed (the expat xmlwf utility)
xsl C XSLT Processor Command Line Utility
xsql Runs the oracle.xml.xsql.XSQLCommandLine class
xsqlproxy ???
xvm XVM Processor Command Line Utility
zip Zip utility provided for the Oracle installer
 
ACFSINSTALL
ACFS Installer acfsinstall: Usage: acfsinstall {/i | /u} {/a | /o | /l} [<path>]
acfsinstall: /i Install the driver
acfsinstall: /u Uninstall the driver
acfsinstall: /a Perform the operation for the ADVM driver
acfsinstall: /o Perform the operation for the ACFS driver
acfsinstall: /l Perform the operation for the Oracle Kernel Services (OKS) driver
acfsinstall: <path> Path to the driver. If not specified,
defaults to oracleadvm.sys, oracleacfs.sys, or
oracleoks.sys in the current directory

acfsinstall: Examples:
acfsinstall: acfsinstall /i /a (Install the ADVM driver from the current directory)
acfsinstall: acfsinstall /u /o (Uninstall the ACFS driver)
acfsinstall: acfsinstall /i /o c:\drivers\oracleacfs.sys (Install ACFS with the specified file)
TBD
 
AFDBOOT
Undocumented C:\Users\oracle>afdboot -help

afdboot -scandisk [-afd_diskstring '<Dev_path>,<dev_path>....']
afdboot -f [File_name]
afdboot -config '<conf_param=value>,<conf_param=value>....'
TBD
 
AFDTOOL
Undocumented C:\Users\oracle>afdtool -help
Usage:
afdtool -add [-f] <devpath1, [devpath2,..]> <labelname>
afdtool -delete [-f] <devicepath | labelname>
afdtool -getdevlist [label] [-nohdr] [-nopath]
afdtool -filter <enable | disable> <devicepath>
afdtool -rescan [discstr1, discstr2, ...]
afdtool -stop
afdtool -log [-d <path>][-l <log_level>][-c <log_context>][-s <buf_size>]
[-m <max_file_sz>][-n <max_file_num>] [-q] [-t] [-h]
afdtool -di <enable | disable | query>
TBD
 
AMDU
ASM Disk Usage

Creates amdu_time/report.txt for ASM disk usage
C:\Users\oracle>amdu -help
al/lides Dump indirect blks unconditionally
-allides: AMDU ordinarily skips over empty indirect blocks. Specifying this option tells AMDU to dump those blocks unconditionally. Be warned that this can make the resulting AMDU dump quite large.

au/size AU size for corrupt disks
-ausize <bytes>: This option must be set when -baddisks is set. It must be a power of 2. This size is required to scan a disk looking for metadata, and it is normally read from the disk header. The value applies to all disks that do not have a valid header. The value from the disk header will be used if a valid header is found.

ba/ddisks Include disks with bad headers
-baddisks <diskgroup>: Normally disks with bad disk headers, or that look like they were never part of a disk group, will not be scanned. This option forces them to be scanned anyway and to be considered part of the given diskgroup. This is most useful when a disk header has been damaged. The disk will still need to have a valid allocation table to drive the scan unless -fullscan is used. In any case at least one block in the first two AUs must be valid so that the disk number can be determined. The options -ausize and -blksize are required since these values are normally fetched from the disk header. If the diskgroup uses external redundancy then -external should be specified. These values will be compared against any valid disks found in the diskgroup and they must be the same.

bl/ksize ASM block size for corrupt disks
-blksize <bytes>: This option must be set when -baddisks is set. It must be a power of 2. This size is required to scan a disk looking for metadata, and it is normally read from the disk header. The value applies to all disks that do not have a valid header. The value from the disk header will be used if a valid header is found.

c/ompare Compare file mirrors
-compare: This option only applies to file extraction from a normal or high redundancy disk group. Every extent that is mirrored on more than one discovered disk will have all sides of its mirror compared. If they are not identical a message will be reported on standard error and the report file. The message will indicate which copy was extracted. A count of the blocks that are not identical will be in the report file.

dir/ectory Directory from previous dump
-directory: <string>: This option completely eliminates the discovery phase of operation. It specifies the name of a dump directory from a previous run of AMDU. The report file and map files are read instead of doing a discovery and scan. The parsing of these ASCII files is very dependent on them being exactly as written by AMDU. AMDU is unlikely to work properly if they have been modified by a text editor, or if some of the files are missing or truncated. Note that the directory may be a copy FTPed from another machine. The other machine may even be a different platform with a different endianness.

dis/kstring Diskstring for discovery
-diskstring: <string>: By default the null string is used for discovery. The null string should discover all disks the user has access to. Many installations specify an asm_diskstring parameter for their ASM instance. If so that parameter value should be given here. Multiple discovery strings can be specified by multiple occurrences of -diskstring <string>. Beware of shell syntax conflicts with discovery strings. Diskstrings are usually the same syntax the shell uses for expanding path names on command lines so they will most likely need to be enclosed in single quotes.

du/mp Diskgroups to dump
-dump <diskgroup>: This option specifies the name of a diskgroup to have its metadata dumped. This option may be specified multiple times to dump multiple diskgroups. If the diskgroup name is ALL then all diskgroups encountered will be dumped. The diskgroup name is not case sensitive, but will be converted to uppercase for all reports. If this option is not specified then no map or image files will be created, but -extract and -print may still work.

exc/lude Disks to exclude
-exclude <string>: Multiple exclude options may be specified. These strings are used for discovery just like the values for diskstring. Only shallow discovery is done on these diskstrings. Any disks found in the exclude discovery will not be accessed. If they are also discovered using the -diskstring strings, then the report will include the information from shallow discovery along with a message indicating the disk was excluded.

exte/rnal Assume external redundancy
-external: Normally AMDU determines the diskgroup redundancy from the disk headers. However this is not possible with the -baddisks option. It is assumed that the redundancy of the -baddisks diskgroup is normal or high unless this option is given to specify external redundancy.

extr/act Files to extract
-extract <diskgroup>.<file_number>: This extracts the numbered file from the named diskgroup, case insensitive. This option may be specified multiple times to extract multiple files. The extracted file is placed in the dump directory under the name <diskgroup>_<number>.f where <diskgroup> is the diskgroup name in uppercase, and <number> is the file number. The -output option may be used to write the file to any location. The extracted file will appear to have the same contents it would have if accessed through the database. If some portion of the file is unavailable then that portion of the output file will be filled with 0xBADFDA7A, and a message will appear on stderr.

fi/ledump Dump files rather than extract
-filedump: This option causes the file objects in the command line to have their blocks dumped to the image files rather than extracted. This can be combined with the -novirtual option to selectively dump only some of the metadata files. It may also be used to dump user files (number >= 256) so that all mirrored copies can be examined.

fo/rmer Include dropped disks
-former: Normally disks marked as former are not scanned, but this option will scan them and include their contents in the output. This is useful when it is necessary to look at the contents of a disk that was dropped. Note that dropped normal disks will not have any entries in their allocation tables and thus only the physically addressed extents will be dumped. Force dropped disks will not have status former in their disk headers and are not affected by this option. However if DROP DISKGROUP is used, the disks will have the contents as of the time of the drop, and will be in status former. Thus this option is useful for extracting files from a dropped diskgroup.

fu/llscan Scan entire disk
-fullscan: This option reads every AU on the disk and looks at the contents of the AU rather than limiting the AU's read based on the allocation table. This is useful when the allocation table is corrupt or needs recovery. An AU will be written to the image file if it starts with a block that contains a valid ASM block header. The file and extent information for the map will be extracted from the block header. Physically addressed metadata will be dumped regardless of its contents. This option is incompatible with extracting a file. It is an error to specify -extract with this option. Note that this option is likely to find old garbage metadata in unallocated AU's since there is no means of determining what is allocated. Thus there may be many different copies of the same block, possibly of different versions.

h/ex Always print block contents in hex
-hex: This prints the block contents in hex without attempting to print them as ASM metadata. This is useful when the block is known to not be ASM metadata. It avoids the ASM block header dump and ensures the block is not accidentally interpreted as ASM metadata. This option requires at least one -print option.

noa/cd Do not dump ACD
-noacd: This option limits the dumping of the Active Change Directory to just the control blocks that contain the checkpoint. There is 126 MB of ACD per ASM instance (42 MB for external redundancy). It is normally of no interest if there has been a clean shutdown or no updates for a while. This option avoids dumping a lot of unimportant data. The blocks will still be read and checked for corruption. The map file will still contain entries for the ACD extents, but the block counts will be zero.

nod/ir Do not create a dump directory
-nodir: No dump directory is created, and no files are created in it. The directory name is not written to standard out. The report file is written to standard out before any block printouts from any -print options. This option conflicts with -filedump. It is an error to specify this and extract a file to the dump directory.

noe/xtract Do not create extracted file
-noextract: This prevents files from being extracted to an output file, but the file will be read and any errors in selecting the correct output will be reported. This is most useful in
combination with the -compare option.

noh/eart Do not check for heartbeat
-noheart: Normally the heartbeat block will be saved at discovery time and checked when the disk is scanned. A sleep is added between discovery and scanning to ensure there is time for the heartbeat to be written. If the heartbeat block changes then it is most likely that the diskgroup containing this disk is mounted by an active ASM instance. An error and warning is generated but operation proceeds normally. This option suppresses this check and avoids the sleep.

noi/mage Do not create image files
-noimage: No image files will be created in the dump directory. All the reads specified by the read options will still be done. The map files may be used to find blocks on the disks themselves. In the map file, the count of blocks dumped, the image file sequence number, and the byte offset in the image file will always be zero (C00000 S0000 B0000000000)

nom/ap Do not create map or image files
-nomap: No map file is created and no image file is created. The only output is the report file. The -noimage option is assumed if this is set since an image file without a map is useless. The options -noscan and -noread also result in no map or image files, but -nomap still reads the metadata to check for I/O errors and corrupt blocks.

nop/rint Do not print block contents
-noprint: This suppresses the printout of the block contents for blocks printed with the -print option. It is useful for getting just the block reports without a lot of data. This option requires at least one -print option.

norea/d Shallow discovery only
-noread: This eliminates any reading of any disks at all. Only shallow discovery will be done. The report will end after the discovery section. It is an error to specify this option and specify a file to extract or blocks to print. It is an error to specify this and -fullscan.

norep/ort Do not generate a report
-noreport: This suppresses the generation of the report file. It is most useful in combination with -nodir and -print to get block printouts without a lot of clutter. It is unnecessary to include this with -directory since no report is generated then anyway.

nosc/an Deep discovery only
-noscan: This eliminates any reading of any disks after deep discovery. This results in just doing a deep discovery using the -diskstring parameter. The report will end after the discovery section. It is an error to specify this option and specify a file to extract. It is an error to specify this and -fullscan.

nosu/bdir Do not create a dump directory
-nosubdir: No dump directory is created, but files are still created. The directory name is not written to standard out. The report file and any other dump or extract files are written to the current directory or to the directory indicated by -parentdir. This means that if multiple AMDU dumps are requested using this option, the report file will always correspond to the last dump requested.

nov/irtual Do not dump virtual metadata
-novirtual: This option eliminates reading of any virtual metadata. Only the physically addressed metadata will be read. This implicitly eliminates the ACD and extent maps so -noacd and -noxmap will be assumed.

nox/map Do not dump extent maps
-noxmap: This option eliminates reading of the indirect extents containing the file extent maps. This is the bulk of the metadata in most diskgroups. Even the entries in the map file will be eliminated.

o/utput Files to create for extract
-output <file_name>: This option specifies a different file for writing an extracted file. The file will be overwritten if it already exists. This option requires that exactly one file is extracted via the -extract option. Required with -directory.

pa/rent Parent for dump directory
-parent <path_name>: By default the dump directory is created in the current directory, but another directory can be specified using this option. The parent directory for the dump directory must already exist.

pr/int Block to print
-print <block_spec>: This option prints one or more blocks to standard out. This option may be specified multiple times to print multiple <block_spec>s. The printout contains information about how each block was read as well as a formatted printout. Multiple blocks matching the same <block_spec> may be found when scanning the disks. For example there may be multiple disks that have headers for the same diskgroup and disk number. If the block is from a mirrored file then multiple copies should exist on different disks. If multiple copies of the same block have identical contents then only one formatted printout of the contents will be generated, but a header will be printed for each copy. A <block_spec> may include a count of sequential blocks to print. A <block_spec> may specify a block either by disk or file.
<block_spec> ::= <single_block> | <single_block>.C<count>
<single_block> ::= <report_disk_block> | <group_disk_block> |
<extent_file_block> | <virtual_file_block> | <xmap_file_block>
<report_disk_block> ::=
<group_name>.N<report_number>.A<au_number>.B<block_number>
<group_disk_block> ::=
<group_name>.D<disk_number>.A<au_number>.B<block_number>
<extent_file_block> ::=
<group_name>.F<file_number>.X<physical_extent>.B<block_number>
<virtual_file_block> ::=
<group_name>.F<file_number>.V<virtual_block_number>
<xmap_file_block> ::=
<group_name>.F<file_number>.M<extent_map_block_number>
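Putting the grammar above to work, a hypothetical invocation is sketched below. The diskgroup name DATA and the ASMLib disk path are assumptions; adjust both to your environment.

```shell
# Print block 0 of allocation unit 0 on disk 0 of diskgroup DATA -- the
# disk header. -noreport and -nodir (described above) suppress the report
# file and dump directory so only the block printout reaches standard out.
amdu -diskstring '/dev/oracleasm/disks/*' -print 'DATA.D0.A0.B0' -noreport -nodir
```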

r/egistry
Dump registry files
-registry: The ASM registries will be read and dumped to the image file. There will be no block consistency checks since these files do not have ASM cache headers. To dump one specific registry specify -filedump and include the file object for the registry (e.g. DATA.255).

s/pfile Extract usable spfile
-spfile: This causes extract to render the resulting file in a form that is directly usable by STARTUP. Without this option, AMDU extracts the file as a regular ASM file, including all of the ASM-specific headers.
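A sketch of combining -spfile with -extract to recover a server parameter file from a diskgroup that will not mount. The diskgroup name, disk string, and ASM file number 253 are all assumptions; confirm the spfile's actual file number first (for example from the amdu report).

```shell
# Extract the spfile (assumed file number 253 in diskgroup DATA) into a
# form that STARTUP can use directly; skip the report and dump directory.
amdu -diskstring '/dev/oracleasm/disks/*' -extract 'DATA.253' \
     -output spfileORCL.ora -spfile -noreport -nodir
```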
TBD
 
ASMCMD
ASM CoMmanD line interface (Wrapper)

This program is a wrapper for asmcmdcore. It takes the same parameters as asmcmdcore. It first checks to see if %ORACLE_HOME% is set. If not, it prints an error message and exits. It then invokes asmcmdcore at %ORACLE_HOME%/bin/asmcmdcore with the Perl interpreter at %ORACLE_HOME%/perl/<version>/bin/perl.
C:\app\oracle\product\12.1.0\dbhome_2\BIN>set ORACLE_HOME=/app/oracle/product/12.1.0/dbhome_2

C:\app\oracle\product\12.1.0\dbhome_1>asmcmd -help
usage: asmcmd [-V] [--nocp] [-v {errors | warnings | normal | info | debug} ] [--privilege {sysasm | sysdba} ] [-p] [--inst <instance_name>] [--discover][<command>]
help: help
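The usage line above can be exercised interactively or one command at a time. The diskgroup name DATA below is an assumption.

```shell
# Interactive session; -p shows the current ASM path in the prompt:
asmcmd -p --privilege sysasm

# Single commands can also be passed directly on the command line:
asmcmd lsdg          # list mounted diskgroups
asmcmd ls -l +DATA   # long listing of files in the DATA diskgroup
```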
TBD
 
CHOPT
CHang database OPTions chopt <enable | disable> <option>
chopt enable rat
 
CREATDEP
Undocumented Usage: creatdep /s <ServiceName> /d <DependServiceName>
TBD
 
DBV
Database Verify

dbv USERID=username/password
    segment_id=<tablespace_name.segfile.segblock>
    logfile=<logging_file_name_and_path>
    feedback=<integer>
    help=<Y/N>
    parfile=<parameter_file_name_and_path>
C:\Users\oracle>dbv -help

DBVERIFY: Release 18.0.0.0.0 - Production on Thu Jan 17 19:46:35 2019

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Keyword Description (Default)
----------------------------------------------------
FILE        File to Verify                 (NONE)
START       Start Block                    (First Block of File)
END         End Block                      (Last Block of File)
BLOCKSIZE   Logical Block Size             (8192)
LOGFILE     Output Log                     (NONE)
FEEDBACK    Display Progress               (0)
PARFILE     Parameter File                 (NONE)
USERID      Username/Password              (NONE)
SEGMENT_ID  Segment ID (tsn.relfile.block) (NONE)
HIGH_SCN    Highest Block SCN To Verify    (NONE)
            (scn_wrap.scn_base OR scn)
/*The following example shows a sample use of the command-line interface to this mode of DBVERIFY.

SEGMENT_ID specifies the segment to be verified. It is composed of the tablespace ID number (tsn), segment header file number (segfile), and segment header block number (segblock). These can be obtained by querying TABLESPACE_ID, HEADER_FILE, and HEADER_BLOCK from sys_user_segs. */


SELECT tablespace_id, header_file, header_block
FROM sys_user_segs
ORDER BY 1,2,3;

dbv USERID=uwclass/uwclass SEGMENT_ID=2.3.38451

DBVERIFY - Verification starting : SEGMENT_ID = 2.3.38451

DBVERIFY - Verification complete

Total Pages Examined         : 32
Total Pages Processed (Data) : 28
Total Pages Failing (Data)   : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index)  : 0
Total Pages Processed (Other): 3
Total Pages Processed (Seg)  : 1
Total Pages Failing (Seg)    : 0
Total Pages Empty            : 0
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Highest block SCN            : 30027821 (0.30027821)
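Besides SEGMENT_ID mode, DBVERIFY can check a single datafile directly with the FILE keyword shown in the help output above; no database connection is required in that mode. The datafile path below is an assumption.

```shell
# Verify one datafile offline. BLOCKSIZE must match the tablespace block
# size or DBV will report spurious corruption; FEEDBACK prints a dot per
# 1000 pages so long runs show progress.
dbv FILE=/u01/app/oracle/oradata/ORCL/system01.dbf BLOCKSIZE=8192 \
    LOGFILE=dbv_system.log FEEDBACK=1000
```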
 
NID
DB New ID

DBNEWID is a database utility that can change the internal database identifier (DBID) and the database name (DBNAME) for an operational database.

Prior to the introduction of the DBNEWID utility, you could manually create a copy of a database and give it a new database name (DBNAME) by re-creating the control file. However, you could not give the database a new identifier (DBID). The DBID is an internal, unique identifier for a database. Because Recovery Manager (RMAN) distinguishes databases by DBID, you could not register a seed database and a manually copied database together in the same RMAN repository. The DBNEWID utility solves this problem by allowing you to change any of the following:

* Only the DBID of a database
* Only the DBNAME of a database
* Both the DBNAME and DBID of a database

Changing the DBID of a database is a serious procedure. When the DBID of a database is changed, all previous backups and archived logs of the database become unusable. After you change the DBID, you must open the database with the RESETLOGS option, which re-creates the online redo logs and resets their sequence to 1 (see the Oracle Database Administrator's Guide). Consequently, you should make a backup of the whole database immediately after changing the DBID.

Changing the DBNAME without changing the DBID does not require you to open with the RESETLOGS option, so database backups and archived logs are not invalidated. However, changing the DBNAME does have consequences. You must change the DB_NAME initialization parameter after a database name change to reflect the new name. Also, you may have to re-create the Oracle password file. If you restore an old backup of the control file (before the name change), then you should use the initialization parameter file and password file from before the database name change.

Syntax:
Keyword Description Default
TARGET Username / Password None
DBNAME New database name None
LOGFILE Output log None
REVERT Revert failed change NO
SETNAME Set a new database name only NO
APPEND Append to output log NO
HELP Displays help messages NO
C:\users\oracle>nid -help

DBNEWID: Release 18.0.0.0.0 - Production on Tue Jan 22 08:39:22 2019

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Keyword Description (Default)
----------------------------------------------------
TARGET      Username/Password              (NONE)
DBNAME      New database name              (NONE)
LOGFILE     Output Log                     (NONE)
REVERT      Revert failed change           NO
SETNAME     Set a new database name only   NO
APPEND      Append to output log           NO
HELP        Displays these messages        NO
C:\users\oracle>nid -h

DBNEWID: Release 18.0.0.0.0 - Production on Thu Jan 17 09:03:17 2019

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

NID-00002: Parse error: LRM-00101: unknown parameter name 'h'

Change of database ID failed during validation - database is intact.
DBNEWID - Completed with validation errors.
/* Ensure that you have a recoverable whole database backup and ensure that the target database is mounted but not open, and that it was shut down consistently prior to mounting. */

SHUTDOWN IMMEDIATE
STARTUP MOUNT


/* Invoke the DBNEWID utility on the command line, specifying a valid user with the SYSDBA privilege. */

% nid TARGET=sys/oracle123@test_db

/* To change the database name in addition to the DBID, specify the DBNAME parameter. This example changes the name to orabase: */

% nid TARGET=sys/oracle@test DBNAME=orabase

/* The DBNEWID utility performs validations in the headers of the datafiles and control files before attempting I/O to the files. If validation is successful, then DBNEWID prompts you to confirm the operation (unless you specify a log file, in which case it does not prompt), changes the DBID for each datafile (including offline normal and read-only datafiles), and then exits. The database is left mounted but is not yet usable. */

DBNEWID: Release 12.1.0.1.0 - Production on Mon Jul 21 07:28:35 2014
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to database TEST_DB (DBID=3942195360)

Control Files in database:
/oracle/dbs/cf1.f
/oracle/dbs/cf2.f

Change database id of database SOLARIS? (Y/[N]) => y

Proceeding with operation
Datafile /oracle/dbs/tbs_01.f - changed
Datafile /oracle/dbs/tbs_02.f - changed
Datafile /oracle/dbs/tbs_11.f - changed
Datafile /oracle/dbs/tbs_12.f - changed
Datafile /oracle/dbs/tbs_21.f - changed


New DBID for database TEST_DB is 3942196782.
All previous backups and archived redo logs for this database are unusable.
Proceed to shutdown database and open with RESETLOGS option.
DBNEWID - Database changed.

/* If validation is not successful, then DBNEWID terminates and leaves the target database intact. You can open the database, fix the error, and then either resume the DBNEWID operation or continue using the database without changing its DBID.

After DBNEWID successfully changes the DBID, shut down the database. */


SHUTDOWN IMMEDIATE

-- mount the database

STARTUP MOUNT

-- open the database in RESETLOGS mode and resume normal use

ALTER DATABASE OPEN RESETLOGS;

/* make a new database backup. Because you reset the online redo logs, the old backups and archived logs are no longer usable in the current incarnation of the database. */
/* The following steps describe how to change the database name without changing the DBID. */

-- 1. Ensure that you have a recoverable whole database backup
-- 2. Ensure that the target database is mounted but not open, and that it was shut down consistently prior to mounting


SHUTDOWN IMMEDIATE
STARTUP MOUNT

-- 3. Invoke the utility on the command line, specifying a valid user with the SYSDBA privilege. You must specify both the DBNAME and SETNAME parameters. This example changes the name to orabase

% nid TARGET=SYS/oracle@test_db DBNAME=orabase SETNAME=YES

/* DBNEWID performs validations in the headers of the control files (not the datafiles) before attempting I/O to the files. If validation is successful, then DBNEWID prompts for confirmation, changes the database name in the control files, and exits. After DBNEWID completes successfully, the database is left mounted but is not yet usable. */

DBNEWID: Release 12.1.0.1.0 - Production on Mon Jul 21 07:28:35 2014
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to database TEST_DB (DBID=3942196782)

Control Files in database:
/oracle/dbs/cf1.f
/oracle/dbs/cf2.f

Change database name of database TEST_DB to ORABASE? (Y/[N]) => Y

Proceeding with operation

Database name changed from TEST_DB to ORABASE - database needs to be shutdown.
Modify parameter file and generate a new password file before restarting.

DBNEWID - Successfully changed database name


/* If validation is not successful, then DBNEWID terminates and leaves the target database intact. You can open the database, fix the error, and then either resume the DBNEWID operation or continue using the database without changing the database name. */

-- 4. Shut down the database.


SHUTDOWN IMMEDIATE

-- 5. Set the DB_NAME initialization parameter in the initialization parameter file to the new database name.
-- 6. Create a new password file.
-- 7. Start up the database and resume normal use.


STARTUP
/* To revert a stalled DBID change operation, run the DBNEWID utility again, specifying the REVERT keyword. */

% nid TARGET=SYS/oracle REVERT=YES LOGFILE=$HOME/nid.log
/* connects with operating system authentication and changes only the DBID */

% nid TARGET=/

/* Changing the DBID and Database Name: the following example connects as user SYS, changes the DBID, and also changes the database name to orabase */

% nid TARGET=SYS/oracle@test_db DBNAME=orabase
 
OJVMTC
  ojvmtc [-help ] [-bootclasspath] [-server connect_string] [-jar jar_name] [-list] -classpath jar1:path2:jar2 jars,...,classes
-bootclasspath Classes used for closure but not included in the closure set

-server connect_string Connect to the server and uses classes found there.
Classes are not included in the closure set connect_string
thin thin:user/passwd@host:port:sid
oci oci:user/passwd@host:port:sid
oci:user/passwd@tnsname
oci:user/passwd@(connect descriptor)

-jar jar_name Write each class of the closure set to a jar and generate stubs for missing classes

-list List the missing classes

-classpath Use the specified jars and classes for the closure set
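A sketch of the two most common invocations implied by the switches above. The jar names are assumptions.

```shell
# List the classes missing from the closure set of two application jars:
ojvmtc -classpath app.jar:lib.jar -list

# Write the closure set, with stubs generated for missing classes, to a jar:
ojvmtc -classpath app.jar:lib.jar -jar closure.jar
```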
TBD
 
OKDSTRY
  Usage: okdstry [-q] [-c cache_name]
-q quiet mode (Don't output BELL characters).
-c specify name of credential cache.
TBD
 
OKINIT
  Usage: okinit [-V] [-l lifetime] [-s start_time]
[-r renewable_life] [-f | -F] [-p | -P] -n [-a | -A] [-C]
[-E]
[-v] [-R] [-k [-t keytab_file]] [-c cachename]
[-S service_name] [-T ticket_armor_cache]
[-X <attribute>[=<value>]] [principal]

options:
  -V verbose
  -l lifetime
  -s start time
  -r renewable lifetime
  -f forwardable
  -F not forwardable
  -p proxiable
  -P not proxiable
  -n anonymous
  -a include addresses
  -A do not include addresses
  -v validate
  -R renew
  -C canonicalize
  -E client is enterprise principal name
  -k use keytab
  -t filename of keytab to use
  -c Kerberos 5 cache name
  -S service
  -T armor credential cache
  -X <attribute>[=<value>]
TBD
 
OKLIST
  Usage: oklist [-e] [-V] [[-c] [-d] [-f] [-s] [-a [-n]]] [-k [-t] [-K]] [name]
  -c specifies credential cache.
  -k specifies keytab. (Default is credential cache).
  -e shows the encryption type.
  -V shows the Kerberos version and exits.
options for credential caches:
  -d shows the submitted authorization data types.
  -f shows credential flags.
  -s sets exit status based on valid tgt existence.
  -a displays the address list.
  -n do not reverse-resolve.
options for keytabs:
  -t shows keytab entry timestamps.
  -K shows keytab entry DES keys.
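A minimal sketch tying okinit and oklist together; the principal name krbuser is an assumption, and the commands only work against a Kerberos realm configured in sqlnet.ora.

```shell
# Obtain a ticket for the assumed principal, then inspect the cache:
okinit krbuser
oklist -f        # -f also shows the credential flags for each ticket
```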
TBD
 
ORASTACK
Windows Modify Reserved Stack Size orastack <executable> <stack_size>
orastack
-- running orastack without arguments prints its usage notes; read them first

orastack oracle.exe 500000
orastack tnslsnr.exe 500000
 
ORAVERSION
Oracle Version Information These are its possible arguments:
 -compositeVersion: Print the full version number: a.b.c.d.e.
 -baseVersion: Print the base version number: a.0.0.0.0.
 -majorVersion: Print the major version number: a.
 -buildStamp: Print the date/time associated with the build.
 -buildDescription: Print a description of the build.
 -help: Print this message.
c:\app18\bin>oraversion -baseVersion
18.0.0.0.0

c:\app18\bin>oraversion -buildDescription
Release_Update

c:\app18\bin>oraversion -buildStamp
180813205634

c:\app18\bin>oraversion -compositeVersion
18.3.0.0.0

c:\app18\bin>oraversion -majorVersion
18
 
TRCASST
Format listener trace files trcasst <switch> <tracefile_name>

Syntax:
Switch Description
-e Displays error information. Valid options listed below.

Switch Description Default
0 Translate NS error numbers Yes
1 Display only error translation  
2 Display error numbers without translation  
-l Displays services and TTC information. Valid options listed below.

Switch Description Default
-a Displays data for all connections in trace file  
-i Displays the trace data for a particular ID from the -la option.  
-s Displays a summary of statistics. This includes total bytes sent and received, maximum open cursors, total calls, parse counts, and more.  
-o Displays services and TTC (Two Task Common) information. Valid options listed below.

Switch Description Default
-c Summary of connection information  
-d Detailed connection information Yes
-q SQL commands (used in combination with "u")  
-t Detailed TTC information Yes
-u Summary of TTC information  
-s Statistics Default
C:\Documents and settings\oracle>set trc_level ADMIN

-- connect to the database and perform some actions via SQL*Net

C:\Documents and settings\oracle>set trc_level OFF

C:\Documents and settings\oracle> trcasst c:\app\oracle\diag\tnslsnr\perrito4\listener\trace\ora_7520_10328.trc
 
TRCSESS
Trace file consolidation. Merges multiple trace files into a single file for running through TKPROF. The code for this utility is in $ORACLE_HOME/bin.

trcsess [output=output_file_name]
        [session=session_id]
        [clientid=client_id]
        [service=service_name]
        [action=action_name]
        [module=module_name]
        [trace_files]
C:\Users\oracle>trcsess
oracle.ss.tools.trcsess.SessTrcException: SessTrc-00002: Session Trace Usage error: Wrong parameters passed.
trcsess [output=<output file name >] [session=<session ID>] [clientid=<clientid>]
[service=<service name>] [action=<action name>] [module=<module name>] <trace file names>

output=<output file name> output destination default being standard output.
session=<session Id> session to be traced.
Session id is a combination of session Index & session serial number e.g. 8.13.
clientid=<clientid> clientid to be traced.
service=<service name> service to be traced.
action=<action name> action to be traced.
module=<module name> module to be traced.
<trace_file_names> Space separated list of trace files with wild card '*' supported.
conn uwclass/uwclass@pdbdev

SQL> SELECT sid, serial#
2 FROM v$session
3 WHERE sid IN (SELECT sid FROM v$mystat WHERE rownum = 1);

       SID    SERIAL#
---------- ----------
       147       8613

-- create two trace files ... first a 10053
ALTER SESSION SET tracefile_identifier = 'test_trace1';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
select count(*) from airplanes where delivered_date > sysdate+1000;
select count(*) from airplanes where delivered_date <= sysdate+1000;
ALTER SESSION SET EVENTS '10053 trace name context OFF';

-- then a 10046
ALTER SESSION SET tracefile_identifier = 'test_trace2';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
select count(*) from airplanes where delivered_date > sysdate+1000;
select count(*) from airplanes where delivered_date <= sysdate+1000;
ALTER SESSION SET EVENTS '10046 trace name context OFF';

-- finally use trcsess to merge them for TKPROF analysis
trcsess output=c:\tmp\mytrace.trc session=147.8613 c:\app\oracle\diag\rdbms\orabase\orabase\trace\*test_trace*.trc
 
WRAP
Wrap a PL/SQL object

See also DBMS_UTILITY
wrap iname=input_file oname=output_file
SELECT ascii('D') FROM dual;
SELECT ascii(' ') FROM dual;

-- create as a text file c:\temp\unwrapped.sql
CREATE OR REPLACE FUNCTION unwrapped (namein IN VARCHAR2) RETURN NUMBER AUTHID DEFINER IS
 c  PLS_INTEGER := 0;
 j  PLS_INTEGER;
 l  PLS_INTEGER;
BEGIN
  l := LENGTH(namein);

  FOR i IN 1.. l LOOP
    c := c + ASCII(SUBSTR(namein, i, 1));
  END LOOP;
  RETURN c;
END unwrapped;
/

-- create as a text file c:\temp\wrapped.sql
CREATE OR REPLACE FUNCTION wrapped (namein IN VARCHAR2) RETURN NUMBER AUTHID DEFINER IS
 c  PLS_INTEGER := 0;
 j  PLS_INTEGER;
 l  PLS_INTEGER;
BEGIN
  l := LENGTH(namein);

  FOR i IN 1.. l LOOP
    c := c + ASCII(SUBSTR(namein, i, 1));
  END LOOP;
  RETURN c;
END wrapped;
/

wrap iname=c:\temp\wrapped.sql oname=c:\temp\output.sql

SQL> @c:\temp\unwrapped.sql
SQL> @c:\temp\output.sql

set pagesize 0

SELECT text
FROM user_source
WHERE name = 'UNWRAPPED'
ORDER BY line;

SELECT text
FROM user_source
WHERE name = 'WRAPPED'
ORDER BY line;
 
XML
Undocumented Usage: xml [switches] [document URI]
    or xml -f [switches] [document filespec]
Switches:
    -B <BaseUri> Set the Base uri for XSLT processor.
 BaseUri of http://pqr/xsl.txt resolves pqr.txt to http://pqr/pqr.txt
    -c Conformance check only, no validation
    -e <encoding> Specify default input file encoding (-ee to force)
    -E <encoding> Specify output/data/presentation encoding
    -f File - Interpret <document> as filespec, not URI
    -G <xptr exprs> evaluates XPointer scheme examples given in a file
    -h Help - show this usage help (-hh for more options)
    -i <n> Number of times to iterate the XSLT processing
    -l <language> Language for error reporting
    -o <XSLoutfile> Specify output file of XSLT processor
    -p Print document/DTD structures after parse
    -P Pretty print from root element
    -PP Pretty print from root node (DOC); includes XMLDecl
    -PE <encoding> Specify encoding for -P or -PP output
    -PX Include XMLDecl in output always
    -s <style sheet> Style sheet - specifies the XSL style sheet
    -v Version - show parser version and exit
    -V <var> <value> To test top level variables in CXSLT
    -w Whitespace - preserve ALL whitespace
    -W Warning - stop parsing after a warning
    -x Exercise SAX interface for parser (prints document)
    -Y control characters are valid
TBD
 
XMLWF
Undocumented usage: xmlwf [-n] [-p] [-r] [-s] [-w] [-x] [-d output-dir] [-e encoding] file ...
TBD
 
XSL
C XSLT Processor Command Line Utility Usage: xsl [switches] <stylesheet> <instance>
    or xsl -f [switches] [document filespec]
Switches:
    -B <BaseUri> Set the Base uri for XSLT processor.
BaseUri of http://pqr/xsl.txt resolves pqr.txt to http://pqr/pqr.txt
   -e <encoding> Specify default input file encoding (-ee to force)
   -E <encoding> Specify output/data/presentation encoding
   -f File - Interpret <document> as filespec, not URI
   -G <xptr exprs> evaluates XPointer scheme examples given in a file
   -h Help - show this usage help (-hh for more options)
   -i <n> Number of times to iterate the XSLT processing
   -l <language> Language for error reporting
   -o <XSLoutfile> Specify output file of XSLT processor
   -v Version - show parser version and exit
   -V <var> <value> To test top level variables in CXSLT
   -w Whitespace - preserve ALL whitespace
   -W Warning - stop parsing after a warning
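A hypothetical invocation following the usage line above; all three file names are assumptions.

```shell
# Apply a stylesheet to a local instance document and write the result to
# a file. -f treats the arguments as filespecs rather than URIs, and -o
# names the XSLT output file.
xsl -f -o report.html report.xsl report.xml
```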
TBD

Related Topics
DBMS_DDL
DBMS_DB_VERIFY
DBMS_UTILITY
Functions
Procedures
What's New In 12cR1
What's New In 12cR2

This site is maintained by Dan Morgan. Last Updated: This site is protected by copyright and trademark laws under U.S. and International law. © 1998-2019 Daniel A. Morgan All Rights Reserved