Oracle DBMS_HADOOP_INTERNAL
Version 21c

General Information
Purpose DBMS_HADOOP_INTERNAL is an internal package providing helper functions to DBMS_HADOOP.
AUTHID DEFINER
Dependencies
ALL_HIVE_COLUMNS
ALL_HIVE_DATABASES
ALL_HIVE_PART_KEY_COLUMNS
ALL_HIVE_TABLES
ALL_HIVE_TAB_PARTITIONS
ALL_TAB_PRIVS
ALL_XT_HIVE_TABLES_VALIDATION
ANYDATA
DBA_DIRECTORIES
DBA_HIVE_COLUMNS
DBA_HIVE_DATABASES
DBA_HIVE_PART_KEY_COLUMNS
DBA_HIVE_TABLES
DBA_HIVE_TAB_PARTITIONS
DBA_OBJECTS
DBA_PART_TABLES
DBA_TABLES
DBA_XTERNAL_TAB_PARTITIONS
DBA_XT_HIVE_TABLES_VALIDATION
DBMS_ASSERT
DBMS_HADOOP
DBMS_METADATA
DBMS_STANDARD
DBMS_SYS_ERROR
DBMS_UTILITY
DUAL
EXTERNAL_TAB$
HIVEMETADATA
HIVETYPESET
OBJ$
TABPART$
USER$
USER_HIVE_COLUMNS
USER_HIVE_DATABASES
USER_HIVE_PART_KEY_COLUMNS
USER_HIVE_TABLES
USER_HIVE_TAB_PARTITIONS
USER_XT_HIVE_TABLES_VALIDATION
UTL_RAW
Documented No
First Available 12.2
Security Model Owned by SYS with EXECUTE granted to the EXECUTE_CATALOG_ROLE role.

An ACCESSIBLE BY clause protects this package from direct access.
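
For context, an ACCESSIBLE BY clause limits which program units may invoke a package. A generic sketch with hypothetical package names (not the actual DBMS_HADOOP source):

CREATE OR REPLACE PACKAGE helper_pkg
ACCESSIBLE BY (PACKAGE caller_pkg)
AUTHID DEFINER AS
  FUNCTION f RETURN NUMBER;
END helper_pkg;
/

Invoking helper_pkg.f from any unit other than caller_pkg raises PLS-00904: insufficient privilege to access object.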
Source {ORACLE_HOME}/rdbms/admin/dbmshadp1.sql and prvthadoop1.plb

See also dbmshadp.sql and cathive1.sql
Subprograms
 
ADDED_HIVE_PARTNS
Given a Hive table, this function returns all partitions that appear in the Hive table but are missing from the corresponding Oracle external table. It uses the function ADDED_PARTNS() to obtain the hashed partition names from DBA_HIVE_TAB_PARTITIONS and then finds the corresponding original partition specs.
dbms_hadoop_internal.added_hive_partns(
clus_id          IN VARCHAR2,
db_name          IN VARCHAR2,
tab_name         IN VARCHAR2,
partnList        IN CLOB,
mdata_compatible IN VARCHAR2)
RETURN CLOB;
TBD
 
ADDED_PARTNS
Given a Hive table, this function returns all partitions that appear in the Hive table but are missing from the corresponding Oracle external table.
dbms_hadoop_internal.added_partns(
clus_id          IN VARCHAR2,
db_name          IN VARCHAR2,
tab_name         IN VARCHAR2,
et_name          IN VARCHAR2,
et_owner         IN VARCHAR2,
mdata_compatible IN VARCHAR2)
RETURN CLOB;
TBD
 
DROPPED_PARTNS
Given a Hive table, this function returns all partitions that appear in the corresponding Oracle external table but not in the given Hive table.
dbms_hadoop_internal.dropped_partns(
clus_id          IN VARCHAR2,
db_name          IN VARCHAR2,
tab_name         IN VARCHAR2,
et_name          IN VARCHAR2,
et_owner         IN VARCHAR2,
mdata_compatible IN VARCHAR2)
RETURN CLOB;
TBD
 
DROP_ALL_PARTNS
Drops all external table partitions.
dbms_hadoop_internal.drop_all_partns(
clus_id          IN VARCHAR2,
db_name          IN VARCHAR2,
tab_name         IN VARCHAR2,
et_name          IN VARCHAR2,
et_owner         IN VARCHAR2,
mdata_compatible IN VARCHAR2)
RETURN CLOB;
TBD
 
EVAL_PART_KEY_OPERATOR (new 21c)
Undocumented.
dbms_hadoop_internal.eval_part_key_operator(
text_of_ddl IN OUT CLOB,
key         IN     VARCHAR2)
RETURN CLOB;
TBD
 
FIND_PARTN_SPECS
Returns the partition specs for a given partition key.
dbms_hadoop_internal.find_partn_specs(
curKey           IN VARCHAR2,
xt_tab_name_dqr  IN VARCHAR2,
xt_tab_owner_dqr IN VARCHAR2)
RETURN CLOB;
TBD
 
GETHIVETABLE
getHiveTable() is a pipelined table function that returns rows from C external procedures to PL/SQL via ODCI. The rows originate in various Hive metastores and are fetched via JNI calls made from the HotSpot JVM.
dbms_hadoop_internal.getHiveTable(
configDir        IN VARCHAR2,
debugDir         IN VARCHAR2,
clusterName      IN VARCHAR2,
dbName           IN VARCHAR2,
tblName          IN VARCHAR2,
createPartitions IN VARCHAR2,
callType         IN NUMBER)
RETURN hiveTypeSet PIPELINED USING HiveMetadata;
TBD
 
GETNUMBEROFITEMS
Undocumented.
dbms_hadoop_internal.getNumberOfItems(
instr       IN CLOB,
boundarykey IN CHAR)
RETURN NUMBER;
TBD
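The exact semantics are unknown; judging from the signature it appears to count items in a delimited CLOB. A hedged, untested call, with the argument meanings assumed:

SELECT dbms_hadoop_internal.getNumberOfItems('A,B,C', ',')
FROM dual;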
 
GET_ALL_HIVE_PARTNS
Returns all partitions currently in the specified Hive table.
dbms_hadoop_internal.get_all_hive_partns(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tab_name IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_ALL_HYPT_PARTNS
Returns all partitions in the specified hybrid partitioned table (HyPT).
dbms_hadoop_internal.get_all_hypt_partns(
et_name  IN VARCHAR2,
et_owner IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_ALL_PET_PARTNS
Returns all partitions in the specified partitioned external table (PET).
dbms_hadoop_internal.get_all_pet_partns(
et_name  IN VARCHAR2,
et_owner IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_CONFIG_DIR
Undocumented.
dbms_hadoop_internal.get_config_dir RETURN VARCHAR2;
SELECT dbms_hadoop_internal.get_config_dir
FROM dual;

GET_CONFIG_DIR
------------------------------------------------
 
 
GET_DDL
Undocumented.
dbms_hadoop_internal.get_ddl(
secureConfigDir     IN VARCHAR2,
secureDebugDir      IN VARCHAR2,
secureClusterId     IN VARCHAR2,
secureDbName        IN VARCHAR2,
secureHiveTableName IN VARCHAR2,
createPartitions    IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_DEBUG_DIR
Undocumented.
dbms_hadoop_internal.get_debug_dir(current_user IN VARCHAR2)
RETURN VARCHAR2;
SELECT dbms_hadoop_internal.get_debug_dir(USER)
FROM dual;

DBMS_HADOOP_INTERNAL.GET_DEBUG_DIR(USER)
-----------------------------------------
InvalidDir
 
GET_HIVE_COLUMNS
Returns all the columns found in the given Hive table.
dbms_hadoop_internal.get_hive_columns(
clus_id  IN VARCHAR2,
db_name  IN VARCHAR2,
tbl_name IN VARCHAR2)
RETURN CLOB;
TBD
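A hedged example call; the cluster, database, and table names below are hypothetical and require a configured Hadoop cluster:

SELECT dbms_hadoop_internal.get_hive_columns('hadoop_cl_1', 'default', 'hive_sales')
FROM dual;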
 
GET_HIVE_PKEYS
Undocumented.
dbms_hadoop_internal.get_hive_pkeys(
clus_id       IN VARCHAR2,
database_name IN VARCHAR2,
tbl_name      IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_HIVE_TAB_INFO
Undocumented.
dbms_hadoop_internal.get_hive_tab_info(
xt_tab_name  IN VARCHAR2,
xt_tab_owner IN VARCHAR2)
RETURN VARCHAR2;
TBD
 
GET_HYPT_COLUMNS
Returns the columns of the merging HyPT table.
dbms_hadoop_internal.get_hypt_columns(orig_ddl IN CLOB)
RETURN CLOB;
TBD
 
GET_INCOMPATIBILITY
Given a Hive table and its corresponding Oracle external table, this function returns the first incompatibility encountered.
dbms_hadoop_internal.get_incompatibility(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tbl_name IN VARCHAR2,
et_tbl_name   IN VARCHAR2,
et_tbl_owner  IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_NAME
Undocumented.
dbms_hadoop_internal.get_name(
name    IN     VARCHAR2,
myowner IN OUT VARCHAR2,
myname  IN OUT VARCHAR2,
downer  IN     VARCHAR2);
TBD
 
GET_OBJNO_FROM_PARTITIONED_ET
Given a Hive table, this function returns the object number of the corresponding Oracle external table, if one is present.
dbms_hadoop_internal.get_objno_from_partitioned_et(
cluster_id   IN VARCHAR2,
table_name   IN VARCHAR2,
XT_tab_name  IN VARCHAR2,
xt_tab_owner IN VARCHAR2)
RETURN NUMBER;
TBD
 
GET_PARTN_SPEC
Undocumented.
dbms_hadoop_internal.get_partn_spec(
hive_table_name IN VARCHAR2,
text_of_ddl_ip  IN CLOB)
RETURN CLOB;
TBD
 
GET_XT_COLUMNS
Returns all external table columns.
dbms_hadoop_internal.get_xt_columns(orig_ddl IN CLOB)
RETURN CLOB;
TBD
 
GET_XT_PKEYS
Undocumented.
dbms_hadoop_internal.get_xt_pkeys(orig_ddl IN CLOB)
RETURN CLOB;
SELECT dbms_hadoop_internal.get_xt_pkeys('DROP TABLE t')
FROM dual;

DBMS_HADOOP_INTERNAL.GET_XT_PKEYS('DROPTABLET')
------------------------------------------------
 
 
HIVE_TABLE_EXISTS
Given a Hive table name, cluster ID, and database name, verifies that the Hive table actually exists in the Hive metastore.
dbms_hadoop_internal.hive_table_exists(
hive_table_name IN VARCHAR2,
cluster_id      IN VARCHAR2,
db_name         IN VARCHAR2)
RETURN NUMBER;
TBD
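A hedged example call; the names are hypothetical. Presumably returns 1 if the table exists and 0 otherwise, though the return convention is unverified:

SELECT dbms_hadoop_internal.hive_table_exists('hive_sales', 'hadoop_cl_1', 'default')
FROM dual;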
 
HYPT_COLUMNS_MATCHED
Checks whether all HyPT columns match those in the Hive table.
dbms_hadoop_internal.hypt_columns_matched(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tbl_name IN VARCHAR2,
orig_ddl      IN CLOB)
RETURN VARCHAR2;
TBD
 
IS_HYPT
Checks whether a given external table is a HyPT.
dbms_hadoop_internal.is_hypt(
xt_tab_name  IN VARCHAR2,
xt_tab_owner IN VARCHAR2)
RETURN NUMBER;
TBD
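A hedged example call with hypothetical table and owner names; presumably returns a non-zero value when the table is a HyPT:

SELECT dbms_hadoop_internal.is_hypt('SALES_XT', 'SCOTT')
FROM dual;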
 
IS_METADATA_COMPATIBLE
Given a Hive table and its corresponding Oracle external table, this function checks whether the external table is metadata-compatible with the Hive table. Metadata compatibility means that (a) every column in the external table must be present in the Hive table, (b) the datatype of each external table column must be the same as, or compatible with, the datatype of the corresponding Hive table column, and (c) the partition keys in the external table must be in the same order as the partition keys in the Hive table.
dbms_hadoop_internal.is_metadata_compatible(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tbl_name IN VARCHAR2,
et_tbl_name   IN VARCHAR2,
et_tbl_owner  IN VARCHAR2)
RETURN VARCHAR2;
TBD
 
IS_ORACLE_TABLE_COMPATIBLE
Checks whether the given Oracle table is compatible with the given Hive table. To be compatible, their columns must match, their partition keys must match, and all of their datatypes must be compatible.
dbms_hadoop_internal.is_oracle_table_compatible(
clus_id        IN VARCHAR2,
db_name        IN VARCHAR2,
hive_tbl_name  IN VARCHAR2,
hypt_tbl_name  IN VARCHAR2,
hypt_tbl_owner IN VARCHAR2)
RETURN VARCHAR2;
TBD
 
IS_ORACLE_TABLE_ELIGIBLE
Checks whether the given Oracle table is eligible for merging with a Hive table into a hybrid partitioned table. To be eligible, it must be an internal LIST-partitioned table, not an external table.
dbms_hadoop_internal.is_oracle_table_eligible(
secureClusterId     IN VARCHAR2,
secureHiveTableName IN VARCHAR2,
hypt_name_dqr       IN VARCHAR2,
hypt_owner_dqr      IN VARCHAR2)
RETURN NUMBER;
TBD
 
IS_PARTITION_COMPATIBLE
Given a Hive table and its corresponding Oracle external table, this function indicates whether the external table is partition-compatible with the Hive table.

If the external table is exactly identical to the Hive table, this function returns FALSE, because in that case the user does not need to call the SYNC API.
dbms_hadoop_internal.is_partition_compatible(
mdata_compatible IN VARCHAR2,
partns_added     IN CLOB,
partns_dropped   IN CLOB)
RETURN VARCHAR2;
TBD
 
REMOVE_DOUBLE_QUOTE
Removes the double quotes from a quoted string.
dbms_hadoop_internal.remove_double_quote(str IN VARCHAR2)
RETURN VARCHAR2;
SELECT dbms_hadoop_internal.remove_double_quote('"MORGAN"')
FROM dual;

DBMS_HADOOP_INTERNAL.REMOVE_DOUBLE_QUOTE('"MORGAN"')
-----------------------------------------------------
MORGAN
 
SYNC_USER_PRIVILEGED
Undocumented.
dbms_hadoop_internal.sync_user_privileged(
xt_tab_name  IN VARCHAR2,
xt_tab_owner IN VARCHAR2,
current_user IN VARCHAR2)
RETURN NUMBER;
TBD
 
UNIX_TS_TO_DATE
Ancillary function to convert a Julian date/time value to a calendar date.
dbms_hadoop_internal.unix_ts_to_date(julian_time IN NUMBER)
RETURN DATE;
TBD
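A hedged example; whether the input is interpreted as seconds since the Unix epoch or as a Julian value is unverified, so no expected output is shown:

SELECT dbms_hadoop_internal.unix_ts_to_date(0)
FROM dual;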
 
XT_COLUMNS_MATCHED
Undocumented.
dbms_hadoop_internal.xt_columns_matched(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tbl_name IN VARCHAR2,
orig_ddl      IN CLOB)
RETURN VARCHAR2;
TBD

Related Topics
Built-in Functions
Built-in Packages
DBMS_HADOOP
What's New In 19c
What's New In 20c-21c

This site is maintained by Dan Morgan. This site is protected by copyright and trademark laws under U.S. and International law. © 1998-2019 Daniel A. Morgan. All Rights Reserved.