Oracle DBMS_HADOOP_INTERNAL
Version 12.2.0.1

General Information
Purpose DBMS_HADOOP_INTERNAL is a private package that provides helper functions to DBMS_HADOOP.
AUTHID DEFINER
Dependencies
ALL_TAB_PRIVS DBMS_METADATA EXTERNAL_TAB$
ANYDATA DBMS_SQL HIVEMETADATA
DBA_DIRECTORIES DBMS_STANDARD HIVETYPESET
DBA_OBJECTS DBMS_SYS_ERROR OBJ$
DBMS_ASSERT DBMS_UTILITY TABPART$
DBMS_HADOOP DUAL USER$
Documented No
Exceptions
Error Code Reason
First Available 12.2.0.1
Security Model Owned by SYS with
Source {ORACLE_HOME}/rdbms/admin/dbmshadp1.sql

See also dbmshadp.sql and cathive1.sql
Subprograms
 
ADDED_HIVE_PARTNS (new 12.2)
Given a Hive table, this function returns all the partitions that appear in the Hive table but are missing from the corresponding Oracle external table. It uses ADDED_PARTNS() to obtain the hashed partition names from DBA_HIVE_TAB_PARTITIONS and then finds the corresponding original partition specs.
dbms_hadoop_internal.added_hive_partns(
clus_id          IN VARCHAR2,
db_name          IN VARCHAR2,
tab_name         IN VARCHAR2,
partnList        IN CLOB,
mdata_compatible IN VARCHAR2)
RETURN CLOB;
TBD
 
ADDED_PARTNS (new 12.2)
Given a Hive table, this function returns all the partitions that appear in the Hive table but are missing from the corresponding Oracle external table.
dbms_hadoop_internal.added_partns(
clus_id          IN VARCHAR2,
db_name          IN VARCHAR2,
tab_name         IN VARCHAR2,
et_name          IN VARCHAR2,
et_owner         IN VARCHAR2,
mdata_compatible IN VARCHAR2)
RETURN CLOB;
TBD
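A hypothetical invocation sketch: the cluster, Hive database, table, and owner names below are invented for illustration and are not taken from any real configuration.

```sql
-- Hypothetical call: returns a CLOB listing the Hive partitions
-- that have no counterpart in the Oracle external table.
SELECT dbms_hadoop_internal.added_partns(
         'hadoop1',    -- clus_id: Hive cluster identifier (made up)
         'default',    -- db_name: Hive database
         'sales_hive', -- tab_name: Hive table
         'SALES_EXT',  -- et_name: Oracle external table
         'SCOTT',      -- et_owner
         'TRUE')       -- mdata_compatible
FROM dual;
```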
 
DEBUG_USER_PRIVILEGED (new 12.2)
Checks whether a given user has WRITE privilege on the ORACLE_BIGDATA_DEBUG directory.
dbms_hadoop_internal.debug_user_privileged(current_user IN VARCHAR2)
RETURN NUMBER;
TBD
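Given the NUMBER return type, this presumably returns 1 when the user holds WRITE on the directory and 0 otherwise; that semantic is an assumption, not documented behavior.

```sql
-- Check whether the current user can write to ORACLE_BIGDATA_DEBUG
SELECT dbms_hadoop_internal.debug_user_privileged(USER)
FROM dual;
```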
 
DO_SYNC_PARTITIONS (new 12.2)
Undocumented
dbms_hadoop_internal.do_sync_partitions(
partnlist    IN CLOB,
syncmode     IN NUMBER,
xt_tab_name  IN VARCHAR2,
xt_tab_owner IN VARCHAR2);
TBD
 
DROPPED_PARTNS (new 12.2)
Given a Hive table, this function returns all the partitions that appear in the corresponding Oracle external table but are missing from the given Hive table.
dbms_hadoop_internal.dropped_partns(
clus_id          IN VARCHAR2,
db_name          IN VARCHAR2,
tab_name         IN VARCHAR2,
et_name          IN VARCHAR2,
et_owner         IN VARCHAR2,
mdata_compatible IN VARCHAR2)
RETURN CLOB;
TBD
 
GETHIVETABLE (new 12.2)
getHiveTable() is a pipelined table function that returns rows from C external procedures to PL/SQL via ODCI. The rows sent from the C external procedures actually originate in various Hive metastores and are fetched via JNI calls made from the HotSpot JVM.
dbms_hadoop_internal.getHiveTable(
configDir        IN VARCHAR2,
debugDir         IN VARCHAR2,
clusterName      IN VARCHAR2,
dbName           IN VARCHAR2,
tblName          IN VARCHAR2,
createPartitions IN VARCHAR2,
callType         IN NUMBER)
RETURN hiveTypeSet PIPELINED USING HiveMetadata;
TBD
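Because getHiveTable is pipelined, it would be queried through the TABLE operator. A sketch only: the directory object names, cluster name, and callType value below are placeholders, not documented values.

```sql
-- Hypothetical pipelined query; rows are fetched from the Hive
-- metastore via the C external procedure / JNI path described above.
SELECT *
FROM TABLE(dbms_hadoop_internal.getHiveTable(
       'ORACLE_BIGDATA_CONFIG', -- configDir (assumed directory object)
       'ORACLE_BIGDATA_DEBUG',  -- debugDir (assumed directory object)
       'hadoop1',               -- clusterName (made up)
       'default',               -- dbName
       'sales_hive',            -- tblName
       'FALSE',                 -- createPartitions
       1));                     -- callType (meaning undocumented)
```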
 
GETNUMBEROFITEMS (new 12.2)
Undocumented
dbms_hadoop_internal.getNumberOfItems(
instr       IN CLOB,
boundarykey IN CHAR)
RETURN NUMBER;
SELECT ???
 
GET_CONFIG_DIR (new 12.2)
Undocumented
dbms_hadoop_internal.get_config_dir RETURN VARCHAR2;
SQL> SELECT dbms_hadoop_internal.get_config_dir
  2  FROM dual;

GET_CONFIG_DIR
------------------------------------------------
 
 
GET_DDL (new 12.2)
Undocumented
dbms_hadoop_internal.get_ddl(
secureConfigDir     IN VARCHAR2,
secureDebugDir      IN VARCHAR2,
secureClusterId     IN VARCHAR2,
secureDbName        IN VARCHAR2,
secureHiveTableName IN VARCHAR2,
createPartitions    IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_DEBUG_DIR (new 12.2)
Undocumented
dbms_hadoop_internal.get_debug_dir(current_user IN VARCHAR2)
RETURN VARCHAR2;
SQL> SELECT dbms_hadoop_internal.get_debug_dir(USER)
  2  FROM dual;

DBMS_HADOOP_INTERNAL.GET_DEBUG_DIR(USER)
-----------------------------------------
InvalidDir
 
GET_HIVE_PKEYS (new 12.2)
Undocumented
dbms_hadoop_internal.get_hive_pkeys(
clus_id       IN VARCHAR2,
database_name IN VARCHAR2,
tbl_name      IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_HIVE_TAB_INFO (new 12.2)
Undocumented
dbms_hadoop_internal.get_hive_tab_info(
xt_tab_name  IN VARCHAR2,
xt_tab_owner IN VARCHAR2)
RETURN VARCHAR2;
TBD
 
GET_INCOMPATIBILITY (new 12.2)
Given a Hive table and its corresponding Oracle external table, this function returns the first incompatibility encountered.
dbms_hadoop_internal.get_incompatibility(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tbl_name IN VARCHAR2,
et_tbl_name   IN VARCHAR2,
et_tbl_owner  IN VARCHAR2)
RETURN CLOB;
TBD
 
GET_NAME (new 12.2)
Undocumented
dbms_hadoop_internal.get_name(
name    IN     VARCHAR2,
myowner IN OUT VARCHAR2,
myname  IN OUT VARCHAR2,
downer  IN     VARCHAR2);
TBD
 
GET_OBJNO_FROM_ET (new 12.2)
Given a Hive table, this function returns the object number of the corresponding Oracle external table, if one is present.
dbms_hadoop_internal.get_objno_from_et(
cluster_id   IN VARCHAR2,
table_name   IN VARCHAR2,
xt_tab_name  IN VARCHAR2,
xt_tab_owner IN NUMBER)
RETURN NUMBER;
TBD
 
GET_PARTN_SPEC (new 12.2)
Undocumented
dbms_hadoop_internal.get_partn_spec(
hive_table_name IN VARCHAR2,
text_of_ddl_ip  IN CLOB)
RETURN CLOB;
TBD
 
GET_XT_PKEYS (new 12.2)
Undocumented
dbms_hadoop_internal.get_xt_pkeys(orig_ddl IN CLOB)
RETURN CLOB;
SQL> SELECT dbms_hadoop_internal.get_xt_pkeys('DROP TABLE t')
  2  FROM dual;

DBMS_HADOOP_INTERNAL.GET_XT_PKEYS('DROPTABLET')
------------------------------------------------
 
 
IS_METADATA_COMPATIBLE (new 12.2)
Given a Hive table and its corresponding Oracle external table, this function checks whether the external table is metadata-compatible with the Hive table. Metadata compatibility means: (a) every column in the external table must be present in the Hive table; (b) the datatype of each external table column must be the same as, or compatible with, the datatype of the corresponding Hive table column; (c) the partition keys in the external table must be in the same order as the partition keys in the Hive table.
dbms_hadoop_internal.is_metadata_compatible(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tbl_name IN VARCHAR2,
et_tbl_name   IN VARCHAR2,
et_tbl_owner  IN VARCHAR2)
RETURN VARCHAR2;
TBD
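A hedged sketch of a call; all object names are invented, and the return value is presumably 'TRUE' or 'FALSE' given the VARCHAR2 return type.

```sql
-- Compare the external table's columns and partition keys
-- against the Hive table, per rules (a)-(c) above.
SELECT dbms_hadoop_internal.is_metadata_compatible(
         'hadoop1', 'default', 'sales_hive', 'SALES_EXT', 'SCOTT')
FROM dual;
```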
 
IS_PARTITION_COMPATIBLE (new 12.2)
Given a Hive table and its corresponding Oracle external table, this function reports whether the external table is partition-compatible with the Hive table.

If the external table is exactly identical to the Hive table, this function returns FALSE; the reason is that the user then has no need to call the SYNC API.
dbms_hadoop_internal.is_partition_compatible(
mdata_compatible IN VARCHAR2,
partns_added     IN CLOB,
partns_dropped   IN CLOB)
RETURN VARCHAR2;
TBD
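The parameter list suggests this function is meant to consume the outputs of ADDED_PARTNS and DROPPED_PARTNS. A hypothetical PL/SQL composition, with every object name invented for illustration:

```sql
DECLARE
  added   CLOB;
  dropped CLOB;
  compat  VARCHAR2(10);
BEGIN
  -- Partitions present in Hive but absent from the external table
  added   := dbms_hadoop_internal.added_partns(
               'hadoop1', 'default', 'sales_hive', 'SALES_EXT', 'SCOTT', 'TRUE');
  -- Partitions present in the external table but absent from Hive
  dropped := dbms_hadoop_internal.dropped_partns(
               'hadoop1', 'default', 'sales_hive', 'SALES_EXT', 'SCOTT', 'TRUE');
  compat  := dbms_hadoop_internal.is_partition_compatible('TRUE', added, dropped);
  dbms_output.put_line(compat);
END;
/
```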
 
SYNC_USER_PRIVILEGED (new 12.2)
Undocumented
dbms_hadoop_internal.sync_user_privileged(
xt_tab_name  IN VARCHAR2,
xt_tab_owner IN VARCHAR2,
current_user IN VARCHAR2)
RETURN NUMBER;
TBD
 
UNIX_TS_TO_DATE (new 12.2)
Ancillary function to convert a Julian date/time value to a calendar date.
dbms_hadoop_internal.unix_ts_to_date(julian_time IN NUMBER)
RETURN DATE;
SELECT dbms_hadoop_internal.unix_ts_to_date(1609459200)
FROM dual;
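If julian_time is in fact a Unix epoch value in seconds, as the subprogram name suggests, the conversion is presumably equivalent to adding the value, scaled to days, to 1970-01-01:

```sql
-- 1609459200 s / 86400 s per day = 18628 days after 1970-01-01,
-- i.e. 2021-01-01 00:00:00
SELECT DATE '1970-01-01' + (1609459200 / 86400) FROM dual;
```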
 
USER_PRIVILEGED (new 12.2)
Undocumented
dbms_hadoop_internal.user_privileged(
cluster_id   IN VARCHAR2,
current_user IN VARCHAR2)
RETURN NUMBER;
TBD
 
XT_COLUMNS_MATCHED (new 12.2)
Undocumented
dbms_hadoop_internal.xt_columns_matched(
clus_id       IN VARCHAR2,
db_name       IN VARCHAR2,
hive_tbl_name IN VARCHAR2,
orig_ddl      IN CLOB)
RETURN VARCHAR2;
TBD

Related Topics
Built-in Functions
Built-in Packages
What's New In 12cR1
What's New In 12cR2

This site is maintained by Dan Morgan. Last Updated: This site is protected by copyright and trademark laws under U.S. and International law. © 1998-2017 Daniel A. Morgan All Rights Reserved