Purpose
Oracle SQL Access to Kafka (OSAK) is a feature of Oracle Database that allows data in Kafka topics to be queried from Oracle SQL.
OSAK allows Kafka to be queried in one of three ways, each abstracted as a type of application created using one of three PL/SQL calls:
CREATE_LOAD_APP, CREATE_STREAMING_APP, or CREATE_SEEKABLE_APP.
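For the streaming and seekable cases, the Kafka data is exposed through generated OSAK views that can be queried with ordinary SQL. A minimal sketch, in which the view name is purely illustrative (actual names are generated per cluster and application):
SELECT *
FROM ora$dkv_examplecluster_exampleapp_0;  -- hypothetical OSAK view name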
Converts the input number of milliseconds since the epoch to a TIMESTAMP WITH TIME ZONE and returns that value.
dbms_kafka.convert_ms_to_timestamp(
milliseconds IN INTEGER,
timezone IN VARCHAR2 DEFAULT NULL)
RETURN TIMESTAMP WITH TIME ZONE;
PRAGMA SUPPLEMENTAL_LOG_DATA(CONVERT_MS_TO_TIMESTAMP_TZ, READ_ONLY);
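A minimal usage sketch based on the signature above; the millisecond value is an arbitrary example and the timezone argument is left at its NULL default:
set serveroutput on

DECLARE
  ts TIMESTAMP WITH TIME ZONE;
BEGIN
  -- 1704067200000 ms after the epoch corresponds to 2024-01-01 00:00:00 UTC
  ts := dbms_kafka.convert_ms_to_timestamp(1704067200000);
  dbms_output.put_line(TO_CHAR(ts, 'YYYY-MM-DD HH24:MI:SS TZR'));
END;
/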
Creates an OSAK LOAD application that retrieves data from all partitions in a Kafka topic for the purpose of loading the Kafka data into an Oracle Database table.
dbms_kafka.create_load_app(
cluster_name IN VARCHAR2,
application_name IN VARCHAR2,
topic_name IN VARCHAR2,
options IN CLOB);
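A minimal sketch of creating a LOAD application. The cluster, application, and topic names are placeholders (the cluster is assumed to have been registered beforehand), and the options document is only an assumed example of a record-format setting:
DECLARE
  v_options CLOB := '{"fmt":"JSON"}';  -- assumed options document; adjust to the topic's record format
BEGIN
  dbms_kafka.create_load_app(
    cluster_name     => 'EXAMPLECLUSTER',  -- placeholder: a previously registered Kafka cluster
    application_name => 'EXAMPLELOADAPP',  -- placeholder application name
    topic_name       => 'example_topic',   -- placeholder Kafka topic
    options          => v_options);
END;
/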
Creates an OSAK STREAMING application that includes a set of dedicated OSAK global temporary tables and OSAK views used to retrieve new, unread records from the partitions of a Kafka topic.
dbms_kafka.create_streaming_app(
cluster_name IN VARCHAR2,
application_name IN VARCHAR2,
topic_name IN VARCHAR2,
options IN CLOB,
view_count IN INTEGER DEFAULT 1);
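A similar sketch for a streaming application, again with placeholder names and an assumed options document; view_count is left at a single view:
DECLARE
  v_options CLOB := '{"fmt":"JSON"}';  -- assumed options document
BEGIN
  dbms_kafka.create_streaming_app(
    cluster_name     => 'EXAMPLECLUSTER',   -- placeholder cluster
    application_name => 'EXAMPLESTREAMAPP', -- placeholder application name
    topic_name       => 'example_topic',    -- placeholder Kafka topic
    options          => v_options,
    view_count       => 1);                 -- one OSAK view and temporary table
END;
/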
Loads a user-defined target table with content from a Kafka topic. On subsequent calls, only new, unread Kafka records are inserted into the target table.
dbms_kafka.execute_load_app(
cluster_name IN VARCHAR2,
application_name IN VARCHAR2,
target_table IN VARCHAR2,
records_loaded OUT INTEGER,
parallel_hint IN INTEGER DEFAULT 0);
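A sketch of running the load; records_loaded reports how many Kafka records were inserted by this call, and parallel_hint is left at its default of 0. The application and table names are placeholders:
set serveroutput on

DECLARE
  v_records_loaded INTEGER;
BEGIN
  dbms_kafka.execute_load_app(
    cluster_name     => 'EXAMPLECLUSTER',
    application_name => 'EXAMPLELOADAPP',
    target_table     => 'KAFKA_TARGET_TAB',  -- placeholder user-defined target table
    records_loaded   => v_records_loaded);
  dbms_output.put_line('Records loaded: ' || v_records_loaded);
END;
/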
Positions the processing of Kafka records at a relatively current point, potentially skipping older unprocessed records in the Kafka partitions.
Overload 1
dbms_kafka.init_offset_ts(
view_name IN VARCHAR2,
start_timestamp_ms IN INTEGER);
TBD
Overload 2
dbms_kafka.init_offset_ts(
view_name IN VARCHAR2,
start_timestamp IN TIMESTAMP WITH TIME ZONE);
TBD
Overload 3
dbms_kafka.init_offset_ts(
view_name IN VARCHAR2,
start_timestamp IN TIMESTAMP,
timezone IN VARCHAR2 DEFAULT NULL);
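A sketch using the first overload, which sidesteps any overload ambiguity by passing milliseconds since the epoch; the view name is a placeholder for an OSAK view belonging to the application:
BEGIN
  -- Overload 1: start at records produced at or after 2024-01-01 00:00:00 UTC,
  -- i.e. 1704067200000 ms after the epoch; the view name is hypothetical
  dbms_kafka.init_offset_ts(
    view_name          => 'ORA$DKV_EXAMPLECLUSTER_EXAMPLEAPP_0',
    start_timestamp_ms => 1704067200000);
END;
/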
Updates the last Kafka partition offsets read after a STREAMING application instance has finished querying a set of Kafka records captured in an OSAK temporary table.
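The call syntax is not reproduced here; assuming UPDATE_OFFSET takes the name of the OSAK view that was just queried, a sketch of the end of one streaming pass might look like:
BEGIN
  -- assumed signature: the OSAK view whose records were just processed
  dbms_kafka.update_offset('ORA$DKV_EXAMPLECLUSTER_EXAMPLEAPP_0');
  COMMIT;  -- persist the advanced offsets along with the application's own work
END;
/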