How Listener Writes to Targets

Teradata Target

Teradata Listener uses JDBC sessions to write data to a Teradata Database target.
  1. Listener stores each streaming message in a staging table. The staging table has three metadata columns and one raw data column for each record.
  2. In near real-time mode, Listener uses prepared SQL INSERT statements to micro-batch the data at an interval of 240 ms or 4000 records, whichever threshold is reached first.
  3. The JDBC driver persists all the batched records into the Teradata Database.
  4. Listener receives acknowledgement that the data has been successfully persisted into the Teradata Database.
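The dual-threshold trigger in step 2 can be sketched as follows. This is a minimal illustration of the micro-batching idea, not Listener's actual implementation: the `MicroBatcher` class and its `flush` callback are assumptions, with only the 240 ms / 4000-record thresholds taken from the text.

```python
import time

# Hypothetical micro-batcher: flushes when either threshold is reached --
# 240 ms since the first buffered record, or 4000 buffered records.
class MicroBatcher:
    def __init__(self, flush, max_records=4000, max_interval_ms=240):
        self.flush = flush                    # callback, e.g. a batched JDBC INSERT
        self.max_records = max_records
        self.max_interval = max_interval_ms / 1000.0
        self.buffer = []
        self.first_at = None

    def add(self, record):
        if not self.buffer:
            self.first_at = time.monotonic()  # start the 240 ms clock
        self.buffer.append(record)
        age = time.monotonic() - self.first_at
        if len(self.buffer) >= self.max_records or age >= self.max_interval:
            self.flush(self.buffer)           # persist the batch to the target
            self.buffer = []
            self.first_at = None
```

In a real target, the flush callback would add one prepared INSERT per record and execute the whole batch through the JDBC driver; here it simply receives the batched records.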

HDFS Target with Kerberos

Listener writes HDFS target data in sequence file format (.seq) to the directory provided in the data_path field. In the example below, data is written to .seq files in /user/testuser/kerberos/{source_id}/.
This is a standard method for writing to HDFS targets.

    {
      "target_id": "c1bd34bf-93e7-4ce2-b782-23d1c71e06d3",
      "source_id": "e750d1fe-2608-43f3-9d7d-6c1231d681a8",
      "bundle_interval": 100,
      "bundle_type": "records",
      "data_path": {
        "extension": "seq",
        "path": "/user/testuser/kerberos"
      },
      "target_type": "hdfs"
    }
When a bundle_interval is specified (100 records in this example):
  1. Listener collects and holds data records in a temporary directory called /user/testuser/kerberos/+tmp.
  2. When there are 100 records in the tmp directory, Listener moves the data from the tmp directory to sequence files (.seq) in /user/testuser/kerberos/{source_id}/.
    If Listener does not collect the 100 bundle_interval records before the default interval of 100 seconds elapses, it moves whatever data it has collected when that interval expires.
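The bundling behavior above can be sketched as follows. This is a simplified in-memory illustration under stated assumptions: the `Bundler` class, its `move_to_seq` callback, and the `tick` method are hypothetical stand-ins for Listener's HDFS writer; only the record threshold (bundle_interval) and the 100-second default come from the text.

```python
import time

# Hypothetical bundler: records accumulate in a +tmp staging area and are
# moved to a .seq bundle when 100 records arrive or 100 seconds elapse.
class Bundler:
    def __init__(self, move_to_seq, bundle_interval=100, default_secs=100):
        self.move_to_seq = move_to_seq        # stands in for the +tmp -> .seq move
        self.bundle_interval = bundle_interval
        self.default_secs = default_secs
        self.tmp = []                         # stands in for /user/testuser/kerberos/+tmp
        self.first_at = None

    def ingest(self, record):
        if not self.tmp:
            self.first_at = time.monotonic()
        self.tmp.append(record)
        if len(self.tmp) >= self.bundle_interval:
            self._move()

    def tick(self):
        # Called periodically; enforces the 100-second default interval.
        if self.tmp and time.monotonic() - self.first_at >= self.default_secs:
            self._move()

    def _move(self):
        self.move_to_seq(self.tmp)
        self.tmp = []
        self.first_at = None
```

Either trigger empties the staging area, mirroring how Listener moves partial bundles out of the tmp directory when the time interval expires.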

Sequence files (.seq) are in key-value format, delimited by a tab. The key is a random UUID and is not associated with the Listener UUID metadata. The value is the ingested data plus the metadata appended by Listener.
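The key-value layout can be illustrated by formatting a record the way the text describes. The `format_seq_entry` helper is hypothetical and only mimics the logical tab-delimited layout, not the binary Hadoop SequenceFile encoding Listener actually writes.

```python
import uuid

# Hypothetical helper mimicking the layout described above:
# a random UUID key, a tab, then the ingested data plus appended metadata.
def format_seq_entry(data, metadata):
    key = str(uuid.uuid4())   # random; unrelated to the Listener UUID metadata
    value = f"{data} {metadata}"
    return f"{key}\t{value}"

entry = format_seq_entry('{"temp": 72}', '{"source_id": "e750d1fe"}')
key, value = entry.split("\t", 1)
```

Splitting on the first tab recovers the UUID key and the combined data-plus-metadata value.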
