
HDFS3 sink connector

note

You can download all the Pulsar connectors on the download page.

The HDFS3 sink connector pulls messages from Pulsar topics and persists them to HDFS files.

Configuration​

The configuration of the HDFS3 sink connector has the following properties.

Property

• hdfsConfigResources (String, required, default: None): A file or a comma-separated list of files containing the Hadoop file system configuration. Example: 'core-site.xml', 'hdfs-site.xml'.

• directory (String, required, default: None): The HDFS directory that files are read from or written to.

• encoding (String, optional, default: None): The character encoding for the files. Example: UTF-8, ASCII.

• compression (Compression, optional, default: None): The compression codec used to compress or decompress the files on HDFS. Available options:
  • BZIP2
  • DEFLATE
  • GZIP
  • LZ4
  • SNAPPY
  • ZSTANDARD

• kerberosUserPrincipal (String, optional, default: None): The Kerberos user principal account used for authentication.

• keytab (String, optional, default: None): The full pathname of the Kerberos keytab file used for authentication.

• filenamePrefix (String, required, default: None): The prefix of the files created inside the HDFS directory. Example: the value topicA results in files named topicA-.

• fileExtension (String, required if compression is set to None, default: None): The extension added to the files written to HDFS. Example: '.txt', '.seq'.

• separator (char, optional, default: None): The character used to separate records in a text file. If no value is provided, the contents of all records are concatenated into one continuous byte array.

• syncInterval (long, optional, default: 0): The interval, in milliseconds, between calls to flush data to HDFS.

• maxPendingRecords (int, optional, default: Integer.MAX_VALUE): The maximum number of records held in memory before acking. Setting this property to 1 ensures that each record is written to disk before it is acked, while a higher value allows records to be buffered before they are flushed to disk.
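
For orientation, the sketch below shows how the optional Kerberos and tuning properties above might be combined in a single YAML configuration. All values (file names, paths, principal, intervals) are illustrative assumptions, not recommended settings.

  configs:
      hdfsConfigResources: "core-site.xml,hdfs-site.xml"
      directory: "/foo/bar"
      filenamePrefix: "prefix"
      fileExtension: ".txt"
      compression: "SNAPPY"
      syncInterval: 1000                                # assumed flush interval in milliseconds
      maxPendingRecords: 5000                           # assumed buffering limit
      kerberosUserPrincipal: "pulsar@EXAMPLE.COM"       # hypothetical principal
      keytab: "/etc/security/keytabs/pulsar.keytab"     # hypothetical keytab path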

Example

Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods.

• JSON

  {
      "configs": {
          "hdfsConfigResources": "core-site.xml",
          "directory": "/foo/bar",
          "filenamePrefix": "prefix",
          "compression": "SNAPPY"
      }
  }
• YAML

  configs:
      hdfsConfigResources: "core-site.xml"
      directory: "/foo/bar"
      filenamePrefix: "prefix"
      compression: "SNAPPY"
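
Once the configuration file is created, you would typically deploy the connector with the pulsar-admin CLI. The command below is a sketch only: the NAR file location, config file name, sink name, and input topic are assumptions that depend on your installation.

  bin/pulsar-admin sinks create \
      --archive connectors/pulsar-io-hdfs3-3.3.x.nar \
      --sink-config-file hdfs3-sink.yaml \
      --name hdfs3-sink \
      --inputs topicA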