The Kafka Producer Job processes source data in JSON or CSV format and produces output in Avro format.
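Conceptually, the Job reads each JSON or CSV source record into the same internal record structure before serializing it to Avro. A minimal Python sketch of that parsing step (field names and values below are hypothetical examples, not part of the product):

```python
import csv
import io
import json

# Illustrative only: the Job performs this parsing internally.
# Both source formats yield the same record structure, which the
# Job then serializes to Avro before producing to Kafka.

json_source = '{"host": "10.0.0.1", "status": 200}'
csv_source = "host,status\n10.0.0.1,200"

# Parse the JSON source into a record.
json_record = json.loads(json_source)

# Parse the CSV source into a record with the same fields.
reader = csv.DictReader(io.StringIO(csv_source))
csv_record = next(reader)
csv_record["status"] = int(csv_record["status"])  # CSV values arrive as strings

print(json_record == csv_record)  # both sources parse to the same record
```

Either way, the Job ends up with one record per source row before the Avro encoding step.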

Note: To create a new Job Flow, refer to Creating New Job Flow.

To create a Kafka Producer Job, follow the steps below.

Step I: From the Jobs menu, drag and drop the Kafka Producer Job onto the canvas.

Step II: Provide the General details of the Job

On the canvas, select the Kafka Producer Job, and then under Properties, provide the General details.

Name - Contains a default name, which you can edit.

Description - Optionally, provide a description in the text box.

Disable task - Check Disable task if the Job should not be executed as part of the Job Flow but you do not want to delete it.

Step III: Optionally specify retry attempts for the Kafka Producer Job

Under Properties, select the Properties tab to enable retry attempts for Kafka Producer Job execution.

Retry Enabled - Check the Retry Enabled option if you want to enable retry attempts for the Kafka Producer Job.

  • No. of Retry Attempts: Specify the number of times to retry the Kafka Producer Job execution if the Job fails. By default, the number of retry attempts is set to 2.
  • Retry Wait Time (in Seconds): Specify how long, in seconds, the Job waits between retry attempts. By default, the wait time is set to 60 seconds. If the Job fails to execute, it retries after the specified wait time.
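The retry behavior described above can be sketched as a simple wrapper. This is an illustration only, with hypothetical function names; the actual scheduling logic is internal to the product:

```python
import time

def run_with_retries(job, retry_attempts=2, retry_wait_seconds=60):
    """Run `job`; on failure, retry up to `retry_attempts` more times,
    waiting `retry_wait_seconds` between attempts (defaults mirror the
    product's defaults of 2 attempts and 60 seconds)."""
    attempts = 0
    while True:
        try:
            return job()
        except Exception:
            attempts += 1
            if attempts > retry_attempts:
                raise  # retries exhausted; surface the failure
            time.sleep(retry_wait_seconds)

# Example: a job that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

print(run_with_retries(flaky_job, retry_attempts=2, retry_wait_seconds=0))  # done
```

With the default of 2 retry attempts, a job that keeps failing is run at most three times in total before the failure is reported.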

Step IV: Enter source connection details

1. Under Source Properties, click Add to add the Source File Data Point.

2. The window displays the list of File Data Points. Choose the required Data Point.

If you have access to any Global projects, the Project drop-down also lists them. You can choose a global project from the Project drop-down and select the required global File Data Point.

3. Once you choose the File Data Point, you can view and update the Data Point property values under Source Properties as needed.

  • File Format: Specify the source file format, which can be JSON or CSV.
  • File Location: Specify the directory of the source file in the agent location.
  • File Name: Specify the name of the file to be processed by Kafka. The file specified here must already be present in the File Location.
  • File Data Reference: Specify the data reference for the type of file to be picked from the source.
  • Schema Location: Specify the directory of the schema file in the agent location.
  • Schema File: Specify the Schema File, which must be in .avsc format. The Schema File is required because the data is produced in Avro format, and the file specified here must already be present in the Schema Location.
    To know more about File source property values, refer to Editing Extract Properties in File Data Point.

Note: 

  • To change the Source Data Point, click Change.
  • To change a default value, click the respective field and change the value; to revert, click the Reset icon. The Reset icon appears only after a property value has been changed.
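Because the output is produced in Avro, every source record must match the .avsc Schema File. Avro schemas are themselves plain JSON documents, so the relationship can be sketched with the standard library (the schema and record below are hypothetical examples, not product defaults):

```python
import json

# A hypothetical .avsc file: Avro schemas are JSON documents.
avsc_text = """
{
  "type": "record",
  "name": "WeblogEntry",
  "fields": [
    {"name": "host", "type": "string"},
    {"name": "status", "type": "int"}
  ]
}
"""

# Parse the schema and collect its field names.
schema = json.loads(avsc_text)
schema_fields = {f["name"] for f in schema["fields"]}

# A record parsed from the JSON/CSV source must supply every schema
# field for Avro serialization to succeed.
record = {"host": "10.0.0.1", "status": 200}
missing = schema_fields - record.keys()
print(sorted(missing))  # an empty list means the record satisfies the schema
```

This is why the Schema File must already exist in the Schema Location before the Job runs: the producer needs it to encode each record.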

Step V: Enter target connection details

1. Under Properties, click Target Properties. To add the Target Data Point, that is, the target connection details for where the data needs to be moved, click Add.

2. The window displays the list of Data Points. Choose the required Data Point.

If you have access to any Global projects, the Project drop-down also lists them. You can choose a global project from the Project drop-down and select the required global Kafka Data Point.

3. Once you choose the Data Point, you can view the target Data Point property values under Target Properties. All property values are set to defaults; you can change them based on your requirements.

Note: 

  • To change the Target Data Point, click Change.
  • To change a default value, click the respective field and change the value; to revert, click the Reset icon. The Reset icon appears only after a property value has been changed.

  • Broker Address: Specifies the Kafka broker address.
  • Topic Name: Displays the topic name. You can specify the topic name where the data needs to be loaded. Default value: weblog.
  • Schema Registry URL: Specify the Schema Registry URL path where the Kafka SchemaRegisterMain is running.
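Taken together, the target properties amount to a small configuration mapping. The sketch below is purely illustrative; the key names and the broker and registry addresses are assumed examples, and only the weblog topic default comes from the properties above:

```python
# Hypothetical view of the target connection details as a mapping;
# the actual property keys used by the product may differ.
target_properties = {
    "broker.address": "localhost:9092",              # Kafka broker address (assumed example)
    "topic.name": "weblog",                          # default topic name per the properties above
    "schema.registry.url": "http://localhost:8081",  # where the schema registry runs (assumed example)
}

print(target_properties["topic.name"])  # weblog
```

All three values must point at a reachable Kafka deployment; the Job writes Avro records to the named topic and registers their schema with the registry.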

Note:

  • To save the Job Flow, on the Actions menu, click Save. For more information, refer to Saving Job Flow.
  • To revert the changes before saving the Job Flow, on the Actions menu, click Revert. For more information, refer to Reverting Changes in Job Flow.
  • To execute an individual Job in the Job Flow, on the Job-specific Actions menu, click Run Job. For more information, refer to Executing Individual Job in Job Flow.
  • To execute the Job Flow, on the Actions menu, click Run. For more information, refer to Executing Job Flow.
  • Once the Job is created and the changes are saved, close or unlock the Job Flow so that other users can edit it. For more information, refer to Closing Job Flow and Unlocking Job Flow.