The Splice Machine data point is used to configure connectivity to a Splice Machine database. A separate data point must be created for each Splice Machine database instance. Multiple schemas within the same Splice Machine instance can be added to a single data point. The Splice Machine data point is associated with any Splice Machine data object created and with data flows that define Splice Machine as the native processing platform.
To work with the Splice Machine data point, follow the steps below:
Step I: Create a New Data Point
- To open and edit an existing data point, refer to Opening Data Point.
- To create a new data point, refer to Create New Data Point.
Step II: Provide connection details
1. To connect to the Splice Machine database, provide the following details in the Properties tab.
- Host: Specify the hostname or the IP address of the Splice Machine system.
- Port: Specify the port associated with the Splice Machine system.
- App User: Specify the user ID that will be used to connect to the Splice Machine system. Make sure this user has the necessary privileges to access the data in the schemas.
- Password: Specify the password for the App User. To use a project parameter for the password, check the Use Project Parameters option and select the required project parameter from the Password drop-down.
- Jdbc Options: Specify the options to be used along with the JDBC URL when connecting to Splice Machine.
For example, the following details can be provided in Jdbc Options to connect to Splice Machine: user=diyotta, password=****, db=TEST_DB. A sketch showing how these properties combine into a connection follows this list.
- Database: Specify the name of the Splice Machine database to connect to.
- Database Version: Specify the version of the database being connected to.
- Mandatory field names are suffixed with *. To establish the connection, provide values for all mandatory fields.
- All the fields in the Properties tab can be parameterized using project parameters. To parameterize the fields, refer to Working with Project Parameters.
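For orientation, the sketch below shows how the Host, Port, App User, Password, and Jdbc Options values typically combine into a Splice Machine JDBC connection. It is an illustrative, standalone example rather than what Diyotta runs internally; the URL scheme, default port, and driver class are the usual Splice Machine client defaults, and the host and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SpliceConnectionSketch {
    public static void main(String[] args) throws Exception {
        // Values as they would be entered in the Properties tab (placeholders)
        String host = "splice-host.example.com";  // Host
        int port = 1527;                          // Port (typical Splice Machine default)
        String appUser = "diyotta";               // App User
        String password = "****";                 // Password
        String jdbcOptions = "db=TEST_DB";        // Jdbc Options, appended to the URL

        // Splice Machine ships a Derby-derived client driver; the jar providing
        // com.splicemachine.db.jdbc.ClientDriver is expected to be on the classpath.
        String url = "jdbc:splice://" + host + ":" + port + "/splicedb;" + jdbcOptions;

        Properties props = new Properties();
        props.setProperty("user", appUser);
        props.setProperty("password", password);

        // Opening the connection roughly corresponds to what the data point connection test does
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected, valid = " + conn.isValid(10));
        }
    }
}
```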
2. Assign Agent: To assign or change the associated agent, click Change. The Change Agent window appears and displays the list of available agents. From the list, click the required Agent Name.
Note: To search for a specific agent, enter a keyword in the search bar; the window displays the matching agents. Select the required agent and click Ok.
- If a default agent is assigned to the project, it is automatically associated with the newly created data point.
- If no default agent is assigned to the project, no agent is assigned automatically and an appropriate agent must be assigned to the data point.
- When connecting to the Agent server, the agent installation user should have the appropriate privileges to access the path where the file will be placed.
- When connecting to a remote server, the firewall must be opened from the Agent server to that server, and the user specified for the connection should have the appropriate privileges to access the path where the file will be placed. A basic reachability check is sketched below.
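When the firewall path from the Agent server needs to be verified, a basic TCP reachability check like the one below can be run from that server. This is an illustrative standalone check using standard Java sockets, not a Diyotta utility; the host and port are placeholders.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortReachabilityCheck {
    public static void main(String[] args) {
        String host = "splice-host.example.com"; // remote server to reach from the Agent server
        int port = 1527;                         // port the connection will use (placeholder)

        try (Socket socket = new Socket()) {
            // Attempt a plain TCP connect with a 5-second timeout
            socket.connect(new InetSocketAddress(host, port), 5_000);
            System.out.println("Port is reachable from the Agent server");
        } catch (IOException e) {
            System.out.println("Port is NOT reachable: " + e.getMessage());
        }
    }
}
```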
Step III: Test the data point connection
- To validate that the data point can connect to the Splice Machine database using the details provided, refer to Test Data Point Connection.
Step IV: Enter the Schema details
Use the Schemas tab to manage the schemas that need to be accessed. Multiple schemas can be added here.
1. To add a new schema, click Add (+).
A new schema entry is created and a success message is displayed at the bottom-right corner of the screen.
- The "Name" field holds a friendly name that can be assigned to the schema for easy identification. This friendly name is displayed when a schema needs to be chosen from the data point and wherever the schema's association with other components is shown.
- Provide the physical name of the schema in the "Schema" field. Clicking the entry in the "Schema" field opens a drop-down with the list of Splice Machine schemas in the system. As you type a keyword, the drop-down narrows to the matching schemas. The Splice Machine schema name can either be selected from this drop-down list or entered manually (a verification sketch follows this list).
- To assign a schema for creating temporary tables as part of processing the data (generally referred to as the tform schema), select the checkbox under the Transforms field.
- The Transforms field is available only for the types of data points that Diyotta supports as a data processing platform.
- It is mandatory to assign a schema as the transform schema in the data point when that data point will be assigned during data flow creation and used as the processing platform.
- The connection must be tested before the schema drop-down can be viewed. For more information, refer to Test Data Point Connection.
- To search for a specific schema, enter a keyword in the search bar; the page displays the matching schemas.
- The following operations are allowed on schema entries: Add, Cut, Copy, Paste, Up, Down, Delete, and Search.
- Multiple schemas can be selected from the list, and these operations can be applied to them.
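If you want to double-check which physical schema names the App User can see (the same names that populate the "Schema" drop-down once the connection is tested), a small standalone JDBC listing such as the one below can help. This is an illustrative sketch, not part of Diyotta; the URL and credentials are placeholders and it relies only on the standard JDBC DatabaseMetaData API.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListSpliceSchemas {
    public static void main(String[] args) throws Exception {
        // Connection details as configured in the data point (placeholders)
        String url = "jdbc:splice://splice-host.example.com:1527/splicedb";

        try (Connection conn = DriverManager.getConnection(url, "diyotta", "****");
             ResultSet rs = conn.getMetaData().getSchemas()) {
            // TABLE_SCHEM holds the schema name in the standard JDBC metadata result
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_SCHEM"));
            }
        }
    }
}
```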
Step V: Save the data point
- To save the changes made to the data point, refer to Saving Data Point.
- If the changes made to the data point need to be reverted rather than saved, refer to Reverting changes in Data Point.
- Once the data point has been created and the changes have been saved, close or unlock the data point so that it can be edited by other users. For more information, refer to Closing Data Point and Unlocking Data Point.
Step VI: Modify the configured Extract and Load properties
When moving data from one system to another, the data is extracted from the source system, moved over the network, and loaded into the target system. The SQL statements and commands used to extract and load data are generated during job execution based on the properties defined for these operations. The extract and load properties should be chosen based on the format, performance, and variety of the data being moved, and they vary with the environment and the type of system. Diyotta comes with default properties that cover most known scenarios.
- The default values for the extract and load properties can be configured in the Admin module, and these defaults are reflected in the Studio module.
- The extract and load properties set in the data point are used by default in the source and target instances of the data flows and job flows.
- It is good practice to set the extract and load properties in the data point according to company standards.
- However, if needed, any specific property can be overridden in the data flow or job flow; the order of precedence is illustrated in the sketch below.
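For clarity, the precedence described above (Admin module defaults, then data point settings, then data flow or job flow overrides) can be pictured as a simple layered merge. The sketch below is only a conceptual illustration; the property names are hypothetical and do not correspond to actual Diyotta property keys.

```java
import java.util.HashMap;
import java.util.Map;

public class ExtractLoadPropertyPrecedence {
    // Later layers override earlier ones: Admin defaults < data point < data flow/job flow
    static Map<String, String> effectiveProperties(Map<String, String> adminDefaults,
                                                   Map<String, String> dataPoint,
                                                   Map<String, String> flowOverrides) {
        Map<String, String> effective = new HashMap<>(adminDefaults);
        effective.putAll(dataPoint);
        effective.putAll(flowOverrides);
        return effective;
    }

    public static void main(String[] args) {
        // Hypothetical property keys, for illustration only
        Map<String, String> admin = Map.of("extract.field.delimiter", ",", "load.batch.size", "10000");
        Map<String, String> dataPoint = Map.of("extract.field.delimiter", "|");
        Map<String, String> dataFlow = Map.of("load.batch.size", "50000");

        System.out.println(effectiveProperties(admin, dataPoint, dataFlow));
        // -> {extract.field.delimiter=|, load.batch.size=50000} (map print order may vary)
    }
}
```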