The MSSQL data point is used to configure connectivity to an MSSQL database. A separate data point must be created for each MSSQL instance, but multiple databases within the same instance can be added to a single MSSQL data point. The MSSQL data point is associated with any MSSQL data object created and with data flows that define MSSQL as the native processing platform.
To work with an MSSQL data point, follow the steps below:
Step I: Create a New Data Point
- To open and edit an existing data point, refer to Opening Data Point.
- To create a new data point, refer to Create New Data Point.
Step II: Provide connection details
1. To connect to an MSSQL database, provide the following details in the Properties tab.
- Database Type: Select the type of database; it can be MSSQL, Azure SQL Database, or SQL Database on Azure VM.
The following fields are common across all database types:
- Host: Specify the hostname or the IP address of the MSSQL system.
- Port: Specify the port associated with the MSSQL system.
- Named Instance: Selecting this check-box adds a new field, Instance Name, to the connection details. Whichever of the two fields is provided (Port or Instance Name) is used to connect to MSSQL.
- Azure AD User: Enable this option to use single sign-on and multi-factor authentication for improved security. This field is enabled only for Azure SQL Database.
- App User: Specify the user to connect as. If Azure AD User is enabled for the selected database type, the App User details are appended to the JDBC connection URL; otherwise, the connection is made directly using the App User.
- Password: Specify the password for the App User. If Azure AD User is enabled for the selected database type, the password is appended to the JDBC connection URL; otherwise, the connection is made directly using the password. To use a project parameter for the password, check the Use Project Parameters option; you can then view and select the required project parameter from the Password drop-down.
- Always Encrypted: Enable this option to use SSL connectivity between the Agent and the MSSQL server. Upon enabling it, the following fields are displayed on the screen; provide their details to use SSL connectivity.
- Key Store Providers: Select the type of encryption. It can be either Azure Key Vault or Java Key Store.
- If you select Azure Key Vault, you must provide Client ID and Client Key.
- If you select Java Key Store, you must provide Key Store Location and Key Store Password.
- Instance: Specify an instance, that is, a collection of SQL Server databases run by a single SQL Server service.
- Database Version: Specify the MSSQL version being connected to.
- Jdbc Options: Specify the options to be used along with the JDBC URL when connecting to the MSSQL server.
For example, the following details are provided in JDBC Options to connect to MSSQL: user=diyotta, password=****, db=TEST_DB.
- Security Authentication: Specifies the security protocol used to connect to the instance. You can select None, SSL, or Kerberos.
- Truststore: Specify the path of the TrustStore placed in the agent location. This field is available only for SSL authentication.
- Truststore Password: Specify the password created while generating the TrustStore file. This field is available only for SSL authentication.
- All the fields in the Properties tab can be parameterized using project parameters. To parameterize the fields, refer Working with Project Parameters.
- Enabling the checkbox beside the Client Key field allows you to select sensitive project parameters for the client key.
2. Assign Agent: To assign or change the associated agent, click Change. The Change Agent window appears, displaying the list of available agents. From the list, select the required agent name.
- If a default agent is assigned to the project, the default agent is automatically associated with the newly created data point.
- If no default agent is assigned to the project, no agent is assigned automatically, and an appropriate agent must be assigned to the data point.
- When connecting to the Agent server, the agent installation user should have appropriate privileges to access the path where the file will be placed.
- When connecting to a remote server, the firewall must be open from the Agent server to that server, and the user specified for the connection should have appropriate privileges to access the path where the file will be placed.
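Internally, the connection details above are combined into a JDBC connection URL. The exact URL Diyotta generates is not shown in this guide, but as a rough sketch, the mapping of Host, Port, Named Instance, database, and Jdbc Options onto a URL following the Microsoft mssql-jdbc conventions might look like this (the class name, host, and option values are illustrative assumptions):

```java
// Hypothetical sketch: how Properties-tab fields could map onto a
// SQL Server JDBC URL. Property names follow the Microsoft mssql-jdbc
// driver conventions; the URL Diyotta actually generates may differ.
public class MssqlUrlSketch {

    /** Builds a jdbc:sqlserver:// URL from the connection fields. */
    static String buildUrl(String host, Integer port, String instanceName,
                           String database, String jdbcOptions) {
        StringBuilder url = new StringBuilder("jdbc:sqlserver://").append(host);
        if (instanceName != null) {
            // Named Instance checked: the Instance Name is used instead of the port
            url.append('\\').append(instanceName);
        } else if (port != null) {
            url.append(':').append(port);
        }
        url.append(";databaseName=").append(database);
        if (jdbcOptions != null && !jdbcOptions.isEmpty()) {
            // Jdbc Options are appended as extra semicolon-separated properties
            url.append(';').append(jdbcOptions);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // Port-based connection with an extra JDBC option
        System.out.println(buildUrl("dbhost01", 1433, null, "TEST_DB", "encrypt=true"));
        // prints jdbc:sqlserver://dbhost01:1433;databaseName=TEST_DB;encrypt=true

        // Named-instance connection: the Port field is ignored
        System.out.println(buildUrl("dbhost01", 1433, "SQLEXPRESS", "TEST_DB", null));
        // prints jdbc:sqlserver://dbhost01\SQLEXPRESS;databaseName=TEST_DB
    }
}
```

Giving the Instance Name precedence over the Port mirrors the behavior described above, where only one of the two fields is used for the connection.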
Step III: Test the data point connection
- To validate that the data point can connect to the MSSQL database using the details provided, refer to Test Data Point Connection.
Step IV: Provide the database connection details
Manage the databases that need to be accessed through the Databases tab. Multiple databases can be added here.
1. To add a new database, click Click Here.
A new database entry is generated, and a success message is displayed at the bottom right corner of the screen.
- The "Name" field is a friendly name that can be assigned to the database for easy identification. This friendly name is displayed when a database needs to be chosen from the data point and when the database association with other components is displayed.
- Provide the physical name of the database in the "Database" field. Clicking the entry in the "Database" field opens a drop-down listing the databases in the system; as you enter a keyword, the drop-down narrows to matching databases. The database name can either be selected from this drop-down list or entered manually.
- Manually provide the physical name of the schema in the "Schema" field.
- To assign a database to be used for creating temporary tables as part of processing the data (generally referred to as the tform database), select the checkbox under the Transforms field.
- The Transforms field is available only for data point types that Diyotta supports as data processing platforms.
- It is mandatory to assign a transform database in the data point when that data point needs to be assigned during data flow creation and used as the processing platform.
- To view the database drop-down, it is a prerequisite to test the connection. For more information, refer Test Data Point Connection.
- To search for a specific database, enter the keyword in the search bar, and the page displays the related databases.
2. You can also copy a database from another data point of the same type and paste it here. To do so:
Click the drop-down arrow on Click Here to see the Paste option.
- The following operations are allowed on database entries: Add, Cut, Copy, Paste, Up, Down, Delete, and Search.
- Multiple databases can be selected from the list, and these operations can be applied to them.
Step V: Save the data point
- To save the changes made to the data point, refer Saving a Data Point.
- If the changes made to the data point need to be reverted rather than saved, refer to Reverting changes in Data Point.
- Once the data point has been created and the changes have been saved, close or unlock the data point so that it is editable by other users. For more information, refer to Closing Data Point and Unlocking Data Point.
Step VI: Modify the configured Extract and Load properties
When moving data from one system to another, the data is extracted from the source system, moved over the network, and loaded into the target system. The SQL statements and commands generated during job execution to extract and load data are based on the properties defined for these operations. The appropriate extract and load properties depend on the format, performance, and variety of the data being moved, and they vary based on the environment and the type of system. Diyotta comes with default properties that cover most known scenarios.
- The default values for extract and load properties can be configured in the Admin module, and these properties are reflected in the Studio module.
- The extract and load properties set in data point are by default used in the source and target instance of the data flow and the job flows.
- It is a good practice to set the extract and load properties as per the company standards in the data point.
- However, if needed, any specific property can be overridden in the data flow or job flow.