DB2x Driver Documentation
This documentation explains the DB2x driver implementation and how connection parameters map to UI fields in the Gigantic application.
Connection Parameters
| UI Field | Technical Implementation | Description | Required | Default Value |
|---|---|---|---|---|
| Host | host in odbc connection config | The hostname or IP address of the DB2 for i (AS/400) server | Yes | localhost |
| Port | port in odbc connection config | The port number the DB2 for i server listens on | Yes | 446 |
| Database | db in odbc connection config | The name of the database/library to connect to | Yes | None (must be specified) |
| Schema | dbSchema in gigadb config | The schema/library to use for queries and metadata extraction | Yes | None (must be specified) |
| Username | user in odbc connection config | Database user for authentication | Yes | None (must be specified) |
| Password | password in odbc connection config | Password for the database user | Yes | None (must be specified) |
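To make the mapping concrete, here is a minimal sketch of how these UI fields could be assembled into an ODBC connection string. The keyword names (DRIVER, SYSTEM, DATABASE, UID, PWD) and the driver name are assumptions based on common IBM i ODBC setups, not taken from the driver's actual source; the Schema field lives in the gigadb config and is applied at query time rather than in the connection string.

```typescript
// Hypothetical mapping of the UI fields to an ODBC connection string.
// Keyword and driver names are assumptions, not the driver's actual code.
interface Db2xConnectionFields {
  host: string;     // UI: Host
  port: number;     // UI: Port (default 446)
  db: string;       // UI: Database
  user: string;     // UI: Username
  password: string; // UI: Password
}

function buildConnectionString(f: Db2xConnectionFields): string {
  return [
    'DRIVER={IBM i Access ODBC Driver}', // assumed driver name
    `SYSTEM=${f.host}`,
    `PORT=${f.port}`,
    `DATABASE=${f.db}`,
    `UID=${f.user}`,
    `PWD=${f.password}`,
  ].join(';');
}
```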
Technical Details
The DB2x driver implementation uses a combination of the odbc Node.js library for database connectivity and the ssh2 library for secure file transfers, with a unique template-based approach for write operations.
Key technical aspects:
- Connection pooling is supported through the pooled configuration flag
- Schema/library selection is required and affects all query generation
- The driver uses QSYS2 system tables for metadata extraction (SYSTABLES, SYSCOLUMNS2, SYSINDEXES), illustrated in the sketch after this list
- Read operations use standard ODBC queries to the DB2 for i system
- Write operations use a template-based shell script execution system
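As an illustration of the QSYS2-based metadata extraction mentioned above, the sketch below lists the columns of every table in the configured schema using the odbc library. The exact SQL the driver issues is not shown here; the column names follow the standard QSYS2 catalog views, and `connectionString` is assumed to be built from the connection config.

```typescript
import odbc from 'odbc';

// Sketch of metadata extraction against the QSYS2 catalog named above.
// The driver's actual SQL may differ.
async function listColumns(connectionString: string, schema: string) {
  const connection = await odbc.connect(connectionString);
  try {
    // A parameter marker keeps the schema/library name out of the SQL text.
    return await connection.query(
      `SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
         FROM QSYS2.SYSCOLUMNS2
        WHERE TABLE_SCHEMA = ?
        ORDER BY TABLE_NAME, ORDINAL_POSITION`,
      [schema],
    );
  } finally {
    await connection.close();
  }
}
```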
When SSH tunneling is enabled, connections are made through a separate SSH client implementation (the ssh2 library) rather than through built-in SSH tunneling.
Write Operations Process
The DB2x driver has a unique approach to writing data:
- Data is first written to a local CSV file
- The CSV file is then transferred to the remote system via SFTP
- A write.sh template is executed on the remote system to process the data
- The shell script can run any commands needed to load the data into DB2 (see the sketch below)
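The following sketch shows how the SFTP transfer and remote script execution could be orchestrated with the ssh2 library. The parameter names (sshHost, remoteCommand, and so on) mirror the custom params documented later; the driver's actual orchestration may differ, and passing the remote CSV path as the script's argument is an assumption.

```typescript
import { Client } from 'ssh2';

// Hedged sketch of the CSV -> SFTP -> remote script flow described above.
function uploadAndLoad(cfg: {
  sshHost: string; sshPort: number; sshUsername: string; sshPassword: string;
  localCsv: string; remoteCsv: string; remoteCommand: string;
}): Promise<void> {
  return new Promise((resolve, reject) => {
    const conn = new Client();
    conn
      .on('ready', () => {
        // Step 2: transfer the locally written CSV via SFTP.
        conn.sftp((err, sftp) => {
          if (err) return reject(err);
          sftp.fastPut(cfg.localCsv, cfg.remoteCsv, (putErr) => {
            if (putErr) return reject(putErr);
            // Steps 3-4: run the write.sh template to load the data into DB2.
            conn.exec(`${cfg.remoteCommand} ${cfg.remoteCsv}`, (execErr, stream) => {
              if (execErr) return reject(execErr);
              stream
                .on('close', (code: number) => {
                  conn.end();
                  code === 0 ? resolve() : reject(new Error(`exit ${code}`));
                })
                .resume(); // drain output so 'close' fires
            });
          });
        });
      })
      .on('error', reject)
      .connect({
        host: cfg.sshHost,
        port: cfg.sshPort,
        username: cfg.sshUsername,
        password: cfg.sshPassword,
      });
  });
}
```

In the driver, the local CSV path would come from the first step of the process, and the filePrefix/fileSuffix custom params documented below would presumably control the remote file name.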
Authentication Options
DB2x supports several authentication types:
| Auth Type | UI Mapping | Description |
|---|---|---|
| None | Not applicable for DB2x | DB2x always requires authentication |
| Password | Username/Password fields | Standard username/password authentication |
| Password Manager | Various password manager fields | Fetches credentials from external password managers |
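For reference, the auth options above can be expressed as a discriminated union. This is an illustrative sketch only; the field names for the password-manager variant are hypothetical, not taken from the driver's configuration schema.

```typescript
// Hedged sketch: the supported auth types as a discriminated union.
// Password-manager field names are illustrative assumptions.
type Db2xAuth =
  | { kind: 'password'; user: string; password: string }
  | { kind: 'passwordManager'; managerType: string; secretRef: string };
```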
SSH Tunnel Support
The DB2x driver supports SSH connections for secure access to databases. For detailed information on configuring SSH tunnels, please refer to the SSH Tunnelling documentation.
API Endpoints Used
DB2x connections are primarily used in:
- Tap creation (data source discovery)
- Sink creation (data destination for anonymized data)
- Pipeline execution (data extraction and loading)
Node.js Driver Dependencies
The DB2x driver depends on two Node.js libraries:
- Library: odbc
  - Purpose: Connects to IBM DB2 for i databases using ODBC
  - Features: Connection pooling, prepared statements, transactions
- Library: ssh2
  - Purpose: Provides SSH and SFTP functionality for secure file transfers
  - Features: Secure file transfer, shell command execution, connection management
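Both packages are published on npm and are typically brought in as follows:

```typescript
// The two runtime dependencies named above, as they are typically imported.
import odbc from 'odbc';       // ODBC connectivity to DB2 for i
import { Client } from 'ssh2'; // SSH/SFTP for the template-based write path
```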
Note: The DB2x driver is not included in the standard driverPorts or driverAuthOptions configurations, indicating it may be a specialized or custom implementation.
Custom Params
The DB2x driver supports additional custom parameters that can be specified in YAML format to fine-tune both ODBC database connection behavior and SSH/SFTP file transfer options.
ODBC Connection Parameters
```yaml
# Basic connection parameters
# Connection Behavior
CONNTYPE: 1 # Connection type: 0=Default, 1=Primary, 2=Secondary
SORTTYPE: 2 # Sort type: 0=System, 1=Hex, 2=Table
LANGUAGEID: "ENU" # Language ID for national language support
# Performance Options
MAXBUFLEN: 32767 # Maximum buffer length for column data
LOBThreshold: 32767 # Threshold for LOB data handling
BLOCKING: 100 # Number of rows to fetch per request
# Timeout Settings
loginTimeout: 30 # Login timeout in seconds
queryTimeout: 60 # Query timeout in seconds
connectionTimeout: 30 # Connection establishment timeout
```
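How these ODBC key/value params are applied is not documented here; one plausible approach, sketched below, is to append them to the base connection string. The withCustomParams helper is hypothetical.

```typescript
// Hypothetical helper: appends custom ODBC key/value params (parsed from
// the YAML above) to a base connection string. Only illustrates the
// expected shape of the result, not the driver's actual merge logic.
function withCustomParams(
  base: string,
  params: Record<string, string | number>,
): string {
  const extras = Object.entries(params)
    .map(([key, value]) => `${key}=${value}`)
    .join(';');
  return extras ? `${base};${extras}` : base;
}

// withCustomParams(base, { CONNTYPE: 1, BLOCKING: 100 })
//   => "...;CONNTYPE=1;BLOCKING=100"
```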
SSH/SFTP Parameters
```yaml
# SSH connection parameters for file transfers
sshHost: "remote.example.com"
sshPort: 22
sshUsername: "sftp_user"
sshPassword: "sftp_password"
# SSH Key Authentication
privateKey: "/path/to/private_key"
passphrase: "key_passphrase"
# SFTP Options
sftpPath: "/remote/upload/directory"
filePrefix: "data_"
fileSuffix: ".csv"
remoteCommand: "/path/to/write.sh"
# SSH Connection Settings
sshTimeout: 20000 # SSH connection timeout in milliseconds
sshReadyTimeout: 20000 # SSH ready timeout in milliseconds
keepaliveInterval: 120000 # Keepalive interval in milliseconds
keepaliveCountMax: 3 # Maximum keepalive count
# Security Options
algorithms:
  kex: ["diffie-hellman-group14-sha256"]
  cipher: ["aes256-ctr"]
validateServerCertificate: true
```

Connection Pool Parameters
```yaml
# Pool configuration
pooled: true
poolMin: 0
poolMax: 10
poolIncrement: 1
poolTimeout: 60 # Pool timeout in seconds
```
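The sketch below shows how such pool settings might map onto the odbc library's pool options (initialSize, incrementSize, maxSize). The mapping from poolMin/poolMax/poolIncrement is an assumption, not confirmed by the driver's source.

```typescript
import odbc from 'odbc';

// Hedged sketch: assumed mapping of the pool params above onto
// the odbc library's pool options.
async function createPool(connectionString: string) {
  const pool = await odbc.pool({
    connectionString,
    initialSize: 0,   // poolMin
    incrementSize: 1, // poolIncrement
    maxSize: 10,      // poolMax
  });
  // Borrow a connection, run a trivial query, return it to the pool.
  const result = await pool.query('SELECT 1 FROM SYSIBM.SYSDUMMY1');
  await pool.close();
  return result;
}
```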
Example Configurations

Basic Configuration
```yaml
sshHost: "remote.example.com"
sshUsername: "sftp_user"
sshPassword: "sftp_secret"
sftpPath: "/uploads/"
remoteCommand: "/scripts/process_data.sh"
pooled: true
```

SSH Key Authentication Configuration
```yaml
sshHost: "remote.example.com"
sshPort: 22
sshUsername: "deploy"
privateKey: ${file:/path/to/ssh_private_key}
passphrase: ${env:SSH_PASSPHRASE}
sftpPath: "/data/incoming/"
filePrefix: "gig_"
remoteCommand: "/usr/local/bin/load_data.sh"
```

Advanced Configuration with Performance Tuning
```yaml
CONNTYPE: 1
BLOCKING: 500
LOBThreshold: 65536
queryTimeout: 300
pooled: true
poolMax: 20
# SSH Configuration
sshHost: "sftp.example.com"
sshPort: 22
sshUsername: "data_user"
sshPassword: "data_password"
sftpPath: "/bulk/uploads/"
remoteCommand: "/opt/scripts/bulk_load.sh"
sshTimeout: 30000
keepaliveInterval: 60000 # Keepalive interval in milliseconds
```