Connect v9.10.26 Release Notes - docs.precisely.com



DMX Release Notes

Connect

Release Notes 9.10.26

February 3, 2021

Note: In Q3 2017, the Release Notes were migrated to the Syncsort MySupport website. MySupport will include an archive of all Release Notes from prior releases as well as a search function, which provides for an improved user experience. If you do not already have a MySupport account, please email [email protected] to obtain one.

Contents

What's New
New Platforms
New Functionality
Performance Enhancements
Platform Lifecycle Information
Known Problems
Fixes That Change Product Behavior
Fixed Problems
Installation Guide
Contacting Technical Support
Copyright

What's New

The following summary highlights what is new in DMX 9.10. Details of new features and platforms and fixed problems can be found in their respective sections.

DMX Release 9.10 is the culmination of product updates since release 9.9. Highlights include the following capabilities:

Integration with CyberArk password vault. Connect ETL/Big Data now offers improved password protection and management by storing and retrieving passwords from the CyberArk enterprise password vault.

Integration with the Protegrity data masking solution. Connect ETL/Big Data now integrates with the data masking solution from Protegrity to protect and mask sensitive PII data on the fly, both on standalone servers and at scale.

Support for Databricks databases, Snowflake, and Azure Synapse (formerly Azure SQL Data Warehouse). All the supported dispositions to pull/load data to these cloud databases have been added to Connect for Big Data. A batch of changes can now be applied to these cloud databases from any supported source.

New Platforms

Support for the following platforms has been added since Release 9.0 of DMX.

New Functionality

The following functionality has been added since Release 9.0 of DMX. For the most complete information on the product, refer to the DMX Help via the Task Editor or Job Editor or directly from the DMExpress programs folder.

Release New feature

9.9.27 Protegrity Data Security Gateway. DMX/DMX-h now supports protecting data using API calls to Protegrity Data Security Gateway. See the DMX help topic, “Protect function”, or Install Guide topic, "Connecting to Protegrity Data Security Gateway".

9.9.27 Db2 for Linux, UNIX, and Windows data source support in Syncsort Connect portal. Connect portal supports Db2 for Linux, UNIX, and Windows (LUW) JDBC data source connections to replicate your captured data changes to Apache Kafka targets. For more information, see the Connect portal help topic, “Adding New Data Connections.”

9.9.27 Enhanced connectivity to Amazon S3. DMX retries failed REST API calls to Amazon S3 using an exponential back-off algorithm.
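Exponential back-off of this kind is straightforward to sketch. The snippet below is an illustrative Python model of the retry behavior described, not DMX's actual implementation; the delay values and attempt count are assumptions:

```python
import random
import time

def call_with_backoff(request, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a failing call with exponential back-off plus jitter.

    `request` is any zero-argument callable that raises on failure
    (a hypothetical stand-in for an S3 REST call).
    """
    for attempt in range(max_attempts):
        try:
            return request()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the last error
            # Delay doubles each attempt: 0.5s, 1s, 2s, ... capped at max_delay.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))
```

Jitter (the random component) spreads retries from many concurrent tasks so they do not hammer the service in lockstep.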


9.9.24 Databricks databases. Through JDBC connectivity, DMX-h supports Databricks databases as sources and targets for all dispositions. See the DMX help topic, “Connecting to Databricks.”

9.9.20 Db2 for IBM i data source support in Syncsort Connect portal. Connect portal supports Db2 for IBM i JDBC data source connections to replicate your captured data changes to Apache Kafka targets. For more information, see the Connect portal help topic, “Adding New Data Connections.”

9.9.20 Change Data Capture on z/OS Installation. The DMExpress installation process for installing CDC on z/OS platforms has been enhanced to improve both usability and reliability.

9.9.14 Abort on Rejected Records and ENFORCELENGTH parameter for Vertica targets. The "Abort if any record is rejected" parameter configures DMX to abort tasks that update Vertica targets upon rejecting a record. The default behavior to process remaining records remains unchanged. A new Target Database Table parameter, Vertica COPY ENFORCELENGTH, enables the Vertica COPY command ENFORCELENGTH option when loading Vertica targets. Please see the DMX on-line help topic "Connecting to Vertica Databases" for details.

9.9.14 CyberArk Enterprise Password Vault. DMX/DMX-h now supports retrieving passwords stored in a CyberArk vault to authenticate connections to servers, databases, message queues, and remote files. See the DMX help topic, “Task Editor”, or Install Guide topic, "Connecting to CyberArk Enterprise Password Vault".

9.9.14 Azure Synapse Analytics (formerly SQL Data Warehouse). Through JDBC connectivity, DMX-h supports Azure Synapse Analytics as sources and targets for all dispositions. See the DMX help topic, “Connecting to Azure Synapse Analytics.”

9.9.13 Execution Profile Schema Validation. DMX/DMX-h now verifies the format of any applicable execution profile files at runtime against the schema definition installed with the product. See the DMX help topic, “Execution profile file.”

9.9.5 DMX now supports new SFTP encryption algorithms such as hmac-sha2-512 and hmac-sha2-256.

9.9.4 Snowflake databases. Through JDBC connectivity, DMX-h supports Snowflake databases as sources and targets for all dispositions. See the DMX help topic, “Connecting to Snowflake.”

9.8.7 Syncsort Connect portal. Syncsort Connect portal (formerly DMX DataFunnel UI) is a secure, browser-based application that features change data capture (CDC), replication, and bulk data copy functionality. Connect portal supports your data integration strategies by:

- Using log-based capture to determine when data has changed from your Oracle, Db2 for z/OS, and SQL Server data sources, automatically capturing the data changes in real time, then replicating (moving) the changed data downstream to Apache Kafka’s distributed messaging layer. This helps keep your data set fresh without losing data while your enterprise data processes continue uninterrupted. SQL Server databases also support trigger-based data capture.
- Collectively extracting, filtering, and copying large volumes of data from database sources to target database management systems (DBMS), file systems on-premise and in the cloud (including HDFS, Amazon S3, and local), and Apache Kafka distributed streams. Run data transfer jobs locally or on a Hadoop-determined node in the cluster.

Use the Connect portal to:

- Share application data across your enterprise – from mainframes to the cloud. Transfer all source data (hundreds of tables at a time) or filter the data to exclude or include a subset of table columns and rows.
- Simplify bulk data transfers and populate the data lake from hundreds of tables by automatically creating the target schemas and mapping fields.
- Add customized data copy (ETL) or replicate (CDC) tasks, known as data flows.
- Manage the data connections, data flows, metabases, and servers used in your CDC and ETL projects.
- Monitor the status of replication activities and copy jobs.
- Export a JSON configuration file to run data flows from the command line. This automatically generates corresponding Connect jobs and tasks that can be run in the DMX and DMX-h applications.

Connect portal supports connecting to a variety of databases and environments, including Hive, Oracle, Apache Kafka, IBM Db2 for z/OS, Netezza, Microsoft SQL Server, Teradata, HDFS, S3, Amazon Redshift, IBM Db2 for Linux, UNIX, and Windows, and local.

Connect portal supports the following browsers:

- Firefox
- Google Chrome
- Internet Explorer versions 10 and 11
- Microsoft Edge

For more information about how to use Connect portal, see the Connect portal help available from the application. To get started with Syncsort Connect portal, you need to install DMX ETL and MIMIX Share (CDC).

9.8.7 DMX Data Lineage REST API (first customer shipment): The DMX REST API provides web-based request support for Data Lineage metadata published to the DMX management server. REST resources provide access over HTTP to the following data lineage and related resources in JSON format:

- Job Status
- Logging
- Lineage data
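The exact resource schemas are defined by the DMX management server. Purely as an illustration, a client consuming a hypothetical lineage response in JSON might look like the following; the field names here are invented for the example, not the documented DMX schema:

```python
import json

# Hypothetical JSON payload of the kind a lineage REST resource might return.
payload = json.loads("""
{
  "job": "load_customers",
  "status": "COMPLETED",
  "lineage": [
    {"target": "CUST_NAME", "sources": ["src.customers.name"]},
    {"target": "CUST_KEY",  "sources": ["src.customers.id"]}
  ]
}
""")

def fields_feeding(payload, target):
    """Return the source fields that feed a given target field."""
    for entry in payload["lineage"]:
        if entry["target"] == target:
            return entry["sources"]
    return []
```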

9.8.7 Oracle and SQL Server targets through JDBC connections supported in Hadoop Single Node Execution. DMX offers the ability to move data from Hadoop distributed file system and database sources to Oracle and SQL Server targets in a Single Node Execution DMX job. Please see the on-line help topics, Defining DMX-h ETL Sources and Targets, Connecting to JDBC Sources and Targets, and Running a job from the command prompt for more information.

9.8 Add support for brokers in the Kafka connection. Please refer to the section “Kafka connections” of the “Apache Kafka” documentation.

9.8 Custom installation location for Ambari packages. DMX-h can be installed under a custom directory prefix when installing via an Ambari package. Refer to the Installation Guide.

9.8 Apply license keys while DMX is running. It is no longer necessary to stop any running DMX jobs or services to apply a new license key.

9.8 Separate software and license packages for DMX-h. Previously, customer-specific DMX-h installation packages with embedded license keys were generated for RPM, Cloudera parcel, and Ambari package installations. Now, the software and your license keys are provided in separate packages. This allows you to apply a new license key to all cluster nodes without reinstalling DMX-h. Refer to the Installation Guide and the "DMX-h License Package Migration Guide" on the MySupport Knowledge Base: https://mysupport.syncsort.com/kb/article/?id=ART-509939/1

9.7.32 Abort on Rejected Records to Kafka and MapR streams. The new "Abort if any record is rejected" parameter configures DMX to abort tasks that update Kafka targets upon rejecting a record. The default behavior to process remaining records remains unchanged.

9.7.32 Distributed Parallel Data Ingestion (First customer shipment). DMX DataFunnel offers distributed parallel ingestion of large database sources for enabled jobs to Hadoop targets, which enables parallelized map jobs to any number of Hadoop targets (HDFS, Hive, or HCatalog). With distributed parallel ingestion, DMX automatically splits tables across mappers to optimize load performance based on table and cluster size. For more details, see the DMX help topic, "High-speed parallel ingestion of large database sources to Hadoop."
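As a rough sketch of the balancing idea (not the product's actual algorithm, which also weighs cluster size and can split an individual large table across mappers), a greedy assignment of whole tables to mappers by size could look like this:

```python
def split_tables(tables, num_mappers):
    """Greedy sketch: assign tables to mappers so total byte sizes stay balanced.

    `tables` is a list of (name, size_bytes) pairs. Largest tables are
    placed first, each into the currently least-loaded mapper bucket.
    """
    buckets = [{"size": 0, "tables": []} for _ in range(num_mappers)]
    for name, size in sorted(tables, key=lambda t: t[1], reverse=True):
        bucket = min(buckets, key=lambda b: b["size"])  # least-loaded mapper
        bucket["tables"].append(name)
        bucket["size"] += size
    return buckets
```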

9.7.23 DMX Change Data Capture for DB2 z/OS. Added READ authority information and missing SYSIBM table names for DB2 Catalog Tables in DB2GRANT JCL. Refer to the “Identify/Authorize z/OS Users and Started Tasks” section in the DB2 z/OS Change Data Capture Reference.

9.7.23 Support for COBOL-IT line sequential files on SunOS SPARC 64-bit. DMExpress provides support for COBOL-IT line sequential files. See the DMExpress help topic, “Installing support for COBOL-IT.”

9.7.23 DataFunnel enhancements. The DataFunnel configuration file now allows the following:

- Checking whether a column is the first or last one in the source table, in the predicate of expression_construction transformation rules
- Creating target files in subdirectories named for each source table
- Customizing the file extension of target files
- Customizing the file extension of exported DTL task files

See the DMExpress help topic, "DMXDFNL configuration file."

9.7.19 Runtime field-level data lineage. DMX/DMX-h offers a runtime data lineage solution that enables you to track field-level metadata from source to target through the DMX/DMX-h task and job processes that transform it. As part of your overall data governance strategy, you can leverage the generated lineage for debugging, auditing, and regulatory reporting. You can further extend these capabilities through end-to-end visualization by securely publishing the lineage to Cloudera Navigator as a compliance-ready solution. See the DMX help topic, DMX/DMX-h runtime data lineage. DMX/DMX-h lineage is connected to all other data lineage in the cluster, which offers a full view of what is happening to the data within and without DMX/DMX-h.

9.7.3 Open System Support for DMX Change Data Capture (CDC). DMX Change Data Capture (CDC) synchronizes data changes between data sources and targets while your enterprise data processes continue with no impact on performance. DMX CDC open system components add support for the following database sources for CDC synchronization:

- Oracle and Oracle Real Application Clusters (RACs)
- Microsoft SQL Server
- IBM DB2 for i
- IBM DB2 for Linux/UNIX/Windows (LUW)
- IBM Informix
- Sybase

9.7.3 GUID database datatypes. DMExpress supports GUID database datatypes, such as SQL Server uniqueidentifier and Access guid datatypes.

9.7 VSAM source support for DMX CDC. In addition to DB2 for z/OS sources, DMX CDC supports VSAM data sets as sources. Changes written to VSAM data sets can be captured and written to target files and applied to Hive target database tables through generated downstream DMX-h tasks. See the DMExpress help topic, “DMX Change Data Capture.”

9.6.22 Enhanced target support for Hive complex types. Hive target structure and array support is enhanced to include the following:

- Additional array selection option, which enables mapping all elements of any array field within a chain of fields into separate fields.
- Ability to create tables in which the hierarchy of composite fields is either flattened or maintained.

See the DMExpress help topics, “Array Element Selection dialog” and “Target Database Table dialog.”
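The "flattened hierarchy" option can be pictured with a small sketch: nested composite fields become prefixed top-level columns. This is an illustrative model only; the separator and naming scheme are assumptions, and array handling is omitted:

```python
def flatten(record, prefix="", sep="_"):
    """Flatten nested composite fields into top-level columns.

    Struct-like dicts become prefixed scalar columns, e.g. a composite
    `name` with members `first` and `last` yields `name_first` and
    `name_last`. Arrays (mapping each element to its own field) are
    not handled in this sketch.
    """
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=name, sep=sep))
        else:
            flat[name] = value
    return flat
```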

9.6.22 Apache Impala databases. Through JDBC connectivity, DMX-h supports Impala databases as sources and targets when running on the ETL server/edge node, in the cluster, and on a framework-determined node in the cluster. See the DMExpress help topic, “Connecting to Impala.”


9.6.15 DMExpress Enterprise architecture (First customer shipment). In addition to the classic DMExpress client-server architecture in which DMExpress/DMX-h jobs and tasks are run, DMExpress also supports an enterprise web-server architecture, which extends DMExpress processing power and currently supports running DataFunnel projects through a browser-based user interface (UI). At the core of the enterprise architecture is the DMExpress central management server, which houses the management service, serves REST APIs, and coordinates all activities among client workstations, job run servers, and central file and database repositories:

- Client workstations use your client web browser to submit DataFunnel projects through the DataFunnel UI.
- Job run servers include the service for the management service and the DMExpress engine, through which DataFunnel projects are run.
- The central file repository stores DataFunnel job logs and operational metadata; the central database repository stores DataFunnel job definitions and runtime connection data.

See the DMExpress help topic, “DMExpress Enterprise architecture.”

9.6.15 DMX DataFunnel UI. DMX DataFunnel UI is a secure, high-speed, browser-based application from which you collectively extract, filter, transfer, and copy large volumes of data from database sources to target database management systems (DBMS) or file systems (including HDFS, S3, and local). As an extension of DMExpress, DataFunnel UI enables you to automatically run data transfers in bulk rather than manually creating dozens, hundreds, or even thousands of individual transfer tasks, one task for each table in the schema you want to transfer. Instead, DataFunnel UI gives you the tools to build collections of data transfer tasks called data flows. You group data flows in projects. When a project is executed, each customized data flow automatically transfers your data in bulk to a specified target database or file system. Use DataFunnel UI to:

- Transfer all source data (hundreds of tables at a time), or filter the data to exclude or include a subset of table columns and rows. Tables are automatically transferred in parallel at an appropriate level of parallelism.
- Automatically create new Hive target tables based on equivalent source tables.
- Build, manage, and maintain a centralized repository of reusable data connections, data flows, and projects.
- Search and examine summary information about DataFunnel objects.
- Export a JSON configuration file to execute data flows from the command line.

DataFunnel UI includes wizards that guide you step-by-step to:

- Build unique, secure data connections to DBMS sources and target databases and file systems.
- Configure how data is copied to target databases and file systems.
- Run one or more projects at a time and enable job logging.

DataFunnel UI supports connecting to a variety of databases and environments, including Hive, Oracle, IBM Db2 for Linux, Unix, and Windows, IBM Db2 for z/OS, Netezza, Microsoft SQL Server, Teradata, HDFS, S3, Amazon Redshift, and local.

DataFunnel UI supports the following browsers:

- Internet Explorer versions 10 and 11
- Google Chrome
- Microsoft Edge

For more information about how to use DataFunnel UI, access the online help, which is available from the Help menu in DataFunnel UI.

9.6.15 Enhanced DMExpress installation. The DMExpress installation process includes component-based options, which enable standard, full, classic, and custom installations. From among these options, you can tailor the installation to meet your requirements; for example, you can install a client-server configuration for running DMExpress/DMX-h jobs and tasks through the Job and Task Editors, or you can install a web-server configuration for running DataFunnel projects through the DataFunnel UI, or you can install both. See the DMExpress help topics, “DMExpress installation component options” and “DMExpress Enterprise architecture.”

9.6.11 Target support for Hive complex types. DMX-h supports writing elementary, composite, and array fields from mainframe and open system database table and file sources to Hive target structures and arrays. See the DMExpress help topic, “Target Database Table dialog.”

9.6.7 Hive merge statement support. DMX-h provides Hive merge statement support through JDBC for Hive target database tables that are configured to support ACID and that are specified as having the upsert or update disposition. See the DMExpress help topic, “Hive merge statement support for upsert and update dispositions.”
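For an upsert disposition, the statement Hive ultimately executes is a MERGE against an ACID table: update the rows that match on the key, insert the rows that do not. The sketch below builds a representative MERGE statement; the table and column names are hypothetical, and DMX generates the actual SQL internally:

```python
def upsert_merge_sql(target, staging, key_cols, value_cols):
    """Build a Hive MERGE statement of the kind used for an upsert
    disposition: update matched rows, insert unmatched ones.
    Requires an ACID (transactional) Hive target table."""
    on = " AND ".join(f"t.{c} = s.{c}" for c in key_cols)
    sets = ", ".join(f"{c} = s.{c}" for c in value_cols)
    cols = ", ".join(key_cols + value_cols)
    vals = ", ".join(f"s.{c}" for c in key_cols + value_cols)
    return (
        f"MERGE INTO {target} t USING {staging} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {sets} "
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({vals})"
    )
```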

9.6.4 User-defined SQL as lookup source. In addition to supporting entire database tables as lookup sources, DMExpress supports user-defined SQL statements as lookup sources. See the DMExpress help topic, “Lookup Source Database Table dialog.”

9.5 DMX Change Data Capture. DMX Change Data Capture (CDC) is a new add-on product that keeps data on a mainframe source system in sync with Hadoop without overloading networks and incurring a high MIPS cost on the mainframe. DMX CDC detects changes in the source system and either applies or writes the changed data to a specified Hadoop target. Currently, DB2 for z/OS is supported as source and Hive and HDFS are supported as targets. See the DMExpress help topic, “DMX Change Data Capture.”

9.4.28 Custom task enhancement. Along with the current extended task support, DMX-h supports running custom tasks or groups of directly connected custom tasks on the cluster with all DMX-h tasks of an IX job when the following conditions are met:

- All DMX-h tasks are cluster eligible.
- Custom tasks have piped inputs from previous DMX-h tasks and piped outputs to following DMX-h tasks.

See the DMExpress help topic, “Developing Intelligent Execution Jobs.”

9.4.28 DataFunnel support for Amazon S3 and Amazon Redshift. DataFunnel supports Amazon S3 as target and Amazon Redshift as source and target. See the DMExpress help topics, "DMExpress DataFunnel," "DMXDFNL configuration file," and "Connections object: amazon_s3 member."

9.4.26 Enhanced Hive support. DMX-h supports Hive database table lookup sources, the Boolean data type for Hive JDBC source and target connections, and updates applicable to the integration with Apache Ranger. See the DMExpress help topics, “Connecting to Hive data warehouses,” “Conversion between Hive data types and DMExpress data types,” and “Apache Sentry and Apache Ranger authorization.”

9.4.26 DMExpress job progress monitoring and reporting. Support for real-time progress monitoring of jobs and their subjobs and tasks and report generation on this progress for jobs run in both standalone DMExpress and in MapReduce/Spark. See the DMExpress help topic, “DMExpress progress monitor.”

9.4.26 Key break processing functions. DMExpress supports key break processing functions for sort, merge, and aggregate tasks. While cross-record processing can be achieved with user-defined values, key break processing functions provide a simplified means of enabling cross-record processing with sort keys, merge keys, or group by fields, which are specified as arguments in these functions. Depending on the executed function, the returned value can represent changes in argument values or can be counters or running totals. See the DMExpress help topic, “Key break processing functions.”
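A key break function can be modeled as state that resets whenever the sort key changes. The sketch below illustrates a running total per key group; it is a conceptual model of the behavior, not the DMExpress function syntax:

```python
def running_totals(records, key_field, value_field):
    """Sketch of key-break processing: reset a running total whenever the
    sort key changes, mimicking counter/running-total break functions."""
    out = []
    prev_key, total = object(), 0  # sentinel so the first record breaks
    for rec in records:  # records assumed pre-sorted on key_field
        if rec[key_field] != prev_key:
            total = 0          # key break: start a new group
            prev_key = rec[key_field]
        total += rec[value_field]
        out.append({**rec, "running_total": total})
    return out
```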

9.4.6 Support for Spark cluster deploy mode. DMX-h supports running jobs on Spark in cluster deploy mode as well as in client deploy mode. See the DMExpress help topic, “DMX-h Configuration Settings.”

9.3.16 Access method support in DataFunnel, including Hive JDBC. Users can now specify the access method for each database connection in DataFunnel. This includes the JDBC access method for Hive connections, which is now recommended. See the definition of the "access_method" parameter in the Help topic "Connections object: database member".

9.3.16 External metadata support through Connect:Direct. You can now link to remote COBOL copybooks or other external metadata through Connect:Direct when designing a task in the Task Editor.

9.3.5 Hive target statistics. Upon loading to Hive target tables, DMX-h can now analyze table-level and column-level statistics for the target table, which optimizes subsequent Hive query performance. See the DMExpress help topic, "Connecting to Hive data warehouses."

9.3.3 DTL metadata export. DMExpress supports the export of metadata, which includes values, conditions, layouts, connections, and collating sequences, to DTL command options files. See the DMExpress help topic, “Export DMExpress job and task files into DTL command options files.”

9.3.1 Support for Spark 2.0. DMX-h supports Spark 2.0 on YARN, Mesos, and standalone. For details, see the DMExpress help topic, “DMX-h.”

9.3.1 Google Cloud Storage sources and targets. DMExpress supports direct connections to Google Cloud Storage buckets for reading and writing. See the DMExpress help topic, “Connecting to Google Cloud Storage from DMExpress.”

9.2.8 Ability to create target tables in Hive at design-time. It is now possible to create target tables in Hive at design-time for access via JDBC. This can be achieved using the "Create Database Table" dialog. See the DMExpress Help topic, "Create Database Table dialog."

9.2.5 Record number in warning message. When a data issue is encountered in processing records, DMExpress provides you with the ability to view the corresponding record number, source number, and join side (if applicable) in the generated warning message. See the DMExpress help topic, “DMExpress debugging.”

9.2.5 Distributed Hive sources and targets. Through JDBC connectivity, DMExpress supports reading from Hive sources and writing to Hive targets in the cluster. In particular, this allows Parquet-backed Hive tables, which could only be read and written on the edge node previously, to be read and written by jobs run in the cluster. See the DMExpress help topic, “Connecting to Hive data warehouses.”

9.2.2 Integrated workflow. Users can now specify where to run individual task and subjob components of a DMX-h job - on a cluster, single node of a cluster, or single server. This orchestration capability allows a single job to perform both data ingestion from the edge node and distributed processing in Spark and MapReduce. See "Integrated workflow" in the DMExpress Help.

9.2.1 Support for COBOL-IT line sequential files. DMExpress provides support for COBOL-IT line sequential files. See the DMExpress help topic, “Installing support for COBOL-IT.”

9.1.15 DataFunnel (DMXDFNL) table creation support. Leveraging customizable metadata translation, DMXDFNL can now automatically create tables for Hive/HCatalog targets based on source tables from the following databases: Oracle, DB2 for z/OS, Netezza, Microsoft SQL Server, and Teradata. Generated table definitions (DDL/CREATE TABLE statements) can be exported to files on disk using the ddl_export_dir configuration parameter. See the DMExpress help topics: "DMExpress DataFunnel" and "Subunit alias object: table_creation member."
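Conceptually, table creation translates each source column type to a Hive type and emits a CREATE TABLE statement. The mapping below is a toy illustration under assumed Oracle-to-Hive rules; the real translation is customizable and far more complete:

```python
# Illustrative translation table; the real DataFunnel mapping is driven by
# its customizable metadata translation rules, not this dictionary.
TYPE_MAP = {"NUMBER": "DECIMAL(38,10)", "VARCHAR2": "STRING", "DATE": "TIMESTAMP"}

def hive_create_table(table, columns):
    """Generate a Hive CREATE TABLE statement from (name, source_type)
    column pairs, the kind of DDL that can be exported to disk."""
    cols = ", ".join(f"{name} {TYPE_MAP.get(src_type, 'STRING')}"
                     for name, src_type in columns)
    return f"CREATE TABLE {table} ({cols})"
```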

9.1.15 DMXDFNL inflight transformation support. DMXDFNL can now map sources to targets, transform columns based upon their properties, and map source expressions to target fields dynamically. Transformation rules, which are referenced in the data transfer task, are programmatically defined at the root level of the DMXDFNL configuration file. See the DMExpress help topics: "DMExpress DataFunnel," "DMXDFNL configuration file," and "Subunit alias object: data_transfer member."

9.1.15 DMXDFNL enhancements. DMXDFNL enhancements include the following:

- When output to a Linux/UNIX terminal, log messages now display in color: errors display in red and warnings display in yellow.
- Tasks no longer time out by default. A timeout can be specified globally with the root-level default_timeout_minutes configuration parameter or at the subunit_order level with the timeout_minutes parameter.
- The no_execute configuration parameter now supports disabling the execution of one or more task types.

DMXDFNL configuration file updates include the following:

- Tasks are separated into table_creation, data_transfer, and existing_dtl categories in each subunit.
- Sources and targets are defined at the root level.

The previous configuration styles are deprecated, but remain supported. See the DMExpress help topic, "DMXDFNL configuration file."
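Putting the pieces together, a configuration in the new style might be shaped roughly as follows. Only the parameter names called out in these notes (default_timeout_minutes, timeout_minutes, no_execute, and the table_creation/data_transfer/existing_dtl categories) come from the release notes; the surrounding structure, keys, and values are illustrative, not the documented DMXDFNL schema:

```python
import json

# Sketch of a DMXDFNL-style configuration, assembled as a Python dict and
# serialized to JSON. Structure and connection details are assumptions.
config = {
    "default_timeout_minutes": 60,
    "no_execute": ["existing_dtl"],  # disable one task type globally
    "sources": {"ora_src": {"type": "oracle"}},
    "targets": {"hive_tgt": {"type": "hive"}},
    "subunits": [{
        "table_creation": {"target": "hive_tgt"},
        "data_transfer": {"source": "ora_src", "target": "hive_tgt",
                          "timeout_minutes": 15},  # per-subunit override
        "existing_dtl": {},
    }],
}
print(json.dumps(config, indent=2))
```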


9.1.4 Google Cloud Storage sources and targets. DMExpress supports reading from and writing to Google Cloud Storage buckets in and out of Hadoop. See the DMExpress help topic “Connecting to Google Cloud Storage from DMExpress.”

9.0.10 MapR Streams sources and targets (GA). DMExpress supports MapR Streams as a message queue source when running from an edge or single cluster node, and as a message queue target when running from an edge node, single cluster node, or MapR cluster. Kerberos is supported for connectivity to MapR Streams. See "MapR Streams" in the DMExpress Help.

9.0.10 DB2 ODBC driver support for Windows 32-bit. DMExpress now provides DB2 ODBC driver support for Windows 32-bit installations in addition to the support for UNIX/Linux installations. See the DMExpress help topic, "Connecting to DB2 databases."

9.0.7 DMExpress Report enhancements. DMExpress supports the generation of DMExpress job and task reports in machine processable, comma separated values (csv) format. Through command-line dmxreport invocation and wildcard support for input job and task files, you can generate data from an unlimited number of job definitions, interpret the data with data analysis tools, and load the data into databases and into other metadata repositories. See the DMExpress help topic, "dmxreport: Generating DMExpress job and task reports."
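Because the report is plain CSV, it can be post-processed with any data analysis tool. A minimal sketch, assuming hypothetical column names (the actual report columns are defined by dmxreport):

```python
import csv
import io

# Hypothetical sample of dmxreport CSV output; these headers are
# illustrative, not the product's documented column set.
sample = """task,source,target
load_cust,customers.dat,HIVE.cust
load_ord,orders.dat,HIVE.ord
"""

def tasks_by_target(report_csv):
    """Index report rows by target table, as a downstream tool might."""
    index = {}
    for row in csv.DictReader(io.StringIO(report_csv)):
        index.setdefault(row["target"], []).append(row["task"])
    return index
```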

9.0.7 Hive targets via JDBC. Hive targets can now be written via a JDBC connection, allowing for easier configuration, particularly when running on a single cluster node. See “Connecting to Hive data warehouses” in the DMExpress Help.

9.0.7 Hive targets support truncate and insert. Hive targets now support the "Truncate table and insert rows" disposition. See "Connecting to Hive data warehouses" in the DMExpress Help.

9.0.7 Repository passwords. DMExpress now supports storage of password and secret key values in a secure repository under a user-provided name. This name can subsequently be referenced while creating DMExpress tasks or DataFunnel jobs. See "Repository passwords" in the DMExpress Help.

9.0.6 MapR Streams sources and targets (Pre-release). DMExpress supports MapR Streams as a message queue source when running from an edge or single cluster node, and as a message queue target when running from an edge node, single cluster node, or MapR cluster. Kerberos support is not yet available for connectivity to MapR Streams. See "MapR Streams" in the DMExpress Help.

9.0.4 Support for international character sets for data and metadata. DMExpress now enhances support for globalization through ICU character sets and multi-byte COBOL copybooks. See the DMExpress help topics "Data encoding" and "External metadata in a COBOL copybook."

9.0.2 DMXMMSRT supports NULL decimal fields. The DMXMMSRT utility now supports an option to treat empty or whitespace-only decimal fields (except Packed Decimal) as NULL. See “Mainframemigration sort component DMXMMSRT” in the DMExpress help.

Performance Enhancements

The following performance improvements have been made since Release 9.0 of DMExpress.

Release Applications with Improved Performance

9.9.20 Task Editor Target Database Table dialog. When developing tasks in the DMExpress Task Editor, mapping, unmapping, and editing target column mappings in the Target Database Table dialog are now more responsive.

9.9.6 Hive and Kafka Sources and Targets. Memory usage is significantly improved for DMExpress jobs that have multiple Hive or Kafka sources and/or targets.

9.7.34 Apply CDC Changes to Impala Targets. The elapsed time for CDC apply changes tasks is significantly improved when applying changes to Apache Impala tables using the Kudu storage format.

9.7.23 Kudu Targets. The elapsed time improves for DMX-h Intelligent Execution jobs with Kudu tables connected through JDBC connections.

9.7.23 Hive and Impala Targets. The elapsed time improves for DMX-h Intelligent Execution jobs running on the cluster with Hive/Impala tables connected through JDBC connections.

9.7.19 DMX Change Data Capture for DB2 z/OS. Improved replication performance on low-volume DB2 tables. Refer to the "Flushing DB2 Log Buffer to Reduce Data Capture Delays" section in the DB2 z/OS Change Data Capture Reference.

9.7.1 Hive and Impala Sources. The elapsed time improves for DMX-h Intelligent Execution jobs running on the cluster with Hive/Impala tables connected through JDBC connections.

9.4.6 Hive sources. The elapsed time improves for jobs having Hive sources connected through a JDBC connection.

9.3.16 Hive sources and targets. The elapsed time improves for DMX-h Intelligent Execution jobs running on the cluster having multiple Hive sources or targets connected through JDBC connections defined in the same task or in different tasks.

9.1.6 DMX-h MapReduce jobs. DMX-h’s updated algorithm for calculating the number of reducers in MapReduce jobs should improve performance.

9.1.2 DMX-h Spark jobs. DMX-h’s updated algorithm for calculating the number of reducers in Spark jobs should improve performance. Additionally, in a multi-tenant scenario, if Spark's dynamic allocation mechanism is enabled, DMX-h will use resources more appropriately.

9.0.7 DMX-h Spark jobs. Both elapsed time and load balancing when Spark outputs to local disk have been improved.

9.0.3 Hive targets. When writing a large number of records to Hive, the elapsed time may improve if constant values are mapped to partition columns.

Platform Lifecycle Information

Page 7: Connect v9.10.26 Release Notes - docs.precisely.com

DMX Release Notes

The following table contains the schedule of changes to the supported platforms:

Platform Support Status Date

ACUCOBOL-GT on SunOS SPARC 64-bit Discontinued Q4 2018

AIX 5.3 Power 32-bit, AIX 5.3 Power 64-bit Discontinued Q1 2016

AIX 5.1 Power 32-bit, AIX 5.1 Power 64-bit Discontinued Q3 2012

DB2 Load accelerator Discontinued Q1 2017

DB2 Load accelerator on SunOS SPARC 64-bit Discontinued Q2 2016

DB2 8.x DB2 Load Discontinued Q2 2013

DB2 9.1 Discontinued Q1 2015

HP-UX 11.23 IA64 64-bit Discontinued Q2 2016

HP-UX 11.23 IA64 32-bit Discontinued Q1 2015

HP-UX 11.11 PA-RISC 32-bit, HP-UX 11.11 PA-RISC 64-bit Discontinued Q2 2013

Linux 2.4 x86 32-bit, Linux 2.4 x86_64 64-bit Discontinued Q1 2012

Linux 2.6 IA64 64-bit Discontinued Q4 2013

Linux 2.6 Power 32-bit, Linux 2.6 Power 64-bit Discontinued Q3 2012

Linux 2.6 s390x 64-bit RHEL 4 and SLES9 Discontinued Q3 2016

Linux 2.6 x86 32-bit Discontinued Q3 2016

Linux 2.6 x86_64 64-bit C library version 2.5 to 2.11, kernel version 2.6.18 to 2.6.31, RHEL 5 and 6, SLES 10, 11 Discontinued Q3 2017

MapR 3.1.1 Discontinued Q4 2016

Metadata Interchange Discontinued Q4 2015

Micro Focus Server 5.1 Discontinued Q3 2013

Oracle 9i Discontinued Q1 2012

Oracle 10g Discontinued Q1 2015

Server Grid Discontinued Q4 2014

SQL Server 7, SQL Server 2000 Discontinued Q1 2013

SQL Server 2005 Discontinued Q1 2015

SunOS 5.8 SPARC 32-bit, SunOS 5.8 SPARC 64-bit Discontinued Q1 2013

SunOS 5.10 SPARC 32-bit Discontinued Q2 2013

SunOS 5.10 x86 32-bit, SunOS 5.10 x64 Discontinued Q1 2013

Teradata TD12 Discontinued Q1 2013

Teradata TD13 Discontinued Q2 2015

Vertica 4.1 Discontinued Q4 2012

Vertica 5.1 Discontinued Q1 2015

Vertica 6.0 Discontinued Q3 2016

Vertica 7.1 Discontinued Q1 2018

Windows 2000, Windows XP without SP3, Windows Server 2003 without SP1 Discontinued Q3 2011

Windows Itanium 64-bit Discontinued Q4 2013

Windows XP, Windows Server 2003 Discontinued Q2 2016

Known Problems

The following list includes DMX defects as well as problems with third-party software. Documentation on third-party issues is being migrated to the DMX Knowledge Base and will be removed from the release notes in a future release. In the interim, please refer to the Knowledge Base for additional entries. The DMX Knowledge Base is accessible from your MySupport account: https://mysupport.syncsort.com/kb/. If you do not have a MySupport account, you may request one from DMX Technical Support at [email protected].

Symptom Incorrect output - each byte of any multibyte UTF-8 data is replaced with the UTF-8 replacement character, U+FFFD, which has byte sequence 0xEF 0xBF 0xBD.

Area Affected DMX-h jobs running in CDH 6.2.0 (or similar Apache Hadoop distribution released March 2019) MapReduce or Spark reading UTF-8 multibyte data from a Hive or Impala table via the native Apache HCatalog API. Hive access via HCatalog is the default DMX behavior. See online help topic Apache HCatalog for details.

Problem This is an Apache HCatalog problem.

Solutions Export the environment variable DMX_HIVE_SOURCE_FORCE_STAGING=1 for Hive sources or DMX_IMPALA_SOURCE_FORCE_STAGING=1 for Impala sources, before executing a DMExpress Hadoop job.

Reference number DMX-27823
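The corruption described above is easy to recognize because every byte of a multibyte character is replaced independently, so affected fields grow in length. A minimal Python sketch (illustrative only, not part of DMX) of the byte sequence named in the symptom:

```python
# U+FFFD, the UTF-8 replacement character, encodes as the three bytes
# 0xEF 0xBF 0xBD -- exactly the sequence reported in the symptom.
replacement = b'\xef\xbf\xbd'.decode('utf-8')
assert replacement == '\ufffd'

# A three-byte character such as the euro sign occupies three bytes in UTF-8.
euro_bytes = '\u20ac'.encode('utf-8')
assert euro_bytes == b'\xe2\x82\xac'

# When each byte is replaced individually, one source character becomes
# three replacement characters, tripling the field length.
garbled = '\ufffd' * len(euro_bytes)
print(repr(garbled))
```

Seeing runs of U+FFFD whose length is a multiple of the original character widths is the telltale sign that this defect, rather than ordinary mojibake, is in play.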

Symptom DMExpress : (RECNOTIN) record was not inserted into table <table name>
(ODBC_ERROR) [Vertica][VerticaDSII] (20) An error occurred during query execution: Row rejected by server; see server log for details
DMExpress : (ODBC_ERROR) INFO 2499: Cannot rollback; no transaction in progress
DMExpress : (TBLCOUNT) 3 records were committed to target database "<db name>" table "<table name>", out of which 2 records were loaded successfully and 1 records were rejected
DMExpress has aborted

Area Affected DMX tasks loading non-conforming data into a Vertica target table with Abort if error specified and not using the default Vertica COPY load method. A non-default SQL insertion load is automatically used only when the target table contains LOB columns.

Problem This is a Vertica ODBC problem.

Solutions DMX will correctly abort and issue a fatal error message, indicating the number of rows loaded, when the first rejected record is encountered; however, all records processed before the rejected record will be committed. Setting up a Vertica savepoint prior to loading with DMX may provide a means to recover the pre-load-attempt table state after an aborted DMX SQL insertion load.

Reference number DMX-27313

Symptom Unable to insert row for version '1.6' in Schema History table "PUBLIC"."schema_version"

Area Affected Connect Portal installation when customers jump from 9.9.6 to a version > 9.9.10 without installing 9.9.10.

Problem This is a Syncsort Connect Portal problem.

Solutions Install 9.9.10 before a version > 9.9.10 to ensure a seamless migration.

Reference number DMX-26978

Symptom DMX task aborts or records are rejected with the following message

(AZSQDWTRERJ) records were rejected while writing to Azure Synapse Analytics database "<database_url>" table "<tablename>". The rejected records and the reason file are available on Azure storage <location>

Area Affected Loading data that includes at least one linefeed, carriage return, or "~~" (0x7E 0x7E) sequence as part of the data into Azure Synapse Analytics (Azure SQL Data Warehouse) targets.

Problem This is an Azure Synapse Analytics (PolyBase) problem.

Solutions Preprocess the data to replace these characters before using DMX to load into Azure Synapse Analytics targets.

Reference number DMX-25987

Symptom Fatal Java error - java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:139)
OR after release 9.9.8 DMX fatal error:
DMExpress : (GCAPIERR) data connector "JDBC" returned failed status for Data Connector API "dmx_connector_initiateObjectIOUsingDriver". Please contact Syncsort Technical Support at (201) 930-8270 / [email protected]
(GCMSG) data connector: JDBC, severity: 2, message: Missing jar or native library files required of a compression CODEC in the Hadoop configuration. Set DMX_JDBC_DEBUG=1 per the DMX on-line help to get additional exception information. Check Hadoop distribution website to assure compression CODECs are properly installed. LZO CODEC from GPL Extras library may require native compression library location in library path and .jar files in DMX_CLASSPATH exported in dmxjob /EXPORT command., vendor contact information: (201) 930-8270 / [email protected]

Area Affected DMX-h jobs accessing Hive in a Hadoop cluster or on an edge node running CDH 6.2.0 or other distributions with similar or more recent release dates (March 2019).

Problem This is a Hadoop configuration or environment settings problem.

Solutions Consult Hadoop distribution documentation to assure compression CODEC libraries and jar files are installed. LZO CODEC from GPL Extras library may require native compression library location in library path and .jar files in HADOOP_CLASSPATH, or in DMX_CLASSPATH exported in dmxjob /EXPORT command.

Reference number DMX-26386

Symptom DMExpress : (HIVEWERR) Write to Apache Hive table <temporary work table name> failed due to the following error:
(RMFIOFL) an attempted I/O on file "<hdfs URL>:<port><work file path name on HDFS>" accessed through remote file connection "<Hive jdbc URL>:<PORT>/<database>;<settings>", was not successful

Area Affected DMX-h job accessing a Hive table and specifying a Work table codec (see on-line help topic for details) on a CDH 6.2.0 or other Hadoop distribution with a similar release date (March 2019).

Problem This is a Hadoop platform configuration problem.

Solutions Assure compression CODECs are installed on your Hadoop platform. Include the native library locations for the LZO compression CODEC and the specified work table CODEC (such as Snappy) in the library path variable. Include the GPL Extras Parcel (see Cloudera documentation) jar files in the HADOOP_CLASSPATH environment variable.

Reference number DMX-26965

Symptom Incorrect output - each byte of any multibyte UTF-8 data is replaced with the UTF-8 replacement character, U+FFFD, which has byte sequence 0xEF 0xBF 0xBD.

Area Affected DMX-h jobs running in CDH 6.2.0 MapReduce or Spark writing UTF-8 multibyte data to a Hive or Impala table.

Problem This is a DMX problem.

Solutions Export the environment variable DMX_HIVE_TARGET_FORCE_STAGING=1 for Hive targets or DMX_IMPALA_TARGET_FORCE_STAGING=1 for Impala targets, before executing a DMExpress job.

Reference number DMX-27095

Symptom Selecting a new source schema on an existing dataflow clears existing saved table selections in Syncsort Connect Portal.

Area Affected Changing the selected schema on the source tab for an existing dataflow and then selecting the existing saved schema loads the tables for that schema but deselects previously selected tables.

Problem This is a Syncsort Connect Portal problem.

Solutions Click on the cancel button on the data flow dialog and then click on the Edit Data Source button to reopen the saved source data connection settings.

Reference number DMX-26143

Symptom The monitor replication page data flow grid displays the error "No data flows are configured for this project" or "Unable to load model controller for project <project name>" after deploying a project, after starting or stopping a project or data flow, or when refreshing the monitor page.

Area Affected Monitoring replication projects in the Syncsort Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions Refresh the page or move between projects until the interface displays data flows.

Reference number DMX-26509

Symptom Users can see a misleading error message "Could not connect to server 'xxxx' at 'xxxx'. Ensure the server is available and the DMX DataFunnel Runtime Service is running. See Help for moreinformation."

Area Affected Adding and editing Servers in Syncsort Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions Ensure the server is running and ignore the part of the error message referring to the DMX DataFunnel Runtime Service.

Reference number DMX-26557

Symptom Double quote (") delimiter or single quote (') enclosing character causes the job to abort.

Area Affected DataFunnel jobs with an HDFS target.

Problem This is a DMX DataFunnel problem.

Solutions Escape a double quote (") delimiter with another double quote. Escape a single quote (') enclosing character with a backslash (\).

Reference number DMX-26572
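The workaround above amounts to a simple character substitution before the data reaches the HDFS target. A Python sketch (the helper name is hypothetical; the escaping rules are the ones stated in the solution):

```python
def escape_for_hdfs_target(value: str) -> str:
    """Hypothetical pre-processing helper illustrating the documented
    workaround: double up embedded double quotes and backslash-escape
    single-quote enclosing characters."""
    value = value.replace('"', '""')   # "  ->  ""
    value = value.replace("'", "\\'")  # '  ->  \'
    return value

print(escape_for_hdfs_target('say "hi"'))  # say ""hi""
```

Applying such a transform to affected fields before the DataFunnel job runs avoids the abort without changing the underlying data semantics.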

Symptom Previously saved data flow displays incorrect disposition 'Overwrite Table'/'Append to Table' for HDFS targets.

Area Affected Copy projects with an HDFS target in the Syncsort Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions The data flow will 'Overwrite File' or 'Append to File' as requested. Ignore the incorrectly displayed value.

Reference number DMX-26573

Symptom *ERROR* [OM:07051] Unexpected exception occurred in the expression handler. Error is: java.lang.NullPointerException

Area Affected Specifying filter expressions and clicking the Validate Expression button in the Filters tab of the Add/Edit Data Flow dialog in the Syncsort Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions Specify filter expressions without clicking the Validate Expression button. The filter expressions will work as expected at run-time.

Reference number DMX-26534

Symptom Jobs display 'Completed' Status. However, the log reports 'ERROR com.syncsort.datafunnel.ExecuteMainJobCommand : Job has finished running with return code 109'.

Area Affected Copy projects with multiple data flows in the Syncsort Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions Use one data flow per project.

Reference number DMX-26495

Symptom Unable to select tables from multiple schemas. Selecting a table from different schema clears previously selected tables.

Area Affected Existing DMX customers who have already set up an agent with multiple schemas. This affects replication from DB2/Z using the Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions Ensure that each publisher/engine contains tables from only one schema, or define the data flow using the Connect Portal.

Reference number DMX-26558

Symptom Test for Adding/Editing a server reports "Test was successful." for a server with incorrect hostname/port details.

Area Affected Adding/editing Servers in the Syncsort Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions None.

Reference number DMX-26555

Symptom Saving a project with Run on: 'Cluster' has no effect; the project reverts to local mode.

Area Affected Saving/Running a Copy project in the Syncsort Connect Portal.

Problem This is a Syncsort Connect Portal problem.

Solutions Continue to use local run mode.

Reference number DMX-26491

Symptom *ERROR* API Returns OverallStatus to OK instead of STOPPED in the DataFlow status

Area Affected After starting a Project for the first time, a copy DataFlow can be shown as Ok instead of Stopped.

Problem This is a Syncsort Connect Portal problem.

Solutions Restarting the Project solves the issue.

Reference number DMX-26478

Symptom DMExpress aborts with the error message:
(GCMSG) data connector: APACHEKAFKA, severity: 2, message: Error from API: 'javax.security.auth.login.LoginException: Unable to obtain Principal Name for authentication ', vendor contact information: (201) 930-8270 / [email protected]
(GCMSG) data connector: APACHEKAFKA, severity: 2, message: Error from API: 'Unable to obtain Principal Name for authentication ', vendor contact information: (201) 930-8270 / [email protected]

Area Affected DMX jobs with a Kafka target run in a Kerberized Hadoop cluster.

Problem This is a DMExpress problem.

Solutions None. Workaround: run the jobs on the edge node.

Reference number DMX-26211

Symptom Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to org.apache.hadoop.hive.common.type.Timestamp
at org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaTimestampObjectInspector.getPrimitiveJavaObject(JavaTimestampObjectInspector.java:38)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$TimestampDataWriter.write(DataWritableWriter.java:498)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:204)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.write(DataWritableWriter.java:220)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:91)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:182)
at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:44)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:136)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:42)
at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:124)
at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:55)
at com.syncsort.dmexpress.hcatalog.k.a(HCatalogDelegatingOutputFormat.java:127)

Area Affected Running a DMX-h job on the cluster when the job contains a task writing to a non-transactional and unpartitioned Hive 3.0 or above timestamp or date column.

Problem This is a DMExpress problem.

Solutions Set the environment variable DMX_HIVE_TARGET_FORCE_STAGING=1

Reference number DMX-25670

Symptom Error during remote command: An internal system exception occurred: Invalid resource filter : cluster = Sandbox, service = null, command = UNINSTALL

Area Affected Uninstalling DMX-h from a host in the cluster using the Ambari UI on HDP 3.0 and higher.

Problem This is an Ambari problem.

Solutions Uninstall DMX-h from each host by running "rpm -e dmexpress" or an equivalent command such as "yum erase dmexpress" from a shell as root. Note that uninstalling DMX-h from individual hosts is normally unnecessary in an upgrade scenario. Installing a newer Ambari package automatically replaces the DMX-h installation on each host.

Reference number DMX-25552

Symptom Incorrect output, missing or duplicated rows in target table or file.

Area Affected DataFunnel or parallel ingestion jobs that run in Hadoop, use character primary keys, and split reads on a source table that uses a collating sequence (such as EBCDIC) that is incompatible with the Unicode Collating Sequence.

Problem DMExpress generates character split conditions in source table queries based on the Unicode Collating Sequence. Filters do not behave as expected when applied to source tables with collating sequences that are incompatible with the Unicode Collating Sequence.

Solutions Avoid using character columns as primary keys, particularly when the source table collating sequence is EBCDIC or some other Unicode-incompatible collating sequence.

Reference number DMX-23069
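The collation mismatch behind this problem can be reproduced outside DMX: sorting the same characters by their Unicode/ASCII byte values and by their EBCDIC (code page 037) byte values yields different orders, so a range predicate generated under one ordering selects the wrong slice of the key space under the other. A small Python sketch (illustrative only):

```python
chars = ['0', 'A', 'a']

# Unicode/ASCII order: digits < uppercase < lowercase
unicode_order = sorted(chars, key=lambda c: c.encode('ascii'))
assert unicode_order == ['0', 'A', 'a']

# EBCDIC (code page 037) order: lowercase < uppercase < digits
ebcdic_order = sorted(chars, key=lambda c: c.encode('cp037'))
assert ebcdic_order == ['a', 'A', '0']

# A split condition such as key < 'A', correct under Unicode collation,
# therefore partitions an EBCDIC-collated table incorrectly, which is
# why split reads can drop or duplicate rows.
print(unicode_order, ebcdic_order)
```

This is why the recommended mitigation is to avoid character primary keys entirely when the source collation is not Unicode-compatible.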

Symptom Fatal Java error - Exception in thread "main" java.io.IOException: The components of InternalInputInfo shoud not be less than 2 or larger than 2
at com.syncsort.dmexpress.generic.E.a(Utils.java:393)

Area Affected DataFunnel or parallel ingestion jobs that run in Hadoop, and have character primary key data that includes the pipe symbol, "|".

Problem This is a DMExpress problem.

Solutions Avoid using character columns as primary keys. Do not run DataFunnel or parallel ingestion jobs with character primary keys containing pipe symbols in Hadoop.

Reference number DMX-24938

Symptom DMExpress : (COLDSNAV) information about the columns could not be retrieved for "<table name>"
(GCAPIERR) data connector "JDBC" returned failed status for Data Connector API "dmx_connector_getObjectFields".

Area Affected DB2ZOS source tables with lowercase characters in the table name accessed by DMExpress through JDBC.

Problem This is a DMExpress problem.

Solutions When accessing DB2ZOS via JDBC, avoid accessing table names with lowercase characters; use ODBC instead.

Reference number DMX-25000

Symptom Segmentation Fault

Area Affected Tasks with a COBOL-IT Line Sequential source and standard output as the target on SunOS SPARC 64-bit.

Problem This is a DMExpress problem. Standard output target is not supported with a COBOL-IT Line Sequential source.

Solutions Use a file target.

Reference number DMX-24956

Symptom dmxjob reports "DMXPublishLineage aborted". Lineage information is not uploaded to Cloudera Navigator.

Area Affected DMX jobs executed with "lineageEnabled":true and "lineagePublishToNavigator":true, when dmxjob fails to connect to Cloudera Navigator.

Problem This is a DMX-h problem.

Solutions Please contact Syncsort Technical Support at [email protected] or (201) 930-8270 for instructions on how to proceed with uploading the lineage information to Cloudera Navigator.

Reference number DMX-21576

Symptom Lineage information may not display properly in Cloudera Navigator.

Area Affected Lineage published to Cloudera for DMX jobs.

Problem This is a Cloudera Navigator problem.

Solutions Please contact Syncsort Technical Support at [email protected] or (201) 930-8270 for instructions on how to get a patch for Cloudera Navigator from Cloudera.

Reference number DMX-24816

Symptom make command fails with linker errors such as ld: 0711-317 ERROR: Undefined symbol: ._ZNSt8ios_baseC2Ev

Area Affected make command to rebuild Acucobol-GT runtime on AIX to use DMExpress for COBOL program SORT/MERGE verbs

Problem This is an Acucobol Makefile problem. For more information, contact Acucobol support at https://www.microfocus.com/support-and-services/.

Solutions Apply Acucobol-GT patch number 1729.

Reference number DMX-20787

Symptom DMX CDC Director help fails to launch

Area Affected DMX CDC Director when you try to open the help.

Problem This is a DMX CDC Director problem.

Solutions Open the help directly from the installation folder: ${DMEXPRESS_INSTALLATION}/DMXCDC/Windows/director/omnidir.chm.

Reference number DMX-23308

Symptom (XMLNODST) the XML document "<XML document>" does not contain a layout

Area Affected DMExpress jobs containing tasks that have an XML schema with an xsd:import statement specified as External Metadata and with Lineage generation turned on.

Problem This is a DMExpress problem.

Solutions Import the schema specified by xsd:import into the parent schema file and rerun the DMExpress job.

Reference number DMX-23778

Symptom DMExpress : (INERR) an internal error has occurred (-1073741819 in SSSYN_EXEC)

Area Affected Tasks that reference a LOB field when multiple sources are not associated with all layouts.

Problem This is a DMExpress problem.

Solutions Associate all layouts with all source files.

Reference number DMX-23726

Symptom DMExpress : (GCAPIERR) data connector "JDBC" returned failed status for Data Connector API "dmx_connector_openObjectUsingDriver". Please contact Syncsort Technical Support at (201) 930-8270 / [email protected]
(GCMSG) data connector: JDBC, severity: 2, message: Deserialize metadata failed. Metadata file /yarn/nm/usercache/<username>/appcache/application_XXXXXXXXXXXXX_XXXX/container_XXXXXXXXXXXXX_XXXX_XX_XXXXXX/<jobname>_XXXXX_XXXXXXXXXXXXXXX_XXXXXXXXXX_cache is empty, vendor contact information: (201) 930-8270 / [email protected]
DMExpress has aborted

Area Affected Running a DMX-h Intelligent Execution job on a single cluster node when the job has a subjob containing either a Hive or an Impala table connected through a JDBC connection.

Problem This is a DMX-h problem.

Solutions Run the job from the edge node or run the subjob on cluster via Job orchestration.

Reference number DMX-23276

Symptom Incorrect output: DMExpress reads stale data from an intermediate Hive/Impala source table

Area Affected Running a DMX-h Intelligent Execution job on a single cluster node where a Hive/Impala target table connected through a JDBC connection is used as a Hive/Impala source in any of the subsequent tasks.

Problem This is a DMX-h ETL problem

Solutions If possible, use files instead of Hive/Impala tables for intermediate storage. Otherwise, run the tasks that depend on the intermediate Hive/Impala tables on the edge node or in distributed mode.

Reference number DMX-23275

Symptom DMExpress crashes when importing a DTL task from the command line or the Task/Job Editor.

Area Affected Tasks that contain both summarization and sources with embedded layouts (header layout, XML).

Problem This is a DMExpress problem.

Solutions None.

Reference number DMX-23060

Symptom (ORACLE_ERROR) oracle error. ORA-01438: value larger than specified precision allowed for this column.

Area Affected DMExpress tasks with an Oracle target, which load into NUMBER columns with scale greater than 0 using Oracle 11 client on Windows.

Problem This is an Oracle problem.

Solutions Upgrade your Oracle client to version 12 or higher.

Reference number DMX-22527

Symptom Incorrect output

Area Affected A DMX-h IX job that has an aggregate task or a join task, which references an array field.

Problem This is a DMX-h ETL problem.

Solutions Run the job from the edge node.

Reference number DMX-18028

Symptom Deleted records from a DMX Change Data Capture (CDC) source are not applied to a Hive target database table partition.

Area Affected DMX-h tasks that have Hive target database tables with the Apply changes (CDC) disposition and that have partitions, but that do not provide transactional support. When all of the records of a Hive target database table partition should be deleted as per deleted records from a CDC source, the changed records are never applied, which results in no records being deleted from the partition during the apply phase of DMX CDC batch processing.

Problem This is a Hive bug: https://issues.apache.org/jira/browse/HIVE-16842

Solutions Drop the partition by using a HiveQL statement.

Reference number DMX-21755

Symptom The log file of the driver may not be printed at all or may be incompletely printed. In addition, it may print a message "awk: fatal: can't open source file `processDriverLogInClusterMode_awk' for reading (No such file or directory)". Note that there are no functional problems with the application. The application will run correctly.

Area Affected Running DMX-h jobs in Spark’s cluster deploy mode, with subjobs or where multiple tasks can be run independently via Job orchestration.

Problem This is a DMX-h problem.

Solutions None.

Reference number DMX-21556

Symptom Job hangs.

Area Affected DMX-h IX jobs with a custom/extended task or groups of directly connected custom/extended tasks that have piped input from a previous DMX-h task and piped output to a following DMX-h task, when both DMX-h tasks are either aggregate or join tasks and are run as MapReduce.

Problem This is a DMX-h problem.

Solutions Add a DMX-h copy task before the second DMX-h task that runs as MapReduce.

Reference number DMX-21488

Symptom DMExpress : (CDCNFD) The Connect:Direct connection to either pnode (rhel65-test-p05) or snode (z196) or to both failed due to the following:

(CDMSG) (XCMM024I) An invalid pnode id was specified. Either the userid or password was not valid

Area Affected DMExpress tasks with mainframe sources accessed via Connect:Direct when the pnode user name is different from the logged-in user and when no password is specified.

Problem This is a Connect:Direct restriction as indicated by http://www-01.ibm.com/support/docview.wss?uid=swg21532302.

Solutions Either leave the user name blank, use the same user name as the logged-in user, or provide the user name and password for the user.

Reference number DMX-20123

Symptom Incorrect output

Area Affected Running a DMX-h join or aggregate task that references at least one user defined value in a DMX-h Intelligent Execution job in the Hadoop cluster.

Problem This is a DMX-h ETL problem.

Solutions Run a DMX-h join or aggregate task that references user defined values on the ETL server/edge node.

Reference number DMX-21162

Symptom 2017-07-20 02:01:55,685 ERROR [Driver] transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:430)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:185)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:303)
at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)

Area Affected Running DMX-h jobs with an HCatalog source or target on Cloudera Spark 2.1 in cluster deploy mode on a Kerberos-enabled CDH Spark cluster.

Problem This is a Cloudera Spark 2.1 problem.

Solutions Run the application in client deploy mode or upgrade to Cloudera Spark 2.2 or higher. For more information, please see https://www.cloudera.com/documentation/spark2/latest/topics/spark2.html.

Reference number DMX-21086

Symptom The Google Storage bucket does not list any directories.

Area Affected Browsing to a file residing on Google Cloud Storage.

Problem This is a DMX-h problem.

Solutions Upload a placeholder data file to the Google Storage bucket.

Referencenumber DMX-20934

Page 17: Connect v9.10.26 Release Notes - docs.precisely.com

DMX Release Notes

Symptom DMExpress : (SQUTERIN) DMExpress is unable to insert data from staging table <TABLE_NAME> for database "<CONNECTION_URL>" target table <TABLE_NAME>; the system returned the following error:
(GCAPIERR) data connector "JDBC" returned failed status for Data Connector API "dmx_connector_openObjectUsingDriver". Please contact Syncsort Technical Support at (201) 930-8270 / [email protected]
(GCMSG) data connector: JDBC, severity: 2, message: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask, vendor contact information: (201) 930-8270 / [email protected]
(GCMSG) data connector: JDBC, severity: 2, message: Output channel creation failed, vendor contact information: (201) 930-8270 / [email protected]
Job has aborted

Area Affected Running a DMX-h job on Spark in a MapR cluster when the job contains an Amazon S3-backed external Hive target table connected through a JDBC connection.

Problem This is a MapR problem.

Solutions Set the Hive configuration property ‘hive.optimize.insert.dest.volume’ to false. For help on setting Hive configuration properties, see the DMExpress Help topic, "Connecting to Hive data warehouses".
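The property can also be set cluster-wide; the snippet below is a sketch, assuming you have access to the Hive configuration files (the location of hive-site.xml varies by distribution):

```xml
<!-- hive-site.xml -->
<property>
  <name>hive.optimize.insert.dest.volume</name>
  <value>false</value>
</property>
```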

Referencenumber DMX-20576

Symptom DMExpress : (JTOOMANYPAIRS) too many paired recordsDMExpress : (INMEM) there is insufficient memory available to DMExpress to execute (1 in SSJOIN_ALLOC_REC)

Area Affected Running a DMX-h job on Spark on a MapR cluster where the job contains a join task with a Hive source table accessed via JDBC and the extracted records are longer than 65,535 bytes.

Problem This is a DMX-h problem.

Solutions Set the property spark.executor.cores to 1.
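The executor-core limit can be applied globally or per job; spark.executor.cores is a standard Spark property:

```properties
# spark-defaults.conf — restrict each executor to a single core as a workaround
spark.executor.cores  1
```

Per job, the equivalent is --conf spark.executor.cores=1 on the spark-submit command line.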

Referencenumber DMX-20353

Symptom DMExpress : (JTOOMANYPAIRS) too many paired recordsDMExpress : (INMEM) there is insufficient memory available to DMExpress to execute (1 in SSJOIN_ALLOC_REC)

Area Affected Running a DMX-h job on a single cluster node where the job contains a join task with a Hive source table accessed via JDBC and the extracted records are longer than 65,535 bytes.

Problem This is a DMX-h problem.

Solutions Run the job from the edge node.

Referencenumber DMX-20353

Symptom Return code of DMExpress is 0 (success) instead of 100 (exception), despite warning messages regarding target statistics.

Area Affected When Hive analyze queries fail and warning messages are issued, DMExpress tasks running in local mode that load to Hive tables via JDBC and request analysis of table/column statistics.

Problem This is a DMExpress problem.

Solutions Identify the problem via the GCMSG warning message that is printed to the task log.

Referencenumber DMX-19907

Symptom Analysis of Hive target column statistics fails with a Hive syntax error:
FAILED: SemanticException [Error 10001]: Line 1:270 Table not found '<table_name>'.

Area Affected When running Hive versions prior to 1.2, DMExpress tasks analyzing a Hive target specified with a fully-qualified table name.

Problem This is a Hive bug, HIVE-10007.

Solutions Contact your Hadoop distribution vendor for a resolution or omit the database prefix from the table name when the target table is in the default database.

Referencenumber DMX-19907

Symptom Analysis of Hive target column statistics fails with a Hive error:
FAILED: UDFArgumentTypeException Only integer/long/timestamp/float/double/string/binary/boolean/decimal type argument is accepted but date is passed.

Area Affected When running Hive versions prior to 1.2, DMExpress tasks analyzing a Hive target table containing a DATE column.

Problem This is a Hive bug, HIVE-10226.

Solutions Contact your Hadoop distribution vendor for a resolution or disable analysis of column statistics for the table.

Referencenumber DMX-19907

Symptom Analysis of Hive target column statistics fails with a Hive error:
FAILED: SemanticException [Error 10004]: Invalid table alias or column reference '<value>': (possible column names are: <columns>)

Area Affected When running Hive versions prior to 1.2, DMExpress tasks analyzing a Hive target table when a Text or Date/time constant is mapped to a target partition column.

Problem This is a Hive bug.

Solutions Contact your Hadoop distribution vendor for a resolution or disable analysis of column statistics for the table.

Referencenumber DMX-19907

Symptom Task Editor takes a long time to respond.

Area Affected Opening a task that links to external metadata with a large number of fields and reformatting.

Problem This is a DMExpress problem.

Solutions As a temporary workaround, if the linked external metadata will not change while developing the task, rename the external metadata file or its parent directory such that it is no longer accessible from the task. When re-opening the task, choose "Use cached metadata" when prompted.

Referencenumber DMX-20144

Symptom Extracted column data is truncated.

Area Affected DMExpress tasks that extract from a Hive CHAR column when the data stored in the column has a length longer than the original column length after being converted to UTF-8.

Problem This is a DMExpress problem.

Solutions Extract the column using a longer extract length by selecting the other format radio button in the Format section of the Source Database Table dialog.

Referencenumber DMX-20129

Symptom User <user name> does not have privileges for DESCTABLE. The required privileges: Server=<server name>->Db=<database name>->Table=<table name>->action=select; Server=<server name>->Db=<database name>->Table=<table name>->action=insert
DMExpress has aborted.

Area Affected DMExpress tasks accessing a Sentry-enabled Hive source/target where the user has column-level select privilege but does not have table-level privilege.

Problem This is a DMExpress problem.

Solutions Currently, there is no solution available. Table-level select privilege must be granted to the user.

Referencenumber DMX-20100

Symptom User-defined C/C++ function called instead of internal DMExpress function.

Area Affected Invoking DMExpress from a C/C++ program on HP, where the code contains a function with the same name as a DMExpress function.

Problem This is a DMExpress problem.

Solutions Ensure the function names defined in the C/C++ program do not collide with the function names listed in the DMExpress API Reference provided in the Documentation folder of the DMExpress installation.

Referencenumber DMX-19692

Symptom DMExpress : (HDFSNS) HDFS server connections are only supported on Intel-based 64bit Linux systems.

Area Affected Sampling an XML file with a Hadoop Distributed File System (HDFS) connection.

Problem This is a DMX-h ETL problem.

Solutions Move the file to a local or a non-HDFS source.

Referencenumber DMX-19606

Symptom DMExpress : (HDFSNS) HDFS server connections are only supported on Intel-based 64bit Linux systems.

Area Affected Clicking the “Map layout” button in the Header Properties dialog of a file with a Hadoop Distributed File System (HDFS) connection.

Problem This is a DMX-h ETL problem.

Solutions Move the file to a local or a non-HDFS source.

Referencenumber DMX-19605

Symptom DMExpress discards records which have an empty value for a LargeObject column, and incorrectly reports these records as loaded.

Area Affected DMExpress tasks loading data into a DB2 database using the provided DataDirect driver, where the target table contains at least one of the following types of columns in any position but the last: CLOB/BLOB/DBCLOB/LONG VARCHAR/LONG VARGRAPHIC.

Problem This is a Progress DataDirect problem (defect 54444).

Referencenumber DMX-19420

Symptom DMExpress reads zero bytes from MapR-FS

Area Affected Running a DMX-h job on an edge node or running a multi-target DMX-h job on a MapR cluster.

Problem This is a MapR problem fixed in MapR Case #00040694.

Solutions Contact MapR and apply the patch for the MapR Case #00040694: mapr-patch-5.1.0.37549.GA-39375.x86_64.rpm

Referencenumber DMX-18716

Symptom MapR Streams connection is still untrusted after adding a self-signed certificate to the java keystore.

Area Affected DMX-h tasks that contain a MapR Streams connection.

Problem This is a Java 7 problem.

Solutions Update your environment to use Java 8, or use a self-signed certificate with a valid KeyUsage extension.

Referencenumber DMX-18959, http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7018897

Symptom Incorrect output or records are rejected

Area Affected Loading data that includes at least one linefeed or carriage return character as part of the data into Greenplum.

Referencenumber DMX-18741

Symptom DMExpress fails with the error: (CREAT) unable to create "<filename>"

Area Affected DMX-h jobs with more than one distributed target when run in the cluster on recent Hortonworks and Pivotal distributions.

Problem This is a MapReduce problem fixed in MAPREDUCE-6619.

Solutions If your Hadoop distribution doesn’t contain the fix, you can work around the problem by adding $HADOOP_CONF_DIR or /etc/hadoop/conf to the mapreduce.application.classpath property in the mapred-site.xml file.
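A sketch of the mapred-site.xml change; the placeholder stands for whatever classpath entries your distribution already sets, which must be preserved:

```xml
<!-- mapred-site.xml: append the Hadoop configuration directory
     to the existing classpath value -->
<property>
  <name>mapreduce.application.classpath</name>
  <value>...existing entries...,$HADOOP_CONF_DIR</value>
</property>
```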

Referencenumber DMX-18730

Symptom DMExpress crashes when running a task that uses the Greenplum Data Direct ODBC driver to write to a Greenplum target.

Area Affected Tasks defined with a Greenplum target that include user-defined SQL in which the query has a mix of literals and parameters.

Problem This is a DMExpress problem.

Solutions Modify the task to assign the literals to DMExpress values and use the DMExpress values in the SQL query. Alternatively, set the parameter EnableDescribeParam=0 for the data source in the odbc.ini file.
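The odbc.ini change can be sketched as follows; "GreenplumDSN" is a placeholder for the data source name your task actually uses:

```ini
; odbc.ini — disable parameter describe for the Greenplum data source
[GreenplumDSN]
EnableDescribeParam=0
```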

Referencenumber DMX-18600

Symptom Incorrect output

Area Affected DMExpress tasks with float or dfloat fields associated with "Variable length with 2-byte record prefix" or "Variable length, Hadoop distributable record format" sources when run on AIX or Sun platforms.

Problem This is a DMExpress problem.

Referencenumber DMX-18316

Symptom DMExpress may fail with the error: “Partition already present with given partition key values: Data already exists”

Area Affected A DMExpress task with a partitioned HCatalog table as target.

Problem This is a known issue in the Apache HCatalog implementation: https://issues.apache.org/jira/browse/HIVE-6476

Solutions A fix is pending for HIVE-6476.

Referencenumber DMX-18339

Symptom Negative values are incorrectly displayed as valid (black), rather than invalid (red) data.

Area Affected Sampling unsigned packed decimal data in the Task Editor or Job Editor.

Problem This is a DMExpress problem. During sampling, the data is treated as signed packed decimal data, instead of unsigned.

Solutions Run the task to identify invalid unsigned packed decimal data. Invalid data warnings are reported correctly at runtime.

Referencenumber DMX-17369

Symptom Unsigned packed decimal fields are treated as signed packed decimal fields by the Task Editor and Job Editor.

Area Affected Importing metadata or DTL tasks containing record layouts that include unsigned packed decimal fields.

Problem This is a DMExpress problem.

Solutions Use external metadata to define unsigned packed decimal fields to maintain correct runtime behavior, or maintain the task as a DTL task instead of a GUI task.

Referencenumber DMX-17369

Symptom Empty file output.

Area Affected DMX-h User-Defined MR jobs with a single distributed target on the map side, and a single output on the reduce side of the application.

Problem This is a DMX-h ETL problem.

Solutions Update the job to run the tasks that would have run on the Map side of the MapReduce job as a map-only job, and the tasks on the Reduce side as a MapReduce job.

Referencenumber DMX-17889

Symptom Incorrect output.

Area Affected DMX-h jobs run in the cluster with the same file (path and file name) specified on each side of a join task.

Problem This is a DMExpress problem.

Solutions To achieve a self-join in DMX-h cluster jobs, use two copies of the same file with a different path and/or file name.

Referencenumber DMX-17943

Symptom Incorrect output

Area Affected DMX-h Intelligent Execution job that has an aggregate or a join task with a Folded ASCII, Folded EBCDIC, or Customized collating sequence.

Problem This is a DMX-h ETL problem.

Solutions Create a user-defined MapReduce job and ensure that key fields or group-by fields that are equal based on the collating sequence will result in the same hash value for the partition key. For instance, if the Folded ASCII collating sequence is used, make sure a key value of "A" and "a" will result in the same hash value by using DMExpress functions like ToLower or ToUpper on the key field before calling the hash function.

Referencenumber DMX-17783

Symptom DMExpress map task aborts with the following error:
java.io.IOException: (PVALLESS) the map partition number "<number>" should be at least "<number>"

Area Affected Running a join or aggregation task within an Intelligent Execution (IX) DMX-h job on the Hadoop cluster, where the task has an EBCDIC encoded source and has at least one delimited source record layout.

Problem This is a DMExpress problem.

Solutions Avoid using delimited source record layouts for IX aggregation or join tasks when the source encoding is EBCDIC. If the data is originally fixed length and needs to be converted to delimited, do all processing with the data as fixed, and subsequently convert it to delimited in a copy task at the end of the process flow. If the data is originally delimited, preprocess it using a copy task to convert from a delimited to a fixed positional layout.

Referencenumber DMX-17777

Symptom Original field content with enclosing characters is output instead of the expected reduced field content, in which enclosing characters are removed.

Area Affected DMX tasks with a delimited record layout with enclosing characters and variable length arrays with composite fields that contain only one subfield, where the composite field is referenced in an expression as an array element.

Problem This is a DMExpress problem.

Solutions To reference the array element, use the subfield instead of the composite field.

Referencenumber DMX-17752

Symptom DMExpress : (PIVOF) "<filename>" is not a valid file name
(GATTR) unable to retrieve attributes for "<filename>"
() No such file or directory
DMExpress has aborted

Area Affected Running a DMX-h Intelligent Execution job on MapR that has tasks with multiple intermediate target files specified as local files.

Problem This is a DMX-h ETL problem.

Solutions Use direct data flow or set an HDFS connection for the target file.

Referencenumber DMX-17512

Symptom Incorrect output

Area Affected DMX-h Intelligent Execution job that has a sort task with the "Retain only one record" option specified.

Problem This is a DMX-h ETL problem.

Solutions Set the environment variable DMXHadoopExecutionModeForSort=0.

Referencenumber DMX-17504

Symptom Wrong target partition file name: the partition indicator is appended to the part-##### file name instead of to the folder containing the files.

Area Affected DMX-h jobs that run on the cluster and contain tasks that generate a partitioned target.

Problem This is a DMX-h ETL problem.

Solutions Run the task generating the partitioned target on an edge node or single cluster node.

Referencenumber DMX-17095

Symptom DMX-h job aborts with a core dump

Area Affected Running an Intelligent Execution job containing a task with an HCatalog or database lookup source.

Problem This is a DMX-h ETL problem.

Solutions Use an additional copy task to read the HCatalog or database lookup source and generate a target file, and subsequently use this file as the lookup source.

Referencenumber DMX-17094

Symptom DMExpress : (DFNF) data file <filename> does not exist
DMExpress has aborted

Area Affected DMX-h jobs written in DTL where a task’s single output is connected to more than one other task with direct dataflow specified.

Problem This is a DMX-h ETL problem.

Solutions Use only one direct dataflow per file.

Referencenumber DMX-16973

Symptom MapReduce output is not sorted in total order.

Area Affected DMX-h jobs with tasks that request sorted output that run on the cluster.

Problem This is a DMX-h ETL problem.

Solutions Consider using either a single reducer or a sort task that runs on a single node (edge node or single cluster node) at the end of your job.

Referencenumber DMX-16079

Symptom The status of the scheduled job is not updated automatically in the DMExpress Server Dialog when there is a change of status while the Server Dialog is open.

Area Affected Scheduled DMX jobs.

Problem This is a DMExpress problem.

Solutions Click the "Refresh" button in the server dialog to update the status of the scheduled jobs.

Referencenumber DMX-8049

Symptom DMX-h job aborts with the error: “java.lang.OutOfMemoryError: Java heap space”.

Area Affected DMX-h jobs running on Amazon EMR and reading files from Amazon S3.

Problem This is a Hadoop problem, fixed with http://issues.apache.org/jira/browse/HADOOP-11584.

Solutions Set the Hadoop property ‘mapreduce.input.fileinputformat.split.minsize’ to the desired split size.
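A sketch of the property setting; it can go in mapred-site.xml or be passed per job with -D. The 256 MB value below is illustrative, not a recommendation:

```xml
<!-- mapred-site.xml: raise the minimum input split size (in bytes) -->
<property>
  <name>mapreduce.input.fileinputformat.split.minsize</name>
  <value>268435456</value>
</property>
```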

Referencenumber DMX-17132

Symptom HDFS source file paths in DMExpress statistics shown inconsistently

Area Affected DMExpress statistics in mapper logs of MapReduce Join jobs.

Problem This is a DMExpress problem.

Solutions A fix will be available in an upcoming DMExpress release.

Referencenumber DMX-17056

Symptom Amazon S3 source file paths in DMExpress statistics shown inconsistently

Area Affected DMExpress statistics in mapper logs of MapReduce Join jobs.

Problem This is a DMExpress problem.

Solutions A fix will be available in an upcoming DMExpress release.

Referencenumber DMX-16859

Symptom Incorrect Amazon S3 directory path in the Source File dialog.

Area Affected Browsing to Amazon S3 directories from the Task Editor's Source File dialog.

Problem This is a DMExpress problem.

Solutions Type the Amazon S3 directory path in the File Name field of the Source File dialog.

Referencenumber DMX-16911

Symptom DMExpress does not properly convert the format of floating point fields from the IBM format (Excess64) to IEEE.

Area Affected DMExpress tasks with a mainframe source file containing one or more IBM floating point number fields, and a non-mainframe target file including at least one of these fields in either a delimited target record layout with a bit field, or a fixed position target record layout that has one or more of the following:
• a variable length number or date/time field.
• a variable length field that is not at the end of the record.
• the “Compress all fields” option specified.

Problem This is a DMExpress problem.

Solutions Add a subsequent task to your DMX job that writes the target with the desired condition(s) separately from the task that does the data conversion.

Referencenumber DMX-16076

Symptom DMExpress Job : (HRUNMAPO) job was run on the Hadoop cluster as a Map-only job
Exception in thread "main" java.lang.IllegalArgumentException: The bucketName parameter must be specified
… Job has aborted.

Area Affected Accessing Amazon S3 buckets in MapReduce jobs when the AWS secret key entered either on the Amazon S3 Connection dialog or in the DTL or SDK command option /SERVERCONNECTION contains a slash (/).

Problem This is a Hadoop problem: HADOOP-3733.

Solutions Use one of the following solutions:
• Create a new secret access key without a slash.
• Specify the AWS access key ID and secret access key (with the slash) in the Hadoop configuration file, and set the following environment variable: DMX_USE_AWS_ACCESS_KEYS_FROM_HADOOP_CONF=1.
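For the second option, the Hadoop configuration entries can be sketched as below. The property names depend on which S3 filesystem scheme the cluster uses; the s3a names are shown here as an assumption (s3n uses fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey):

```xml
<!-- core-site.xml -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR/SECRET/ACCESS/KEY</value>
</property>
```

Then set DMX_USE_AWS_ACCESS_KEYS_FROM_HADOOP_CONF=1 in the environment of the user submitting the job.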

Referencenumber DMX-15873

Symptom DMX-h ETL job aborts with the following error: "Exception cleaning up: java.io.IOException: No callback registered for TaskAttemptID:XXX"

Area Affected DMX-h tasks run on the cluster, where the task has both an HCatalog source (or sources) and an HCatalog target, and the Hive version is between 0.14 and 1.1.

Problem This is a Hive issue. Details at https://issues.apache.org/jira/browse/HIVE-10213

Solutions Use Hive 1.2 or later.

Referencenumber DMX-16532

Symptom DMX-h tasks abort with the following error: “DMExpress: (DFNF) data file "<file_name>" does not exist DMExpress has aborted.”

Area Affected DMX-h tasks run on a single cluster node when sources or targets within the task are specified using relative paths.

Problem This is a DMExpress problem.

Solutions When running on a single cluster node, ensure that sources and targets are specified using absolute paths.

Referencenumber DMX-16365

Symptom MQGET ended with reason code 2110

Area Affected IBM WebSphere MQ functionality on Linux 2.6 s390x 64-bit

Problem This is a DMExpress problem.

Solutions Do not use IBM WebSphere MQ functionality on Linux 2.6 s390x 64-bit

Referencenumber DMX-15023

Symptom DMExpress : (INWAR) (1 in SSALLOCATEFILTERMEM); planned memory not available, replanning and continuing execution
DMExpress : (CRASHRPTERR) unable to create a crash report at...

Area Affected DMExpress tasks that access Amazon Redshift source or target via ODBC

Problem This is an Amazon Redshift ODBC driver issue.

Solutions Set the "Single Row Mode" option in the Amazon Redshift ODBC DSN settings. See the DMExpress Help topic "Defining ODBC data sources" for help on defining an ODBC data source.
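On Linux/UNIX, the DSN setting corresponds to an odbc.ini keyword; "RedshiftDSN" is a placeholder name, and the keyword is taken from the Amazon Redshift ODBC driver options (verify against your driver version):

```ini
; odbc.ini — fetch one row at a time to cap driver memory use
[RedshiftDSN]
SingleRowMode=1
```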

Referencenumber DMX-16317

Symptom The job will run as the user who started the Hadoop JVM container, not the login user who submits the job. This might result in a file ownership or permissions issue.
DMExpress : (POFIR) insufficient access rights to output file "<file>"

Area Affected Single cluster node execution of DMExpress jobs that have tasks with local target files.

Problem This is a DMExpress problem.

Solutions Use HDFS targets or Kerberos authentication.

Referencenumber DMX-16412

Symptom DMX-h jobs abort with the following error: “Container [pid=nnnnn, containerID=container_nnnnnnnnnnnn_nnnn_nn_nnnnnn] is running beyond virtual memory limits. Current usage: n MB of n GB physical memory used; n GB of n GB virtual memory used. Killing container.”

Area Affected DMX-h jobs run on a single cluster node or in the Hadoop cluster.

Problem This is a YARN configuration reporting problem.

Solutions Ensure that the yarn.nodemanager.vmem-check-enabled property is set to false. The false property setting is recommended by many Hadoop vendors and is the default on many Hadoop platforms.
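The recommended setting, sketched as a yarn-site.xml fragment to be applied on the NodeManager hosts:

```xml
<!-- yarn-site.xml: disable the virtual-memory check that kills containers -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```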

Referencenumber DMX-14914

Symptom DMExpress crashes or generates inconsistent error messages.

Area Affected Saving and opening task and job files to/from Amazon S3 from the Task or Job Editor.

Problem This is a DMExpress problem.

Solutions A fix will be available in an upcoming DMExpress release.

Referencenumber DMX-16304

Symptom DMExpress may hang or issue the following message: An error occurred while reading records from the file "<file>".
DMExpress : (CRASHRPT) a crash report archive has been saved at "<filename>.dmp"

Area Affected Sampling an HDFS file on a MapR cluster.

Problem This is a MapR problem, MapR Case #00021131, BUG-18456.

Solutions Apply the following MapR patch:
Release Note: http://doc.mapr.com/display/components/HTTPFS+1.0-1504+Release+Notes

Package URL: http://package.mapr.com/releases/ecosystem/redhat/mapr-httpfs-1.0.201505061449-1.noarch.rpm

Referencenumber DMX-14060

Symptom Intelligent Execution Layer job produces incorrect or missing results.

Area Affected A standard DMExpress job run in Hadoop, where the job has a task with source/target enclosed-by fields.

Problem This is a DMExpress problem.

Solutions Make sure there are no enclosed-by options set for sources/targets.

Referencenumber DMX-15468

Symptom Intelligent Execution job produces incorrect or missing results.

Area Affected A standard DMExpress job run in Hadoop, where the job has a task with a condition or value that references one of the following functions: IfRecordOrigin, SourceName, or SourceFullName

Problem This is a DMExpress problem.

Solutions Make sure none of these functions are used.

Referencenumber DMX-15468

Symptom MapR directories that belong to the shadow group are displayed as files and cannot be browsed.

Area Affected Browsing a MapR cluster using an HDFS connection in the DMExpress Task Editor.

Problem This is a MapR problem.

Solutions Select the directory as a file, then click the browse button in the Source or Target File dialog again to browse it as a directory.

Referencenumber DMX-13366

Symptom DMExpress : (DBCNNTES) database connection "DatabaseConnection1" to ODBC data source "VerticaODBC" could not be established
(ODBC_ERROR) [unixODBC][Vertica][ODBC] (11560) Unable to locate SQLGetPrivateProfileString function.

Area Affected Running tasks that connect to Vertica databases after upgrading to DMExpress 7.14 or later.

Problem The SQLGetPrivateProfileString function needed by the Vertica driver has been moved to a different library starting in DMExpress 7.14.

Solutions In the vertica.ini file, modify the ODBCInstLib setting to point to <dmx_install_dir>/lib/libodbcinstSS.so instead of libodbcinstSS.so.
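A sketch of the vertica.ini change; <dmx_install_dir> stands for your actual DMExpress installation directory, and the [Driver] section name follows Vertica's ODBC configuration conventions:

```ini
; vertica.ini — point the driver at the library shipped with DMExpress
[Driver]
ODBCInstLib=<dmx_install_dir>/lib/libodbcinstSS.so
```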

Referencenumber DMX-13943

Symptom Task Editor issues the following error message:
“Unable to create the table: [<vendor>][HiveODBC] (35) Error from Hive: error code: ‘40000’ error message: ‘Error while compiling statement: FAILED: ParseException line <line number> mismatched input ‘NOT’ expecting ) near ‘<column name>’ in create table statement’.”

Area Affected Creating a Hive database table from the Task Editor’s Create Database Table dialog.

Problem This is a DMExpress problem when switching between connections with different DBMS in the Create Database Table dialog.

Solutions Select “Yes” in the Nullable column for each row in the Layout grid where the nullability is not specified.

Referencenumber DMX-13788

Symptom DMExpress task fails with the following Vertica error:
“(ODBC_ERROR) [Vertica][VerticaDSII] (20) An error occurred during query execution: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request”

Area Affected DMExpress tasks extracting data from a Vertica LOB column and loading into an Oracle varchar column, when the ToText() function is used to convert the LOB data into text.

Problem This is a DMExpress problem.

Solutions Split the task into two tasks to avoid the problem:
- Unload from the Vertica table to a flat file in the first task.
- Load from the flat file to the Oracle table in the second task.

Referencenumber DMX-13527

Symptom DMExpress installation crashes while verifying Vertica or Netezza connectivity.

Area Affected Installing and uninstalling DMExpress.

Problem This is a DMExpress problem.

Solutions If the installation has crashed, you will need to contact Syncsort Technical Support at [email protected] or (201) 930-8270 for instructions on how to proceed with reinstalling or uninstalling DMExpress. When reinstalling, skip the step for verifying connectivity, and instead verify connectivity in the DMExpress Task Editor’s “Database Connection” dialog once the installation completes; see the “Database Connection dialog” help topic for details.

Referencenumber DMX-12808

Symptom (DBIOCTIV) data type of column "<column name>" in database table "<table name>" is not supported.

Area Affected DMExpress tasks that access bytea or text data type in Greenplum through DataDirect ODBC on Linux.

Problem This is a DataDirect ODBC Greenplum Driver issue.

Solutions If possible, use data type varchar instead of bytea and text for the specified column. Alternatively, you can use SQL text to cast text as varchar.

Referencenumber DMX-11443

Symptom (TBLORCOL) unable to output records to database table "<user defined sql text>". Additional messages will provide further details
(DBIOCTIV) data type of column "<column name>" in database table "<user defined sql text>" is not supported.

Area Affected DMExpress tasks loading data to a Greenplum target through ODBC on Linux, where the target is a user-defined SQL statement of the form "update <table> set ... where ..." and a varchar column is present in the where clause of the update SQL statement.

Problem This is a Greenplum issue.

Solutions If possible, use data type char instead of varchar for the specified column.

Referencenumber DMX-11443

Symptom DMExpress task completed with zero records read.

Area Affected DMExpress tasks extracting data from very large Greenplum tables through ODBC on Windows.

Problem This is a Greenplum ODBC driver issue.

Solutions Set the "Use Declare/Fetch" option in the Greenplum ODBC DSN settings. See the DMExpress Help topic "Defining ODBC data sources" for help on defining an ODBC data source.
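On Windows the option is a checkbox in the DSN configuration dialog; for a file-based DSN the equivalent keyword is typically UseDeclareFetch (an assumption based on psqlODBC-derived Greenplum drivers; verify against your driver documentation):

```ini
; odbc.ini — stream results with a cursor instead of buffering the full set
; ("GreenplumDSN" is a placeholder name)
[GreenplumDSN]
UseDeclareFetch=1
```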

Referencenumber DMX-11685

Symptom DMExpress : (TBLORCOL) unable to output records to database table <table name>. Additional messages will provide further details
(ODBC_ERROR) ERROR: relation <table name> does not exist
Or
(ODBC_ERROR) ERROR: column <column name> does not exist.

Area Affected DMExpress Software Development Kit (SDK) tasks that access Greenplum target through ODBC.

Problem This is a DMExpress problem.

Solutions Enclosing the database table and/or column name in double-quotes (") may resolve the problem.

Referencenumber DMX-11256

Symptom DMExpress : (COLDSNAV) information about the columns could not be retrieved for <table name>
(ODBC_ERROR) ERROR: relation <table name> does not exist
Or
(ODBC_ERROR) ERROR: column <column name> does not exist.

Area Affected DMExpress Software Development Kit (SDK) tasks that access Greenplum source through ODBC.

Problem This is a DMExpress problem.

Solutions Enclosing the database table and/or column name in double-quotes (") may resolve the problem.

Referencenumber DMX-11256

Symptom Job Editor crashes with the following message:
"DMExpress has encountered an unexpected condition from which it cannot recover. A crash report has been created at..."

Area Affected Opening or modifying a job on Windows Server 2003.

Problem This is a DMExpress problem. The Job Editor requires the Windows Imaging Component, which is not a standard component of Windows Server 2003.

Solutions Download and install the Windows Imaging Component.
For 32-bit Windows: http://www.microsoft.com/en-us/download/details.aspx?id=32

For 64-bit Windows: http://www.microsoft.com/en-us/download/details.aspx?id=1385

Referencenumber DMX-11420

Symptom DMExpress hangs waiting for user input on Windows platforms under Cygwin.

Area Affected SDK task with incomplete options or option arguments.

Problem This is a Cygwin issue due to MinTTY being based on Cygwin pty.

Solutions Fix the SDK syntax error and rerun the task.

Reference number DMX-10778

Symptom No data is displayed in the Layout or Sample dialog.

Area Affected Sampling of remote data files using SFTP on Windows 7 with Service Pack 1 and certain hotfixes.

Problem This is a DMExpress problem. DMExpress tries to create/delete a temporary file under C:/ProgramData/DMExpress and is denied access.

Solutions Reduce the "User Account Control" settings to "Never notify", or change the permissions of the C:/ProgramData/DMExpress folder to be less restrictive.

Reference number DMX-10856

Symptom Exception in thread "main" java.io.FileNotFoundException: File <DMExpress job or task> does not exist.
at org.apache.hadoop.util.GenericOptionsParser.validateFiles(GenericOptionsParser.java:388)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:294)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:422)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:168)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:151)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.streaming.HadoopStreaming.main(HadoopStreaming.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)


Area Affected MapReduce jobs that have spaces in either the job or task file name.

Problem This is a DMExpress problem.

Solutions Rename the tasks and jobs so they don't have any spaces in their names.

Reference number DMX-10895

Symptom (INERR) an internal error has occurred (540 in .../SSAdvancedComparisonFunctionNode.cpp)

Area Affected Tasks that use non-ASCII characters in IfContainsAny and IfEqualsAny with the environment variable LC_ALL=C.

Solutions Set the environment variable LC_ALL=en_US.UTF8 or LC_ALL=en_US.UTF-8 depending on your operating system.

Reference number DMX-11020
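The workaround above can be sketched as a shell export; the exact locale name (en_US.UTF-8 vs. en_US.UTF8) depends on which locales your operating system provides, so check `locale -a` first:

```shell
# Work around DMX-11020 by running the task under a UTF-8 locale
# instead of LC_ALL=C.
export LC_ALL=en_US.UTF-8
echo "$LC_ALL"
# dmexpress /run task.dxt   # hypothetical invocation; run your task here
```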

Symptom ERROR streaming.StreamJob: Error launching job , Output path already exists : Output directory <output file name> already exists

Area Affected MapReduce jobs that use the overwrite disposition for target files and the target file already exists.

Problem This is a DMX-h ETL problem.

Solutions Delete the target file prior to running the DMExpress MapReduce job.

Reference number DMX-10840
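A minimal sketch of the workaround, using a hypothetical HDFS output path; `hadoop fs -rm -r` is the standard HDFS delete command, and the echo stands in for actually running it:

```shell
# Hypothetical output directory used by the MapReduce job's target.
TARGET_DIR=/user/dmx/output/daily_summary
# Build the delete command to run before resubmitting the job;
# -skipTrash frees the space immediately (optional).
CMD="hadoop fs -rm -r -skipTrash $TARGET_DIR"
echo "$CMD"
```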

Symptom (TDTERROR) MTDP: EM_DBC_CRASH_A(220): Network connection to the DBC was lost when using Teradata load utilities (MultiLoad, TPump, TPT)
Or
"terminate called after throwing an instance of 'StatusMessage*'" when using the FastLoad utility.

Area Affected DMExpress tasks running on UNIX platforms having both Teradata and Oracle target tables where the Teradata client version is 14.

Solutions Create separate tasks for the Oracle and Teradata targets.

Reference number DMX-10026

Symptom DMExpress : (INMEM) there is insufficient memory available to DMExpress to execute
Or
DMExpress : (INMEM4LU) insufficient memory to complete the lookups (in <context>); estimated required memory is <estimated memory> megabytes

Area Affected Using 32-bit DMExpress to perform lookup from large look-in data which requires more than 2G of memory to store.

Problem This is a DMExpress problem.

Solutions Where feasible, change your task to use Join instead of lookup, or upgrade to 64-bit DMExpress.

Reference number DMX-8426

Symptom (INERR) an internal error has occurred (1 in SSLookupByConditionKeysToRecordMap::setNumberOfRecords).

Area Affected Tasks with a conditional lookup where the look-in source file has more than 134 million records.

Problem This is a DMExpress problem.

Solutions Split the look-in source file into multiple smaller files having fewer than 134 million records and create a conditional value that does the lookup from the multiple smaller files.

Reference number DMX-8200

Symptom DMExpress aborts with error: (MCOLCOMP) the specified match condition compares <field name> field and <column name> column but the target table mapping is between <field name> field and <column name> column

Area Affected DMExpress tasks that update target table columns using a condition as match criteria, when a field or value used in the match condition is mapped to a different column in the Target Database Table dialog.

Solutions Provide the mappings for all the field-column pairs in the match condition. Note that the additional mappings will affect the output to the target table when the 'Update and insert' disposition is selected and the match criteria is not satisfied.

Reference number DMX-8064

Symptom Missing time zone information, possibly without warning, when accessing Timestamp with time zone data type columns.

Area Affected DMExpress tasks that access Timestamp with time zone data type columns through ODBC.

Problem This is an ODBC issue. ODBC defines no date-time data type with time zone. How time zone is handled will depend on the ODBC driver. The behavior may be different in future versions of DMExpress. See the corresponding database ODBC driver manual for more information.

Solutions If available, use DBMS native access instead of ODBC. It may also be possible to use DMExpress character values containing date time with time zone text, and to use SQL text to cast these as Timestamp with time zone.

Reference number DMX-6108

Symptom Degraded performance.

Area Affected Tasks that load to an Oracle table under all of the following conditions:
- the Oracle version is 11, prior to version 11.2.0.3;
- the disposition is 'Update existing rows' or 'Update existing rows and insert new ones';
- the commit interval is not set; and
- 'Abort task if any record is rejected' is not set.

Problem Oracle bug reference number 7019313 affects update statements in batch mode.

Solutions Change one of the specified conditions in the DMExpress task, or upgrade your Oracle server to version 11.2.0.3 or higher.

Reference number DMX-7868

Symptom Multi-byte locale characters are garbled in SYSOUT.

Area Affected Executing JCL SORT steps using DMXMFSRT against a Micro Focus instance which is set up with an EBCDIC character set and a multi-byte locale, when the record format of SYSOUT is not Line sequential.

Problem This is a DMExpress problem.

Solutions Change the record format of SYSOUT to Line Sequential.

Reference number DMX-8154

Symptom (CREAT): unable to create "<target file>"

Area Affected Use case accelerators that create target files, when DMExpress is installed under "C:/Program Files/DMExpress".

Problem This is a DMExpress problem. DMExpress tries to create target files under "C:/Program Files/DMExpress/Examples/UseCaseAccelerators" and is denied access.

Solutions Reduce the "User Account Control" settings to "Never notify", or copy the Examples folder to a user-writable folder.

Reference number DMX-7939

Symptom DMExpress : (RECNOTIN) record was not inserted into table <table name>
(ODBC_ERROR) [DMExpress][ODBC SQL Server Wire Protocol driver][Microsoft SQL Server]The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.
(RECORD) Record is: ...

Area Affected DMExpress tasks running on an AIX 5.3 64-bit platform with a target table in a SQL Server data source that has the DataDirect Bulk Load option enabled, when either Set commit interval or Abort task if any record is rejected is specified for the target, and records are rejected by the database.

Problem This is a DMExpress problem.

Solutions Do not select Table Lock in the BulkLoadOptions attribute for the data source in the odbc.ini file. This is a workaround that may affect bulk load performance. Refer to DataDirect ODBC documentation for more information.

Reference number DMX-7800

Symptom Installation causes the following error but proceeds normally: "Setup.exe encountered an unexpected error". Uninstallation fails with the following error: "SetupDLL\SetupDLL.cpp (391), Setup has experienced an error".

Area Affected Installing and uninstalling DMExpress on Windows 7 Professional.

Solutions Please contact Syncsort Technical Support at [email protected] or (201) 930-8270.

Reference number DMX-7582

Symptom DMExpress : (MFESFNNS) this JCL sort step includes functions that are not currently supported. Micro Focus MFSORT will be used.

Area Affected JCL sort jobs running in Micro Focus Server prior to release 6.0 Service Pack 2 that invoke DMExpress.

Problem This is a problem with the DMExpress/Micro Focus Server interface.

Solutions Set the environment variable DMX_MFSERVER_VERSION as follows:
1) For Micro Focus Server versions prior to release 6.0 Service Pack 2 but after release 5.1 Wrap Pack 2, set DMX_MFSERVER_VERSION=2.
2) For Micro Focus Server versions prior to release 5.1 Wrap Pack 2, set DMX_MFSERVER_VERSION=1.
After setting the environment variable, the Micro Focus Server needs to be restarted.

Reference number DMX-7537
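The two cases above can be sketched as shell exports (set in the Micro Focus Server environment, then restart the server):

```shell
# Case 1: Micro Focus Server after 5.1 Wrap Pack 2 but before 6.0 SP2.
export DMX_MFSERVER_VERSION=2
# Case 2: Micro Focus Server before 5.1 Wrap Pack 2 -- use 1 instead:
# export DMX_MFSERVER_VERSION=1
echo "$DMX_MFSERVER_VERSION"
```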

Symptom DMExpress aborts with exit code 134 (abort signal).

Area Affected Tasks that read decimal or date columns containing NULL values from an Informix source table on Linux 64bit.

Solutions This is an ODBC configuration issue. Add DMXSQLLEN=4 to the Informix section in odbcinst.ini. Refer to "Defining ODBC data sources" for more information.

Reference number DMX-7309

Symptom Running setup.exe in silent mode prompts the user for elevated permissions with the following messages: in Vista, "A process needs your permission to continue"; in Windows 7, "Do you want to allow the following program to make changes to this computer".

Area Affected Systems running Windows Vista and Windows 7.

Solutions Refer to the "Silent Installation" section of the Installation Guide for instructions on how to avoid the prompts.

Reference number DMX-6958

Symptom Task continues to run when Teradata utility error limit setting is exceeded.

Area Affected DMExpress tasks with Teradata targets customized to use the TPT Stream operator.


Problem This is a Teradata problem. For more information, see the Teradata defect report DR 147138 at http://www.teradataatyourservice.com.

Solutions Export the environment variable DMXTeradataTPTStreamPackFactor=1. This is a workaround causing the TPT Stream operator to use a pack factor of 1, which resolves the problem at the expense of degraded performance.

Reference number DMX-6928
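The workaround above, as a shell sketch; export the variable in the environment of the DMExpress task before running it:

```shell
# Force the TPT Stream operator to a pack factor of 1 (slower, but
# makes the operator honor the error limit).
export DMXTeradataTPTStreamPackFactor=1
echo "$DMXTeradataTPTStreamPackFactor"
```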

Symptom TPT Stream operator may write valid records to the error table when neighboring records contain errors.

Area Affected DMExpress tasks with Teradata targets customized to use the TPT Stream operator.

Problem This is a Teradata problem. For more information, see the Teradata defect report DR 146758 at http://www.teradataatyourservice.com.

Solutions Export the environment variable DMXTeradataTPTStreamPackFactor=1. This is a workaround causing the TPT Stream operator to use a pack factor of 1, which resolves the problem at the expense of degraded performance.

Reference number DMX-6928

Symptom The Find dialog does not appear when the Find toolbar button is pressed.

Area Affected On systems where Internet Explorer 6 is installed, using the Find toolbar button on the following dialogs: Job Log - Job Log tab, Job Log - Post-run Log tab, DMExpress Application Upgrade, and Restart Job.

Solutions To work around the problem, click on the HTML viewing control to change focus before clicking on the Find toolbar button, or upgrade Internet Explorer to a version higher than 6.

Reference number DMX-3355

Symptom Error message from the Task Editor: "DMExpress has encountered an unexpected condition from which it cannot recover. The task/job has been saved at <filename>, but any unapplied changes in open dialogs have been lost. Please contact Syncsort Technical Support or your local agent and provide the steps that resulted in this error. Syncsort Technical Support can be contacted at (201) 930-8270 or [email protected]"

Area Affected In the Task Editor, creating a named value and using it as the searchset argument to IfContainsAny, IfEqualsAny, or FindContainedValue functions.

Problem This is a DMExpress problem.

Solutions Instead of using a named value as the searchset argument, create an inlined text constant in the function call itself.

Reference number DMX-6464

Symptom Corrupted data and/or undefined behavior.

Area Affected Tasks running on Unix or Linux that connect to an Informix database through a 64-bit ODBC driver.

Problem 64-bit Informix drivers on Unix and Linux expect SQLLEN to be 4 bytes; by default, DMExpress assumes SQLLEN is 8 bytes on all 64-bit platforms.

Solutions Add "DMXSQLLEN=4" to odbcinst.ini as detailed in the DMExpress Help topic "Defining ODBC data sources".

Reference number DMX-6186
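A sketch of the odbcinst.ini change; the "[Informix]" section name and the file location are examples and must match the driver entry in your own odbcinst.ini:

```shell
# Append the DMXSQLLEN override to a local copy of odbcinst.ini.
# In a real installation, edit the existing [Informix] driver section
# instead of appending a new one.
cat >> odbcinst.ini <<'EOF'
[Informix]
DMXSQLLEN=4
EOF
grep 'DMXSQLLEN' odbcinst.ini
```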

Symptom Unexpected behavior when defining or running tasks with Oracle database sources in DMExpress.

Area Affected Tasks which connect to an Oracle database that has cursor sharing set to SIMILAR.

Problem This is a documented Oracle problem. Please see Oracle bug #5553553.

Solutions Set cursor sharing to EXACT on the Oracle server DMExpress is connecting to.

Reference number DMX-6229


Symptom DMExpress aborts.

Area Affected Tasks where a file is provided as the searchset to the IfContainsAny, IfEqualsAny, or FindContainedValue functions, when processing of the searchset file causes a warning message.

Solutions Avoid the warning by addressing the cause of the message.

Reference number DMX-3458

Symptom Garbled or incorrect characters in the DMExpress installation log.

Area Affected DMExpress installation logs on HP-UX 11.31 and on HP-UX 11.23 with patch PHCO_35226 when the locale is Shift-JIS.

Problem This is an HP-UX problem with the /usr/bin/printf command.

Solutions Solution pending from HP. Please contact HP technical support for an update.

Reference number DMX-5241

Symptom DMExpress : (TDTTUERR) while accessing Teradata database <database> table <table> Teradata utility <utility> terminated with error code "12"; log file: <log file>
DMExpress : (TBLCOUNT) 0 records were committed to target database <database> table <table>, out of which 0 records were loaded successfully and 0 records were rejected

Area Affected Tasks that write to Teradata CHAR or VARCHAR columns longer than 10666 when any of the target data is Unicode encoded.

Problem This is a Teradata limitation.

Solutions If all the Unicode data is Locale compatible, use the Encode() function to convert it to Locale. Refer to DMExpress Help Functions Reference for more details.

Reference number DMX-6026

Symptom DMExpress : (TDTTUERR) while accessing Teradata database <database> table <table> Teradata utility <utility> terminated with error code "12"; log file: <log file>
DMExpress : (TBLCOUNT) 0 records were committed to target database <database> table <table>, out of which 0 records were loaded successfully and 0 records were rejected

Area Affected Tasks that write to Teradata LONG VARCHAR columns when any of the target data is Unicode encoded.

Problem This is a Teradata problem. For more information on this subject see the Teradata defect report DR 115511 on the Teradata website at http://www.teradataatyourservice.com.

Solutions If all the Unicode data is Locale compatible, use the Encode() function to convert it to Locale. Refer to DMExpress Help Functions Reference for more details.

Reference number DMX-6034

Symptom Segmentation fault.

Area Affected DMExpress tasks accessing Teradata through ODBC on Red Hat Linux AS5 32-bit.

Problem This is a DMExpress problem.

Solutions Do not access Teradata through ODBC on Red Hat Linux AS5.

Reference number DMX-5960

Symptom Incomplete data extracted from large object columns.

Area Affected DMExpress tasks extracting CLOB columns from Oracle tables.

Problem This is an Oracle problem. For more information on this subject see the Oracle bug number 7253302 on the Oracle website at http://www.metalink.oracle.com.

Solutions Upgrade Oracle client to either 10.2.0.4 or 11.1.0.7 and then apply patch for bug number 7253302.

Reference number DMX-5739


Symptom SSDMX3659I: DMExpress : (MFESFNNS) this JCL sort step includes functions that are not currently supported. Micro Focus MFSORT will be used.

Area Affected JCL sort jobs running in Micro Focus Server 5.1 prior to WrapPack 2 that invoke DMExpress.

Problem This is a problem with the DMExpress/Micro Focus Server interface.

Solutions Set the environment variable DMX_MFSERVER_VERSION=1 in the Micro Focus Server environment and restart the Micro Focus Server.

Reference number DMX-5637

Symptom (INERR) an internal error has occurred (1 in SSTRPHDL) or (INERR) an internal error has occurred (2226 in sscetij.c) are among possible symptoms.

Area Affected Tasks that access MySQL database tables using MySQL ODBC driver version lower than 5.1.

Solutions Install MySQL ODBC driver 5.1.

Reference number DMX-5406

Symptom Incorrect data inserted into database.

Area Affected Tasks with an ODBC target where the table is in a Vertica database and user-defined SQL of the form "insert into <table> select .." is used to insert records.

Problem This is due to a defect in Vertica ODBC driver 2.05.08.

Solutions Solution pending from Vertica. Please contact Vertica support for an update on the status.

Reference number DMX-5141

Symptom DMExpress hangs.

Area Affected Tasks with a Vertica target table where any of the following is true:
- There is a syntax error in the user-defined SQL used to define the DMExpress target;
- The Vertica username specified in the task differs from the username that created the table; or
- Not all columns in the table are listed in the DMExpress target in the order they appear in the table.

Problem This is a problem with the DMExpress/Vertica interface.

Solutions Modify your application to avoid the problem:
- Correct any syntax errors in the user-defined SQL;
- Connect to the Vertica database with the same username and password that was used to create the target table; and
- If specifying a list of columns to load, provide the names of all the columns in the order they appear in the target table.

Reference number DMX-5142

Symptom No data in the target table.

Area Affected Tasks with a Vertica target table where either of the following is true:
- Not all target columns are mapped; or
- A target user-defined SQL command loads a date/time column.

Problem This is a Vertica problem.

Solutions Modify your application to avoid the problem:
- Map source fields to all target table columns; and
- Do not use target user-defined SQL for tables that have a date/time column.

Reference number DMX-5143

Symptom Incorrect output.

Area Affected Tasks using a filler field within a composite field in a reformat and converting a source character encoding to a different target character encoding.

Problem This is a DMExpress problem. Filler fields will be treated as text and will be converted to the target encoding.

Solutions Currently, there is no solution to avoid the conversion. You may want to consider redefining the filler fields as bit fields to avoid a conversion.

Reference number DMX-4969

Symptom The file <filename> could not be linked because unexpected syntax was encountered while processing the file.

Area Affected Tasks trying to use EBCDIC-encoded XML files.

Problem This is a DMExpress problem. Currently, EBCDIC-encoded XML files are not supported.

Solutions Use DMExpress to convert your XML file and schema to a supported encoding by specifying them as regular text files, and replace the encoding signature in the XML file with the corresponding encoding. You can then use this new XML file and schema with the supported encoding in DMExpress.

Reference number DMX-4871

Symptom Missing enclosing characters in targets when running a task saved with version 4.7.2 or lower.

Area Affected DMExpress tasks that contain source with enclosing characters and target in a different encoding than the source.

Problem This is a DMExpress problem.

Solutions Use DMExpress version 4.7.3 or higher to resave the task.

Reference number DMX-4547

Symptom Incorrect field separators in targets when running a task saved with version 4.7.2 or lower.

Area Affected DMExpress tasks that contain target with a specified field separator and in a different encoding than the source.

Problem This is a DMExpress problem.

Solutions Use DMExpress version 4.7.3 or higher to resave the task.

Reference number DMX-4547

Symptom Pressing the Home or the End key causes the current input language in the language bar to change.

Area Affected The Expression Builder dialog in the DMExpress Task Editor, when the direction of the selected input language (e.g. Arabic, right-to-left) is different from that of the default system locale (e.g. English, left-to-right).

Problem This problem is common to all Microsoft Rich Edit Text boxes; for example, it can also be seen in Microsoft's Wordpad application.

Solutions Use the arrow keys instead of the Home and End keys.

Reference number DMX-4634

Symptom DMExpress : (SOVFL) summarization would cause overflow; multiple equal-keyed records remain
Or
DMExpress : (SUMRECTOOLONG) a summarized record is greater than the maximum record length allowed


Area Affected Aggregate tasks where the record format for some sources is defined via a delimited record layout.

Problem This is a DMExpress problem.

Solutions Increase the longest expected record length in the Performance Tuning dialog.

Reference number DMX-3953

Symptom DMExpress aborts with an internal error in SSTRPHDL.

Area Affected Running multiple DMExpress tasks at the same time on an HP Itanium machine.

Problem This is an HP-UX software problem.

Solutions Upgrade to DMExpress 4.1.2 or later and install or supersede patches PHSS_35979 and PHSS_34853.

Reference number DMX-3857

Symptom DMExpress : (NOCNVDSC) System function iconv_open failed during set up for function Unicode Converter.

Area Affected DMExpress applications which convert locale encoded data and are submitted to a 64-bit AIX server through the GUI.

Problem This is an AIX problem.

Solutions Install IBM AIX APAR IY83580. To download this APAR go to http://www-1.ibm.com/support/docview.wss?uid=isg1IY83580.

Reference number DMX-3358

Symptom DMExpress Job : (SRVCANNOTOPFLRD) unable to open the file for reading () A file or directory in the path name does not exist. Job has aborted.

Area Affected Executing Jobs with DMExpress server 3.3.8 and above that were created using Job Editor versions 2.6.2 through 3.3.7.

Problem This is a DMExpress problem.

Solutions All jobs will need to be resaved.

Reference number DMX-3791

Symptom DMExpress does not obey resource limits in place for the user submitting a task or job.

Area Affected Jobs or Tasks which are submitted to a DMExpress Server through the GUI.

Problem Per-process resource limits are inherited by jobs or tasks when they are submitted to the DMExpress Server through the GUI.

Solutions Set resource limits in the user's profile or in the global profile.

Reference number DMX-3319

Symptom DMExpress hangs while running a job on a server grid.

Area Affected Servers running HP-UX 11.

Problem This is an HP-UX software problem.

Solutions Install patches: PHKL_22840 and PHNE_22397.

Reference number DMX-3486


Symptom Script or program which invokes a DMExpress job is killed when an error occurs in the DMExpress job.

Area Affected DMExpress jobs customized using Korn shell and run from another shell script or external program.

Problem This is caused by a Korn shell limitation.

Workaround The parent script should start dmxjob in a separate process group. In Korn shell, this can be done as follows:
set -m
dmxjob /run job_name.dxj &
set +m
wait $!
For more information, consult the documentation for your shell.

Reference number DMX-3040

Symptom DMExpress : (RECNOTIN) record was not inserted into table ...
(ODBC_ERROR) [Microsoft][ODBC Microsoft Access Driver] Invalid argument.
DMExpress : (RECDUMP) Record is: ...

Area Affected DMExpress tasks with a Microsoft Access target.

Problem Loads into a Microsoft Access database create a significant amount of excess data, and Access has a file size limitation on the MDB file, typically 1 - 2 gigabytes.

Solutions Partition the data into smaller sizes, load each partition separately, and run Microsoft Access's "Compact and Repair Database" utility between loading each partition. When this problem occurs, the DMExpress status log reports both records/data output as well as records/data deleted for the Microsoft Access target; approximate the size of each partition to be less than the number of records or size of data successfully output. The "Compact and Repair Database" utility can be accessed either from the Access Tools menu or from the command line (for more information on the command-line syntax, refer to Microsoft documentation under "How to Use Command-Line Switches in Microsoft Access").

Reference number DMX-3172

Symptom Error message from the Task Editor: error occurred while reading records from the file <filename>.
Or
Incorrect source or target file sampled.

Area Affected Sampling a source file or target file within tasks that use the runtime data directory ($DMXDataDirectory), after the task file is moved to another folder.

Problem The old runtime data directory stored in the task is no longer valid and needs to be changed within the Job Editor.

Solutions Change the runtime data directory in the Job Settings dialog in the Job Editor and either sample the file through the Job Editor or open the task from the Job Editor and sample the file in the Task Editor. Alternatively, use an absolute pathname to sample the source or target file.

Reference number DMX-2861

Symptom The DMExpress Task Editor gives an Internal application error when opening a task saved with version 3.1 or lower, under a different locale.

Area Affected DMExpress tasks that contain a database table as source.

Problem This is a DMExpress problem.

Solutions Delete the reference to the database table and recreate it in the current locale. Alternatively, use DMExpress version 3.1.1 or higher to resave the task in the locale it was created in before opening it under a different locale.

Reference number DMX-2213

Symptom Error message from the DMExpress GUI:
DMExpress is unable to connect to the server
DMExpress Service : (SRVAUTHFAIL) the user name or password is not found


Area Affected The DMExpress Task Editor, Job Editor, and Server Status connecting to a UNIX server which uses Kerberos as its only method of authentication.

Problem libkrb5.so (or .sl) must be available in the shared library path before the DMExpress Service is started on the UNIX server.

Solutions If libkrb5.so (or .sl) exists in the Kerberos lib directory, make sure this directory appears in the appropriate shared library path and restart the DMExpress Service. See the Installation Guide or the Installation section of the online help for directions on updating the shared library path and restarting the DMExpress Service.
If libkrb5.so (or .sl) does not exist in the Kerberos lib directory, create a symbolic link from libkrb5.so to the available Kerberos library, libkrb5.so.<n>. This link can appear in any directory that is part of the shared library path when the DMExpress Service is started. For example, it may be created in the Kerberos lib directory, or in <dmexpress_home>/dmexpress/lib, where <dmexpress_home> is the directory in which DMExpress is installed. Note that if you create the symbolic link in the DMExpress lib directory, you must recreate it and restart the DMExpress Service each time you install DMExpress.

Reference number DMX-2181
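The symbolic-link fix can be sketched as follows; the directory and the versioned library name (libkrb5.so.3) are hypothetical examples standing in for your actual Kerberos lib directory:

```shell
# Demo directory standing in for the Kerberos lib directory.
mkdir -p demo_lib
touch demo_lib/libkrb5.so.3          # the versioned library that exists
# Create the unversioned name the DMExpress Service loads, then
# restart the DMExpress Service so it picks up the link.
ln -sf libkrb5.so.3 demo_lib/libkrb5.so
ls -l demo_lib/libkrb5.so
```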

Symptom A core dump or segmentation fault executing a user written program linked to the DMExpress library.

Area Affected User written programs linked to the DMExpress library, on HP-UX platforms, that call tmpnam() or ctermid() with NULL as a function argument.

Problem In a multi-threaded environment, the behavior of tmpnam() and ctermid() with a NULL argument is not defined in the POSIX standard. See POSIX Standard 1003.1, 1996, sections 4.7.1 and 8.2.5.

Solutions To ensure the correct behavior of tmpnam() and ctermid(), supply a pointer to a buffer as the argument to tmpnam() and ctermid().

Incorrect way of using tmpnam():

#include <stdio.h>
char *file_name = tmpnam(NULL);
/* file_name may be NULL after execution of the above statement */

Correct way of using tmpnam():

#include <stdio.h>
char file_name[L_tmpnam]; /* L_tmpnam is a constant defined in stdio.h */
tmpnam(file_name);

Symptom DMExpress is unable to connect to a FTP server.

Area Affected A DMExpress task or job that contains a large number of remote files accessed using FTP.

Problem This is a system problem.

Solutions Check whether the association of FTP connection with files within a DMExpress task can be avoided. An FTP connection is needed only to access files that are not directly accessible from the system where the job is run.

Reference number DMX-1630

Fixes That Change Product Behavior

The following are fixes that change product behavior in the current release of DMExpress.

Area Affected File specified by the environment variable DMX_HADOOP_CONF_FILE does not exist.

Old Behavior When running in Spark, DMX-h aborts. When running in MapReduce, DMX-h runs without generating warning messages.

New Behavior When running in Spark or in MapReduce, DMX-h generates a warning message and continues to run.

Release changed in 9.5.1

Reference number DMX-21085

Area Affected Custom/extended tasks


Old Behavior When all DMX-h tasks of an IX job are cluster eligible:
- If a custom task exists, all tasks of the job run on the local ETL server/edge node unless otherwise specified by the environment variable DMX_HADOOP_ON_VALIDATION_FAILURE.
- Extended tasks or groups of directly connected extended tasks run on the cluster with all DMX-h tasks when they have piped inputs from previous DMX-h tasks and piped outputs to following DMX-h tasks.

New Behavior When all DMX-h tasks of an IX job are cluster eligible:
- Custom/extended tasks that have no inputs/outputs run on the local ETL server/edge node, and all DMX-h tasks run on the cluster.
- Both custom and extended tasks or groups of directly connected custom/extended tasks run on the cluster with all DMX-h tasks when they have piped inputs from previous DMX-h tasks and piped outputs to following DMX-h tasks.

See the DMExpress help topic, “Developing Intelligent Execution Jobs.”

Release changed in 9.4.28

Reference number DMX-21405

Area Affected File specified by the environment variable DMX_HADOOP_CONF_FILE does not exist.

Old Behavior When running on Spark in client deploy mode, DMX-h will continue to run without aborting.

New Behavior When running on Spark in client or cluster deploy mode, DMX-h will abort if the file specified by the environment variable DMX_HADOOP_CONF_FILE does not exist.

Release changed in 9.4.3

Reference number DMX-20140

Area Affected Execution units in the DMExpress Job Editor renamed to restartability units.

Old Behavior In the DMExpress Job Editor, a group of tasks that share data in the form of pipes, previous/next task pairs, or direct data flows is visible through the "Show Execution Units" context menu item and View menu item.

New Behavior In the DMExpress Job Editor, a group of tasks that share data in the form of pipes, previous/next task pairs, or direct data flows is visible through the "Show Restartability Units" context menu item and View menu item.

Release changed in 9.2.2

Reference number DMX-19884

Area Affected External Metadata dialog in the DMExpress Task Editor.

Old Behavior The DMExpress Task Editor displays errors for invalid metadata files when you select a file in the External Metadata dialog without specifying a metadata type.

New Behavior The DMExpress Task Editor no longer displays errors for invalid metadata files when you select a file in the External Metadata dialog without specifying a metadata type.

Solutions to Preserve Old Behavior

Specify the metadata type before or after selecting the file name and select the Refresh button.

Release changed in 9.0.9

Reference number DMX-18997

Area Affected External Metadata dialog in the DMExpress Task Editor.

Old Behavior The DMExpress Task Editor detects the metadata type and prompts you to switch to this type when you specify a metadata file name with an incompatible metadata type.


New Behavior The DMExpress Task Editor no longer detects the metadata type, nor does it prompt you to switch to this type when you specify a metadata file name with an incompatible metadata type. DMExpress generates the errors that result from parsing the metadata file with the wrong metadata type.

Solutions to Preserve Old Behavior

Specify the correct metadata type for your metadata file.

Release changed in 9.0.9

Reference number DMX-18997

Area Affected Selecting a COBOL copybook in the External Metadata dialog in the Task Editor

Old Behavior DMExpress Task Editor defaults to Micro Focus COBOL data format.

New Behavior DMExpress Task Editor defaults to VS COBOL II Release 4 format.

Solutions to Preserve Old Behavior

Select the appropriate COBOL copybook format if the default is not correct. Existing tasks are not affected.

Release changed in 9.0.7

Reference number DMX-6841

Area Affected IX job that is not eligible to run on the cluster

Old Behavior The job runs by default on a single cluster node.

New Behavior The job runs by default on the edge node.

Release changed in 9.0.4

Reference number DMX-18771

Area Affected Driver version number in DataDirect driver file names.

Old Behavior Version number was 26.

New Behavior Version number is 27. For existing ODBC data source configurations, ensure that the version numbers in the DataDirect driver file names, which are listed in the driver properties section in the odbcinst.ini and odbc.ini files, are updated from 26 to 27.
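A sketch of the kind of change involved, using a hypothetical driver entry and file name (the actual names are whatever is listed in your odbcinst.ini and odbc.ini):

```ini
; Hypothetical odbcinst.ini entry -- only the version suffix in the
; driver file name changes from 26 to 27.
[DataDirect Oracle Wire Protocol]
; before: Driver=/opt/dmexpress/lib/ddora26.so
Driver=/opt/dmexpress/lib/ddora27.so
```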

Release changed in 8.6.3

Reference number DMX-17942

Area Affected A packed decimal field containing an invalid value (anything but A-F) in the sign nibble (the last 4 bits).

Old Behavior Treated as a valid number; for example, 0X40 is treated as 4 even though the trailing 0 is not a valid sign value.

New Behavior Treated as an invalid number; an appropriate warning message is issued, and the value is treated as 0 for all computations.
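The rule is easy to state in code: the sign nibble is the last hex digit of the packed value, and only A-F is a valid sign. A quick shell sketch (the helper function is hypothetical, not part of DMExpress):

```shell
# Validate the sign nibble (the last hex digit) of a packed decimal
# value given as a hex string. Only A-F is a valid sign; a digit in
# the sign position (e.g. the trailing 0 in 0x40) is invalid, and
# DMExpress 8.5.15+ warns and treats such a value as 0.
is_valid_packed_sign() {
    hex=$1
    last=${hex#"${hex%?}"}          # last character of the hex string
    case "$last" in
        [a-fA-F]) return 0 ;;       # A-F: valid sign nibble (C/D/F common)
        *)        return 1 ;;       # 0-9 in the sign position: invalid
    esac
}

is_valid_packed_sign "4C" && echo "4C: valid sign"
is_valid_packed_sign "40" || echo "40: invalid sign, treated as 0"
```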

Release changed in 8.5.15

Reference number DMX-18267


Area Affected DMXReport output for DMExpress tasks with Mainframe record formats in the Source Pipe dialog.

Old Behavior DMXReport output has the word "record" in the name of the record format (e.g., "Mainframe fixed record length").

New Behavior DMXReport output eliminates the word "record" in the name of the record format (e.g., "Mainframe fixed length").

Release changed in 8.5.8

Reference number DMX-17960

Area Affected Tasks that map to COBOL copybooks containing unsigned packed decimal data. Unsigned packed decimal fields are indicated by a USAGE clause of COMP-3 or PACKED-DECIMAL and a PICTURE (PIC) clause that does not contain an "S" (sign).

Old Behavior All packed decimal data is treated as signed. Negative values are allowed and prefer the "D" sign nibble. Positive values prefer the "C" sign nibble.

New Behavior Unsigned packed decimal fields are now treated as unsigned. Negative values are not allowed and will cause validation warnings such as CVROP, CVNEGTOUNSIGN or NEG2USEX. Valid (positive) values in unsigned packed decimal prefer the "F" sign nibble rather than "C" on output. See the DMExpress Help topic, "Decimal, packed, unsigned number data format."

Solutions to Preserve Old Behavior

Modify the COBOL copybook by adding an "S" to the beginning of the PIC clause of each unsigned packed decimal field that you would prefer to treat as signed packed decimal.
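The copybook change amounts to one character; a hypothetical field for illustration (the field name and picture are made up):

```cobol
      * Unsigned packed decimal: the new (8.5.6+) unsigned rules apply.
       05  WS-AMOUNT      PIC 9(7)V99   COMP-3.
      * Add "S" to the PIC clause to keep the old signed treatment.
       05  WS-AMOUNT      PIC S9(7)V99  COMP-3.
```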

Release changed in 8.5.6

Reference number DMX-17123

Area Affected DMExpress tasks referencing a variable length array in a reformat.

Old Behavior The variable length array is padded to the maximum number of elements.

New Behavior The actual number of elements in the input record is output to the variable length array. A task created through the DMExpress Editor must be resaved to result in the correct behavior.

Release changed in 8.4.18

Reference number DMX-16897

Area Affected DMExpress tasks connecting to HDFS filesystems, where the NameNode hostname specified in the task's HDFS connection does not match the NameNode hostname or HA (High Availability) service name detected at runtime.

Old Behavior The task fails with the following messages: DMExpress : (HDFSCNNF) HDFS connection to Hadoop namenode (<namenode_host>) failed due to the following reason: (HDFSNNM) server name "<namenode_host>" specified for the connection does not match the namenode host name "<hdfs_server>" in your Hadoop configuration

New Behavior The task no longer fails, and the default filesystem (specified by the Hadoop configuration property "fs.defaultFS" or "fs.default.name") takes precedence over the value specified in the task. If the filesystem is an HDFS-compatible filesystem other than HDFS, no messages are issued. If the filesystem is HDFS, the following informational message is issued without affecting the successful completion of the task: DMExpress : (HDFSNNM) an HDFS connection specifies a server name that does not match the namenode hostname "<hdfs_server>" specified in your Hadoop configuration; the value specified in the Hadoop configuration will be used

Release changed in 8.4

Reference number DMX-16131

Area Affected DMExpress Data Connector API.

Old Behavior The struct dmx_connector_FieldProperties in the DMExpress Data Connector API had eight fields and did not have the ability to set field properties.

New Behavior dmx_connector_FieldProperties now has an additional field ‘int m_usageIndicator’ for indicating field usage. Existing connectors will need to be modified to initialize the new field. See DMExpress_DataConnector_API_Reference.pdf in the DMExpress/Documentation folder.

Release changed in 8.4

Reference number DMX-15153

Area Affected DMExpress Data Connector Java API.

Old Behavior The FieldProperties class in the DMExpress Data Connector Java API did not have a field usage indicator.

New Behavior FieldProperties now has an additional private data member ‘usageIndicator’ for indicating field usage. There is also an additional member function ‘getUsageIndicator()’ to return its value and an additional constructor to set its value. See DMExpress_DataConnector_JavaAPI_Reference.pdf in the DMExpress/Documentation folder.

Release changed in 8.4

Reference number DMX-15153

Area Affected DMX-h jobs run in Hadoop MapReduce.

Old Behavior The default MapReduce version was MRv1. To run in YARN (MRv2), you needed to set DMX_HADOOP_MRV to 2.

New Behavior The default MapReduce version is YARN (MRv2) if MRv1 is not auto-detected. To ensure running in MRv1, set DMX_HADOOP_MRV to 1.
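The default flip can be pinned explicitly in a job wrapper (a sketch; the variable name and values come from the entry above):

```shell
# Before 8.2.1 the default was MRv1, and DMX_HADOOP_MRV=2 selected YARN.
# From 8.2.1 on, YARN (MRv2) is the default when MRv1 is not
# auto-detected, so set MRv1 explicitly if you still depend on it:
export DMX_HADOOP_MRV=1   # classic MRv1
# export DMX_HADOOP_MRV=2 # YARN (MRv2)
```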

Release changed in 8.2.1

Reference number DMX-17621

Area Affected DMExpress Kerberos environment variables

Old Behavior DMExpress used the following DMExpress Kerberos environment variables to manage Kerberos tickets: DMX_HADOOP_KERBEROS_PRINCIPAL, DMX_HADOOP_KERBEROS_KEYTAB, and DMX_HADOOP_KERBEROS_CACHE.

New Behavior With the introduction of Kerberos authentication for Teradata databases, DMExpress Kerberos environment variables are no longer Hadoop specific. The environment variables are renamed as follows: DMX_KERBEROS_PRINCIPAL, DMX_KERBEROS_KEYTAB, and DMX_KERBEROS_CACHE. In addition, the following new environment variable is introduced: DMX_KERBEROS_CACHE_PREFIX. Backward compatibility for the deprecated environment variables is retained. See the DMExpress help topic, "Kerberos Authentication in DMExpress jobs."
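The rename maps one-for-one; a sketch of an 8.0.6+ environment setup (principal and paths are placeholders, the variable names come from the entry above):

```shell
# Generic names introduced in 8.0.6; the old DMX_HADOOP_KERBEROS_*
# names are deprecated but still honored for backward compatibility.
export DMX_KERBEROS_PRINCIPAL="etl_user@EXAMPLE.COM"   # was DMX_HADOOP_KERBEROS_PRINCIPAL
export DMX_KERBEROS_KEYTAB="/etc/security/etl.keytab"  # was DMX_HADOOP_KERBEROS_KEYTAB
export DMX_KERBEROS_CACHE="/tmp/krb5cc_dmx"            # was DMX_HADOOP_KERBEROS_CACHE
```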

Release changed in 8.0.6

Reference number DMX-15296

Area Affected Defining Teradata utility settings

Old Behavior Define parameters through the Teradata Utility Settings dialog.

New Behavior Define parameters through the Source and Target Database Table dialogs.

Release changed in 8.0.1

Reference number DMX-12226

Area Affected Netezza load validation when a DMExpress CHAR field with n characters is mapped to a Netezza CHAR column with fewer than n characters.

Old Behavior Record is truncated and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to rejected records log file. DMExpress issues a warning message. Additional messages may be listed in the rejected records exception messages log file.


Release changed in 7.15.3

Reference number DMX-6556

Area Affected Netezza load validation when a DMExpress EN field is mapped to a Netezza INTEGER column with data that forces a numeric overflow.

Old Behavior Value is truncated to maximum integer and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to rejected records log file. DMExpress issues a warning message. Additional messages may be listed in the rejected records exception messages log file.

Release changed in 7.15.3

Reference number DMX-6556

Area Affected Netezza load validation when a DMExpress EN field is mapped to a Netezza NUMERIC column with data that exceeds precision.

Old Behavior Precision is truncated and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to rejected records log file. DMExpress issues a warning message. Additional messages may be listed in the rejected records exception messages log file.

Release changed in 7.15.3

Reference number DMX-6556

Area Affected Netezza load validation when a DMExpress EN field is mapped to a Netezza NUMERIC column with data that exceeds scale.

Old Behavior Field is truncated and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to rejected records log file. DMExpress issues a warning message. Additional messages may be listed in the rejected records exception messages log file.

Release changed in 7.15.3

Reference number DMX-6556

Area Affected DMExpress silent installation.

Old Behavior The installation does not prompt the user to select between a trial and a licensed version, so the recorded installation response file does not include that response.

New Behavior The installation prompts the user to select between a trial and a licensed version. Re-record the installation to capture the response to this new option and use the new response file for silent installations.

Release changed in 7.14.12

Reference number DMX-12907

Area Affected Tasks that extract Unicode encoded character data from Teradata sources using the TTU access method.

Old Behavior Character data from Teradata sources was extracted in the default Teradata client character set interpreted as ASCII.

New Behavior For new tasks or tasks resaved in the current release of DMExpress, the default extraction format of Teradata character data changes. For Teradata sources that do not have any column defined as Unicode, character data will be extracted in the default Teradata client character set and interpreted in the system locale (instead of ASCII). For Teradata sources that have one or more columns defined as Unicode, character data will be extracted as UTF-8.

Solutions to Preserve Old Behavior

Tasks that are not resaved in the current release of the Task Editor will not change. To resave affected tasks and maintain existing behavior, select the option to keep existing behavior when prompted at task open, or when running the DMExpress Application Upgrade utility (see the DMExpress Application Upgrade help topic for details). Alternatively, manually change each character column to extract in Locale encoding and treat as ASCII.


Release changed in 7.14.12

Reference number DMX-6708

Area Affected Greenplum load validation when a DMExpress CHAR field with n characters is mapped to a Greenplum CHAR column with fewer than n characters.

Old Behavior Record is truncated and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to error table. DMExpress issues a warning message.

Release changed in 7.14.7

Reference number DMX-10983

Area Affected Greenplum load validation when a DMExpress EN field is mapped to a Greenplum INTEGER column with data that forces a numeric overflow.

Old Behavior Value is truncated to maximum integer and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to error table. DMExpress issues a warning message.

Release changed in 7.14.7

Reference number DMX-10983

Area Affected Greenplum load validation when a DMExpress EN field is mapped to a Greenplum NUMERIC column with data that exceeds precision.

Old Behavior Precision is truncated and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to error table. DMExpress issues a warning message.

Release changed in 7.14.7

Reference number DMX-10983

Area Affected Greenplum load validation when a DMExpress EN field is mapped to a Greenplum NUMERIC column with data that exceeds scale.

Old Behavior Field is truncated and loaded. DMExpress issues a warning message.

New Behavior Record is rejected and sent to error table. DMExpress issues a warning message.

Release changed in 7.14.7

Reference number DMX-10983

Area Affected DMX-h tasks with HDFS targets whose disposition is ‘Overwrite file’.

Old Behavior Task aborts if the HDFS target exists.

New Behavior The HDFS target (and its contents if it is a directory) is removed, and the task runs.

Release changed in 7.14.5

Reference number DMX-10616


Area Affected DMX-h Sort accelerator.

Old Behavior DMX-h calculated the input datasize to mappers/reducers based on the HDFS input split size and the number of mappers/reducers.

New Behavior New options, dmx.map.datasize.useHDFSsplits and dmx.reduce.datasize.useHDFSsplits, when false (default), allow the user to specify the datasize using options dmx.map.datasize and dmx.reduce.datasize to eliminate the need to query the namenode for the input split size by each mapper/reducer. See the DMX-h Sort Edition User Guide for details.
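Sketched as a set of option assignments (the angle-bracket values are placeholders, and the exact place these options are supplied is per the DMX-h Sort Edition User Guide):

```properties
# Leave the defaults (false) and supply the datasize explicitly, so
# each mapper/reducer need not query the NameNode for split sizes.
dmx.map.datasize.useHDFSsplits=false
dmx.reduce.datasize.useHDFSsplits=false
dmx.map.datasize=<map input datasize>
dmx.reduce.datasize=<reduce input datasize>
```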

Release changed in 7.12.11

Reference number DMX-12349

Area Affected Vertica load validation when a DMExpress CHAR field with more characters is mapped to a Vertica CHAR column with fewer.

Old Behavior Record inserted with truncation and warning message (for Vertica 5 and earlier).

New Behavior Record inserted with silent truncation (for Vertica 6 and later).

Release changed in 7.12.8

Reference number DMX-12122

Area Affected Vertica load validation when a DMExpress EN field is mapped to a Vertica INTEGER column with data that forces a numeric overflow.

Old Behavior Value is truncated to max int and loaded with a warning (for Vertica 5 and earlier).

New Behavior Record is rejected with a warning and added to the rejected records log file (for Vertica 6 and later).

Release changed in 7.12.8

Reference number DMX-12122

Area Affected Vertica load validation when a DMExpress EN field is mapped to a Vertica NUMERIC column with data that exceeds precision.

Old Behavior Excess precision is truncated with a warning (for Vertica 5 and earlier).

New Behavior Excess precision is silently truncated (for Vertica 6 and later).

Release changed in 7.12.8

Reference number DMX-12122

Area Affected Vertica load validation when a DMExpress EN field is mapped to a Vertica NUMERIC column with data that exceeds scale.

Old Behavior Field is truncated with a warning (for Vertica 5 and earlier).

New Behavior Record is rejected with a warning and added to the rejected records log file (for Vertica 6 and later).

Release changed in 7.12.8

Reference number DMX-12122

Area Affected Database Connection dialog.

Old Behavior The DBMS drop down list displayed ODBC as an option.

New Behavior The DBMS drop down list displays Other as the option that replaces ODBC. From the Access method drop down list, you can select ODBC.

Release changed in 7.11.6

Reference number DMX-11432

Area Affected Running DMXJob from the command line with incorrect options.

Old Behavior The error was output using the settings of the /LOG option.

New Behavior The error is now output in TEXT format.

Release changed in 7.4.5

Reference number DMX-7282

Area Affected Running DMXJob from the command line.

Old Behavior The /LOG option could be included anywhere on the command line, even in an illegal location.

New Behavior The placement of the /LOG option must follow documented syntax.

Release changed in 7.4.5

Reference number DMX-7282

Area Affected DMExpress tasks that use time zone columns from Oracle sources.

Old Behavior The TZNOTRET message is issued and time zone information is not extracted.

New Behavior The TZNOTRET message no longer appears, and time zone information is extracted.

Release changed in 7.4.2

Reference number DMX-7632

Area Affected DMExpress installation and licensing.

Old Behavior License feature was named "Micro Focus Enterprise Server Integration".

New Behavior License feature is now called "Mainframe Re-hosting Integration".

Release changed in 7.2.6

Reference number DMX-7978

Area Affected Arithmetic calculations that result in fractional numbers.

Old Behavior The fractional part of the result is not precise.

New Behavior The fractional part of the result is precise in most cases.

Release changed in 6.1.14

Reference number DMX-1738

Area Affected Saving multiple text logs in the "DMExpress Server dialog".

Page 47: Connect v9.10.26 Release Notes - docs.precisely.com

DMX Release Notes

Old Behavior Logs were concatenated into a single text file.

New Behavior Each text log gets saved in a separate file.

Release changed in 6.1.6

Reference number DMX-6223

Area Affected DMExpress tasks that update existing rows in Teradata targets with triggers, foreign key constraints or non-primary indexes.

Old Behavior The task would run but the transaction order would not be preserved.

New Behavior The task now aborts. In order to update a Teradata target having the above properties, customize the load to use TPump. See the topic "Teradata Utility Settings dialog" in the DMExpress Help.

Release changed in 6.0.4

Reference number DMX-6305

Area Affected DMExpress tasks that use the text functions IfContainsAny, IfEqualsAny, FindContainedValue, and URLDecode.

Old Behavior These features were categorized under the product licensing feature "String Functions".

New Behavior These features are now categorized under the product licensing feature "Advanced Text Processing". See "Licensing" in the DMExpress Help.

Solutions to Preserve Old Behavior

Please contact [email protected] if you are currently licensed for the Advanced Data Management license package and using any of these functions. You will need new license keys that enable the Advanced Text Processing feature in order to use these text functions.

Release changed in 5.5.4

Reference number DMX-6291

Area Affected Filters that compare a delimited character field with a string shorter than the field length in a non-ASCII locale.

Old Behavior The shorter string is incorrectly not padded to the length of the field with the pad byte.

New Behavior The shorter string is padded to the length of the field with the pad byte.

Release changed in 5.4.9

Reference number DMX-6143

Area Affected Encoded source files with array fields.

Old Behavior The source file encoding was not being passed to individual array fields; as a result, array fields were always treated as ASCII.

New Behavior The source file encoding is being passed to individual array fields.

Solutions to Preserve Old Behavior

Do not assign an encoding to the source file; instead, change each individual field to have the encoding of the source file, and leave the array fields as ASCII encoded.

Release changed in 5.3.7

Reference number DMX-7878


Area Affected User defined SQL statement in DMExpress Task Editor.

Old Behavior The '\' character had to be escaped with another '\', if it was part of a SQL statement defined in the Source or Target Database dialogs.

New Behavior There is no longer a need for escaping, so '\\' will actually be treated as two '\' characters.

Solutions to Preserve Old Behavior

If you have any SQL statements with '\\', change them to '\'.
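As an illustration (a hypothetical statement; only the escaping rule comes from the entry above):

```sql
-- Before 5.3.5, a backslash in a user-defined SQL statement had to be
-- doubled:
SELECT name FROM files WHERE dir = 'C:\\data'
-- From 5.3.5 on, a single backslash is taken literally, and '\\' now
-- means two backslash characters:
SELECT name FROM files WHERE dir = 'C:\data'
```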

Release changed in 5.3.5

Reference number DMX-5891

Area Affected Definition of Teradata source or target in DMExpress Task Editor.

Old Behavior In the Source and Target Database dialogs, unqualified table names were searched under the user name defined in the Teradata connection.

New Behavior Unqualified table names are now searched under the default database for the Teradata connection.

Solutions to Preserve Old Behavior

Use fully qualified table names.

Release changed in 5.2.17

Reference number DMX-5456

Area Affected Tasks with a Teradata target that were saved with DMExpress version 4.7.12.10 or lower.

Old Behavior Task may have been running successfully.

New Behavior Task aborts with the following message: (DBMSABM) abort if error is mandatory for the "Teradata" DBMS.

Solutions to Preserve Old Behavior

Resave the task with the new version of DMExpress.

Release changed in 5.2.4

Reference number DMX-4865

Area Affected Non-English versions of DMExpress running tasks when the environment variable SYNCSORT_INSERT is defined.

Old Behavior The job log contained English messages "Contents of ..."

New Behavior The job log contains localized messages "Contents of ..."

Release changed in 5.2

Reference number DMX-4784

Area Affected Composite/array fields used in reformat.

Old Behavior The entire composite/array field was treated as a single field.

New Behavior Each elementary field inside the composite/array field will be created separately. For example, if a fixed position composite field was being reformatted into a delimited layout:

Input: 01312008abc
Output before the change: 01312008,abc
Output after the change: 01,31,2008,abc

Solutions to Preserve Old Behavior

In the Reformat Target Layout dialog, use the "Set target format" option to turn the composite/array field into a single field. Consider using enclosing characters for the target file if you need to reuse the target layout in the next tasks. See the topic "Target File dialog" in the online help to learn more about using enclosing characters.

Release changed in 4.7.3

Reference number DMX-3932

Area Affected Tasks with an encoding specified for fields in a record layout which is associated with a source that has a different encoding. Tasks with an encoding specified for fields in a reformat which is applied to a target that has a different encoding.

Old Behavior Field level encoding would determine the field encoding and override file level encoding.

New Behavior File level encoding will determine the field encoding and override field level encoding.

Solutions to Preserve Old Behavior

When field level encoding is desired, leave the file level encoding unspecified.

Release changed in 4.7.3

Reference number DMX-4040

Area Affected The DMExpress service used for communication between the DMExpress client and the DMExpress server.

Old Behavior The DMExpress Service only used ports for the port mapper and an arbitrary port provided by the port mapper.

New Behavior The DMExpress Service requires the use of a new port, 32636, in addition to the previously used ports. This new port needs to be accessible through any firewalls and/or security policies present between the systems running the DMExpress client and the system running the DMExpress server.
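A quick reachability check from a client machine can confirm the firewall change (the server host name is a placeholder; only port 32636 comes from the entry above):

```shell
# Probe the fixed DMExpress service port added in 4.7.1 using bash's
# /dev/tcp pseudo-device; a timeout or refusal means a firewall or a
# stopped service is in the way.
host=dmx-server.example.com     # placeholder host name
if timeout 3 bash -c "echo > /dev/tcp/$host/32636" 2>/dev/null; then
    echo "port 32636 reachable"
else
    echo "port 32636 not reachable; check firewalls between client and server"
fi
```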

Release changed in 4.7.1

Reference number DMX-4049

Area Affected Collation and comparison of Unicode encoded values.

Old Behavior Values are collated and compared according to the Unicode collation algorithm found at http://www.unicode.org/unicode/reports/tr10/ using a collation strength of three levels for comparisons.

New Behavior Values are collated and compared in Unicode code point order.
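DMExpress's comparison is internal, but the difference between code point order and locale-aware collation is easy to see with the shell sort utility (an illustration only, not the DMExpress implementation):

```shell
# Code point (byte) order: all uppercase ASCII sorts before lowercase.
printf 'b\nA\na\nB\n' | LC_ALL=C sort
# prints: A, B, a, b (one per line)

# A language locale (if installed) collates case-insensitively first,
# giving an order like: a, A, b, B.
printf 'b\nA\na\nB\n' | LC_ALL=en_US.UTF-8 sort
```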

Solutions to Preserve Old Behavior

Please contact Syncsort Technical Support at (201) 930-8270.

Release changed in 4.4.12

Reference number DMX-4370

Area Affected IsValidDate(datetime) function when datetime evaluates to NULL.


Old Behavior Return value was NULL.

New Behavior Return value is 0.

Solutions to Preserve Old Behavior

Replace IsValidDate(datetime) with the expression IfThenElse(datetime=NULL, NULL, IsValidDate(datetime)) in your tasks.

Release changed in 4.3.6

Reference number DMX-1590

Area Affected Any tasks that perform transformations that are impacted by nullability. For example, tasks with source database tables, target database tables, comparisons with empty strings, join fields, new values, etc.

Old Behavior NULL Handling options for a data type in the Task Settings impacted nullability attributes for fields or values and changed fields and values of that data type to be nullable, if checked. When the NULL Handling settings in a task were checked for a data type, all empty fields or values of that data type in the task would be treated as NULL. When the NULL Handling settings in a task were not checked for a data type, all empty values of that data type in the task would be treated as NULL or NOT NULL depending on the nullability attribute of the individual field.

New Behavior Nullability attributes of fields and values are no longer overridden by NULL Handling options in Task Settings. Only fields whose nullability setting is set to "Use Task Setting" are impacted by NULL Handling options in Task Settings. Nullability attributes of fields are determined by the field's settings defined at the source field's origin, which may have been defined in a preceding task if the field comes from linked external metadata. Nullability settings of new values are derived from the nullability attributes of the source fields used to create the value. All constants, except the constant NULL, are considered not nullable.

Solutions to Preserve Old Behavior

Resaving your tasks in your newest release may correct all of the problems. Alternatively, change the nullability attributes to "Indicated by empty field" for the target field in a task preceding the task that uses the field in a transformation that relies on NULL behavior. If you have no preceding task, you will need to add a conditional value to check if the field or value is empty and if so, use the NULL constant, else use the field or value.

Release changed in 3.4.9, 4.3.5

Reference number DMX-3542, DMX-3905

Area Affected Tasks that use the GetExternalFunctionValue() function.

Old Behavior The libraryname parameter can contain an absolute or relative path.

New Behavior The libraryname should contain only the file name. Any path specification in the library name is ignored. The path of the library should be set in the appropriate library path environment variable. For detailed information, see the topic "GetExternalFunctionValue function" in the DMExpress Help.

Release changed in 4.3.3

Reference number DMX-3794

Fixed Problems

The following problems have been fixed since Release 9.10 of DMExpress.

Symptom Not a valid JAR: /usr/dmexpress/lib/dmxhadoop_mrv1.jar; Job has aborted

Area Affected Running a Connect ETL job in MapReduce when YARN ResourceManager HA (High Availability) is enabled and the yarn.resourcemanager.address.<rmid> configuration parameter is not defined.


Reference number DMX-24432

Release fixed in 9.10.26
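For context, the per-ResourceManager address mentioned above lives in yarn-site.xml alongside the HA ids. A hypothetical fragment with explicit addresses (host names and rm ids are placeholders; the property names are standard Hadoop YARN properties):

```xml
<!-- yarn-site.xml: HA ResourceManager ids plus explicit addresses -->
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.address.rm1</name>
  <value>rm1.example.com:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.address.rm2</name>
  <value>rm2.example.com:8032</value>
</property>
```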

Symptom DMExpress : (RTRNC) record <record number> from source <source number> (length <record length>) is truncated to the maximum length (<maximum record number>)

Area Affected DMX tasks reading a large volume of data with variable little-endian formatted records.

Reference number DMX-29308

Release fixed in 9.10.26

Symptom DMExpress : (INERR) an internal error has occurred (439 in …\VariableArrayCapableUserRecordDescriptorImp.cpp)

Area Affected DMExpress tasks with COBOL copybook layouts that contain Occurs Depending On and redefines fields that redefine other redefines fields.

Reference number DMX-28244

Release fixed in 9.10.6

Installation Guide

For detailed instructions on installing DMExpress, refer to the document INSTALLATIONGUIDE.PDF located in the top level directory on the DMExpress CD. To view this document, you will need to have the Adobe Acrobat Reader software installed. You can download the latest version of Adobe Acrobat Reader free from the Adobe web site http://www.adobe.com/.

Contacting Technical Support

If you have a maintenance support agreement for DMExpress and you encounter difficulties in installing or running DMExpress, contact Syncsort Incorporated in the United States, 24 hours a day, 7 days a week.

Address Syncsort Incorporated, 2 Blue Hill Plaza #1563, Pearl River, NY 10965, USA

Phone 201-930-8270

Fax 201-930-8281

E-mail [email protected]

Copyright Statement

Copyright (c) Syncsort Incorporated, 2003 - 2018. All rights reserved. The accompanying DMExpress program and the related media, documentation and materials ("Software") are protected by copyright law and international treaties. Unauthorized reproduction or distribution of the Software, or any portion of it, may result in severe civil and criminal penalties, and will be prosecuted to the maximum extent possible under the law.

The Software contains proprietary and confidential material, and is only for use by the licensees of the DMExpress proprietary software system. The Software may not be reproduced in whole or in part, in any form, except with written permission from Syncsort Incorporated. The Software is provided under the accompanying Software License Agreement ("SLA"). Refer to the "Syncsort Software License Agreement" topic in the online help.

DMExpress is a trademark of Syncsort Incorporated. UNIX is a registered trademark of the X-Open Group. Adobe, Acrobat, and Acrobat Reader are either the registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Windows 2000, Windows Server 2003 and Windows XP are registered trademarks of Microsoft Corporation in the United States and/or other countries. All other third-party brand names and product names used in this documentation are trade names, service marks, trademarks, or registered trademarks of their respective owners.

The Software is a proprietary product of Syncsort Incorporated, but incorporates certain third-party components that are each subject to separate licenses and notice requirements. Note, however, that while these separate licenses cover the respective third-party components, they do not modify or form any part of Syncsort's SLA. Refer to the "Third-party license agreements" topic in the online help for copies of respective third-party license agreements referenced herein.