Essbase Connector Error – Client Commands are Currently Not Being Accepted


While investigating a recent InfoSphere Information Server (IIS) DataStage Essbase Connector error, I found the explanations of the probable causes of the error not to be terribly meaningful.  So, now that I have run the error to ground, I thought it might be nice to jot down a quick note on the potential causes of the 'Client Commands are Currently Not Being Accepted' error, which I gleaned from the process.

Error Message Id

  • IIS-CONN-ESSBASE-01010

Error Message

An error occurred while processing the request on the server. The error information is 1051544 (message on contacting or from application:[<<DateTimeStamp>>]Local////3544/Error(1013204) Client Commands are Currently Not Being Accepted.

Possible Causes of The Error

This error is a problem with access to the Essbase object or with the security within the Essbase object.  It can be the result of multiple issues, such as:

  • Object doesn't exist – the Essbase object doesn't exist in the location specified
  • Communications – the location is unavailable or cannot be reached
  • Path security – security prevents access to the Essbase object location
  • Essbase security – security within the Essbase object does not support the user or filter being submitted; the Essbase object security may also be corrupted or incomplete
  • Essbase object structure – the Essbase object is not structured to support the filter, or the Essbase filter is malformed for the current structure

Related References

IBM Knowledge Center, InfoSphere Information Server 11.7.0, Connecting to data sources, Enterprise applications, IBM InfoSphere Information Server Pack for Hyperion Essbase


 

Aginity For Netezza – How to Generate DDL


How to Generate Netezza Object DDL

In 'Aginity for Netezza' this process is easy if you have a user with sufficient permissions.

The basic process is:

  • In the object browser, navigate to the database
  • Select the object (e.g. table, view, stored procedure)
  • Right-click and select 'Script' > 'DDL to query window'
  • The object DDL will appear in the query window
Create DDL to Query Window
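
For context, the scripted output is plain Netezza DDL. As an illustration only (the table name, columns, and distribution key below are hypothetical, not taken from any particular database), the DDL written to the query window for a simple table would look roughly like this:

CREATE TABLE SAMPLE_TBL
(
    SAMPLE_ID   INTEGER NOT NULL,
    SAMPLE_NAME VARCHAR(100)
)
DISTRIBUTE ON (SAMPLE_ID);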


Datastage – When checking operator: Operator of type “APT_TSortOperator”: will partition despite the preserve-partitioning flag on the data set on input port 0

APT_TSortOperator Warning


The APT_TSortOperator warning happens when there is a conflict in the partitioning behavior between stages, usually because the successor (downstream) stage has its 'Partitioning / Collecting' and 'Sorting' properties set in a way that conflicts with the predecessor (upstream) stage's properties, which are set to preserve partitioning.  This can occur when the predecessor stage has the 'Preserve Partitioning' property set to:

  • ‘Default (Propagate)’
  • ‘Propagate’, or
  • ‘Set’
Preserve Partitioning Property - list


Message ID

  • IIS-DSEE-TFOR-00074

Message Text

  • <<Link Name Where Warning Occurred>>: When checking operator: Operator of type “APT_TSortOperator”: will partition despite the preserve-partitioning flag on the data set on input port 0.

Warning Fixes

  • First, verify that the partitioning behavior of both stages is correct
  • If so, set the predecessor 'Preserve Partitioning' property to "Clear"
  • If not, correct the partitioning behavior of the stage which is in error

Clear Partitioning Property Screenshot

Preserve Partitioning Property - Set To Clear


[HY000] ERROR: Base table/view has changed (datatype); rebuild view


Error Message

ERROR [HY000] ERROR: Base table/view ‘BLOG_LIST_TBL’ attr ‘BLOG_ID’ has changed (datatype); rebuild view ‘BLOG_LIST_VW’

What does the Error Mean?

This error means that an underlying table used by the view has changed in a way that made the view invalid; as a result, the view must be rebuilt to reflect the new table definition.

How to Rebuild the view

In 'Aginity for Netezza' this process is easy if you have a user with sufficient permissions.  The basic process is:

  • Navigate to and select the view to be rebuilt
  • Right-click and select 'Script' > 'DDL to query window'
Create DDL to Query Window


  • Once the 'Create or Replace View' SQL has been generated, click within the SQL statement (without highlighting)
  • Press Ctrl+F5 or navigate to Execute > 'Execute as a Single Batch'
Execute SQL as a Single Batch


 

  • Verify that the 'Create or Replace View' SQL executes successfully
  • Then, run a simple select against the view and verify that the select runs without producing the HY000 base table/view error
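
If you prefer to script the rebuild by hand rather than generating it, it is just a 'Create or Replace View' statement run against the current table definition. Here is a minimal sketch using the object names from the error message above; the column list is hypothetical and should come from the generated DDL in your environment:

CREATE OR REPLACE VIEW BLOG_LIST_VW AS
SELECT BLOG_ID,    -- the attribute flagged in the error message
       BLOG_NAME   -- hypothetical additional column
FROM   BLOG_LIST_TBL;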

*DataStage*DSR_SELECT (Action=3); check DataStage is set up correctly in project


Having encountered this DataStage client error in Linux a few times recently, I thought I would document the solution, which has worked for me.

Error Message:

Error calling subroutine: *DataStage*DSR_SELECT (Action=3); check DataStage is set up correctly in project

(Subroutine failed to complete successfully (30107))

Probable Cause of Error

  • The NodeAgents process has stopped running
  • Insufficient /tmp disk space

Triage Approach

To fix this error in Linux:

  • Ensure disk space is available; you may want to clean up the /tmp directory of any excess, non-required files.
  • Start the node agents with NodeAgents.sh, if they are not running

Command to verify Node Agent is running

ps -ef | grep java | grep Agent

 

Command to Start Node Agent

This example command assumes the shell script is in its normal location; if not, you will need to adjust the path.

/opt/IBM/InformationServer/ASBNode/bin/NodeAgents.sh start

Node Agent Logs

These logs may be helpful:

  • asbagent_startup.err
  • asbagent_startup.out

Node Agent Logs Location

This command will get you to where the logs are normally located:

cd /opt/IBM/InformationServer/ASBNode/

Netezza Connector Stage, Table name required warning for User-defined SQL Write mode

Recently, while working at a customer site, I encountered an anomaly in the Netezza Connector stage: when choosing the 'User-defined SQL' write mode, the 'Table name' property displays a caution/warning even though a table name should not be required.  If you are using a user-defined SQL statement and/or have parameterized your SQL scripts to make the job reusable, each SQL statement and/or SQL script would have its own schema and table name being passed in.  After some investigation, a workaround was found, which allows you both to populate the table name and to leverage different schema and table names within your SQL statement and/or SQL script.

Table Name, User-defined SQL, Warning

You will notice, in the screenshot below, that the 'User-defined SQL' write mode has been chosen, a parameter has been placed in the 'User-defined SQL' property, and the 'Read user defined SQL from a file' property has been set to 'Yes'.  However, a yellow triangle displays on the 'Table name' property, marking it as a required item.  This also occurs when placing SQL statements in the 'User-defined SQL' property, whether reading from a file or not.

Netezza Connector User-Defined SQL, Table Name Required, Warning

Table Name, User-defined SQL, Warning Workaround

After some experimentation, the workaround is straightforward enough.  Basically, give the 'Table name' property something it can read successfully, so the stage can move on to the user-defined SQL and/or user-defined SQL file script, which is what the process actually needs to execute.  In the screenshot below, the SYSTEM.DEFINITION_SCHEMA._V_DUAL view was used; since it could be found, the script file passed in by the parameter runs fine.  Another view or table which the DataStage user has access to should work just as well.

Netezza Connector, User-Defined SQL, Table Name Required Fix
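
If you want to confirm, before running the job, that the DataStage user can actually read whatever object you place in the 'Table name' property, a quick query from any SQL client is enough. A minimal sketch, using the view from the workaround above:

-- Should return a row if the DataStage user can read the object
SELECT 1 AS DUMMY
FROM   SYSTEM.DEFINITION_SCHEMA._V_DUAL;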


DataStage – IIS-DSEE-TBLD-00008- Processing Input Record APT_Decimal Error

IIS-DSEE-TBLD-00008 apt decimal error – before disabling combination

This is another one of those nebulous error messages, which can cost a lot of time in research if you don't know how to simplify the process a bit.  Determining where the error occurs can be a challenge if you have not encountered this error before and figured out the trick, which isn't exactly intuitive.

In this case, as it turned out, once I had determined where the error was, it was as simple as having missed resetting the stage variable properties when the other decimal fields were increased.

How to identify where this error occurs?

Disable operator combination with the APT_DISABLE_COMBINATION environment variable by:

  • adding the APT_DISABLE_COMBINATION environment variable to the job properties
  • setting the APT_DISABLE_COMBINATION environment variable to True in the job properties
  • recompiling and running the job again

 

This approach will usually provide a more meaningful identification of the stage with the error.

Note:  Please remember to remove the APT_DISABLE_COMBINATION environment variable before moving your code to testing and/or releasing it to production.

Message ID

IIS-DSEE-TBLD-00008

Error Message with combine enabled:

APT_CombinedOperatorController(1),0: Exception caught in processingInputRecord() for input “0”: APT_Decimal::ErrorBase: From: the source decimal has even precision, but non-zero in the leading nybble, or is too large for the destination decimal… Record dropped. Create a reject link to save the record if needed.

Error message with combine disabled

Tfm_Measures_calc,0: Exception caught in processingInputRecord() for input “0”: APT_Decimal::ErrorBase: From: the source decimal has even precision, but non-zero in the leading nybble or is too large for the destination decimal… Record dropped. Create a reject link to save the record if needed.

IIS-DSEE-TBLD-00008 apt decimal error – after disabling combination

Note: Tfm_Measures_calc is the stage name, which becomes visible in the message once combination is disabled


Infosphere – decimal_from_string Conversion Error

IBM Infosphere – decimal_from_string Conversion Error

This is another one of those nebulous errors, which can be kicked out by DataStage, DataQuality, and/or DataClick.  This error can be particularly annoying because it doesn't identify the field, or even the precise command, which is causing the error.  So, there can be more than one field and/or more than one command causing the problem.

Error

Conversion error calling conversion routine decimal_from_string data may have been lost

Resolution

To resolve this error, check for correct formatting (date format, decimal, and null value handling) before passing values to the DataStage StringToDate, DateToString, DecimalToString, or StringToDecimal functions.  Additionally, even if the formatting is correct, you may need to embed commands to completely clear the issue.

Example

Here is a recent example of command embedding which has cleared the issue, but I'm sure you will need to apply this concept in other ways to meet all your needs.

DecimalToString( DecimalToDecimal( <>, "trunc_zero"), "suppress_zero")

How to suppress a Change_Capture_Host warning in Datastage

Change Capture Host Warning (IIS-DSEE-TFXR-00017)


Occasionally, I run into this Change Capture Host defaulting warning, so I thought this information might be useful.

Event Type

  • Warning

Message ID

  • IIS-DSEE-TFXR-00017

Example Message

  • Change_Capture_Host: When checking operator: Defaulting “<<FieldName>>” in transfer from “beforeRec” to “outputRec”.

Setting Variable

  • Set APT_DISABLE_TFXR0017=1
  • This environment variable can be added either at the project level or at the job level.

Alternative Solution

  • Within the Change Capture stage properties:
    • Stage tab
    • Option
    • Property: "Change Mode"
    • Value: "Explicit key, All Values"

What does "ERROR: pg_atoi: error in <ascii>: can't parse <ascii>" mean?

This is one of those vague messages which takes a little time to run down, but once you understand it, it is easily fixed.  Basically, this error happens because you have mismatched and incompatible fields.

Question

What does “ERROR: pg_atoi: error in <ascii>: can’t parse <ascii>” mean?

Answer

The error is the result of an implicit conversion being performed on a character field (varchar/char) while being compared to a numeric field or value.
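
To make that concrete, here is a minimal sketch with hypothetical tables and columns, where ORDER_TBL.CUSTOMER_CODE is a VARCHAR and CUSTOMER_TBL.CUSTOMER_ID is an INTEGER. Comparing the two directly forces an implicit character-to-integer conversion, which fails with the pg_atoi error as soon as the character field holds a non-numeric value; making the conversion explicit, and in the character direction, avoids it:

SELECT o.ORDER_ID
FROM   ORDER_TBL o
JOIN   CUSTOMER_TBL c
  ON   o.CUSTOMER_CODE = CAST(c.CUSTOMER_ID AS VARCHAR(20));  -- explicit cast instead of an implicit numeric parse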


Infosphere Data Architect Install Error CRIMC1029E or CRIMC1085E

The CRIMC1029E / CRIMC1085E errors may be caused by running the incorrect InfoSphere Data Architect installer.  If you run the admin installer (launchpad.exe) on 64-bit Windows with insufficient privileges, the process will throw a CRIMC1029E / CRIMC1085E error.

What the Error Looks Like

 

Installation failed.

CRIMC1029E: Adding plug-in com.ibm.etools.cobol.win32_7.0.921.v20140409_0421 to repository E:\Program Files\IBM\SDPShared failed.

CRIMC1085E: Resumable download failed for: file:/E:/InstallIBM/INFO_DATA_ARCH_V9.1.2_WIN/disk2/ad/plugins/file000086.

‘plug-in com.ibm.etools.cobol.win32_7.0.921.v20140409_0421’ does not exist (at file:/E:/InstallIBM/INFO_DATA_ARCH_V9.1.2_WIN/disk2/ad/plugins/file000086).

Elapsed time 00:00.00.

Solution

Run the launchpad_win_nonadmin64.exe file, instead of the launchpad.exe file.

Reference Link:

Installing IBM InfoSphere Data Architect with the Installation Manager Install wizard

 

Useful IIS Datastage Transformer Variables and Functions


This is a list of InfoSphere Information Server (IIS) DataStage transformer variables and functions which I have found to be helpful over the years and which tend to be frequently used.

@INROWNUM and @OUTROWNUM Variables

@INROWNUM and @OUTROWNUM are internal DataStage variables which do the following:

  • @INROWNUM counts incoming rows to a transformer in a DataStage job
  • @OUTROWNUM counts outgoing rows from a transformer in a DataStage job

These variables can be used to generate sequences, primary keys, IDs, and row numbers.

Sample:

@INROWNUM

@OUTROWNUM

Substring (Field Function) in DataStage

Returns delimited substrings in a string.

Syntax

Field (string, delimiter, instance [ ,number] )

 

string is the string containing the substring. If string is a null value, null is returned.

 

delimiter is the character that delimits the substring. If delimiter is an empty string, string is returned. If string does not contain delimiter, an empty string is returned unless instance is 1, in which case string is returned. If delimiter is a null value, a run-time error occurs. If more than one substring is returned, delimiters are returned with the substrings.

 

instance specifies which instance of delimiter terminates the substring. If instance is less than 1, 1 is assumed. If string does not contain instance, an empty string is returned. If instance is a null value, a run-time error occurs.

 

number specifies the number of delimited substrings to return. If number is an empty string or less than 1, 1 is assumed. If number is a null value, a run-time error occurs.

The variable MyString is set to the data between the third and fourth occurrences of the delimiter “#”:

 

MyString = Field("###DHHH#KK", "#", 4) ;* returns "DHHH"

 

In the following example SubString is set to “” since the delimiter “/” does not appear in the string.

 

MyString = "London+0171+NW2+AZ"

SubString = Field(MyString, "/", 1) ;* returns ""

 

In the following example SubString is set to “0171+NW2” since two fields were requested using the delimiter “+” (the second and third fields):

 

MyString = "London+0171+NW2+AZ"

SubString = Field(MyString, "+", 2, 2)

* returns "0171+NW2"

Special Characters in DataStage

Handles/converts special characters in a transformer stage, which can cause issues in XML processing and in certain databases.

General:

Convert(";:?\+&,*`#'$()|^~@{}[]%!", "", TrimLeadingTrailing(DSLink_ES_XML.File))

Decimal and Double Quotes:

Convert(' " . ', '', DSLink12.Field)

  • DSJobStartTimestamp – Type: String. The date and time when the job started on the engine, in the form YYYY-MM-DD hh:mm:ss. This variable can be helpful when you need to append the date to the end of a file name.
  • DSJobStartTime – Type: String. The time when the job started on the engine, in the form hh:mm:ss (24-hour clock).
  • DSJobStartDate – Type: String. The date when the job started on the engine, in the form YYYY-MM-DD. This variable can be helpful when needing to provide a load date which is consistent across all rows.
  • DSJobName – Type: String. The actual name of the job referenced by the job handle. This variable can be helpful when needing to prove which process wrote rows into a table or file, if more than one process is inserting data.

 

Vendor Documentation

For additional information and the complete list of InfoSphere Information Server (IIS) DataStage system variables, see the IBM online documentation at:

System Variables

https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.5.0/com.ibm.swg.im.iis.ds.serverjob.dev.doc/topics/r_dsvjbref_System_Variables.html?lang=en

Supported macros and system variables

https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.5.0/com.ibm.swg.im.iis.ds.parjob.dev.doc/topics/limitationsmacros.html?lang=en

 


What are the Infosphere DataStage job status log values?


These are the job status codes seen when running Infosphere Datastage jobs and sequences.   Additional information regarding a specific job or sequence error can be seen in Director.

Table of dsjob utility Status Codes


 

Log/Status   Description           Job State      Comments
0            Running               Not Runnable   This is the only status that means the job is actually running
1            Finished              Runnable       Job finished a normal run with no warnings
2            Finished (See Log)    Runnable       Job finished a normal run with warnings
3            Aborted               Not Runnable   Job finished a normal run with a fatal error
4            Queued                Not Runnable   Job queued waiting for resource allocation
8            Failed Validation     Not Runnable
9            Has Been Reset        Runnable
11           Validated OK          Runnable
12           Validated (See Log)   Runnable
13           Failed Validation     Not Runnable
21           Has Been Reset        Runnable
96           Aborted               Not Runnable
97           Stopped               Not Runnable
98           Not Compiled          Not Runnable

 

 

ETL Error Handling Effective Practices


ETL (extract, transform, and load) error handling practices can vary, but three basic approaches can significantly assist in having effective ETL error handling.  Effective error handling begins in the requirements and design phases.  All too often, error handling is left to the build phase and falls to individual developer practices.  This is an area where standard practices are not well defined or adopted by the ETL developer community.  So, here are a few effective error handling practices which will contribute to process stability, information timeliness, and information accuracy, and reduce the level of effort required to support the application once it is in operation.

Anticipating ETL Errors

In the requirements and design phases, with proper consideration, many errors can be avoided altogether in the ETL process.  When discussing requirements and preparing designs, consideration should be given to error handling, especially the treatment of common errors.  As an effective practice, anticipated errors should be treated within the ETL process.  Some examples to consider are:

  • Replacement of special characters: do any special characters need to be removed, if found? This is generally determined at the field level and should be considered in the source-to-target mapping (STTM) and business rules. Also, when passing data between differing systems and working with VARCHAR fields, consider whether the 'unicode' extended property should be set.
  • Removal of leading and trailing spaces: removal of unnecessary leading and trailing spaces should be considered when changing fields from CHAR to VARCHAR and when working with keys used as primary keys, join keys, and/or lookup keys.
  • Deduplication of data: duplicate data prevention practices and business rules should always be considered (see the SQL sketch after this list). These can be of a couple of types:
    • First, file processing conventions, such as assigning timestamps to files and removing or moving processed files to prevent reprocessing.
    • Second, rules for deduplication of duplicate rows, including the identification of the appropriate keys for determining duplicate rows.
    • Third, if duplicate rows are being produced as a result of more than one input source system, identification of the authoritative source should be considered to resolve conflicts.
  • Null value treatment: null value treatment can be extraordinarily important, especially when working with keys and traditional data warehouse models. It is important to be mindful that, to the database and the ETL, nulls and spaces are not the same thing; they may or may not be the same thing in the mind of the consumer of the information. So, business rules should indicate the treatment of both spaces and nulls. In some circumstances, especially when using surrogate keys in data warehousing, business processes sometimes need to know the difference between a null, a space, and an unknown value, so these three scenarios should be considered when forming business rules and treating them in the ETL.  Here are a couple of questions that could be asked to inform your solution:
    • Do nulls and spaces mean the same thing to the business community?
    • Is a space considered an unknown value?
    • Does a null need to be uniquely identified as different from a space and/or an unknown lookup value?
    • If surrogate keys are in use for the field in question, which of these scenarios require a unique surrogate key, other than the unknown surrogate key?
  • Missing or invalid value replacement or defaults: having replacement values or defaults is especially important for any fields which are not nullable and/or require a surrogate key for data warehouse dimensions.  Also, for reporting to be meaningful, replacement or default value assignments can be important as well (e.g. for cubes and statistical calculations).
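
As a simple SQL illustration of several of the items above (trimming, null and blank defaulting, and deduplication), here is a minimal sketch against a hypothetical staging table STG_CUSTOMER with a business key CUSTOMER_ID and a load timestamp LOAD_TS; the 'UNKNOWN' default and the 'keep the most recent row' rule are assumptions that would come from your business rules:

SELECT d.CUSTOMER_ID,
       COALESCE(NULLIF(TRIM(d.CUSTOMER_NAME), ''), 'UNKNOWN') AS CUSTOMER_NAME  -- trim, then default blanks and nulls
FROM (
       SELECT s.*,
              ROW_NUMBER() OVER (PARTITION BY s.CUSTOMER_ID
                                 ORDER BY s.LOAD_TS DESC) AS RN  -- rank duplicate rows by recency
       FROM   STG_CUSTOMER s
     ) d
WHERE d.RN = 1;  -- keep only the most recent row per business key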

Rejecting Rows

Rows should not be rejected unless there is a specific business requirement and/or need to do so.  Rejecting rows causes data inaccuracies by omission and undermines the consumer's confidence in the accuracy of the information being delivered.  This can be especially problematic for accounting and other activities which must balance across information sets.

  • If value lookups are in use (a SQL illustration follows this list):
    • Unknown and null values need to have a treatment rule to prevent errors.
    • Two surrogate key or transformation default values may be necessary if the ability to distinguish between an unknown/invalid value and a null value is required.
    • Make sure the lookup 'Key Type' is aligned (e.g. equality, caseless equality) to the formatting of both inputs to the lookup.
    • Make sure the complete unique key is in use.
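
The same idea expressed in SQL: a minimal sketch with hypothetical tables FACT_STG and DIM_CUSTOMER, where -1 is the 'Unknown' surrogate key and -2 is the 'Invalid / not found' surrogate key (the key values and object names are assumptions, not a standard):

SELECT f.SALE_ID,
       CASE
           WHEN f.CUSTOMER_CODE IS NULL THEN -1  -- null business key: 'Unknown' member
           WHEN d.CUSTOMER_SK   IS NULL THEN -2  -- code supplied but not found: 'Invalid' member
           ELSE d.CUSTOMER_SK
       END AS CUSTOMER_SK
FROM   FACT_STG f
LEFT JOIN DIM_CUSTOMER d
       ON TRIM(f.CUSTOMER_CODE) = TRIM(d.CUSTOMER_CODE);  -- align the formatting of both lookup inputs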

Information Consistency Practices

Information consistency practices allow the information to be transformed and enriched to make it more consistent for 'like to like' comparisons, usability, and/or readability.  As an effective practice, consider these standard formatting recommendations, which can be good requirements questions and should be included in the STTM (a SQL sketch follows the list below):

  • Make descriptive and/or text fields consistent in their format (e.g. mixed case, proper case, upper case).
  • Use consistent date formatting when converting dates to text fields.
  • When dealing with currency, convert the currency to consistent ISO currency codes (e.g. USD, CAD, EUR) and a consistent decimal precision (e.g. two decimal places).
  • Identify financial records by category (e.g. credit and debit), with a default group behavior included (e.g. N/A or Unknown).
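
A minimal SQL sketch of the formatting points above, using hypothetical columns from a hypothetical SALES_STG table (the target formats shown are examples, not requirements):

SELECT INITCAP(PRODUCT_DESC)              AS PRODUCT_DESC,     -- consistent proper case for descriptive text
       TO_CHAR(SALE_DATE, 'YYYY-MM-DD')   AS SALE_DATE_TXT,    -- consistent date-to-text format
       'USD'                              AS CURRENCY_CODE,    -- consistent ISO currency code
       CAST(SALE_AMOUNT AS NUMERIC(18,2)) AS SALE_AMOUNT,      -- consistent two-decimal precision
       COALESCE(DEBIT_CREDIT_IND, 'N/A')  AS DEBIT_CREDIT_IND  -- default category for unclassified records
FROM   SALES_STG;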

Netezza – [HY000] ERROR: Please issue a groom on table


While altering a Netezza table, this error was produced:

  • ERROR [HY000] ERROR: Please issue a groom on table ‘<<TableName>>’, maximum table versions exceeded.

The error was resolved by running these commands in succession:

  1. GROOM TABLE <<TableName>> VERSIONS;
  2. GROOM TABLE <<TableName>> PAGES ALL;
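
For example, with a hypothetical table name (and with my understanding of the options: VERSIONS merges the table versions created by ALTER TABLE column changes, and PAGES ALL then reclaims the reclaimable pages):

GROOM TABLE SAMPLE_TBL VERSIONS;   -- merge the versioned copies of the table
GROOM TABLE SAMPLE_TBL PAGES ALL;  -- then reclaim empty pages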

Related References

Groom Table

PureData System for Analytics, PureData System for Analytics 7.2.0, IBM Netezza Database User’s Guide, Netezza SQL command reference, groom table

 

IBM InfoSphere DataStage – Parallel Environment Variables

Most of this list of Parallel Environment Variables can be found in the IBM InfoSphere DataStage, Version 11.5 documentation.  However, I have started to find variables that I use which are not included in the IBM list.  So, for simplicity, I will make additions and clarifications to the IBM list, as I run across them, on this page.

Performance Tuning

These environment variables are frequently used in tuning DataStage performance.

  • APT_BUFFER_FREE_RUN (See, also, Buffering)
  • APT_BUFFER_MAXIMUM_MEMORY (See, also,  Buffering)
  • APT_COMPRESS_BOUNDED_FIELDS
  • APT_FILE_IMPORT_BUFFER_SIZE  (See, also, Reading and Writing Files)
  • APT_FILE_EXPORT_BUFFER_SIZE (See, also, Reading and Writing Files)
  • TMPDIR (This variable also specifies the directory for Netezza log files on all operating systems.  Setting TMPDIR paths is a Netezza best practice.)

Buffering

These environment variables are all concerned with the buffering InfoSphere DataStage performs on stage links to avoid deadlock situations.

  • APT_BUFFER_FREE_RUN
  • APT_BUFFER_MAXIMUM_MEMORY
  • APT_BUFFER_MAXIMUM_TIMEOUT
  • APT_BUFFER_DISK_WRITE_INCREMENT
  • APT_BUFFERING_POLICY
  • APT_DISABLE_ROOT_FORKJOIN

Building Custom Stages

These environment variables are concerned with the building of custom operators that form the basis of customized stages.

  • APT_BUILDOP_SOURCE_CHARSET
  • APT_SINGLE_CHAR_CASE
  • DS_OPERATOR_BUILDOP_DIR
  • DS_OPERATOR_WRAPPED_DIR
  • OSH_BUILDOP_CODE
  • OSH_BUILDOP_HEADER
  • OSH_BUILDOP_NO_OPTIMIZE
  • OSH_BUILDOP_OBJECT
  • OSH_BUILDOP_WRAPPER
  • OSH_BUILDOP_XLC_BIN
  • OSH_CBUILDOP_XLC_BIN
  • OSH_STDOUT_MSG

Compiler

These environment variables specify details about the C++ compiler used by InfoSphere DataStage in connection with parallel jobs.

  • APT_COMPILER
  • APT_COMPILEOPT
  • APT_LINKER
  • APT_LINKOPT

DB2 Support

These environment variables are concerned with setting up access to DB2 databases from InfoSphere DataStage.

  • APT_DB2INSTANCE_HOME
  • APT_DB2READ_LOCK_TABLE
  • APT_DBNAME
  • APT_DEBUG_DB2
  • APT_RDBMS_COMMIT_ROWS
  • APT_TIME_ALLOW_24
  • DB2DBDFT

Debugging

These environment variables are concerned with the debugging of InfoSphere DataStage parallel jobs.

  • APT_DEBUG_OPERATOR
  • APT_DEBUG_MODULE_NAMES
  • APT_DEBUG_PARTITION
  • APT_DEBUG_SIGNALS
  • APT_DEBUG_STEP
  • APT_DEBUG_SUBPROC
  • APT_EXECUTION_MODE
  • APT_NO_PM_SIGNAL_HANDLERS
  • APT_PM_DBX
  • APT_PM_SHOW_PIDS
  • APT_PM_XTERM
  • APT_PXDEBUGGER_FORCE_SEQUENTIAL
  • APT_SHOW_LIBLOAD
  • DS_OSH_WRAPPER_DEBUG_CONNECT
  • DS_OSH_WRAPPER_TIMEOUT
  • DS_PXDEBUG

Decimal Support

These environment variables are concerned with support for decimal columns.

  • APT_DECIMAL_INTERM_PRECISION
  • APT_DECIMAL_INTERM_SCALE
  • APT_DECIMAL_INTERM_ROUNDMODE
  • DS_USECOLPREC_FOR_DECIMAL_KEY

Disk I/O

These environment variables are all concerned with when and how InfoSphere DataStage parallel jobs write information to disk.

  • APT_BUFFER_DISK_WRITE_INCREMENT
  • APT_EXPORT_FLUSH_COUNT
  • APT_IO_MAP/APT_IO_NOMAP and APT_BUFFERIO_MAP/APT_BUFFERIO_NOMAP
  • APT_PHYSICAL_DATASET_BLOCK_SIZE

General Job Administration

These environment variables are concerned with details about the running of InfoSphere DataStage and IBM InfoSphere QualityStage® parallel jobs.

  • APT_CLOBBER_OUTPUT
  • APT_CONFIG_FILE
  • APT_DISABLE_COMBINATION
  • APT_DONT_COMPRESS_BOUNDED_FIELDS
  • APT_EXECUTION_MODE
  • APT_FILE_EXPORT_ADD_BOM
  • APT_IMPORT_FORCE_QUOTE_DELIM
  • APT_ORCHHOME
  • APT_STARTUP_SCRIPT
  • APT_NO_STARTUP_SCRIPT
  • APT_STARTUP_STATUS
  • DSForceTerminate
  • DSIPC_OPEN_TIMEOUT
  • DSJOB_DOMAIN
  • DSWaitShutdown
  • DS_FORCE_ABORT_AT_WARN_LIMIT
  • DS_LOG_AUTOPURGE_IGNORE_STATUS

Job Monitoring

These environment variables are concerned with the Job Monitor on InfoSphere DataStage.

  • APT_MONITOR_SIZE
  • APT_MONITOR_MINTIME
  • APT_MONITOR_TIME
  • APT_NO_JOBMON
  • APT_PERFORMANCE_DATA

Lookup support

This environment variable is concerned with lookup tables.

  • APT_LUTCREATE_NO_MMAP

Miscellaneous

These environment variables do not fit into the other categories.

  • APT_AGGREGATOR_NULLABLE_OUTPUT
  • APT_COPY_TRANSFORM_OPERATOR
  • APT_DATASET_FLUSH_NOFSYNC
  • APT_DATASET_FLUSH_NOSYNC
  • APT_DATE_CENTURY_BREAK_YEAR
  • APT_DATE_ADD_ROLLOVER
  • APT_DISABLE_FASTALLOC
  • APT_DONT_OPTIMIZE_MODIFY
  • APT_EBCDIC_VERSION
  • APT_FILE_EXPORT_DEFAULTS_TO_CONDUCTOR
  • APT_FIFO_DIRECTORY
  • APT_IMPEXP_ALLOW_ZERO_LENGTH_FIXED_NULL
  • APT_IMPORT_REJECT_INVALID_CHARS
  • APT_IMPORT_REJECT_STRING_FIELD_OVERRUNS
  • APT_INSERT_COPY_BEFORE_MODIFY
  • APT_ISVALID_BACKCOMPAT
  • APT_OLD_BOUNDED_LENGTH
  • APT_OLD_CUTOFF
  • APT_OUTPUT_LOCALE
  • APT_OPERATOR_REGISTRY_PATH
  • APT_OSL_PARAM_ESC_SQUOTE
  • APT_OSL_RESTORE_BACKSLASH
  • APT_PLAYERS_REPORT_IN
  • APT_PM_ACCEPT_CONNECTION_RETRIES
  • APT_PM_ACCEPT_CONNECTION_TIMEOUT
  • APT_PM_NO_SHARED_MEMORY
  • APT_PM_NO_NAMED_PIPES
  • APT_PM_SCORE_DIR
  • APT_PM_STARTUP_CONCURRENCY
  • APT_RECORD_COUNTS
  • APT_RESPATH
  • APT_SHOW_COMPONENT_CALLS
  • APT_STACK_TRACE
  • APT_SURRKEY_BLOCK_WRITE
  • APT_SURRKEY_LOCKSTATE_RETRIES
  • APT_THREAD_SAFE_FAST_ALLOC
  • APT_TRANSFORM_ABORT_ON_CONVERSION_ERROR
  • APT_TRANSFORM_COMPILE_OLD_NULL_HANDLING
  • APT_TRANSFORM_LOOP_WARNING_THRESHOLD
  • APT_TRANSFORM_OPERATOR_DEBUG
  • APT_USE_CRLF
  • APT_VIEWDATA_TEMP_DIR
  • DSAttLockWait
  • DSE_SLAVE_CLOSE_SOCKET_ON_EXEC
  • DSJobStartedMax
  • DSODBC_EXECUTE_MULTI_STMTS_AS_ONE
  • DSODBC_FATAL_ERRORS
  • DSODBC_NEW_FVMARKERS
  • DSODBC_NO_BIGINT_WARNINGS
  • DSODBC_NO_DB2_DELETE_WARNINGS
  • DSODBC_NO_METADATA_WARNINGS
  • DSR_RECORD_DELETE_BEFORE_WRITERAW
  • DSSocketNotifyTimeout
  • DSWaitResetStartup
  • DSWaitStartup
  • DS_API_DEBUG
  • DS_CHANGE_SEQ_PUT_BEHAVIOR
  • DS_EXECUTE_NO_MASKING
  • DS_HOSTNAME_ALIAS
  • DS_IPCPUT_OLD_TIMEOUT_BEHAVIOR
  • DS_LEGACY_AUDIT
  • DS_LOGDETAIL_INVOCATION_OLDFORM
  • DS_LONG_JOBSTATUS_LOCK
  • DS_MAX_PREC_FOR_FLOATS
  • DS_MMAPPATH
  • DS_MMAPSIZE
  • DS_NO_INSTANCE_PURGING
  • DS_OPTIMIZE_FILE_BROWSE
  • DS_SEQ_BYPASS_CHOWN
  • DS_STATUSSTARTED_CHECK
  • DS_TRX_ALLOW_LINKVARIABLES
  • OSH_PRELOAD_LIBS
  • PX_DBCONNECTHOME
  • DS_USE_OLD_STATUS_PURGE
  • DS_USE_OSHSCRIPT
  • DS_USE_SERVER_AUTH_ONLY
  • ODBCBindingOrder
  • ODBCstripCRLF
  • OSHMON_INIT_RETRY
  • OSHMON_TRACE

Network

These environment variables are concerned with the operation of InfoSphere DataStage parallel jobs over a network.

  • APT_DEBUG_ENVIRONMENT
  • APT_PM_ENV_DEBUG
  • APT_PM_ENV_NOCLOBBER
  • APT_IO_MAXIMUM_OUTSTANDING
  • APT_IOMGR_CONNECT_ATTEMPTS
  • APT_PLAYER_CONNECTION_PORT
  • APT_PM_CONDUCTOR_TIMEOUT
  • APT_PM_CONDUCTOR_HOSTNAME
  • APT_PM_CONNECT_USING_NAME
  • APT_PM_NO_TCPIP
  • APT_PM_NODE_TIMEOUT
  • APT_PM_PLAYER_TIMEOUT
  • APT_PM_SHOWRSH
  • APT_PM_STARTUP_PORT
  • APT_RECVBUFSIZE
  • APT_USE_IPV4

National Language Support (NLS)

These environment variables are concerned with InfoSphere DataStage’s implementation of NLS.

  • APT_COLLATION_SEQUENCE
  • APT_COLLATION_STRENGTH
  • APT_ENGLISH_MESSAGES
  • APT_IMPEXP_CHARSET
  • APT_INPUT_CHARSET
  • APT_OS_CHARSET
  • APT_OUTPUT_CHARSET
  • APT_STRING_CHARSET

Oracle Support

These environment variables are concerned with the interaction between InfoSphere DataStage and Oracle databases.

  • APT_ORACLE_LOAD_OPTIONS
  • APT_ORACLE_NO_OPS
  • APT_ORACLE_PRESERVE_BLANKS
  • APT_ORA_IGNORE_CONFIG_FILE_PARALLELISM
  • APT_ORA_WRITE_FILES
  • APT_ORAUPSERT_COMMIT_ROW_INTERVAL
  • APT_ORAUPSERT_COMMIT_TIME_INTERVAL
  • ODBCKeepSemicolon

Partitioning

The following environment variables are concerned with how InfoSphere DataStage automatically partitions data.

  • APT_NO_PART_INSERTION
  • APT_NO_PARTSORT_OPTIMIZATION
  • APT_PARTITION_COUNT
  • APT_PARTITION_NUMBER

Reading and Writing Files

These environment variables are concerned with reading and writing files.

  • APT_DELIMITED_READ_SIZE
  • APT_FILE_IMPORT_BUFFER_SIZE
  • APT_FILE_EXPORT_BUFFER_SIZE
  • APT_IMPORT_FILE_PATTERN_CMD
  • APT_IMPORT_HANDLE_SHORT
  • APT_IMPORT_PATTERN_USES_CAT
  • APT_IMPORT_PATTERN_USES_FILESET_MOUNTED
  • APT_MAX_DELIMITED_READ_SIZE
  • APT_STRING_PADCHAR

Reporting

These environment variables are concerned with various aspects of InfoSphere DataStage jobs reporting their progress.

  • APT_DUMP_SCORE
  • APT_ERROR_CONFIGURATION
  • APT_MSG_FILELINE
  • APT_PM_PLAYER_MEMORY
  • APT_PM_PLAYER_TIMING
  • APT_RECORD_COUNTS
  • OSH_DUMP
  • OSH_ECHO
  • OSH_EXPLAIN
  • OSH_PRINT_SCHEMAS

SAS Support

These environment variables are concerned with InfoSphere DataStage interaction with SAS.

  • APT_HASH_TO_SASHASH
  • APT_NO_SASOUT_INSERT
  • APT_NO_SAS_TRANSFORMS
  • APT_SAS_ACCEPT_ERROR
  • APT_SAS_CHARSET
  • APT_SAS_CHARSET_ABORT
  • APT_SAS_COMMAND
  • APT_SASINT_COMMAND
  • APT_SAS_DEBUG
  • APT_SAS_DEBUG_IO
  • APT_SAS_DEBUG_LEVEL
  • APT_SAS_DEBUG_VERBOSE
  • APT_SAS_S_ARGUMENT
  • APT_SAS_NO_PSDS_USTRING
  • APT_SAS_SCHEMASOURCE_DUMP
  • APT_SAS_SHOW_INFO
  • APT_SAS_TRUNCATION

Sorting

The following environment variables are concerned with how InfoSphere DataStage automatically sorts data.

  • APT_NO_SORT_INSERTION
  • APT_SORT_INSERTION_CHECK_ONLY
  • APT_TSORT_NO_OPTIMIZE_BOUNDED
  • APT_TSORT_STRESS_BLOCKSIZE

Teradata Support

The following environment variables are concerned with InfoSphere DataStage interaction with Teradata databases.

  • APT_TERA_64K_BUFFERS
  • APT_TERA_NO_ERR_CLEANUP
  • APT_TERA_NO_PERM_CHECKS
  • APT_TERA_NO_SQL_CONVERSION
  • APT_TERA_SYNC_DATABASE
  • APT_TERA_SYNC_PASSWORD
  • APT_TERA_SYNC_USER

Transport Blocks

The following environment variables are all concerned with the block size used for the internal transfer of data as jobs run. Some of the settings only apply to fixed length records.

  • APT_AUTO_TRANSPORT_BLOCK_SIZE
  • APT_LATENCY_COEFFICIENT
  • APT_DEFAULT_TRANSPORT_BLOCK_SIZE
  • APT_MAX_TRANSPORT_BLOCK_SIZE
  • APT_MIN_TRANSPORT_BLOCK_SIZE

Related References

TMPDIR

InfoSphere Information Server, InfoSphere Information Server 11.5.0, Connecting to data sources, Databases, Netezza Performance Server, Environment variables: Netezza connector, TMPDIR

Environment Variables

InfoSphere Information Server, InfoSphere Information Server 11.5.0, InfoSphere DataStage and QualityStage, Reference, Parallel Job Reference, Environment Variables

Environment Variables For The Parallel Engine

InfoSphere Information Server, InfoSphere Information Server 11.5.0, Installing, Installing IBM InfoSphere Information Server software, Configuring software, Configuring a parallel processing environment, Setting environment variables for the parallel engine, Environment variables for the parallel engine