While investigating a recent InfoSphere Information Server (IIS), DataStage, Essbase Connect error, I found the explanations of the probable causes of the error not terribly meaningful. So, now that I have run our error to ground, I thought it might be nice to jot down a quick note on the potential causes of the ‘Client Commands are Currently Not Being Accepted’ error, which I gleaned from the process.
Error Message Id
An error occurred while processing the request on the server. The error information is 1051544 (message on contacting or from application:[<<DateTimeStamp>>]Local////3544/Error(1013204) Client Commands are Currently Not Being Accepted.
Possible Causes of the Error
This error indicates a problem accessing the Essbase object, or the security within the Essbase object. It can result from several issues, such as:
Object doesn’t exist – the Essbase object does not exist in the location specified,
Communications – the location is unavailable or cannot be reached,
Path Security – security prevents access to the Essbase object location,
Essbase Security – security within the Essbase object does not support the user or filter being submitted; the Essbase object security may also be corrupted or incomplete,
Essbase Object Structure – the Essbase object is not structured to support the filter, or the Essbase filter is malformed for the current structure.
Related Reference: IBM Knowledge Center, InfoSphere Information Server 11.7.0, Connecting to data sources, Enterprise applications, IBM InfoSphere Information Server Pack for Hyperion Essbase
APT_TSortOperator Warning
The APT_TSortOperator warning happens when there is a conflict in the partitioning behavior between stages, usually because the successor (downstream) stage has its ‘Partitioning / Collecting’ and ‘Sorting’ properties set in a way that conflicts with the partitioning the predecessor (upstream) stage is set to preserve. This can occur when the predecessor stage has the ‘Preserve Partitioning’ property set to ‘Set’, or to ‘Propagate’ with an upstream setting carried forward.
Warning Message
<<Link Name Where Warning Occurred>>: When checking operator: Operator of type “APT_TSortOperator”: will partition despite the preserve-partitioning flag on the data set on input port 0.
How to Fix the Warning
First, verify that the partitioning behaviors of both stages are correct.
If so, set the predecessor stage’s ‘Preserve Partitioning’ property to “Clear” (see the sketch below).
If not, correct the partitioning behavior of the stage that is in error.
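As a minimal sketch of the first fix, assuming the downstream sort is configured correctly and the upstream flag is the problem (the stage names here are hypothetical):

Stg_Source_Read (predecessor) > Stage > Advanced > Preserve Partitioning: Clear (was ‘Set’ or ‘Propagate’)
Srt_Measures (successor) > Input > Partitioning / Sorting: left unchanged

With the flag cleared, the downstream sort can repartition as it needs to without raising the warning.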
Netezza Connector Stage, ‘Table name’ Warning
Recently, while working at a customer site, I encountered an anomaly in the Netezza Connector stage: when choosing the ‘User-defined SQL’ write mode, the ‘Table name’ property displays a caution / warning even though a table name should not be required. If you are using a user-defined SQL statement and/or have parameterized your SQL scripts to make the job reusable, each SQL statement and/or script passes in its own schema and table name. After some investigation, a workaround was found, which allows you to populate the ‘Table name’ property and still leverage different schema and table names within your SQL statement and/or script.
Table Name, User-defined SQL, Warning
You will notice, in the screenshot below, that the ‘User-defined SQL’ write mode has been chosen, a parameter has been placed in the ‘User-defined SQL’ property, and the ‘Read user-defined SQL from a file’ property has been set to ‘Yes’. However, the yellow warning triangle still displays on the ‘Table name’ property, marking it as a required item. This also occurs when placing SQL statements directly in the ‘User-defined SQL’ property, whether reading from a file or not.
Table Name, User-defined SQL, Warning Workaround
After some experimentation, the workaround is straightforward enough. Basically, give the ‘Table name’ property something it can read successfully, so the connector can move on to the user-defined SQL and/or user-defined SQL script file, which is what the process actually needs to execute. In the screenshot below, the SYSTEM.DEFINITION_SCHEMA._V_DUAL view was used, so the table name check succeeds, and the script file passed in by the parameter runs fine. Any other view or table to which the DataStage user has access should work just as well.
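As a minimal sketch of the workaround, with hypothetical job parameter names (#pTargetSchema#, #pTargetTable#, and so on) standing in for whatever your job actually passes:

Write mode: User-defined SQL
Table name: SYSTEM.DEFINITION_SCHEMA._V_DUAL (dummy value, never written to)
Read user-defined SQL from a file: Yes
User-defined SQL: #pScriptDir#/#pScriptFile#

-- Contents of the referenced script file, parameterized by schema and table:
INSERT INTO #pTargetSchema#.#pTargetTable# (cust_id, measure_amt)
SELECT cust_id, measure_amt
FROM #pSourceSchema#.#pSourceTable#;

The dummy view only has to exist and be readable; the SQL that actually executes is the parameterized script.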
APT_Decimal Error
This is another one of those nebulous error messages that can cost a lot of research time if you don’t know how to simplify the process a bit. Determining where the error occurs can be a challenge if you have not encountered this error before and figured out the trick, which isn’t exactly intuitive.
In this case, as it turned out once I had determined where the error was occurring, the fix was as simple as updating the stage variable properties, which had been missed when the precision of the other decimal fields increased.
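For example (a hypothetical illustration): if an incoming column grows from Decimal(8,2) to Decimal(10,2), but a Transformer stage variable that holds it is still defined as Decimal(8,2), a value such as 12345678.90 no longer fits the stage variable, and the record is dropped with this error; updating the stage variable’s precision and scale to match resolves it.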
How to identify where this error occurs?
Disable operator combination with the APT_DISABLE_COMBINATION environment variable by:
adding the APT_DISABLE_COMBINATION environment variable to the job properties,
setting the APT_DISABLE_COMBINATION environment variable to True in the job properties (see the example below), and
compiling the job and running the job again.
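For example, in DataStage Designer (the True value assumes the variable’s usual True/False convention):

Job Properties > Parameters > Add Environment Variable… > $APT_DISABLE_COMBINATION
  $APT_DISABLE_COMBINATION = True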
This approach will usually provide a more meaningful identification of the stage in which the error occurs.
Note: Please remember to remove the APT_DISABLE_COMBINATION environment variable before moving the job to testing and/or releasing your code into production.
Error message with operator combination enabled:
APT_CombinedOperatorController(1),0: Exception caught in processingInputRecord() for input “0”: APT_Decimal::ErrorBase: From: the source decimal has even precision, but non-zero in the leading nybble, or is too large for the destination decimal… Record dropped. Create a reject link to save the record if needed.
Error message with operator combination disabled:
Tfm_Measures_calc,0: Exception caught in processingInputRecord() for input “0”: APT_Decimal::ErrorBase: From: the source decimal has even precision, but non-zero in the leading nybble or is too large for the destination decimal… Record dropped. Create a reject link to save the record if needed.
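Note the difference: with combination enabled, the message only identifies the generic APT_CombinedOperatorController(1), while with combination disabled it names the actual stage, Tfm_Measures_calc, which is where to look for the mismatched decimal.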