End of Support for IBM InfoSphere Information Server 9.1.0

IBM InfoSphere Information Server 9.1.0 will reach End of Support on 2018-09-30. If you are still on InfoSphere Information Server (IIS) 9.1.0, I hope you have a plan to migrate to an 11-series version soon. InfoSphere Information Server (IIS) 11.7 would be worth considering if you don’t already own an 11-series license; 11.7 will let you take advantage of the evolving thin client tools and other capabilities in the 2018 release pipeline without needing to perform another upgrade.

Related References

IBM Support, End of support notification: InfoSphere Information Server 9.1.0

IBM Support, Software lifecycle, InfoSphere Information Server 9.1.0

IBM Knowledge Center, Home, InfoSphere Information Server 11.7.0, IBM InfoSphere Information Server Version 11.7.0 documentation

How to know if your Oracle Client install is 32 Bit or 64 Bit

Sometimes you just need to know if your Oracle Client install is 32 bit or 64 bit. But how do you figure that out? Here are two methods you can try.

The first method

Go to the %ORACLE_HOME%\inventory\ContentsXML folder and open the comps.xml file.
Look for <DEP_LIST>, which usually appears on roughly the second screen of text.

If you see this: PLAT="NT_AMD64", then your Oracle Home is 64 bit.
If you see this: PLAT="NT_X86", then your Oracle Home is 32 bit.

It is possible to have both the 32-bit and the 64-bit Oracle Homes installed.
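
If you would rather not open and scroll through comps.xml, a findstr one-liner can pull out the PLAT entries directly (a minimal sketch, assuming %ORACLE_HOME% is set in your environment):

findstr /i "PLAT=" "%ORACLE_HOME%\inventory\ContentsXML\comps.xml"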

The second method

This method is a bit faster. Windows uses different lib directories for 32-bit and 64-bit software. If you look under the ORACLE_HOME folder and see both a “lib” AND a “lib32” folder, you have a 64 bit Oracle Client. If you see just the “lib” folder, you’ve got a 32 bit Oracle Client.
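
The same check can be scripted from a command prompt (again, just a sketch that assumes %ORACLE_HOME% is set):

if exist "%ORACLE_HOME%\lib32" (echo 64-bit Oracle Client) else (echo 32-bit Oracle Client)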

Oracle – How to get a list of user permission grants

Since the InfoSphere Information Server repository has to be installed manually with the scripts provided with the IBM software, you sometimes run into difficulties. So, here is a quick script, which I have found useful in the past, to identify user permissions for the IAUSER on Oracle databases and help run down discrepancies in user permissions.

 

SELECT *
FROM ALL_TAB_PRIVS
WHERE GRANTEE = 'IAUSER';

 

If we cannot run against the ALL_TAB_PRIVS view, then we can try the USER_TAB_PRIVS view:

 

SELECT *
FROM USER_TAB_PRIVS
WHERE GRANTEE = 'IAUSER';

 

Related References

Oracle Help Center > Database Reference > ALL_TAB_PRIVS view

Linux – What is yum?

In simple terms, yum is a command-line package manager utility for computers running the Linux operating system, which augments the RPM Package Manager’s capabilities. yum is the primary tool for getting, installing, deleting, querying, and managing RPM software packages. Also, yum is used in Red Hat Enterprise Linux (RHEL) versions 5 and later.
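
A few of the most common yum invocations, for reference (run as root; the httpd package name is just an example):

yum install httpd
yum update
yum remove httpd
yum list installed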

 

Where to download IBM Data Studio?

IBM Data Studio is offered free by IBM and can be helpful when working with DB2 and PureData/Netezza using a JDBC driver.

What you need to download IBM Data Studio

  • You will need an IBM ID and password

Basic download steps

IBM Sign In Screen

  • Enter your IBM ID and password, then click ‘Sign in’.
  • On the IBM Data Studio Client license page, check ‘I agree’ and then click ‘I confirm’.
IBM Data Studio Client License Screen

  • On the IBM Data Studio Client download page, select the desired method tab, then
    • Select the desired product or products and click ‘Download now’.
IBM Data Studio Client Download Files Screen

 

Related References

IBM Data Studio

IBM Software > Products > Data management platform > Data management > IBM Data Studio

IBM Data Studio Client (Download)

IBM Support

Download and install IBM Data Studio Version 4.1.x

IBM Support

System requirements for IBM Data Studio Version 4.1.x

IBM Knowledge Center

Data Studio, Data Studio 4.1.1, Overview, Overview of IBM Data Studio

 

Linux – How to display file system disk space statistics

In Linux there are a lot of ways to check disk size and/or space, but the ‘Disk Filesystem’ (df) command is old reliable and has been around a long time. The ‘df’ command provides a summary of disk space used and free space, which I find myself coming back to time after time.

Basic Command Format

df -<<option>> <<file>>

Example ‘Disk Filesystem’, Command

df -h

  • -h = Human readable (sizes shown in K, M, G, etc.)

For more details in Linux

df --help

 

Example Command Output

[root@BlogSrvr1 /]# df -h

Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vg_BlogSrvr1-lv_root     36G   34G   16M 100% /
tmpfs                               3.9G     0  3.9G   0% /dev/shm
/dev/sda1                           477M   33M  419M   8% /boot
/dev/mapper/vg_BlogSrvr1-LogVol03    11G   27M  9.9G   1% /data
/dev/mapper/vg_BlogSrvr1-lv_home    4.8G   33M  4.6G   1% /home
/dev/mapper/vg_BlogSrvr1-LogVol04    25G   13G   11G  55% /opt/IBM
/dev/mapper/vg_BlogSrvr1-LogVol05    11G  6.0G  3.7G  62% /scratch
/dev/mapper/vg_BlogSrvr1-LogVol06    11G   27M  9.9G   1% /tmp

*DataStage*DSR_SELECT (Action=3); check DataStage is set up correctly in project

Having encountered this DataStage client error in Linux a few times recently, I thought I would document the solution, which has worked for me.

Error Message:

Error calling subroutine: *DataStage*DSR_SELECT (Action=3); check DataStage is set up correctly in project

(Subroutine failed to complete successfully (30107))

Probable Cause of Error

  • The Node Agents have stopped running
  • Insufficient /tmp disk space

Triage Approach

To fix this error in Linux:

  • Ensure disk space is available; you may want to clean up the /tmp directory of any excess, non-required files (a quick check is shown below).
  • Start NodeAgents.sh, if it is not running
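
To confirm whether free space is the problem before cleaning anything up, a quick df against the filesystem holding /tmp will tell you (standard Linux command):

df -h /tmp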

Command to verify Node Agent is running

ps -ef | grep java | grep Agent

 

Command to Start Node Agent

This example command assumes the shell script is in its normal location; if not, you will need to adjust the path.

/opt/IBM/InformationServer/ASBNode/bin/NodeAgents.sh start

Node Agent Logs

These logs may be helpful:

  • asbagent_startup.err
  • asbagent_startup.out

Node Agent Logs Location

This command will get you to where the logs are normally located:

cd /opt/IBM/InformationServer/ASBNode/

Linux – How to compress an entire directory

From time to time there is a need to package up a folder for any number of reasons which may include things like:

  • Migration
  • Movement to a new location
  • Movement to a new server
  • To keep a backup
  • Or simply to save space

Compressing a folder can be very useful, but for those of us who don’t do it all the time, it is nice to have a pattern to follow. Also, even an experienced user can get a brain cramp if they have not had a reason to compress a folder in a while. So, here is a simple pattern to follow to compress a folder and its contents.

Basic Command Format

tar -zcvf <<archive-name>>.tar.gz <<directory-name>>

Example Compress Command

tar -zcvf  blog_files_backup.tar.gz   sqlfiles

Linux tar command line options used here

  • -z = Compress the archive using the gzip program
  • -c = Create archive
  • -v = Verbose, i.e., display progress while creating the archive
  • -f = Archive file name
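
For completeness, the matching command to unpack that archive into the current directory later would be (using the example archive name above):

tar -zxvf blog_files_backup.tar.gz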

For help with the tar command in Linux

To get additional detail on the tar command in Linux, you just need to type:

 tar -?

 

Netezza – [SQLCODE=HY000][Native=46] ERROR: External Table : count of bad input rows reached maxerrors limit

While helping a customer, we encountered the [SQLCODE=HY000][Native=46] ERROR, which was a new one for me. So here are a few notes to help the next unlucky soul who may run into the error.

Netezza Error Reason:

  • [SQLCODE=HY008][Native=51] Operation canceled; [SQLCODE=HY000][Native=46] ERROR: External Table : count of bad input rows reached maxerrors limit

What Does the Error Mean

  • In a nutshell, it means invalid data was submitted and could not be inserted.

What To Do

  • Basically, you need to go to the Netezza logs to see why the rows were rejected, resolve the input data errors, and then resubmit your transactions. The logs are temporary and reused, so you need to get to them before they are overwritten.

Where Are The Data Logs

  • In Linux, the logs can be found in /tmp:

For nzload Methods Logs

  • /tmp/<<database name>>.<<table name>>.nzlog
  • /tmp/<<database name>>.<<table name>>.nzbad

For External Table Load Logs

  • /tmp/<<external table name>>.log
  • /tmp/<<external table name>>.bad
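
If you control the load itself, the maxerrors threshold and log location are set on the external table definition. Here is a minimal sketch of what that looks like (the table, column, and file names are hypothetical):

CREATE EXTERNAL TABLE ext_stage_sales (
    sale_id INTEGER,
    sale_amt NUMERIC(12,2)
)
USING (
    DATAOBJECT ('/tmp/sales_feed.csv')
    DELIMITER ','
    MAXERRORS 100
    LOGDIR '/tmp'
);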

How to stop and restart Cognos Service from Linux command line

I don’t do this very often, but recently had to look this up to help out a project. Stopping and restarting Cognos from a Linux command line is relatively simple, just a couple of commands.

  • Log on to the reporting server as the root user or a non-root user with administrative privileges.
  • Find the path to the install bin directory. I use this find command, but you can do what works for you: find . -name "cogconfig.sh"
  • Launch a terminal and navigate to the bin directory as follows: <Cognos_Home>/bin64
  • Where <Cognos_Home> is the installation location of the Cognos® application.
  • Do one or both of the following, according to what you are attempting to do:
    • To start the service, enter the following command: ./cogconfig.sh -s
    • To stop the service, enter the following command: ./cogconfig.sh -stop
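
Putting it together, a restart is just the stop followed by the start (a sketch, assuming <Cognos_Home> is your installation path):

cd <Cognos_Home>/bin64
./cogconfig.sh -stop
./cogconfig.sh -s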

 

Surrogate Key File Effective Practices

Here are a few thoughts on effectively working with IBM InfoSphere Information Server (IIS) DataStage surrogate key files, which may prove useful for developers.

 

Placement

  • The main thing about placement is that it be in a consistent location. Developers and production support teams should not need to guess or look up where it is for every DataStage project. So, it is best to put the surrogate keys in the same base path, with each project having its own subfolder, to facilitate migrations and to reduce the possibility of human error. Here is the path structure, which is commonly used:

Path

  • /data/SRKY/<<Project Name>>

Parameter Sets

  • As a best practice, the surrogate key file path should be in a parameter set and the parameter used in the jobs, as needed.  This simplifies maintenance, if and when changes to the path are required, and during migrations.

Surrogate Key Parameter Set Screenshot – Example Parameter Tab

Surrogate Key Parameter Set Screenshot – Example Values Tab

Surrogate Key Job Parameter Example Path using Parameter

Permissions

  • DataStage must have permissions to:
    • The entire parent path
    • The project folder, and
    • The surrogate key files themselves.

To ensure DataStage has access to the path and surrogate key files, ensure:

  • The ‘dsadm’ (owner) and ‘dstage’ (group) have access to the folders in the path, with at least a "-rw-r--r--" (644) permissions level. Keeping the permissions to a minimum can, for obvious reasons, prevent inadvertent overwrites of the surrogate key files, avoiding some potentially serious cleanup. Example commands follow below.
  • The ‘dsadm’ (owner) and ‘dstage’ (group) have access to the surrogate key files
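
A minimal sketch of the commands to apply that ownership and permission level, using the common path structure above (adjust the project name and file names to your environment):

chown dsadm:dstage /data/SRKY/<<Project Name>>/*.srky
chmod 644 /data/SRKY/<<Project Name>>/*.srky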

Surrogate Key File Minimum Permissions

Productivity Tip – Quickly create a new surrogate key file

This productivity tip shows how to quickly create a new surrogate key file in Linux. This example leverages native capabilities of Red Hat Enterprise Linux (RHEL) to skip a few commands, by using an existing surrogate key file to create a new surrogate file with a minimum of keystrokes and command-line entries.

Creating a New Surrogate Key File From an Existing File

The basic process consists of just a few steps:

  1. Navigate to the location of your existing surrogate key files
  2. Copy an existing surrogate file
  3. Empty the new surrogate key file

Navigate to the location of your existing surrogate key files

This is a preparatory step; you will need to look at the path variable for the project you are working with to know where to go. The actual path to the surrogate files for your project can vary by project.

Copy an existing surrogate file

Assuming you have existing surrogate key files configured as needed, the copy (cp) command with the interactive and preserve options can eliminate the need to create the file and then set groups and permissions. The interactive (-i) option prevents you from overwriting an existing file, in case you made a filename typo, and the preserve (-p) option preserves the specified attributes (e.g. ownership and permissions).

Basic Command

  • Here is the command format with interactive and preserve; either format works
    • cp -ip <<FileName to Be Copied>> <<New Filename>>
  • Here is the command format with only preserve
    • cp -p <<FileName to Be Copied>> <<New Filename>>

Example Command

  • cp -ip srky blogexample.srky
Copy Surrogate Key With Permissions

Empty the new surrogate key file

Setting the newly created surrogate key file to null will empty the file, so DataStage can begin from the point configured in your DataStage job.

Basic Command

  • cat /dev/null > <<FileName>>

Example Command

  • cat /dev/null > blogexample.srky
Empty Surrogate Key File

Productivity Tip – Changing Owner and Groups on Surrogate Key File

This practice tip shows how to quickly update a surrogate key file’s owner and group in Linux. This example leverages native capabilities of Red Hat Enterprise Linux (RHEL) to skip a few commands, by setting both the owner and group of a surrogate key file with a single combined command.

Surrogate Key File Owners and Groups

To ensure DataStage has access to the path and surrogate key files, ensure the ‘dsadm’ (owner) and ‘dstage’ (group) have access to the surrogate key files.

Setting Surrogate Key File Owners and Groups

You can change the ownership and group of a surrogate file at the same time, in Linux, with the change owner (chown) command. To do this, navigate to the surrogate key path containing the file, then execute the combined chown command.

Command chown basic format

  • chown <<OWNER>>:<<Group>> <<File Name>>

Example chown command

  • chown dsadm:dstage Blogexampl.txt
Chown On Surrogate Key File

Netezza / PureData – How to find and kill table locks

Sometimes there is a need to find and/or kill (terminate) table locks so that application processes and user access can be restored. Doing this is relatively straightforward if you have access, and the appropriate permissions, on the Netezza PureData server.

How to find table locks on a Netezza database

  • Log into the Netezza server
  • From the command line, navigate to the Netezza directory (e.g. cd /nz)
  • On the command line, enter the show locks command

 

Show Locks Command (nz_show_locks) Syntax

nz_show_locks <db name> <tablename>

 

Example Show Locks Command (nz_show_locks)

nz_show_locks dashboard_staging stg_nz_query_history

Netezza PureData Kill Table Session Locks

How to kill table locks on a Netezza database

  • Perform the find locks steps above
  • Then, on the command line, enter the kill sessions command

Kill Sessions Command (nzsession) Syntax

nzsession subcmd [subcmd options]

Example Kill Sessions Command (nzsession)

nzsession abort -id  523662 -force
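
The session ID passed to abort comes from the lock listing above; you can also list all active sessions to cross-check the ID before killing it (a sketch, assuming your Netezza user and password are supplied via the environment or the -u/-pw options):

nzsession show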

How to clear a Surrogate Key file using Linux

Occasionally, the question comes up about how to clear/reset the surrogate key file from the Linux command line. It is a simple process really, but it should be done with care and only if you need the keys in the dimension to be reset to the beginning. A complete reset would require:

  • The target table to be truncated and,
  • All keys in use in facts to be removed, or reset after the fact, and/or the table truncated and reloaded,
  • The Surrogate Key file emptied, and
  • The ETL rerun.

Basic Command

  • cat /dev/null > <<FileName>>

Example Command

  • cat /dev/null > Season.srky
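
To confirm the reset, a quick listing should show a zero-byte file:

ls -l Season.srky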

DB2 JDBC ISJDBC.CONFIG Configuration

Here are a few pointers for building an IBM InfoSphere Information Server (IIS) isjdbc.config file for an IBM DB2 Universal Driver, Type 4.

Where to place JAR files

For InfoSphere Information Server installs, as a standard practice, create a custom jdbc folder in the install path, and install any downloaded jar file not already installed by other applications in that jdbc folder. Usually, the jdbc folder path looks something like this:

  • /opt/IBM/InformationServer/jdbc

CLASSPATH

  • db2jcc.jar
  • Classpath must have complete path and jar name

CLASS_NAMES

  • com.ibm.db2.jcc.DB2Driver

JAR Source URL

DB2 DEFAULT PORT

  • 50000

JDBC URL FORMAT

  • jdbc:db2://<<host>>[:<<port>>]/<<database name>>

JDBC URL EXAMPLE

jdbc:db2://127.0.0.1:50000/IADB

isjdbc.config EXAMPLE

CLASSPATH=/opt/IBM/InformationServer/ASBNode/lib/java/db2jcc.jar;

CLASS_NAMES=com.ibm.db2.jcc.DB2Driver;
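
As a quick sanity check that the jar is in place and readable, the DB2Jcc utility class bundled in the driver jar can report its version (a sketch; adjust the jar path to wherever you actually installed it):

java -cp /opt/IBM/InformationServer/jdbc/db2jcc.jar com.ibm.db2.jcc.DB2Jcc -version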

 

isjdbc.config FILE PLACEMENT

  • /opt/IBM/InformationServer/Server/DSEngine

Linux – Locate Command

While working with a Linux admin, he introduced me to the Linux ‘locate’ command, which until then I had not seen or used. The ‘locate’ command works much like ‘find’ and is a quick and easy way to find files on the system. We were using ‘locate’ to discover files within the server, and the command’s simplicity proved useful to me.

When used without any options, the locate command displays every absolute pathname, for which the user has access permission, that contains the name of the file and/or directory being searched for. So, it is important to know what rights your user has, or better yet, run the command as root or with sudo; otherwise, existing files can be omitted due to permission restrictions. Also, the scope of the results is broader and, usually, more complete than the find command.

Syntax for locate command

  • locate [options] name(s)

Example Locate command

When run as root, this command returns all occurrences of the ‘odbc.ini’ file and their absolute paths.

  • locate odbc.ini
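
Note that locate searches a prebuilt file-name database rather than the live filesystem, so recently created files may not show up until that database is refreshed (run as root):

updatedb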

Locate Command Results

Linux Locate Command Results

Same Search Utilizing the Find Command

Not only is the find command more complex for this purpose, but the results are narrower in the information they return.

Example Find command

  • find / -type f -name odbc.ini

Find Command Results

Linux Find Command Results

 

 

Netezza – JDBC ISJDBC.CONFIG Configuration

This JDBC information is based on the Netezza (7.2.0) JDBC driver for InfoSphere Information Server 11.5. So, here are a few pointers for building an IBM InfoSphere Information Server (IIS) isjdbc.config file.

Where to place JAR files

For InfoSphere Information Server installs, as a standard practice, create a custom jdbc folder in the install path, and install any downloaded jar file not already installed by other applications in that jdbc folder. Usually, the jdbc folder path looks something like this:

  • /opt/IBM/InformationServer/jdbc

CLASSPATH

  • nzjdbc3.jar
  • Classpath must have complete path and jar name

CLASS_NAMES

  • org.netezza.Driver

JAR Source URL

IBM Netezza Client Components V7.2 for Linux

 

File name

  • nz-linuxclient-v7.2.0.0.tar.gz

Unpack tar.gz

  • tar -zxvf nz-linuxclient-v7.2.0.0.tar.gz -C /opt/IBM/InformationServer/jdbc

NETEZZA DEFAULT PORT

  • 5480

JDBC URL FORMAT

  • jdbc:netezza://<<host>>:<<port>>/<<database name>>

JDBC URL EXAMPLE

  • jdbc:netezza://10.999.0.99:5480/dashboard

 

isjdbc.config EXAMPLE

CLASSPATH=/usr/jdbc/nzjdbc3.jar;/usr/jdbc/nzjdbc.jar;/usr/local/nz/lib/nzjdbc3.jar;

CLASS_NAMES=org.netezza.Driver;

 

isjdbc.config FILE PLACEMENT

  • /opt/IBM/InformationServer/Server/DSEngine

 

Data Warehouse – Effective Practices

Effective Practices

Effective practices are enablers, which can improve performance, data availability, environment stability, resource consumption, and data accuracy.

Use of an Enterprise Scheduler

While the scheduling service in InfoSphere Information Server (IIS) leverages the operating system (OS) scheduler, a common enterprise scheduler can provide these capabilities beyond those of a typical OS scheduler:

  • Centralized control, monitoring, and maintenance of job stream processes
  • Improved insight into and control of cycle processes
  • Improved intervention capabilities, including alerts, job stream suspension, auto-restarts, and upstream/downstream dependency monitoring
  • Reduced time-to-recovery and increased flexibility in recovery options
  • Improved ability to monitor and alert for mission-critical processes that may be delayed or failing
  • Improved ability to automate disparate process requirements within and across systems
  • Improved load balancing to optimize use of resources or to compensate for loss of a given resource
  • Improved scalability and adaptability to infrastructure or application environment changes

Use of Data Source Timestamps

When they exist or can be added to data, ‘created’ and ‘last updated’ timestamps can greatly reduce the impact of Change Data Capture (CDC) operations, especially if the data warehouse, data model, and load process store the last successful run time of CDC jobs. This reduces the number of rows required to be processed and reduces the load on the RDBMS and/or ETL application server. Leveraging ‘created’ and ‘last updated’ can also greatly reduce the processing time required to perform the same CDC processes.
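
A minimal sketch of the CDC predicate this enables, pulling only rows changed since the last successful run (the table and column names are hypothetical):

SELECT *
FROM src_orders
WHERE last_updated > <<last successful CDC run timestamp>>;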

Event Based Scheduling

Event based scheduling, when coupled with an enterprise scheduler, can increase data availability and distribute work opportunistically. Event based scheduling can allow all or part of a process stream to begin as soon as predecessor data sources have completed the requisite processes. This can allow processes to begin as soon as possible, which can reduce resource bottlenecks and contention, and potentially allows data to be made available earlier than a static time-based schedule would. Event based scheduling can also delay processing, should the source system’s requisite processing completion be delayed, thereby improving data accuracy in the receiving system.

Integrated RDBMS Maintenance

Integrating RDBMS maintenance into the process job stream can perform on-demand optimization as the processes move through their flow, improving performance. Maintenance items such as indexing, distribution, and grooming at key points ensure that the data structures are optimized for follow-on processes to consume. A brief example follows below.
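
For example, on Netezza/PureData the maintenance steps woven into a job stream might look like the following (the table name is hypothetical):

GROOM TABLE sales_fact RECORDS ALL;
GENERATE STATISTICS ON sales_fact;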

Application Server and Storage Space Monitoring and Maintenance

Monitoring and actively clearing disk space not only improves overall performance and reduces costs, but also improves application stability.

Data Retention Strategies

Data retention strategies are an often overlooked form of data maintenance, which deals with establishing policies to ensure only truly necessary data is kept, and that information which is no longer necessary is purged, to limit legal liability, limit data growth and storage costs, and improve RDBMS performance.

Use Standard Practices

Use of standard practices, both application and industry, allows experienced resources to more readily understand the major application activities, their relationships, dependencies, design, and code. This facilitates resourcing and support over the life cycle of the application.

 

How to get the current Status of a DB2 Database

To determine the current configuration of a DB2 database on a Linux server:

  • Sign on to the Linux server containing the database as the instance owner (e.g. db2inst1)
  • Run the following command to find out the current state of the database: db2 get db cfg for <<database name>>
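
For example, against the IADB database used in the JDBC URL example earlier, the command would be:

db2 get db cfg for IADB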

You should receive a response similar to this, but with additional values:

DB2 Current Database Configuration Example