How the IBM Common SQL Engine (CSE) Improves DB2

Common SQL Engine (CSE)

Today, new efficiencies and innovation are key to the success of any business – small, medium, or large. In the rapidly evolving field of data analytics, innovative approaches to handling data are particularly important, since data is the most valuable resource a business can have. The IBM Common SQL Engine delivers application and query compatibility that lets companies turn their data into actionable insights and unleash the power of their databases without constraints.

But, is this really important?

Yes. Many businesses have accumulated enormous amounts of data over the years. This data resides in higher volumes, in more locations throughout the enterprise – on-premises and in the cloud – and in greater variety. This data should be a huge advantage, providing enterprises with actionable insights. But, often, that doesn’t happen.

IBM Hybrid Data Management

With such a massive accumulation of complex legacy data, many organizations find it hard to decide what to do with it, or where to start. Migrating all of that data into new systems is simply a non-starter. As a solution, enterprises are turning to IBM Db2 – a hybrid, intuitive data approach that marries data and analytics seamlessly. IBM Db2 hybrid data management allows flexible deployment of data in the cloud and on-premises.

However, such levels of flexibility typically require organizations to rewrite or restructure the queries and applications that use this diverse, ever-changing data. These changes may even require licensing new software, which is costly and often unfeasible. To bridge this gap, the Common SQL Engine (CSE) comes into play.

How Is the IBM Common SQL Engine Positioning Db2 for the Future?

The IBM Common SQL Engine inserts a single layer of data abstraction at the data source. This means that, instead of migrating the data all at once, you can apply data analytics wherever the data resides – whether on private, public, or hybrid cloud – by using the Common SQL Engine as a bridge.

The IBM Common SQL Engine provides portability and consistency of SQL commands, meaning that SQL is functionally portable across multiple implementations. It allows seamless movement of workloads to the cloud and supports multiplatform integration and configurations regardless of programming language.

The Common SQL Engine is intended to be the heart of query and application compatibility. But it does much more.

Its compatibility extends beyond data analytic applications to include security, management, governance, data management, and other functionalities as well.

How does this improve the quality, flexibility, and portability of Db2?

By allowing integration across multiple platforms, workloads, and programming languages, the Common SQL Engine ultimately creates a “data without limits” environment for the Db2 hybrid data management family through:

  1. Query and application compatibility

The Common SQL Engine (CSE) ensures that users can write a query and be confident that it will work across the Db2 hybrid data management family of offerings. With the CSE, you can change your data infrastructure and location – on-cloud or on-premises – without having to worry about license costs and application compatibility.

  2. Data virtualization and integration

The Common SQL Engine has built-in data virtualization services that ensure you can access your data from all your sources. These services support the Db2 family of offerings, including IBM Db2, IBM Db2 Warehouse, and IBM Db2 Big SQL, among others.

These services also apply to the IBM Integrated Analytics System, Teradata, Oracle, PureData, and Microsoft SQL Server. You can also work seamlessly with open-source solutions such as Hive, and cloud sources such as Amazon Redshift. Such levels of integration are unprecedented.

By allowing users to pull data from Db2 data stores and integrate it with data from non-IBM stores using a single query, the Common SQL Engine places Db2 in an authoritative position compared to other data stores.
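
As a purely hypothetical sketch of what that can look like in practice (the schema, table, and nickname names below are invented, and the remote table is assumed to have already been exposed to Db2 through a federation/virtualization nickname), a single query can join local and remote data:

SELECT o.order_id,
       o.order_total,
       c.customer_name
FROM   db2schema.orders o            -- local Db2 table
JOIN   remote_ora.customers c        -- nickname over a non-IBM store (e.g., Oracle)
       ON o.customer_id = c.customer_id
WHERE  o.order_date >= '2018-01-01';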

  3. Flexible licensing

Licensing is one of the hardest nuts to crack, especially for organizations that rely on technologies such as the cloud to deliver their services. While application compatibility and data integration save you time, flexible licensing saves you money, on the spot.

The IBM Common SQL Engine allows flexible licensing, meaning that you can purchase one license model and deploy it wherever needed, or as your data architecture evolves. Using IBM FlexPoint licensing, you can purchase FlexPoints and use them across all Db2 data management offerings, all in one place.

Flexible licensing not only simplifies the adoption and exchange of platform capabilities, it also positions your business strategically by making it more agile. Your data managers can access the tools they need on the fly, without going through a lethargic and tedious procurement process.

IBM Db2 Data Management Family Is Supported by the Common SQL Engine (CSE)

IBM Db2 is a family of customizable, deployable databases that allows enterprises to leverage existing investments. IBM Db2 allows businesses to use any type of data, structured or unstructured, from a database or data warehouse. It provides the right data foundation with industry-leading data compression, on-premises and cloud deployment options, modern data security, robust performance for mixed workloads, and the ability to adjust and scale without redesigning.

The IBM Db2 family enables businesses to adapt and scale quickly and remain competitive without compromising security, risk levels, or privacy. It features:

  • Always-on availability
  • Deployment flexibility: on-premises, scale-on-demand, and private or public cloud deployments
  • Compression and performance
  • Embedded IoT technology that lets businesses act fast, on the fly

Some of the Db2 family offerings supported by the Common SQL Engine include:

  • Db2 Database
  • Db2 Hosted
  • Db2 Big SQL
  • Db2 on Cloud
  • Db2 Warehouse
  • Db2 Warehouse on Cloud
  • IBM Integrated Analytics System (IIAS)

Db2 Family Offerings and Beyond

Since the Common SQL Engine focuses mainly on data federation and portability, other non-IBM databases can also plug into the engine for SQL processing. These third-party offerings include:

  • Watson Data Platform
  • Oracle
  • Hadoop
  • Microsoft SQL Server
  • Teradata
  • Hive

Conclusion

The IBM Common SQL Engine allows organizations to fully use data analytics to future-proof their business while remaining agile and competitive. In fact, besides the benefit of having robust tools woven into the CSE, the engine offers superior analytics and machine-learning positioning. Data processing can now happen dramatically faster – 2X to 5X faster. The IBM Common SQL Engine adds important capabilities to Db2, including freedom of location, freedom of use, and freedom of assembly.

End Of Support For IBM InfoSphere 9.1.0

IBM Information Server (IIS)

End of Support for IBM InfoSphere Information Server 9.1.0

IBM InfoSphere Information Server 9.1.0 will reach End of Support on 2018-09-30.  If you are still on InfoSphere Information Server (IIS) 9.1.0, I hope you have a plan to migrate to an 11-series version soon.  InfoSphere Information Server (IIS) 11.7 would be worth considering if you don’t already own an 11-series license; it will allow you to take advantage of the evolving thin-client tools and other capabilities in the 2018 release pipeline without needing to perform another upgrade.

Related References

IBM Support, End of support notification: InfoSphere Information Server 9.1.0

IBM Support, Software lifecycle, InfoSphere Information Server 9.1.0

IBM Knowledge Center, Home, InfoSphere Information Server 11.7.0, IBM InfoSphere Information Server Version 11.7.0 documentation

Essbase Connector Error – Client Commands are Currently Not Being Accepted

DataStage Essbase Connector

While investigating a recent InfoSphere Information Server (IIS) DataStage Essbase Connector error, I found the explanations of the probable causes of the error not to be terribly meaningful.  So, now that I have run the error to ground, I thought it might be nice to jot down a quick note on the potential causes of the ‘Client Commands are Currently Not Being Accepted’ error, which I gleaned from the process.

Error Message Id

  • IIS-CONN-ESSBASE-01010

Error Message

An error occurred while processing the request on the server. The error information is 1051544 (message on contacting or from application:[<<DateTimeStamp>>]Local////3544/Error(1013204) Client Commands are Currently Not Being Accepted.

Possible Causes of The Error

This error is a problem with access to the Essbase object or with the security within the Essbase object.  It can be the result of multiple issues, such as:

  • Object doesn’t exist – the Essbase object didn’t exist in the location specified
  • Communications – the location is unavailable or cannot be reached
  • Path security – security prevents access to the Essbase object location
  • Essbase security – security within the Essbase object does not support the user or filter being submitted; the Essbase object security may also be corrupted or incomplete
  • Essbase object structure – the Essbase object was not structured to support the filter, or the Essbase filter is malformed for the current structure

Related References

IBM Knowledge Center, InfoSphere Information Server 11.7.0, Connecting to data sources, Enterprise applications, IBM InfoSphere Information Server Pack for Hyperion Essbase


What Are The DataStage / QualityStage Join Stages?

Three Stages Which Join Records

While chasing an error which applied only to join-type stages, I thought it might be nice to identify what the InfoSphere Information Server DataStage / QualityStage join stages are.  There are three of them, as you can see from the picture above:

  • Join Stage,
  • Lookup Stage,
  • And, Merge Stage.

All three stages join data based on the values of identified key columns.
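
Conceptually, each of these stages combines records from its input links on matching key values, much as a SQL join does; here is a rough SQL analogy (the table and column names are invented for illustration only):

SELECT o.order_id,
       o.customer_id,
       c.customer_name
FROM   orders_input o
INNER JOIN customers_reference c
       ON o.customer_id = c.customer_id;

The three stages differ mainly in how they handle memory, input sorting, and unmatched or reject records, which is why DataStage provides all three.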

Related References

IBM Knowledge Center, InfoSphere Information Server 11.7.0, InfoSphere DataStage and QualityStage, Developing parallel jobs, Processing Data, Lookup Stage

IBM Knowledge Center, InfoSphere Information Server 11.7.0, InfoSphere DataStage and QualityStage, Developing parallel jobs, Processing Data, Join Stage

IBM Knowledge Center, InfoSphere Information Server 11.7.0, InfoSphere DataStage and QualityStage, Developing parallel jobs, Processing Data, Merge Stage

https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.7.0/com.ibm.swg.im.iis.ds.parjob.dev.doc/topics/c_deeref_Merge_Stage.html

Parallel jobs on Windows fail with APT_IOPort::readBlkVirt;error

APT_IOPort::readBlkVirt Error Screenshot

This is a known error on Windows systems and applies to DataStage and QualityStage jobs using any or all of the three join-type stages (Join, Merge, and Lookup).

Error Message

  • <<Link name>>,0: APT_IOPort::readBlkVirt: read for block header, partition 0, [fd 4], returned -1 with errno 10,054 (Unknown error)

Message ID

  • IIS-DSEE-TFIO-00223

Applies To

  • Windows systems only
  • Parallel Engine jobs using any or all of the three join-type stages (Join, Merge, and Lookup); it does not apply to Server Engine jobs
  • InfoSphere Information Server (IIS), DataStage and QualityStage 9.1 and higher

The Fix

  • Add APT_NO_IOCOMM_OPTIMIZATION to the project in Administrator and set it to blank or 0 (I left it blank so it would not impact other jobs)
  • Add the environment variable to the job producing the error and set it to 1

What APT_NO_IOCOMM_OPTIMIZATION Does

  • Sets the use of shared memory as the transport type, rather than the default sockets transport type
  • Note that in most cases the sockets transport type is faster, so you likely will not want to set this across the project as the default for all jobs; it is best to apply it only to the problematic jobs

Related References

InfoSphere DataStage and QualityStage, Version 9.1 Job Compatibility

IBM Support, JR54078: PARALLEL JOBS ON WINDOWS FAIL WITH APT_IOPORT::READBLKVIRT; ERROR

IBM Support, Information Server DataStage job fails with unknown error 10,054.

 

DataStage – How to Pass the Invocation ID from one Sequence to another

DataStage Invocation ID Passing Pattern Overview

When you are controlling a chain of sequences in a job stream and taking advantage of reusable (multiple-instance) jobs, it is useful to be able to pass the Invocation ID from the master controlling sequence and have it passed down and assigned to the job run.  This can easily be done without needing to manually enter the values in each of the sequences, by leveraging the DSJobInvocationId variable.  For this to work:

  • The job must have ‘Allow Multiple Instance’ enabled
  • The parent sequence must supply the Invocation Id (the Invocation name must be entered on its Job Activity)
  • The receiving child sequence must have the invocation variable (DSJobInvocationId) entered
  • At runtime, an invocation-id instance of the multi-instance job will be generated with its own logs

Variable Name

  • DSJobInvocationId

Note

This approach allows for the reuse of jobs and the assignment of meaningful instance extension names, which are managed from a single point of entry in the object tree.

Related References: 

IBM Knowledge Center > InfoSphere Information Server 11.5.0

InfoSphere DataStage and QualityStage > Designing DataStage and QualityStage jobs > Building sequence jobs > Sequence job activities > Job Activity properties

DataStage – How to use single quoted parameter list in an Oracle Connector

Data Integration

While working with a client’s 9.1 DataStage installation, I ran into a situation where they wanted to parameterize a SQL where-clause list in an Oracle Connector stage, which honestly was not very straightforward to figure out.  First, if APT_OSL_PARAM_ESC_SQUOTE is not set and single quotes are used in the parameter, the job creates unquoted, invalid SQL when the parameter is populated.  Second, I found much of the available information confusing and/or incomplete in its explanation.   After some research and some trial and error, here is how I resolved the issue.  I’ll endeavor to be concise, but holistic, in my explanation.

When this Variable applies

Here is where I know this process applies; there may be other circumstances to which it is applicable, but I’m listing the ones with which I have recent experience.

Infosphere Information Server Datastage

  • Versions 9.1, 11.3, and 11.5

Oracle RDBMS

  • Versions 11g and 12c

Configuration process

Here is a brief explanation of the steps I used to implement the where clause as a parameter.  Please note that in this example I am using a job parameter to populate only a portion of the where clause; you can certainly pass the entire where clause as a parameter, if it is not too long.

Configure Project Variable in Administrator

  • Add APT_OSL_PARAM_ESC_SQUOTE to the project in Administrator
  • Populate the APT_OSL_PARAM_ESC_SQUOTE variable with a backslash ( \ )

APT_OSL_PARAM_ESC_SQUOTE Project Variable

Create job parameter

Following your project naming convention or standard practice (if your customer and/or project does not have established naming conventions), create the job parameter in the job. See the jp_ItemSource parameter in the image below.

Job Parameter In Oracle Connector

Add job parameter to Custom SQL in Select Oracle Connector Stage

Once the job parameter has been created, add it to the SQL statement of the job.

Job Parameter In SQL
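
As a rough sketch of what the custom SQL might look like (the table and column names here are invented for illustration; jp_ItemSource is the job parameter shown above, populated at run time with a single-quoted list such as 'A','B','C'):

SELECT ITEM_ID,
       ITEM_SOURCE,
       ITEM_DESCRIPTION
FROM   ITEM_DIM
WHERE  ITEM_SOURCE IN (#jp_ItemSource#)

With APT_OSL_PARAM_ESC_SQUOTE set as described above, the single quotes inside the parameter value survive substitution and the generated SQL remains valid.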

Related References

IBM Knowledge Center > InfoSphere Information Server 11.5.0

Connecting to data sources > Databases > Oracle databases > Oracle connector

IBM Support > Limitation of the Parameter APT_OSL_PARAM_ESC_SQUOTE on Plugins on Parallel Canvas

IBM Knowledge Center > InfoSphere Information Server 11.5.0

InfoSphere DataStage and Quality > Stage > Reference > Parallel Job Reference > Environment Variables > Miscellaneous > APT_OSL_PARAM_ESC_SQUOTE

 

How to know if your Oracle Client install is 32 Bit or 64 Bit

Oracle Database

 

Sometimes you just need to know if your Oracle Client install is 32 bit or 64 bit. But how do you figure that out? Here are two methods you can try.

The first method

Go to the %ORACLE_HOME%\inventory\ContentsXML folder and open the comps.xml file.
Look for <DEP_LIST> on the ~second screen.

If you see PLAT="NT_AMD64", then your Oracle Home is 64-bit.
If you see PLAT="NT_X86", then your Oracle Home is 32-bit.

It is possible to have both the 32-bit and the 64-bit Oracle Homes installed.

The second method

This method is a bit faster. Windows uses different lib directories for 32-bit and 64-bit software. Look under the ORACLE_HOME folder: if you see both a “lib” and a “lib32” folder, you have a 64-bit Oracle Client; if you see just the “lib” folder, you’ve got a 32-bit Oracle Client.


Oracle – How to get a list of user permission grants

IBM Infosphere Information Server (IIS)

Since the InfoSphere Information Server repository has to be installed manually with the scripts provided in the IBM software, sometimes you run into difficulties. So, here’s a quick script which I have found useful in the past to identify user permissions for the IAUSER on Oracle databases, to help run down discrepancies in user permissions.

 

SELECT *
FROM ALL_TAB_PRIVS
WHERE GRANTEE = 'iauser'

 

If we cannot run against the ALL_TAB_PRIVS view, then we can try the USER_TAB_PRIVS view:

 

SELECT *
FROM USER_TAB_PRIVS
WHERE GRANTEE = 'iauser'

 

Related References

Oracle Help Center > Database Reference > ALL_TAB_PRIVS view

Netezza / PureData – Table Describe SQL

Netezza / Puredata Table Describe SQL

If you want to describe a PureData / Netezza table in SQL, it can be done, but Netezza doesn’t have a describe command.  Here is a quick SQL statement which will give the basic structure of a table or a view.  Honestly, if you have Aginity, generating the DDL is faster and more informative, at least to me.  If you have permission to access nzsql, you can also use the slash commands (e.g. \d).

Example Netezza Table Describe SQL

select name as Table_name,
       owner as Table_Owner,
       createdate as Table_Created_Date,
       type as Table_Type,
       database as Database_Name,
       schema as Database_Schema,
       attnum as Field_Order,
       attname as Field_Name,
       format_type as Field_Type,
       attnotnull as Field_Not_Null_Indicator,
       attlen as Field_Length
from _v_relation_column
where name = '<<Table Name Here>>'
order by attnum;

 

Related References

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Command-line options for nzsql, Internal slash options

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza getting started tips, About the Netezza data warehouse appliance, Commands and queries, Basic Netezza SQL information, Commonly used nzsql internal slash commands

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL introduction, The nzsql command options, Slash options

 

 

Netezza / PureData – Substring Function Example

Netezza / PureData – Substring Function Example

The SUBSTRING (SUBSTR) function in Netezza / PureData provides the capability to parse character-type fields based on position within a character string.

Substring Functions Basic Syntax

SUBSTRING Function Syntax

SUBSTRING(<<CharacterField>>,<< StartingPosition integer>>, <<for Number of characters Integer–optional>>)

 

SUBSTR Function Syntax

SUBSTR(<<CharacterField>>, <<StartingPosition integer>>, <<for Number of characters Integer–optional>>)

 

Example Substring SQL

Netezza / PureData Substring Example

Substring SQL Used In Example

SELECT LOCATIONTEXT
-- From the left of the string
-- Using the SUBSTRING function
,'==SUBSTRING From the Left==' as Divider1
,SUBSTRING(LOCATIONTEXT,1,5) as Beginning_Using_SUBSTRING_LFT
,SUBSTRING(LOCATIONTEXT,7,6) as Middle_Using_SUBSTRING_LFT
,SUBSTRING(LOCATIONTEXT,15) as End_Using_SUBSTRING_LFT
,'==SUBSTR From the Left==' as Divider2
-- Using the SUBSTR function
,SUBSTR(LOCATIONTEXT,1,5) as Beginning_Using_SUBSTR_LFT
,SUBSTR(LOCATIONTEXT,7,6) as Middle_Using_SUBSTR_LFT
,SUBSTR(LOCATIONTEXT,15) as End_Using_SUBSTR_LFT
-- From the right of the string
,'==SUBSTRING From the Right==' as Divider3
,SUBSTRING(LOCATIONTEXT,LENGTH(LOCATIONTEXT)-18, 8) as Beginning_Using_SUBSTRING_RGT
,SUBSTRING(LOCATIONTEXT,LENGTH(LOCATIONTEXT)-9, 6) as Middle_Using_SUBSTRING_RGT
,SUBSTRING(LOCATIONTEXT,LENGTH(LOCATIONTEXT)-1) as End_Using_SUBSTRING_RGT
,'==SUBSTR From the Right==' as Divider4
,SUBSTR(LOCATIONTEXT,LENGTH(LOCATIONTEXT)-18, 8) as Beginning_Using_SUBSTR_RGT
,SUBSTR(LOCATIONTEXT,LENGTH(LOCATIONTEXT)-9, 6) as Middle_Using_SUBSTR_RGT
,SUBSTR(LOCATIONTEXT,LENGTH(LOCATIONTEXT)-1) as End_Using_SUBSTR_RGT
FROM BLOG.D_ZIPCODE
where STATE = 'PR'
AND CITY = 'REPTO ROBLES';

Related References

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Character string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.1.0

IBM Netezza Database User’s Guide, Netezza SQL basics, Functions and operators, Functions, Standard string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL command reference, Functions

Netezza / PureData – Substring Function On Specific Delimiter

Netezza / PureData – Substring Function On Specific Delimiter

The SUBSTRING (SUBSTR) function in Netezza / PureData provides the capability to parse character-type fields based on position within a character string.  However, it is possible, with a little creativity, to substring based on the position of a specific character in the string. This approach gives more flexibility to the substring function and makes it more useful in many cases. It works fine with either the SUBSTRING or SUBSTR function.  In this example, I used the POSITION function to provide the numbers for the substring command.

 

Example Substring SQL

Netezza PureData Substring Function On Specific Character In String

 

Substring SQL Used In Example

select LOCATIONTEXT
,position(',' in LOCATIONTEXT) as Comma_Position_In_String
-- Without adjustment
,SUBSTRING(LOCATIONTEXT,position(',' in LOCATIONTEXT)) as Substring_On_Comma
-- Adjusted to account for the extra space
,SUBSTRING(LOCATIONTEXT,position(',' in LOCATIONTEXT)+2) as Substring_On_Comma_Adjusted
,'==Breaking_Up_The_String==' as Divider
-- Breaking up the string
,SUBSTRING(LOCATIONTEXT,1, position(' ' in LOCATIONTEXT)-1) as Beginning_of_String
,SUBSTRING(LOCATIONTEXT,position(' ' in LOCATIONTEXT)+1, position(' ' in LOCATIONTEXT)-1) as Middle_Of_String
,SUBSTRING(LOCATIONTEXT,position(',' in LOCATIONTEXT)+2) as End_Of_String
FROM Blog.D_ZIPCODE
where STATE = 'PR'
AND CITY = 'REPTO ROBLES';

Related References

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Character string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.1.0

IBM Netezza Database User’s Guide, Netezza SQL basics, Functions and operators, Functions, Standard string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL command reference, Functions

Netezza / PureData – Position Function

Netezza / PureData Position Function

 

The POSITION function in Netezza is a simple enough function; it returns the position of a specified character within a string (char, varchar, nvarchar, etc.), or zero if the character is not found. The real power of this function comes when you embed it within a character function that requires a numeric argument, but the character’s position is inconsistent from row to row in a field.

The Position Function’s Basic Syntax

position(<<character or Character String>> in <<CharacterFieldName>>)

 

Example Position Function SQL

Netezza PureData Position Function

 

Position Function SQL Used in Example

select LOCATIONTEXT, CITY
,'==Position Function Return Values==' as Divider
,position(',' in LOCATIONTEXT) as Position_In_Nbr_String
,position('-' in LOCATIONTEXT) as Position_Value_Not_Found
,'==Position Combined with Substring Function==' as Divider2
,SUBSTRING(LOCATIONTEXT,position(',' in LOCATIONTEXT)+2) as Position_Used_in_Substring_Function
FROM Blog.D_ZIPCODE where STATE = 'MN' AND CITY = 'RED WING' limit 1;

 

 

Related References

IBM Knowledge Center, PureData System for Analytics, Version 7.1.0

IBM Netezza Database User’s Guide, Netezza SQL basics, Functions and operators, Functions, Standard string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL command reference, Functions

 

PureData / Netezza – What date/time ranges are supported by Netezza?

Date/Time ranges supported by Netezza

Here is a synopsis of the temporal ranges ( date, time, and timestamp), which Netezza / PureData supports.

  • Date – a month, day, and year; values range from January 1, 0001, to December 31, 9999; size: 4 bytes
  • Time – an hour, minute, and second to six decimal places (microseconds); values range from 00:00:00.000000 to 23:59:59.999999; size: 8 bytes
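
As a quick, runnable sketch (assuming only the built-in Netezza casts), the boundary values above can be selected directly as literals:

select '0001-01-01'::date as min_date,
       '9999-12-31'::date as max_date,
       '23:59:59.999999'::time as max_time;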

Related References

Temporal data types

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL basics, Data types, Temporal data types

Netezza date/time data type representations

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza user-defined functions, Data type helper API reference, Temporal data type helper functions, Netezza date/time data type representations

Date/time functions

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Date/time functions

Netezza / PureData – How to add a Foreign Key

DDL (Data Definition Language)

Adding foreign keys to tables in Netezza / PureData is a best practice, especially when working with dimensionally modeled data warehouse structures and with modern governance, integration (including virtualization), and presentation semantics (including reporting, business intelligence, and analytics).

Foreign Key (FK) Guidelines

  • A primary key must be defined on the table and field (or fields) to which you intend to link the foreign key
  • Avoid using distribution keys as foreign keys
  • Foreign Key field should not be nullable
  • Your foreign key link field(s) must be of the same format(s) (e.g. integer to integer, etc.)
  • Apply standard naming conventions to constraint name:
    • FK_<<Constraint_Name>>_<<Number>>
    • <<Constraint_Name>>_FK<<Number>>
  • Please note that foreign key constraints are not enforced in Netezza

Steps to add a Foreign Key

The process for adding foreign keys involves just a few steps:

  • Verify guidelines above
  • Alter table add constraint SQL command
  • Run statistics, which is optional, but strongly recommended

Basic Foreign Key SQL Command Structure

Here is the basic syntax for adding Foreign key:

ALTER TABLE <<Owner>>.<<NAME_OF_TABLE_BEING_ALTERED>>
ADD CONSTRAINT <<Constraint_Name>>_fk<<Number>>
FOREIGN KEY (<<Field_Name or Field_Name List>>) REFERENCES <<Owner>>.<<Target_FK_Table_Name>>(<<Field_Name or Field_Name List>>) <<On Update | On Delete>> <<action>>;

Example Foreign Key SQL Command

This is a simple, single-field example of a foreign key (FK):

 

ALTER TABLE Blog.job_stage_fact
ADD CONSTRAINT job_stage_fact_host_dim_fk1
FOREIGN KEY (hostid) REFERENCES Blog.host_dim(hostid) ON DELETE cascade ON UPDATE no action;
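
To cover the statistics step from the guidelines above, here is a minimal sketch, assuming the same Blog.job_stage_fact table used in the example:

GENERATE STATISTICS ON Blog.job_stage_fact;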

Related References

Alter Table

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL command reference, Alter Table, constraints

 

 

Database – What is a foreign key?

Acronyms, Abbreviations, Terms, And Definitions

Definition of a Foreign Key

  • A foreign Key (FK) is a constraint that references the unique primary key (PK) of another table.

Facts About Foreign Keys

  • Foreign keys act as a cross-reference between tables, linking the foreign key (child record) to the primary key (parent record) of another table, which establishes a link/relationship between the table keys
  • Foreign keys are not enforced by all RDBMS
  • The concept of referential integrity is derived from foreign key theory
  • Because foreign keys involve more than one table relationship, their implementation can be more complex than primary keys
  • A foreign-key constraint implicitly defines an index on the foreign-key column(s) in the child table; however, manually defining a matching index may improve join performance in some databases
  • SQL normally provides the following referential integrity actions for deletions when enforcing foreign keys (a generic SQL sketch follows the action descriptions below)

Cascade

  • The deletion of a parent (primary key) record may cause the deletion of corresponding foreign-key records.

No Action

  • Forbids the deletion of a parent (primary key) record if there are dependent foreign-key records.   No Action does not mean the foreign-key constraint is suppressed.

Set null

  • The deletion of a parent (primary key) record causes the corresponding foreign-key to be set to null.

Set default

  • The deletion of a parent (primary key) record causes the corresponding foreign keys to be set to a default value instead of null.
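
As a generic illustration of these actions (a sketch in standard SQL; the table and column names are invented, and not every RDBMS supports or enforces every action):

CREATE TABLE department (
    dept_id   INTEGER NOT NULL PRIMARY KEY,
    dept_name VARCHAR(50)
);

CREATE TABLE employee (
    emp_id    INTEGER NOT NULL PRIMARY KEY,
    dept_id   INTEGER,
    CONSTRAINT employee_department_fk1
        FOREIGN KEY (dept_id) REFERENCES department (dept_id)
        ON DELETE SET NULL   -- deleting a department sets employee.dept_id to null
        ON UPDATE NO ACTION  -- forbids parent key changes that would orphan employees
);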


What are the Core Capability of Infosphere Information Server?

Information Server (IIS) Core Capabilities

 

Three Core Capabilities of Information Server

InfoSphere Information Server (IIS) has three core capabilities:

  • Information Governance
  • Data Integration
  • Data Quality

What the Core Capabilities Provide

The three core capabilities translate into these high-level business processes:

Information Governance – Understand and collaborate

Provides a centrally managed repository and approach, which provides:

  • Information blueprints
  • Relationship discovery across data sources
  • Information technology (IT)-to-business mapping

Data Integration – Transform and deliver

A data integration capability, which provides:

  • Transformation
    • Massive scalability
    • Power for any complexity
    • Total traceability
  • Delivery
    • Data capture at any time
    • Delivery anywhere
    • Big data readiness

Data Quality – Cleanse and monitor

To turn data assets into trusted information:

  • Analysis & validation
  • Data cleansing
  • Data quality rules & management

Related References

IBM Knowledge Center, InfoSphere Information Server Version 11.5.0

Overview of IBM InfoSphere Information Server, Introduction to InfoSphere Information Server

 

 

 

Netezza / PureData – How to rebuild a Netezza view in Aginity

How To Generate View or table DDL in Aginity For Netezza

 

Rebuilding a Netezza view sometimes becomes necessary when the view’s source tables have changed underneath it.  Rebuilding a view can be done on Netezza or in Aginity. In Aginity, it is a simple process, assuming your user has permission to create or replace a view.  The process breaks down into just a few steps:

Generate the create / replace view SQL of the original view into the query window, if you don’t have it already

In the object browser:

  • Navigate to the Database and view you wish to rebuild
  • Select the view and right click
  • Select ‘Scripts’, then ‘DDL to Query window’

Make any updates to the create / replace view SQL

  • This step is not always necessary; sometimes the changes which invalidated the view did not actually impact the view’s code. If changes are necessary, make the needed updates to the SQL code.

Execute The code

  • This I usually do by choosing the 'Execute as a single batch' option.  Make sure the code executes successfully.

Verify the view

  • To verify the view, simply execute a select statement and make sure it executes without errors and/or warnings.
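
As a minimal sketch of the last two steps (the view, schema, and column names here are hypothetical), the regenerated DDL and the verification query would look something like:

CREATE OR REPLACE VIEW blog.my_report_view AS
SELECT customer_id,
       order_date,
       order_total
FROM   blog.order_fact;

-- Quick verification that the rebuilt view runs cleanly
SELECT * FROM blog.my_report_view limit 10;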


SFDC – Using a timestamp literal in a where clause

Salesforce Connector

Working with timestamp literals in the InfoSphere SFDC Connector SOQL is much like working with date literals.  So, here is a quick example which may save you some time.

SOQL Timestamp String Literals Where Clause Rules

Basically, the timestamp pattern is straightforward and similar to the process for dates, but there are some differences. The basic rules for a SOQL where clause are:

  • No quotes
  • No functions
  • No casting function; no cast is needed for the SOQL where clause to read the literal
  • It only applies to datetime fields
  • A timestamp identifier ‘T’ between the date and time portions
  • And ISO 8601 time notation

Example SOQL Timestamp String Literals

So, here are a couple of timestamp string literal examples in SOQL:

  • 1901-01-01T00:00:00-00:00
  • 2016-01-31T00:00:00-00:00
  • 9999-10-31T00:00:00-00:00

Example SQL with Timestamp String Literal Where Clause

 

Select e.Id,
       e.AccountId,
       e.StartDateTime
From Event e
WHERE e.StartDateTime > 2014-10-31T00:00:00-00:00

 

Related References

Salesforce Developer Documentation

Home, Developer Documentation, Force.com SOQL and SOSL Reference

https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select_dateformats.htm

Salesforce Workbench

Home, Technical Library, Workbench

W3C

Date Time Formats