Netezza / PureData – How to add comments on a field

Netezza / PureData Table Documentation Practices

The ‘COMMENT ON COLUMN’ statement provides the same self-documentation capability as ‘COMMENT ON TABLE’, but drives that capability down to the column (field) level.  This provides an opportunity to describe the purpose, business meaning, and/or source of a field to other developers and users.  The comment code is part of the DDL and can be migrated with the table structure DDL.  The statements can be run independently or, when working in Aginity for PureData System for Analytics, they can be run as a group with the table DDL, using the ‘Execute as a Single Batch’ (Ctrl+F5) command.

Basic ‘COMMENT ON field’ Syntax

  • The basic syntax to add a comment to a column is:

COMMENT ON COLUMN <<Schema.TableName.ColumnName>> IS ‘<<Descriptive Comment>>’;

Example ‘COMMENT ON Field’ Syntax

  • This is example syntax, which would need to be adjusted and applied to each column field (a batch sketch follows the example):

COMMENT ON COLUMN time_dim.time_srky IS ‘time_srky is the primary key and is a surrogate key derived from the date business/natural key’;
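
To illustrate the batch approach described above, the comment statements can be kept with the table DDL and executed together. The following sketch assumes a hypothetical calendar_dt column alongside the time_srky example:

CREATE TABLE time_dim
(
    time_srky   BIGINT NOT NULL,
    calendar_dt DATE   NOT NULL
)
DISTRIBUTE ON (time_srky);

COMMENT ON TABLE time_dim IS 'Calendar date dimension; one row per calendar day';
COMMENT ON COLUMN time_dim.time_srky IS 'time_srky is the primary key and is a surrogate key derived from the date business/natural key';
COMMENT ON COLUMN time_dim.calendar_dt IS 'calendar_dt is the date business/natural key';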

 

Related References

 

What are the Factors Affecting the Selection of a Data Warehouse Naming Convention?

Data Warehouse Naming Convention

Database Table

The primary factors affecting the choices in the creation of Data Warehouse (DW) naming convention policy standards are the type of implementation, pattern of the implementation, and any preexisting conventions.

Type of implementation

The type of implementation will affect your naming convention choices. Basically, this boils down to whether you are working with a Commercial-Off-The-Shelf (COTS) data warehouse or doing a custom build.

Commercial-Off-The-Shelf (COTS)

If it is a Commercial-Off-The-Shelf (COTS) warehouse, which you are modifying and/or enhancing, then it is very strongly recommended that you conform to the naming conventions of the COTS product.  However, you may want to add an identifier to the conventions to identify your custom objects.

Using this information as an exemplar:

  • FAV = Favinger, Inc. (Company Name – Custom Identifier)
  • GlobalSales = Global Sales (Subject)
  • MV = Materialized View (Object Type)

 

Suffix Pattern Naming Convention

<<Custom Identifier>>_<<Object Subject Name>>_<<Object Type>>

Example:  FAV_GlobalSales_MV

Prefix Pattern Naming Convention

<<Object Type>>_<<Custom Identifier>>_<<Object Subject Name>>

Example: MV_FAV_GlobalSales

Custom Data Warehouse Build

If you are creating a custom data warehouse from scratch, then you have more flexibility in choosing your naming convention.  However, you will still need to take into account a few factors to achieve the maximum benefit from your naming conventions.

  • What is the high-level pattern of your design?
  • Are there any preexisting naming conventions?

Data Warehouse Patterns

Your naming convention will need to take into account the overall intent and design pattern of the data warehouse. The objects and naming conventions of each pattern will vary, if for no other reason than the differences in the objects, their purposes, and the depth of their relationships.

High level Pattern of the Data Warehouse Implementation

The high-level pattern of your design, whether an Operational Data Store (ODS), Enterprise Data Warehouse (EDW), Data Mart (DM), or something else, will need to guide your naming convention, as the depth of logical and/or processing zones of each pattern will vary and have some industry generally accepted conventions.

Structural Pattern of the Data Warehouse Implementation

The structural pattern of your data warehouse design, whether Snowflake, 3rd Normal Form, or Star Schema, will also need to guide your naming convention, as the depth of relationships in each pattern will vary, have some industry generally accepted conventions, and will relate directly to your high-level data warehouse pattern.

Preexisting Conventions

An often omitted factor in data warehouse naming conventions is the set of preexisting conventions, which can have significant impacts from both an engineering and a political point of view. The sources of these conventions can vary and may or may not be formally documented.

A common source of naming convention conflict is preexisting implementations, which may not even be documented.  However, the conventions that existing system objects follow, and that consumers are familiar with, will need to be taken into account when assessing impacts to systems, political culture, user training, and the creation of a standard convention for your data warehouse.

The Relational Database Management System (RDBMS) in which you intend to build the data warehouse may also have generally accepted conventions, with which consumers may be familiar and about which they may have preconceived expectations (whether expressed or implied).

Change Management

Whatever data warehouse naming convention you choose, the naming conventions, along with the data warehouse design pattern assumptions, should be well documented and placed in a managed and readily accessible change management (CM) repository.

 

 

PureData – Table Effective Practices

IBM Netezza PureData Table Effective Practices

Database Table

Here are a few tips that can make a significant difference to the efficiency and effectiveness of developers and users by making information available to them when developing and creating analytic objects.  This information can also be very helpful to data modelers.  While some of these recommendations are not enforced by Netezza/PureData, that fact makes them no less helpful to your community.

Alter table to Identify Primary Keys (PK)

  • Visually helps developers and users know what the primary keys of the table are
  • Primary key information can also be imported as metadata by other IBM tools (e.g. InfoSphere, DataStage, Data Architect, Governance Catalog, Aginity, etc.)
  • The query optimizer will use these definitions to define efficient query execution plans (see the example below)
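
As a minimal sketch, assuming the time_dim example table from the comment section above, a primary key can be declared as follows (the constraint name is illustrative; Netezza records the constraint for documentation and optimization purposes but does not enforce it):

-- Declare the surrogate key as the table's primary key
ALTER TABLE time_dim ADD CONSTRAINT time_dim_pk PRIMARY KEY (time_srky);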

Alter table to Identify Foreign Keys (FK)

  • Illustrates table relationships for developers and users
  • Foreign key information can also be imported as metadata by other IBM tools (e.g. InfoSphere, DataStage, Data Architect, Governance Catalog, Aginity, etc.)
  • The query optimizer will use these definitions to define efficient query execution plans (see the example below)
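
A similar sketch for a foreign key, assuming a hypothetical sales_fact table that references time_dim (names are illustrative and, again, the constraint is declarative rather than enforced):

-- Document the relationship from the fact table to the date dimension
ALTER TABLE sales_fact ADD CONSTRAINT sales_fact_time_dim_fk FOREIGN KEY (time_srky) REFERENCES time_dim (time_srky);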

Limit Distribution Key to Non-Updatable Fields

  • This one seems obvious, but the problem occurs regularly if tables and optimizations are not properly planned; an error will be generated if an update is attempted against a field contained in the distribution key of a table (see the example below).
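
As a sketch, using the hypothetical sales_fact table from above, the distribution key is placed on a stable, non-updatable field rather than on a field that ETL processes may update:

CREATE TABLE sales_fact
(
    sale_id     BIGINT        NOT NULL,  -- stable, never-updated identifier
    time_srky   BIGINT        NOT NULL,  -- foreign key to time_dim
    sale_amount NUMERIC(18,2)            -- measure; may be restated by ETL
)
DISTRIBUTE ON (sale_id);                 -- distribute on the non-updatable field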

Use Not Null on Fields

  • Using ‘Not Null’ whenever the field data and ETL transformation rules can enforce it helps improve performance by reducing the number of null condition checks performed and reduces storage.

Use Consistent Field Properties

  • Using the same data type and field length in all tables with the same field name reduces the amount of interpretation/conversion required by the system, developers, and report SQL.

Schedule Table Optimizations

  • Work with your DBAs to determine the best scheduling time, system user, and priority of groom and generate statistics operations (see the sketch below). Keep in mind when the optimizations occur in relation to when users need to consume the data. All too often, this operation is not performed before users need the performance and/or is driven by DBA choice, without proper consideration of other processing performance needs.  This has proven especially true in data warehousing when the DBA does not have data warehousing experience and/or does not understand the load patterns of the ETL/ELT process.
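
As a hedged sketch of the scheduled maintenance described above, the statements themselves are simple; the value lies in running them at an appropriate time and priority (the sales_fact table name is illustrative):

-- Reclaim space from deleted/updated rows after the ETL load completes
GROOM TABLE sales_fact;

-- Refresh statistics so the optimizer has current information before users query the data
GENERATE STATISTICS ON sales_fact;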

Related Links

 

Data Modeling – Database Table Field Ordering Effective Practices

Database Table Field Ordering Effective Practices

Database Table

Field ordering can help the performance of inserts and updates and also keeps developers and users from having to search the entire table structure to be sure they have all the keys, etc. A recommended ordering is given below, followed by a sketch example.

Table Field Ordering

  1. Distribution field or fields; if no distribution field is set, the first field will be used by default.
  2. Primary Key Columns (including Parent and Child key fields)
  3. Foreign Key Columns (Not Null)
  4. Not Null Columns
  5. Nullable Columns
  6. Created Date Timestamp
  7. Modified (or Last Updated) Date Timestamp
  8. Large text Fields
  9. Large binary Columns or Binary Field references
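
As a sketch of the ordering above, a hypothetical customer dimension might be laid out as follows (names and types are illustrative):

CREATE TABLE customer_dim
(
    customer_srky  BIGINT       NOT NULL,  -- 1 and 2: distribution field and primary key
    region_srky    BIGINT       NOT NULL,  -- 3: foreign key column (not null)
    customer_nbr   VARCHAR(20)  NOT NULL,  -- 4: not null column (business/natural key)
    customer_name  VARCHAR(100),           -- 5: nullable column
    created_dt     TIMESTAMP,              -- 6: created date timestamp
    modified_dt    TIMESTAMP,              -- 7: modified date timestamp
    customer_notes VARCHAR(5000)           -- 8: large text field
)
DISTRIBUTE ON (customer_srky);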

Related References

Infosphere Data Architect Install Error CRIMC1029E or CRIMC1085E

The CRIMC1029E / CRIMC1085E errors may be caused by running the incorrect Infosphere Data Architect installer.  If you run the admin installer (launchpad.exe) on 64-bit Windows with insufficient privileges, the process will throw a CRIMC1029E / CRIMC1085E error.

What the Error Looks Like

 

Installation failed.

CRIMC1029E: Adding plug-in com.ibm.etools.cobol.win32_7.0.921.v20140409_0421 to repository E:\Program Files\IBM\SDPShared failed.

CRIMC1085E: Resumable download failed for: file:/E:/InstallIBM/INFO_DATA_ARCH_V9.1.2_WIN/disk2/ad/plugins/file000086.

‘plug-in com.ibm.etools.cobol.win32_7.0.921.v20140409_0421’ does not exist (at file:/E:/InstallIBM/INFO_DATA_ARCH_V9.1.2_WIN/disk2/ad/plugins/file000086).

Elapsed time 00:00.00.

Solution

Run the launchpad_win_nonadmin64.exe file, instead of the launchpad.exe file.

Reference Link:

Installing IBM InfoSphere Data Architect with the Installation Manager Install wizard

 

Infosphere Information Server (IIS) Component Alignment

Infosphere Information Server SDLC Alignment

In recent history, I have been asked several times to describe where different IIS components fit in the Software Development Lifecycle (SDLC) process.  The graphic above lists most of the more important IIS components in their relative SDLC relationships. However, it is important to note that these are not absolutes. Many applications may cross boundaries depending on the practices of the individual company, the application suite licensed by the company, and/or the applications implemented by the company.  For example, many components will participate in the sustainment phase of the SDLC, although I did not list them in that role. This is especially true if you are using the governance tools (e.g. Governance Catalog) and supporting your sustainment activities with modeling and development tools, such as Data Architect.

Related References

InfoSphere Information Server (IIS) Component Descriptions

Infosphere Information Server LaunchPad Page


Each IIS component has a primary function in the InfoSphere architecture, which can be synopsized as follows:

 

  • Blueprint Director – IBM InfoSphere Blueprint Director is aimed at the Information Architect designing solution architectures for information-intensive projects.
  • Cognos (if purchased) – Governance Dashboard (Framework Manager model provided by IBM), semantics, analytics, and reporting.
  • Data Architect – Data Architect is an enterprise data modeling and integration design tool. You can use it to discover, model, visualize, relate, and standardize diverse and distributed data assets, including dimensional models.
  • Data Click – Data Click is an exciting new capability that helps novices and business users retrieve data and provision systems easily in only a few clicks.
  • DataStage – DataStage is a data integration tool that enables users to move and transform data between operational, transactional, and analytical target systems.
  • Discovery – Discovery is used to identify the transformation rules that have been applied to source system data to populate a target. Once accurately defined, these business objects and transformation rules provide the essential input into information-centric projects.
  • FastTrack – FastTrack streamlines collaboration between business analysts, data modelers, and developers by capturing and defining business requirements in a common format and then transforming that business logic (Source-to-Target Mapping (STTM)) directly into DataStage ETL jobs.
  • Glossary Anywhere – Business Glossary Anywhere, its companion module, augments Governance Catalog with more ease-of-use and extensibility features.
  • Governance Catalog – The Governance Catalog includes business glossary assets (categories, terms, information governance policies, and information governance rules) and information assets.
  • Information Analyzer – Information Analyzer provides capabilities to profile and analyze data.
  • Information Services Director – Information Services Director provides a unified and consistent way to publish and manage shared information services in a service-oriented architecture (SOA).
  • Metadata Asset Manager – Import, export, and manage common metadata assets in the Metadata Repository and across applications.
  • Operations Console – Admin workspaces to investigate data, deploy applications and web services, and monitor schedules and logs.
  • QualityStage – QualityStage provides data cleansing capabilities to help ensure quality and consistency by standardizing, validating, matching, and merging information to create comprehensive and authoritative information.
  • Server Manager – Deployment tool to move, deploy, and control DataStage and QualityStage assets.

Related References

Data Modeling – What is Data Modeling?

Data Models

Data modeling is the documenting of data relationships, characteristics, and standards based on the intended use of the data.  Data modeling techniques and tools capture and translate complex system designs into easily understood representations of the data, creating a blueprint and foundation for information technology development and reengineering.

A data model can be thought of as a diagram that illustrates the relationships between data. Although capturing all the possible relationships in a data model can be very time-intensive, a well-documented model allows stakeholders to identify errors and make changes before any programming code has been written.

Data modelers often use multiple models to view the same data and ensure that all processes, entities, relationships and data flows have been identified.

There are several different approaches to data modeling, including:

Concept Data Model (CDM)

  • The Concept Data Model (CDM) identifies the high-level information entities and their relationships, which are organized in the Entity Relationship Diagram (ERD).

Logical Data Model (LDM)

  • The Logical Data Model (LDM) defines detailed business information (in business terms) within each Concept Data Model entity and is a refinement of the information entities of the Concept Data Model.  Logical data models are non-RDBMS-specific business definitions of the tables, fields, and attributes contained within each information entity, from which the Physical Data Model (PDM) and Entity Relationship Diagram (ERD) are produced.

Physical Data Model (PDM)

  • The Physical Data Model (PDM) provides the actual technical details of the model and database objects (e.g. table names, field names, etc.) to facilitate the creation of accurate detailed technical designs and the actual database (a sketch example follows).  Physical Data Models are RDBMS-specific definitions of the logical model, used to build the database, create deployable DDL statements, and produce the Entity Relationship Diagram (ERD).
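
As a hedged illustration of this refinement, a hypothetical ‘Customer’ entity in the LDM, with ‘Customer Identifier’ and ‘Customer Name’ attributes, might be realized in a Netezza/PureData PDM as deployable DDL:

CREATE TABLE customer
(
    customer_id   BIGINT       NOT NULL,  -- 'Customer Identifier' attribute from the LDM
    customer_name VARCHAR(100)            -- 'Customer Name' attribute from the LDM
);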

Related References

 

Why Use Materialized Views?


Database Views

If we consider Materialized Views (MV) in their simplest form, as a point-in-time stored query result, then materialized views serve two primary purposes: performance optimization and semantics simplification.

Performance Optimization

There are several ways in which materialized views can improve performance:

  • Reduce Database Workloads: materialized views can reduce database workloads by pre-assembling frequently used queries and, thereby, eliminating the repetitive execution of joins, aggregations, and filtering.
  • Facilitate Database Optimizers: in some databases, materialized views can be partitioned and indexed, and these structures are considered by the database optimizer. Also, in some databases, where more than one materialized view has been applied to a table, the database optimizer will consider all the associated materialized views when optimizing queries.
  • Reduce Network Workloads: by applying database replication and/or mass deployment techniques to materialized views, they can be distributed into closer proximity to the consumers, thereby reducing the data volume across the network and providing business continuation/disaster recovery capabilities, should the primary site become temporarily unavailable.
  • Precalculation and/or Preaggregation: performing calculation and aggregation of information upon creation of a materialized view eliminates the need to perform these functions on an on-demand basis as various consumers submit requests.
  • Data Subsets: by applying filters to eliminate unnecessary data (e.g. history data no longer in common reporting use) or unnecessary data attributes (e.g. unused columns or columns intended for other information purposes), the impact of filtering for these items is reduced and is effectively eliminated for consumers of the materialized view.

Semantics Simplification

Materialized views can be used to simplify the semantics provided to information consumers with ad hoc capabilities and/or to simplify the construction of reporting and analytics objects.  Depending on the database and/or integration tools used to create them, materialized views can simplify the consumer experience by:

  • Reduce or Eliminate Join Coding: when constructed, materialized views can perform the joins and populate the materialized view with the resulting values from the joined tables, thereby eliminating the need for the consumer to perform this function as an ad hoc user or in the semantics of reporting and analytics tools.
  • Pre-application of Business Rules: when constructed, materialized views can apply business rules to facilitate queries by adding indicator flags and pre-applying special business logic to the data, populating the materialized view with the resulting values and thus eliminating the need for the consumer to perform this function as an ad hoc user or in the semantics of reporting and analytics tools.
  • Precalculation and/or Pre-aggregation: performing calculation and aggregation of information upon creation of a materialized view allows the consumer to use the results without needing to build the calculations and/or aggregations into the ad hoc query or the semantics of reporting and analytics tools. This also helps to ensure information accuracy and consistency.
  • Data Subsets: by pre-filtering the data during the creation of the view, unnecessary or unused data and columns are not available to consumers and do not need to be filtered out of ad hoc queries or the semantics of reporting and analytics tools (a sketch example follows this list).
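
As a hedged sketch of both purposes, reusing the hypothetical sales_fact and time_dim tables and the FAV_GlobalSales_MV naming example from earlier, a pre-joining, pre-aggregating materialized view might look like the following. Capabilities and syntax vary by platform (Netezza/PureData materialized views, for example, are limited to single-table column projections), so treat this as generic SQL rather than any specific product’s implementation:

CREATE MATERIALIZED VIEW FAV_GlobalSales_MV AS
SELECT
    t.calendar_dt,                      -- pre-joined date attribute
    SUM(s.sale_amount) AS total_sales   -- pre-aggregated measure
FROM sales_fact s
JOIN time_dim t
  ON t.time_srky = s.time_srky
GROUP BY t.calendar_dt;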

Where do data models fit in the Software Development Life Cycle (SDLC) Process?

Data Model SDLC Relationship Diagram

In the classic Software Development Life Cycle (SDLC) process, Data Models are typically initiated, by model type, at key process steps and are maintained as data model detail is added and refinement occurs.

The Concept Data Model (CDM) is usually created in the Planning phase.  However, creation of the Concept Data Model can slide forward or backward, somewhat, within the System Concept Development, Planning, and Requirements Analysis phases, depending upon whether the application being modeled is a custom development effort or a modification of a Commercial-Off-The-Shelf (COTS) application.  The CDM is maintained, as necessary, through the remainder of the SDLC process.

The Logical Data Model (LDM) is created in the Requirement Analysis phase and is a refinement of the information entities of the Concept Data Model. The LDM is maintained, as necessary, through the remainder of the SDLC process.

The Physical Data Model (PDM) is created in the Design phase to facilitate creation of accurate detail technical designs and actual database creation. The PDM is maintained, as necessary, through the remainder of the SDLC process.

Related References: