PureData / Netezza – What date/time ranges are supported by Netezza?

SQL (Structured Query Language), Date/Time ranges supported by Netezza

Date/Time ranges supported by Netezza

Here is a synopsis of the temporal ranges (date, time, and timestamp) that Netezza / PureData supports.

Temporal Type | Supported Ranges | Size In Bytes

Date | A month, day, and year. Values range from January 1, 0001, to December 31, 9999. | 4 bytes

Time | An hour, minute, and second to six decimal places (microseconds). Values range from 00:00:00.000000 to 23:59:59.999999. | 8 bytes
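If you want to confirm these boundaries on your own system, a quick sketch like the following should work; it reuses the DATE() cast style used in other examples on this blog:

SELECT DATE('0001-01-01') AS earliest_date,
DATE('9999-12-31') AS latest_date,
CAST('23:59:59.999999' AS TIME) AS latest_time
FROM _V_DUAL;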

Related References

Temporal data types

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL basics, Data types, Temporal data types

Netezza date/time data type representations

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza user-defined functions, Data type helper API reference, Temporal data type helper functions, Netezza date/time data type representations

Date/time functions

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Date/time functions

Netezza / PureData – How to add a Foreign Key

DDL (Data Definition Language), Netezza PureData How to add a Foreign Key

DDL (Data Definition Language)

Adding a foreign key to tables in Netezza / PureData is a best practice, especially when working with dimensionally modeled data warehouse structures and with modern governance, integration (including virtualization), and presentation semantics (including reporting, business intelligence, and analytics).

Foreign Key (FK) Guidelines

  • A primary key must be defined on the table and field (or fields) to which you intend to link the foreign key
  • Avoid using distribution keys as foreign keys
  • Foreign Key field should not be nullable
  • Your foreign key link field(s) must be of the same format(s) (e.g. integer to integer, etc.)
  • Apply standard naming conventions to constraint name:
    • FK_<<Constraint_Name>>_<<Number>>
    • <<Constraint_Name>>_FK<<Number>>
  • Please note that foreign key constraints are not enforced in Netezza

Steps to add a Foreign Key

The process for adding foreign keys involves just a few steps:

  • Verify guidelines above
  • Alter table add constraint SQL command
  • Run statistics, which is optional, but strongly recommended (see the sketch after the example below)

Basic Foreign Key SQL Command Structure

Here is the basic syntax for adding a foreign key:

ALTER TABLE <<Owner>>.<<NAME_OF_TABLE_BEING_ALTERED>>

ADD CONSTRAINT <<Constraint_Name>>_FK<<Number>>

FOREIGN KEY (<<Field_Name or Field_Name List>>) REFERENCES <<Owner>>.<<Target_FK_Table_Name>>(<<Field_Name or Field_Name List>>) ON DELETE <<action>> ON UPDATE <<action>>;

Example Foreign Key SQL Command

This is a simple, one-field example of a foreign key (FK):

 

ALTER TABLE Blog.job_stage_fact

ADD CONSTRAINT job_stage_fact_host_dim_fk1

FOREIGN KEY (hostid) REFERENCES Blog.host_dim(hostid) ON DELETE cascade ON UPDATE no action;
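To cover the statistics step, follow the ALTER TABLE with a GENERATE STATISTICS command; this sketch simply reuses the example table above:

-- Regenerate statistics after the constraint change (optional, but strongly recommended)
GENERATE STATISTICS ON Blog.job_stage_fact;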

Related References

Alter Table

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL command reference, Alter Table, constraints

 

 

Databases – What is ACID?

Acronyms, Abbreviations, Terms, And Definitions, What is ACID?

Acronyms, Abbreviations, Terms, And Definitions

What does ACID mean in database technologies?

  • Concerning databases, the acronym ACID means: Atomicity, Consistency, Isolation, and Durability.

Why is ACID important?

  • Atomicity, Consistency, Isolation, and Durability (ACID) are important to databases, because ACID is a set of properties that guarantees that database transactions are processed reliably.

Where is the ACID Concept described?

  • Originally described by Theo Haerder and Andreas Reuter in 1983, in ‘Principles of Transaction-Oriented Database Recovery’, the ACID concept has been codified in ISO/IEC 10026-1:1992, Section 4.

What is Atomicity?

  • Atomicity ensures that there are only two possible results from a transaction that changes multiple data sets:
  • either the entire transaction completes successfully and is committed as a work unit
  • or, if part of the transaction fails, all transaction data can be rolled back, returning the database to its previously unchanged data set

What is Consistency?

  • To provide consistency, a transaction either creates a new valid data state or, if any failure occurs, returns all data to the state that existed before the transaction started. Also, if a transaction is successful, then all changes to the system will have been properly completed, the data saved, and the system left in a valid state.

What is Isolation?

  • Isolation keeps each transaction’s view of the database consistent while that transaction is running, regardless of any changes that are performed by other transactions, thus allowing each transaction to operate as if it were the only transaction.

What is Durability?

  • Durability ensures that the database will keep track of pending changes in such a way that the state of the database is not affected if transaction processing is interrupted. When restarted, the database must return to a consistent state, providing all previously saved/committed transaction data.

 


Databases – Database Isolation Level Cross Reference

Database Type Isolation Levels Cross Reference

Database And Tables

 

Here is a quick-reference table of some common database and/or connection types that use connection-level isolation, and their equivalent isolation levels. This quick reference may prove useful as a job aid when working with, and making decisions about, isolation level usage.

Database isolation levels

Data sources | Most restrictive isolation level | More restrictive isolation level | Less restrictive isolation level | Least restrictive isolation level

Amazon SimpleDB | Serializable | Repeatable read | Read committed | Read Uncommitted
dashDB | Repeatable read | Read stability | Cursor stability | Uncommitted read
DB2® family of products | Repeatable read | Read stability* | Cursor stability | Uncommitted read
Informix® | Repeatable read | Repeatable read | Cursor stability | Dirty read
JDBC | Serializable | Repeatable read | Read committed | Read Uncommitted
MariaDB | Serializable | Repeatable read | Read committed | Read Uncommitted
Microsoft SQL Server | Serializable | Repeatable read | Read committed | Read Uncommitted
MySQL | Serializable | Repeatable read | Read committed | Read Uncommitted
ODBC | Serializable | Repeatable read | Read committed | Read Uncommitted
Oracle | Serializable | Serializable | Read committed | Read committed
PostgreSQL | Serializable | Repeatable read | Read committed | Read committed
Sybase | Level 3 | Level 3 | Level 1 | Level 0

 


 

Database – What is a foreign key?

Acronyms, Abbreviations, Terms, And Definitions, DDL (Data Definition Language), What is a foreign key

Acronyms, Abbreviations, Terms, And Definitions

Definition of a Foreign Key

  • A foreign key (FK) is a constraint that references the unique primary key (PK) of another table.

Facts About Foreign Keys

  • Foreign keys act as a cross-reference between tables, linking the foreign key (child record) to the primary key (parent record) of another table, which establishes a link/relationship between the table keys
  • Foreign keys are not enforced by all RDBMS
  • The concept of referential integrity is derived from foreign key theory
  • Because Foreign keys involve more than one table relationship, their implementation can be more complex than primary keys
  • A foreign-key constraint implicitly defines an index on the foreign-key column(s) in the child table; however, manually defining a matching index may improve join performance in some databases
  • SQL normally provides the following referential integrity actions for deletions when enforcing foreign keys (an example sketch follows the actions below)

Cascade

  • The deletion of a parent (primary key) record may cause the deletion of corresponding foreign-key records.

No Action

  • Forbids the deletion of a parent (primary key) record, if there are dependent foreign-key records. No Action does not mean the foreign-key constraint is suppressed.

Set null

  • The deletion of a parent (primary key) record causes the corresponding foreign-key to be set to null.

Set default

  • The deletion of a parent (primary key) record causes the corresponding foreign keys to be set to a default value instead of null.
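To illustrate one of these actions, here is a hedged sketch of adding a foreign key with the Set null action; the table and column names are hypothetical, and some RDBMS (Netezza among them) will record but not enforce the constraint:

-- Hypothetical example: deleting an owner_dim parent row sets ownerid to null in the child table
ALTER TABLE Blog.job_stage_fact
ADD CONSTRAINT job_stage_fact_owner_dim_fk2
FOREIGN KEY (ownerid) REFERENCES Blog.owner_dim(ownerid) ON DELETE SET NULL;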


 

Netezza / PureData – How to rebuild a Netezza view in Aginity

How To Generate View or table DDL in Aginity For Netezza PureData

How To Generate View or table DDL in Aginity For Netezza

 

Rebuilding a Netezza view sometimes becomes necessary when the view’s source tables have changed underneath the view.  Rebuilding a view can be done on Netezza or in Aginity. In Aginity, it is a simple process, assuming your user has permissions to create or replace a view.  The process breaks down into just a few steps:

Generate the create / replace view SQL of the original view into the query window, if you don’t have it already

In the object browser:

  • Navigate to the Database and view you wish to rebuild
  • Select the view and right click
  • Select ‘Scripts’, then ‘DDL to Query window’

Make any updates to the create / replace view SQL

  • This step is not always necessary; sometimes the changes which invalidated the view did not actually impact the code of the view. If changes are necessary, make any updates to the SQL code.

Execute The code

  • I usually do this by choosing the ‘Execute as a single batch’ option.  Make sure the code executes successfully.

Verify the view

  • To verify the view, simply execute a select statement against it and make sure it executes without errors and/or warnings.
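For orientation, here is a minimal sketch of what the generated create / replace view DDL and the verification query might look like; the database, view, and column names are hypothetical placeholders:

CREATE OR REPLACE VIEW Blog.job_stage_v AS
SELECT hostid,
       jobid,
       stage_name
FROM Blog.job_stage_fact;

-- Verify the rebuilt view executes without errors or warnings
SELECT * FROM Blog.job_stage_v LIMIT 10;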


 

What are the dashDB isolation levels?

dashDB

dashDB

 

Isolation levels are part of the ACID (Atomicity, Consistency, Isolation, Durability) paradigm in database control.  Isolation levels allow developers and users to trade off consistency for a potential gain in performance. Therefore, it is essential to understand them and how they apply in Structured Query Language (SQL).  The dashDB RDBMS has four isolation levels:

Repeatable Read (RR)

  • The repeatable read (RR) isolation level locks all the rows that an application references during a unit of work (UOW). If an application issues a SELECT statement twice within the same unit of work, the same result is returned each time. Under RR, lost updates, access to uncommitted data, non-repeatable reads, and phantom reads are not possible.
  • Under RR, an application can retrieve and operate on the rows as many times as necessary until the UOW completes. However, no other application can update, delete, or insert a row that would affect the result set until the UOW completes. Applications running under the RR isolation level cannot see the uncommitted changes of other applications. This isolation level ensures that all returned data remains unchanged until the time the application sees the data, even when temporary tables or row blocking is used.
  • Every referenced row is locked, not just the rows that are retrieved. For example, if you scan 20 000 rows and apply predicates to them, locks are held on all 20 000 rows, even if, say, only 200 rows qualify. Another application cannot insert or update a row that would be added to the list of rows referenced by a query if that query were to be executed again. This prevents phantom reads.
  • Because RR can acquire a considerable number of locks, this number might exceed limits specified by the locklist and maxlocks database configuration parameters. To avoid lock escalation, the optimizer might elect to acquire a single table-level lock for an index scan, if it appears that lock escalation is likely. If you do not want table-level locking, use the read stability isolation level.
  • While evaluating referential constraints, dashDB might, occasionally, upgrade the isolation level used on scans of the foreign table to RR, regardless of the isolation level that was previously set by the user. This results in additional locks being held until commit time, which increases the likelihood of a deadlock or a lock timeout. To avoid these problems, create an index that contains only the foreign key columns, which the referential integrity scan can use instead.

Read Stability (RS)

  • The read stability (RS) isolation level locks only those rows that an application retrieves during a unit of work. RS ensures that any qualifying row read during a UOW cannot be changed by other application processes until the UOW completes, and that any change to a row made by another application process cannot be read until the change is committed by that process. Under RS, access to uncommitted data and non-repeatable reads are not possible. However, phantom reads are possible. Phantom reads might also be introduced by concurrent updates to rows where the old value did not satisfy the search condition of the original application but the new updated value does.
  • For example, a phantom row can occur in the following situation:
    • Application process P1 reads the set of rows n that satisfy some search condition.
    • Application process P2 then inserts one or more rows that satisfy the search condition and commits those new inserts.
    • P1 reads the set of rows again with the same search condition and obtains both the original rows and the rows inserted by P2.
  • In a dashDB environment, an application running at this isolation level might reject a previously committed row value, if the row is updated concurrently on a different member. To override this behavior, specify the WAIT_FOR_OUTCOME option.
  • This isolation level ensures that all returned data remains unchanged until the time the application sees the data, even when temporary tables or row blocking is used.
  • The RS isolation level provides both a high degree of concurrency and a stable view of the data. To that end, the optimizer ensures that table-level locks are not obtained until lock escalation occurs.
  • The RS isolation level is suitable for an application that:
    • Operates in a concurrent environment
    • Requires qualifying rows to remain stable for the duration of a unit of work
    • Does not issue the same query more than once during a unit of work, or does not require the same result set when a query is issued more than once during a unit of work

Cursor Stability (CS)

  • The cursor stability (CS) isolation level locks any row being accessed during a transaction while the cursor is positioned on that row. This lock remains in effect until the next row is fetched or the transaction terminates. However, if any data in the row was changed, the lock is held until the change is committed.
  • Under this isolation level, no other application can update or delete a row while an updatable cursor is positioned on that row. Under CS, access to the uncommitted data of other applications is not possible. However, non-repeatable reads and phantom reads are possible.
  • Cursor Stability (CS) is the default isolation level.
  • Cursor Stability (CS) is suitable when you want maximum concurrency and need to see only committed data.
  • In a dashDB environment, an application running at this isolation level may return or reject a previously committed row value, if the row is concurrently updated on a different member. The WAIT FOR OUTCOME option of the concurrent access resolution setting can be used to override this behavior.

Uncommitted Read (UR)

  • The uncommitted read (UR) isolation level allows an application to access the uncommitted changes of other transactions. Moreover, UR does not prevent another application from accessing a row that is being read, unless that application is attempting to alter or drop the table.
  • Under UR, access to uncommitted data, non-repeatable reads, and phantom reads are possible. This isolation level is suitable if you run queries against read-only tables, or if you issue SELECT statements only, and seeing data that has not been committed by other applications is not a problem.
  • Uncommitted Read (UR) works differently for read-only and updatable cursors.
  • Read-only cursors can access most of the uncommitted changes of other transactions.
  • Tables, views, and indexes that are being created or dropped by other transactions are not available while the transaction is processing. Any other changes by other transactions can be read before they are committed or rolled back. Updatable cursors operating under UR behave as though the isolation level were CS.
  • If an uncommitted read application uses ambiguous cursors, it might use the CS isolation level when it runs. To prevent this escalation, modify the cursors in the application program to be unambiguous and/or change the SELECT statements to include the FOR READ ONLY clause.
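Because dashDB follows the DB2 SQL dialect, the sketch below assumes the standard DB2 syntax for choosing an isolation level applies; the table name is a hypothetical placeholder:

-- Set the default isolation level for the current session
SET CURRENT ISOLATION = RS;

-- Or override the isolation level for a single statement with an isolation clause
SELECT order_id, order_status
FROM blog.order_fact
WITH UR;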

 

Related References

IBM dashDB

Accessing remote data sources with fluid queries on dashDB Local, Developing for federation

 

Netezza / PureData – Two ways to get Numeric Day of Year

Netezza PureData Numeric Day Of Year

Netezza PureData Numeric Day Of Year

 

Two Ways To Get Numeric Day of Year

In Netezza there are at least two ways to get the numeric day of year.  These are using:

  • The cast ‘DDD’ method (via TO_CHAR), or
  • The extract ‘doy’ method (via DATE_PART)

Example SQL for Numeric Day of Year

Here is a quick sample SQL of two ways to get the Numeric day of year in Netezza / PureData.

SELECT

CURRENT_DATE as "CURRENT_DATE",

---------------- Day Of Year Cast Method ----------------

TO_CHAR(CURRENT_DATE,'DDD') AS CALENDAR_DAY_OF_YEAR_NUMBER_CAST_METHOD,

---------------- Day Of Year Extract Method ----------------

DATE_PART('doy', current_date) AS CALENDAR_DAY_OF_YEAR_NUMBER_EXTRACT_METHOD

FROM _V_DUAL;
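If you prefer the ANSI extract() form, the following should be equivalent to the DATE_PART method above, assuming your Netezza release accepts the extract syntax:

SELECT EXTRACT(doy FROM CURRENT_DATE) AS CALENDAR_DAY_OF_YEAR_NUMBER
FROM _V_DUAL;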

 

Related References

Explicit and implicit casting

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, SQL statement grammar, Explicit and implicit casting

Extract date and time values

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL basics, Functions and operators, Functions, Extract date and time values


Netezza / PureData – How to calculate months between two dates

Netezza, PureData, Months Between Two Dates, function, SQL, difference between two dates

Netezza PureData Months Between Two Dates

Recently, I had reason to determine the months between two dates to test some data against a business rule/requirement.  I was pleasantly surprised to find that Netezza has an easy-to-use ‘months_between’ function to calculate the difference.

 

The Months Between SQL Function syntax

The months_between function uses two dates to perform the calculation.  Whether you want the output to be a positive or negative number determines the field order within the function.

  • For a positive number result, put the Newest Date Field first, separated by a comma, then Oldest Date Field
  • For a negative number result, just reverse the order putting the Oldest Date Field first, separated by a comma, then the Newest Date Field

The results will contain a decimal for the days of the month and you will need to round, based on your business requirements, to achieve a whole number.

 

SELECT months_between(<<DateField>>, <<DateField>>) as <<OutputFieldName>>

from <<TableName>>;

 

Example Months Between SQL

SELECT months_between(current_date, Date('2017-01-01')) as Difference_In_Months,

round(months_between(current_date, Date('2017-01-01'))) as Difference_In_Months_Rounded

from _v_dual;

 

Related References

Date/time functions

PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Date/time functions


Netezza / PureData Date – Difference in Days SQL

Netezza PureData Date Difference in Days SQL, Subtracting Non-inclusive dates, Subtracting inclusive dates

Netezza /PureData Date Difference in Days

Since Netezza does not have a datediff function, the ‘old school’ method of calculating the difference, in days, between dates must be used.

Subtracting Inclusive Dates

To subtract two dates and include the end date, as a day, in the calculation (1 day is added):

select (date('2015-12-31') - date('2015-01-01'))+1 as Inclusive_dates

From _v_dual;

 

Subtracting Non-inclusive dates

To subtract dates non-inclusively, simply subtract the dates:

select date('2015-12-31') - date('2015-01-01') as Non_Inclusive_dates

From _v_dual;

 

Combined Example SQL

select (date('2015-12-31') - date('2015-01-01'))+1 as Inclusive_dates,

date('2015-12-31') - date('2015-01-01') as Non_Inclusive_dates

From _v_dual;

 


Data Modeling – Fact Table Effective Practices

Database Table

Database Table

Here are a few guidelines for modeling and designing fact tables; an example DDL sketch follows the list.

Fact Table Effective Practices

  • The table naming convention should identify it as a fact table. For example:
    • Suffix Pattern:
      • <<TableName>>_Fact
      • <<TableName>>_F
    • Prefix Pattern:
      • FACT_<<TableName>>
      • F_<<TableName>>
  • Must contain a temporal dimension surrogate key (e.g. date dimension)
  • Measures should be nullable – this has an impact on aggregate functions (SUM, COUNT, MIN, MAX, AVG, etc.)
  • Dimension surrogate keys (srky) should have a foreign key (FK) constraint
  • Do not place the dimension processing in the fact jobs
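Pulling these guidelines together, here is a hedged DDL sketch; the table, column, and constraint names are hypothetical, and remember that Netezza records, but does not enforce, the foreign key constraints:

CREATE TABLE Blog.job_run_fact
(
job_id BIGINT NOT NULL,              -- degenerate key used as the distribution field
date_dim_srky INTEGER NOT NULL,      -- temporal dimension surrogate key
host_dim_srky INTEGER NOT NULL,      -- dimension surrogate key
run_duration_seconds INTEGER,        -- measure, nullable
rows_processed BIGINT,               -- measure, nullable
CONSTRAINT job_run_fact_date_dim_fk1 FOREIGN KEY (date_dim_srky) REFERENCES Blog.date_dim(date_dim_srky),
CONSTRAINT job_run_fact_host_dim_fk1 FOREIGN KEY (host_dim_srky) REFERENCES Blog.host_dim(host_dim_srky)
)
DISTRIBUTE ON (job_id);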


Data Modeling – Dimension Table Effective Practices

Database Table

Database Table

I’ve had these notes lying around for a while, so I thought I would consolidate them here.  So, here are a few guidelines to ensure the quality of your dimension table structures; an example sketch of the Unknown-row seeding follows the list.

Dimension Table Effective Practices

  • The table naming convention should identify it as a dimension table. For example:
    • Suffix Pattern:
      • <<TableName>>_Dim
      • <<TableName>>_D
    • Prefix Pattern:
      • Dim_<<TableName>>
      • D_<<TableName>>
  • Have a Primary Key (PK) assigned on the table surrogate key
  • Audit fields – Type 1 dimensions should:
    • Have a Created Date timestamp – When the record was initially created
    • Have a Last Update Timestamp – When the record was last updated
  • Job Flow: Do not place the dimension processing in the fact jobs.
  • Every Dimension should have a Zero (0), Unknown, row
  • Fields should be ‘NOT NULL’, replacing nulls with a zero (0) for numeric and integer type fields or a space (‘ ‘) for character type fields.
  • Keep dimension processing outside of the fact jobs
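As a small illustration of the Zero (0), Unknown, row guideline above, here is a hedged sketch; the table and column names are hypothetical:

-- Seed the Zero (0), Unknown, row in a hypothetical customer dimension
INSERT INTO Blog.customer_dim
(customer_srky, customer_name, created_date_ts, last_update_ts)
VALUES
(0, 'Unknown', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);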


 

 

Netezza / PureData – Substring Function

SQL (Structured Query Language), Netezza Puredata Substring Function, substr, substring

Netezza Puredata Substring Function

 

Substring is a common enough function in SQL; however, the exact language format used to perform this function can vary from one database to another.  So, here are a few quick notes on the substring format in Netezza / PureData.

What is the purpose of a substring?

SUBSTRING allows SQL to extract part of a string within a field, which in Netezza / PureData is based on positions within the string.

Substring Function Syntax

SUBSTRING(<<Input Field>> FROM <<start-position>> [FOR <<length in characters>>])

 

Notes:
  • Square brackets ‘[ ]’ indicate optional content, which can be useful under certain circumstances
  • If you do not specify the optional for-length clause [for <<length in Characters>>], then the remainder of the string will be returned

Example Substring SQL Using ‘For Length’ property


Example Substring SQL

SELECT  SUBSTRING(DD.DATE_SRKY FROM 1 FOR 4)  as  First_Four_Digits

, SUBSTRING(DD.DATE_SRKY FROM 5 FOR 2)  as  Middle_Two_Digits

, SUBSTRING(DD.DATE_SRKY FROM 7 FOR 2)  as  Last_Two_Digits

,DD.DATE_SRKY

FROM DATE_DIM DD

Order by DD.DATE_SRKY DESC

 

Example Substring SQL without the optional ‘For Length’ property


Example Substring SQL

SELECT  SUBSTRING(DD.DATE_SRKY FROM 5 )  as  Remainder_String_From_Position_5

, SUBSTRING(DD.DATE_SRKY FROM 7 )  as  Remainder_String_From_Position_7

,DD.DATE_SRKY

FROM DATE_DIM DD

Order by DD.DATE_SRKY DESC

 

Example Substring SQL which brings the parts together in a new format


 

Example Substring SQL

SELECT  SUBSTRING(DD.DATE_SRKY FROM 1 FOR 4)||'-'||

SUBSTRING(DD.DATE_SRKY FROM 5 FOR 2)||'-'||

SUBSTRING(DD.DATE_SRKY FROM 7 FOR 2) || ' 00:00:00' as Reassembled_As_Date_Format

,DD.DATE_SRKY

FROM DATE_DIM DD

Order by DD.DATE_SRKY DESC

 

Related References

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Character string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.1.0

IBM Netezza Database User’s Guide, Netezza SQL basics, Functions and operators, Functions, Standard string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL command reference, Functions

 

 

 

Netezza / PureData – Row number within a group

Netezza / PureData – row number within a group

SQL (Structured Query Language)

Sometimes it is useful to be able to number data within its groups, which can be quickly and easily done using the Row Number function in Netezza. However, you do need to pay particular attention to your ‘partition by’ (conceptually a group by operation) field list and field order, as well as your ‘order by’ field list.  Otherwise, you can get some bizarre results.  I find it helpful to think of them as having a parent/child relationship.  In this line of thought, your ‘partition by’ field list should usually have one less field than your ‘order by’.  This is not an absolute rule, but under normal circumstances your ‘order by’ field list should contain all the fields in your ‘partition by’ and, occasionally, provide an ordering direction (ascending or descending).

SQL Netezza Row Number Format

ROW_NUMBER() OVER( PARTITION BY <<Partition_By_Field_list>> ORDER BY <<Sort_By_Field_List [direction asc/desc]>> ) as ROW_NUM

 

Example Row Number SQL


SELECT

qh_database

,qh_sessionid

, qh_tend

,ROW_NUMBER() OVER( PARTITION BY qh_database, qh_sessionid

ORDER BY qh_database, qh_sessionid, qh_tend desc ) as ROW_NUM

FROM nz_query_history

where qh_database = 'system'

order by qh_database, qh_sessionid, qh_tend desc

 

Related References

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Character string functions

IBM Knowledge Center, PureData System for Analytics, Version 7.1.0

IBM Netezza Database User’s Guide, Netezza SQL basics, Functions and operators, Functions, Standard string functions


Netezza / PureData – Case Statement Example

Netezza PureData Case statement Function example

Netezza PureData Case Function

The Case statement is one which I occasionally use within Netezza/PureData SQL and stored procedures.  Basically, the Case function provides an ‘IF-THEN-ELSE’ decision capability.  This ‘IF-THEN-ELSE’ capability allows the evaluation of conditions and returns a value when the first condition is met, or else falls through to the default logic if the condition or conditions are not met.

Case Function Language Structure

  • Case Function Basic Structure

CASE

WHEN <search-condition-1> THEN <result-1>

WHEN <search-condition-2> THEN <result-2>

WHEN <search-condition-n> THEN <result-n>

ELSE <default-result>

END

  • Case Function Simple Structure

CASE

WHEN <search-condition-1> THEN <result-1>

ELSE <default-result>

END

Case Function Embedded in SQL Select Example


 

SQL Used in Embedded Example

Select

now() as "Time",

current_date as "Today",

(

case

when (DATE_PART('HOUR',NOW())>12 )

THEN date(current_date + cast('1 days' as interval))

ELSE current_date

END ) as "PROCESSING_DATE"

From ADMIN._v_dual;

 

Related References

IBM Knowledge Center, PureData System for Analytics, Version 7.2.1

IBM Netezza database user documentation, Netezza SQL basics, Functions and operators, Functions

 

 

Data Modeling – Database Table Field Ordering Effective Practices

Database Table Field Ordering Effective Practices

Database Table

Field ordering can help the performance of inserts and updates and, also, keeps developers and users from having to search the entire table structure to be sure they have all the keys, etc. An example DDL sketch follows the ordering list below.

Table Field Ordering

  1. Distribution Field Or Fields; if no distribution field is set, the first field will be used by default.
  2. Primary Key Columns (including Parent and Child key fields)
  3. Foreign Key Columns (Not Null)
  4. Not Null Columns
  5. Nullable Columns
  6. Created Date Timestamp
  7. Modified (or Last Updated) Date Timestamp
  8. Large text Fields
  9. Large binary Columns or Binary Field references
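Here is a hypothetical DDL sketch (the table and column names are assumptions) showing this ordering applied to a Netezza-style table; the numbered comments map back to the list above:

CREATE TABLE Blog.customer_account
(
customer_id BIGINT NOT NULL,           -- 1. distribution field
account_id BIGINT NOT NULL,            -- 2. primary key column
branch_id INTEGER NOT NULL,            -- 3. foreign key column (not null)
account_status_code CHAR(1) NOT NULL,  -- 4. not null column
account_nickname VARCHAR(100),         -- 5. nullable column
created_date_ts TIMESTAMP,             -- 6. created date timestamp
modified_date_ts TIMESTAMP,            -- 7. modified (last updated) date timestamp
account_notes VARCHAR(5000)            -- 8. large text field
)
DISTRIBUTE ON (customer_id);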


Where do data models fit in the Software Development Life Cycle (SDLC) Process?

Data Model SDLC Relationship Diagram

In the classic Software Development Life Cycle (SDLC) process, Data Models are typically initiated, by model type, at key process steps and are maintained as data model detail is added and refinement occurs.

The Concept Data Model (CDM) is, usually, created in the Planning phase.  However, creation of the Concept Data Model can slide forward or backward, somewhat, within the System Concept Development, Planning, and Requirements Analysis phases, depending upon whether the application being modeled is a custom development effort or a modification of a Commercial-Off-The-Shelf (COTS) application.  The CDM is maintained, as necessary, through the remainder of the SDLC process.

The Logical Data Model (LDM) is created in the Requirement Analysis phase and is a refinement of the information entities of the Concept Data Model. The LDM is maintained, as necessary, through the remainder of the SDLC process.

The Physical Data Model (PDM) is created in the Design phase to facilitate creation of accurate detail technical designs and actual database creation. The PDM is maintained, as necessary, through the remainder of the SDLC process.
