An Overview of DB2 Federation

DB2 Federation

Data analytics has changed: data is no longer manageable in relational databases alone. Data flows from various sources which are not all of the same format, which means it is not possible to store all data in the same repository. Some data is best suited for storing in relational databases, some for Apache Hadoop, while other data is best suited for NoSQL databases.

During data analysis, much of the time is spent trying to bring the distributed data together instead of obtaining insights. Db2 Federation has come to the rescue of data analysts: the federation concept in Db2 eliminates the need to consolidate data into a single repository and reduces the hassle of getting insights.

What is DB2 Federation?

DB2 federation is a data integration technology that permits remote database objects to be accessed as if they were local DB2 database objects. This technology connects multiple databases and makes them appear like one database.

How does DB2 federation work?

Federation allows you to access all of your data on multiple distributed databases using a single query. When implemented in an organization, this technology can be used to access data on any of the organization’s Db2 databases, whether local or in the cloud.
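
As a rough illustration (the wrapper, server, schema, and credential names below are made-up assumptions, not a definitive configuration, and the exact CREATE SERVER options vary by data source type and Db2 version), a federated setup typically registers a wrapper, a server definition, a user mapping, and a nickname, after which the remote table can be queried as if it were local:

CREATE WRAPPER DRDA;

CREATE SERVER SALES_SRV TYPE DB2/UDB VERSION 11.5 WRAPPER DRDA AUTHORIZATION "fed_user" PASSWORD "fed_pwd" OPTIONS (DBNAME 'SALESDB');

CREATE USER MAPPING FOR LOCAL_USER SERVER SALES_SRV OPTIONS (REMOTE_AUTHID 'remote_user', REMOTE_PASSWORD 'remote_pwd');

CREATE NICKNAME SALES.ORDERS_REMOTE FOR SALES_SRV.SALESADM.ORDERS;

SELECT COUNT(*) FROM SALES.ORDERS_REMOTE;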

Why use DB2 federation?

So, why should you use federation? This concept brings data of all formats into one virtual source. With data being retrieved from one virtual source, analyzing it becomes cost-effective and efficient.

What are the primary use cases for DB2 federation?

Merging of various sources of data

DB2 federation facilitates consolidating data from local and cloud sources to form one virtual data source. This eliminates the process of migrating data, which can be expensive and troublesome.

Increase the capacity of a repository beyond the fixed limits

Physical storage capacity is bound to have a limit, which is one reason you may find an organization has distributed its data across various repositories. With federation, the storage is virtual and spans those repositories, so you are not constrained by the fixed limit of any single system. This technology can greatly help you if your physical data store is running low on space.

Linking up to Db2 Warehouse on Cloud

People who use Db2 products can federate data from Db2 on Cloud and Db2 Warehouse on Cloud. This gives them a joint interface where they can access, add, query, and analyze data without encountering complex ETL processes. Better still, no additional code is required to execute all these processes. This makes it easy for people with limited technical know-how to use these products smoothly.

Split data across different servers

At times, you might choose to partition your data. With federation integration technology, partitioned data can be queried through a unified interface, as in the sketch below. Federation allows you to better balance your workloads, scale precise parts of an app, and create micro-services that work harmoniously.
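
As a hedged illustration of querying split data through one interface (the local table and join column below are invented for this example, and the nickname is the one sketched earlier), a single statement can join a local table to a nickname that points at a table on another server:

SELECT C.CUSTOMER_ID, C.CUSTOMER_NAME, O.ORDER_TOTAL
FROM LOCAL_SCHEMA.CUSTOMERS C
JOIN SALES.ORDERS_REMOTE O ON O.CUSTOMER_ID = C.CUSTOMER_ID;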

Generally, DB2 federation simplifies access to data by bringing it together into a single virtual source. This brings cost- and time-saving benefits. When you want to analyze data, you can get insights immediately instead of spending a lot of time querying separate repositories.

Public Cloud Versus Private Cloud

Cloud Computing

A public cloud strategy refers to a situation where you utilize cloud resources on a shared platform. Examples of shared or public cloud solutions include Microsoft Azure, Amazon Web Services, and Google Cloud. On the other hand, a private cloud strategy refers to a situation where you decide to have an infrastructure which is dedicated to serving your business. It is sometimes referred to as homegrown, where you employ experts to run the services so that your business can access different features. There are several advantages of using a public cloud over a private cloud which you should know before you make an informed decision on the right platform to invest in. Some of the benefits of the public cloud strategy include the following:

Availability and scale of Expertise

If you compare the public cloud and the private cloud services, the public cloud allows you to access more experts. Remember, the companies which offer the cloud services have enough employees who are ready to help several clients. In most cases, the other clients whom the service providers serve will not experience problems at the same time, which means human resources can be directed toward solving your urgent issue. You can as well scale up or down at any given time as the need arises, unlike a private cloud solution where you will have to invest in infrastructure each time you would like to upgrade.

Downgrading on a private cloud system can expose you to losses because you will leave some resources underutilized.

The volume of Technical Resources to apply

You access more technical resources on a public cloud platform. Remember, the companies which offer public cloud solutions are fully equipped with highly experienced experts. They also have the necessary tools and resources which they can apply to assure you of the best technical solutions each time you need them. It is unlike a private arrangement, where you will have to incur more costs if the technical challenges need advanced tools and highly qualified experts.

Price point

The price of a private cloud is high when compared to a public arrangement. If you are looking for ways you can save money, then the best way to go about it is to use a public cloud solution. In the shared platform, you will only pay for what you need. If you do not need a lot of resources at a given time, you can downgrade the services and enjoy fair prices. Services such as AWS offer great cost containment over time, which makes it easy to access the services at fair prices. For any business to grow, it should invest in the right package which brings a return on investment. The services offered by the public cloud systems allow businesses to save and grow. You should as well take into consideration other factors, such as ecosystems for cloud relationships, before you make an informed decision. There are some business models which prefer private cloud solutions, while others can work well under public cloud-based solutions.

Business Linux Operating Systems

Linux

Unix and Linux are different operating systems which have some common commands. The source code for Linux is freely available to the public, while Unix source code is not. The Linux operating system is free and open source, whereas some versions of Unix are proprietary and others are free/open source. The Linux operating system can be used for desktop systems and for servers, but Unix is mainly used in servers, mainframes, and high-end computers.

AIX is an operating system from IBM based on Unix. It is mainly designed for IBM’s workstations and server hardware platforms. HP-UX is the operating system from HP (Hewlett-Packard), also based on Unix.  HP-UX and AIX are stable operating systems compared with Linux. HP-UX and AIX are platform dependent and limited to their own hardware, but Linux is platform independent and can be used with almost any hardware. Since HP-UX and AIX are platform dependent, they are optimised for the hardware, and their performance is better than that of Linux operating systems; AIX outperforms Linux by 5 to 10 percent.

Unix

AT&T Unix started in the 1970s at Bell Labs, and newer versions of Unix have been developed since. In 1980, AT&T licensed Unix to third-party vendors, leading to the development of different variants, some of which are:

  • Berkeley Unix, FreeBSD and its variants
  • Solaris from Sun Microsystems
  • HP-UX from Hewlett-Packard
  • AIX from IBM
  • macOS from Apple
  • Microsoft’s Xenix

Unix installations are costlier since they require special hardware: macOS needs Apple computers, AIX needs IBM hardware, HP-UX needs HP hardware, and so on.

Linux

Linux is a free and open-source operating system based on Unix. The Linux kernel was first developed by Linus Torvalds in 1991. Linux was originally developed for personal computers, but nowadays it is used on personal computers as well as in server systems. Since it is very flexible, it can be installed on almost any hardware system; the Linux operating system is available for mobile phones, tablets, video game consoles, mainframes, and supercomputers. Some of the best distros for small business are:

  • CentOS
  • ClearOS
  • OpenSUSE
  • IPFire
  • Ubuntu
  • Manjaro
  • Slackware

Linux Vs Unix

The source code of Linux is freely available to its users; the source code of Unix is not available to the general public.
Linux primarily uses a graphical user interface with an optional command line interface; Unix primarily uses a command line interface.
Linux OS is portable and can be executed from different hard drives; Unix is not portable.
Linux is very flexible and can be installed on most home-based PCs; Unix has rigid hardware requirements and so cannot be installed on every other machine.
Linux is mainly used on home-based PCs, mobile phones, desktops, etc.; Unix is mainly used on server systems, mainframes, and high-end computers.
Different versions of Linux include Ubuntu, Debian, openSUSE, Red Hat, etc.; different versions of Unix include AIX, HP-UX, BSD, IRIX, etc.
Linux installation is economical and doesn’t require much specific or high-end hardware; Unix installation is comparatively costlier as it requires more specific hardware circuitry.
The file systems supported by Linux include xfs, ramfs, nfs, vfat, cramfs, ext3, ext4, ext2, ext1, ufs, autofs, devpts, and ntfs; the file systems supported by Unix include zfs, jfs, hfs, gpfs, xfs, and vxfs.
Linux is developed by an active worldwide Linux community; Unix was developed by AT&T developers.

Hardware architecture

Most commercial versions of UNIX are coded for specific hardware, like HP-UX for PA-RISC (Hewlett-Packard) and Itanium (Intel) machines, and AIX for Power processors (IBM). Since these distributions are limited to specific architectures, the developers can optimise their code for them to get maximum utilisation of resources.  Because they use proprietary hardware, Unix distributions are not cost effective.

  • HP-UX needs HP or Intel hardware
  • AIX needs IBM Hardware

The Linux operating system is not dependent on the hardware, so it can be installed on any server system which has a supported processor. Since the developers cannot assume the hardware architecture, they need to prepare the code for general hardware specifications, which is why the Linux operating system has lower performance than the commercial Unix variants.

  • Linux is open to all hardware

Licensing

The GNU General Public License (GPL), a form of copyleft, is used for the Linux kernel and many of the components from the GNU Project. Free software projects, although developed through collaboration, are often produced independently of each other. AIX and HP-UX use proprietary licenses.

HP-UX

Developer Hewlett-Packard Enterprise
Written in C
OS family Unix (System V)
Initial release 1982; 36 years ago
Kernel type Monolithic with dynamically loadable modules
License Proprietary

 

IBM AIX

Developer IBM
Written in C
OS family Unix
Initial release 1986; 32 years ago
Kernel type Monolithic with dynamically loadable modules
License Proprietary

 

Linux

Developer Community, Linus Torvalds
Written in Primarily C and assembly
OS family Unix-like
Initial release September 17, 1991; 26 years ago
Kernel type Monolithic (Linux kernel)
License GPLv2 and other free and open-source licenses (the name “Linux” is a trademark)

 

Software and Tools

Software and tools for Linux are general to all hardware, while Unix uses separate tools and software that leverage its specific hardware to get maximum performance. So, on comparable hardware configurations, Unix systems perform better than Linux systems. When considering cost, however, Linux gets more votes.

On AIX, the System Management Interface Tool (SMIT) is the tool used for package management; on HP-UX, it is the System Administration Manager (SAM). The Linux operating system uses rpm or dpkg, etc., depending on the variant.

Software Installation and Patch Management

Task           RH Linux        HP-UX                          AIX
Install        rpm -i file     swinstall -s depot software    installp -a [-c] FileSet
Update         rpm -U/F file   swinstall -s depot software    installp -a FileSet
List           rpm -q          swlist -l product              lslpp -L all
Remove         rpm -e          swremove software              installp -u FileSet
Patches        rpm -u          swinstall                      installp
List Patches   rpm -q -a       swlist -l product              lslpp -L all
Patch check    up2date/yum     security_patch_check           compare_report

File system

When talking about file systems, Linux scores more than the other Unix versions. Unix supports two or three file systems locally, but Linux supports almost all the file systems available on any operating system.

 

System Filesystem
AIX jfs, gpfs
HP-UX hfs, vxfs

Kernel

The kernel is the core of the operating system, and the source code of the kernel is not freely available for the commercial versions of Unix. For the Linux operating system, users can check and verify the code, and even modify it if required.

Support

The commercial versions of Unix come with a license cost. Since these operating systems are purchased, the vendor will provide technical support to the end users for the smooth running of the operating systems.

In the case of the Linux operating system, we need to use the open-source forums and community to get support from users and developers around the world, or hire freelancers to fix the issues.

Netezza / PureData – List of Views against a table

PureData Powered by Netezza

I have found myself using this simple, but useful, SQL from time to time in recent weeks to research different issues and to help with impact analysis.  So, I thought I would post it while I’m thinking about it.  It just gives a list of the views which use a table, which can be handy to know.  This SQL is simple and could be converted to an equi-join.  I used the like statement mostly because I sometimes want to know if there are other views of a similar nature in the same family (by naming convention) of tables.

Select All Fields From The _V_View

This is the simplest form of this SQL, which lists the views that use a table.

Select * from _v_view

where DEFINITION like '%<<TABLE_NAME>>%';

Select Minimal Fields From The _V_View

This is the version of the SQL which I normally use to list the views that use a table.

Select VIEWNAME, OWNER from _v_view

where DEFINITION like '%<<TABLE_NAME>>%';
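
For example, to list the views that reference a hypothetical CUSTOMER_DIM family of tables (the table name is illustrative only), the same pattern becomes:

Select VIEWNAME, OWNER from _v_view

where DEFINITION like '%CUSTOMER_DIM%';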

Major Cloud Computing Models

Cloud Computing

Cloud computing enables convenient, ubiquitous, measured, and on-demand access to a shared pool of scalable and configurable resources, such as servers, applications, databases, networks, and other services. Also, these resources can be provisioned and released rapidly with minimal interaction and management from the provider.

The rapidly expanding technology is rife with obscure acronyms, the major ones being SaaS, PaaS, and IaaS. These acronyms distinguish the three major cloud computing models discussed in this article. Notably, cloud computing can meet virtually any imaginable IT need in diverse ways. In effect, the cloud computing models are necessary to show the role that a cloud service provides and how that function is accomplished. The three main cloud computing paradigms are demonstrated in the diagram shown below.

The three major cloud computing models

Infrastructure as a Service (IaaS)

In the infrastructure as a service model, the cloud provider offers a service that allows users to process, store, share, and use other fundamental computing resources to run their software, which can include operating systems and applications. In this case, a consumer has minimal control over the underlying cloud infrastructure but has significant control over operating systems, deployed applications, storage, and some networking components, such as host firewalls.

Based on its description, IaaS can be regarded as the lowest-level cloud service paradigm, and possibly the most crucial one. With this paradigm, a cloud vendor provides pre-configured computing resources to consumers via a virtual interface. By definition, IaaS pertains to the underlying cloud infrastructure and does not include applications or an operating system; implementation of the applications, operating system, and some network components, such as host firewalls, is left up to the end user. In other words, the role of the cloud provider is to enable access to the computing infrastructure necessary to drive and support the consumer’s operating systems and application solutions.

In some cases, the IaaS model can provide extra storage for data backups or network bandwidth, or it can provide access to the kind of high-performance computing that was traditionally available only on supercomputers. IaaS services are typically provided to users through an API or a dashboard.

Features of IaaS

  • Users transfer the cost of purchasing IT infrastructure to a cloud provider
  • Infrastructure offered to a consumer can be increased or reduced depending on business storage and processing needs
  • The consumer will be saved from challenges and costs of maintaining hardware
  • Data stored in the cloud is highly available
  • Administrative tasks are virtualized
  • IaaS is highly flexible compared to other models
  • Highly scalable and available
  • Permits consumers to focus on their core business and transfer critical IT roles to a cloud provider

Infrastructure as a Service (IaaS)

IaaS Use Cases

A series of use cases can explore the above benefits and features afforded by IaaS. For instance, an organization that lacks the capital to own and manage their data centers can purchase an IaaS offering to achieve fast and affordable IT infrastructure for their business. Also, the IaaS can be expanded or terminated based on the consumer needs. Another set of companies that can deploy IaaS include traditional organizations seeking large computing power with low expenditure to run their workloads. IaaS model is also a good option for rapidly growing enterprises that avoid committing to specific hardware or software since their business needs are likely to evolve.

Popular IaaS Services

Major IT companies are offering popular IaaS services that are powering a significant portion of the Internet even without users realizing it.

Amazon EC2: Offers scalable and highly available computing capacity in the cloud. Allows users to develop and deploy applications rapidly without upfront investment in hardware

IBM’s SoftLayer: Cloud computing services offering a series of capabilities, such as computing, networking, security, storage, and so on, to enable faster and reliable application development. The solution features bare-metal, hypervisors, operating systems, database systems, and virtual servers for software developers.

NaviSite: offers application services, hosting, and managed cloud services for IT infrastructure

ComputeNext: the solution empowers internal business groups and development teams with DevOps productivity from a single API.

Platform as a Service (PaaS)

Platform as a service model involves the provision of capabilities that allow users to create their applications using programming languages, tools, services, and libraries owned and distributed by a cloud provider. In this case, the consumer has minimum control over the underlying cloud computing resources such as servers, storage, and operating system. However, the user has significant control over the applications developed and deployed on the PaaS service.

In PaaS, cloud computing is used to provide a platform onto which consumers deploy while developing, initializing, implementing, and managing their applications. This offering includes a base operating system and a suite of development tools and solutions. PaaS effectively eliminates the need for consumers to purchase, implement, and maintain the computing resources traditionally needed to build useful applications. Some people use the term ‘middleware’ to refer to the PaaS model since the offering sits comfortably between SaaS and IaaS.

Features of PaaS

  • PaaS service offers a platform for development, tasking, and hosting tools for consumer applications
  • PaaS is highly scalable and available
  • Offer cost effective and simple way to develop and deploy applications
  • Users can focus on developing quality applications without worrying about the underlying IT infrastructure
  • Business policy automation
  • Many users can access a single development service or tool
  • Offers database and web services integration
  • Consumers have access to powerful and reliable server software, storage capabilities, operating systems, and information and application backup
  • Allows remote teams to collaborate, which improves employee productivity

Platform as a Service (PaaS)

PaaS Use Cases

Software development companies and other enterprises that want to implement agile development methods can explore PaaS capabilities in their business models. Many PaaS services can be used in application development. PaaS development tools and services are always updated and made available via the Internet to offer a simple way for businesses to develop, test, and prototype their software solutions. Since developers’ productivity is enhanced by allowing remote workers to collaborate, PaaS consumers can rapidly release applications and get feedback for improvement. PaaS has led to the emergence of the API economy in application development.

Popular PaaS Offerings

There exist major PaaS services that are helping organizations to streamline application development. PaaS offering is delivered over the Internet and allows developers to focus more on creating quality and highly functional application while not worrying about the operating system, storage, and other infrastructure.

Google’s App Engine: the solution allows developers to build scalable mobile and web backends in any language in the cloud. Users can bring their own language runtimes, third-party libraries, and frameworks

IBM BlueMix: this PaaS solution from IBM allows developers to avoid vendor lock-in and leverage the flexible and open cloud environment using diverse IBM tools, open technologies, and third-party libraries and frameworks.

Heroku: the solution provides companies with a platform where they can build, deliver, manage, and scale their applications while abstracting and bypassing computing infrastructure hassles

Apache Stratos: this PaaS offering offers enterprise-ready quality service, security, governance, and performance that allows development, modification, deployment, and distribution of applications.

Red Hat’s OpenShift: a container application platform that offers operations and development-centric tools for rapid application development, easy deployment, scalability, and long-term maintenance of applications

Software as a Service (SaaS)

The software as a service model involves capabilities provided to users by using a cloud vendor’s application hosted and running on a cloud infrastructure. Such applications are conveniently accessible from different platforms and devices through a web browser, a thin client interface, or a program interface. In this model, the end user has minimal control of the underlying cloud-based computing resources, such as servers, the operating system, or the application capabilities.

SaaS can be described as a software licensing and delivery paradigm that features complete and functional software solutions provided to users on a metered or subscription basis. Since users access the application via browsers or thin-client and program interfaces, SaaS makes the host operating system insignificant in the operation of the product. As mentioned, the service is metered; some SaaS customers are billed based on their consumption, while others pay a flat monthly fee.

Features of SaaS

  • SaaS providers offer applications via a subscription structure
  • Users transfer the need to develop, install, manage, or upgrade applications to SaaS vendors
  • Applications and data are securely stored in the cloud
  • SaaS is easily managed from a central location
  • Remote servers are deployed to host the application
  • Users can access SaaS offerings from any location with Internet access
  • On-premise hardware failure does not interfere with an application or cause data loss
  • Users can reduce or increase use of cloud-based resources depending on their processing and storage needs
  • Applications offered via the SaaS model are accessible from any location and almost all Internet-enabled devices

Software as a Service (SaaS)

SaaS Use Cases

SaaS is a typical choice for many companies seeking to benefit from quality application usage without the need to develop, maintain, and upgrade the required components. Companies can acquire SaaS solutions for ERP, mail, office applications, collaboration tools, and more. SaaS is also crucial for small companies and startups that wish to launch e-commerce services rapidly but lack the time and resources to develop and maintain the software or buy servers to host the platform. SaaS is also used by companies with short-term projects that require collaboration from different members located remotely.

Popular SaaS Services

SaaS offerings are more widespread as compared to IaaS and PaaS. In fact, a majority of consumers use SaaS services without realizing it.

Office365: the cloud-based solution provides productivity software for subscribed consumers. Allows users to access Microsoft Office tools on various platforms, such as Android, MacOS, and Windows, etc.

Box: the SaaS offers secure file storage, sharing, and collaboration from any location and platform

Dropbox: modern application designed for collaboration and for creating, storing, and accessing files, docs, and folders.

Salesforce: the SaaS is among the leading customer relationship management platform that offers a series of capabilities for sales, marketing, service, and more.

Today, cloud computing models have revolutionized the way businesses deploy and manage computing resources and infrastructure. With the advent and evolution of the three major cloud computing models, that is, IaaS, PaaS, and SaaS, consumers will find a suitable cloud offering that satisfies virtually all IT needs. These models’ capabilities, coupled with competition from popular cloud computing service providers, will continue availing IT solutions for consumers demanding availability, enhanced performance, quality services, better coverage, and secure applications.

Consumers should review their business needs and do a cost-benefit analysis to approve the best model for their business. Also, consumers should conduct thorough workload assessment while migrating to a cloud service.

Big Data vs. Virtualization

Big Data Information Approaches

Globally, organizations are facing challenges emanating from data issues, including data consolidation, value, heterogeneity, and quality. At the same time, they have to deal with the aspect of Big Data. In other words, consolidating, organizing, and realizing the value of data in an organization has been a challenge over the years. To overcome these challenges, a series of strategies have been devised. For instance, organizations are actively leveraging methods such as Data Warehouses, Data Marts, and Data Stores to meet their data asset requirements. Unfortunately, the time and resources required to deliver value using these legacy methods are a distressing issue. In most cases, typical Data Warehouses applied for business intelligence (BI) rely on batch processing to consolidate and present data assets, and this traditional approach is affected by the latency of information.

Big Data

As the name suggests, Big Data describes a large volume of data that can either be structured or unstructured. It originates from business processes among other sources. Presently, artificial intelligence, mobile technology, social media, and the Internet of Things (IoT) have become new sources of vast amounts of data. In Big Data, the organization and consolidation matter more than the volume of the data. Ultimately, big data can be analyzed to generate insights that can be crucial in strategic decision making for a business.

Features of Big Data

The term Big Data is relatively new. However, the process of collecting and preserving vast amounts of information for different purposes has been there for decades. Big Data gained momentum recently with the three V’s features that include volume, velocity, and variety.

Volume: First, businesses gather information from a set of sources, such as social media, day-to-day operations, machine to machine data, weblogs, sensors, and so on. Traditionally, storing the data was a challenge. However, the requirement has been made possible by new technologies such as Hadoop.

Velocity: Another defining nature of Big Data is that it flows at an unprecedented rate that requires real-time processing. Organizations are gathering information from RFID tags, sensors, and other objects that need timely processing of data torrents.

Variety: In modern enterprises, information comes in different formats. For instance, a firm can gather numeric and structured data from traditional databases as well as unstructured emails, video, audio, business transactions, and texts.

Complexity: As mentioned above, Big Data comes from diverse sources and in varying formats. In effect, it becomes a challenge to consolidate, match, link, cleanse, or modify this data across an organizational system. Unfortunately, Big Data opportunities can only be explored when an organization successfully correlates relationships and connects multiple data sets to prevent it from spiraling out of control.

Variability: Big Data can have inconsistent flows within periodic peaks. For instance, in social media, a topic can be trending, which can tremendously increase collected data. Variability is also common while dealing with unstructured data.

Big Data Potential and Importance

The vast amount of data collected and preserved on a global scale will keep growing. This fact implies that there is more potential to generate crucial insights from this information. Unfortunately, due to various issues, only a small fraction of this data actually gets analyzed. There is a significant and untapped potential that businesses can explore to make proper and beneficial use of this information.

Analyzing Big Data allows businesses to make timely and effective decisions using raw data. In reality, organizations can gather data from diverse sources and process it to develop insights that can aid in reducing operational costs, production time, innovating new products, and making smarter decisions. Such benefits can be achieved when enterprises combine Big Data with analytic techniques, such as text analytics, predictive analytics, machine learning, natural language processing, data mining and so on.

Big Data Application Areas

Practically, Big Data can be used in nearly all industries. In the financial sector, a significant amount of data is gathered from diverse sources, which requires banks and insurance companies to innovate ways to manage Big Data. This industry aims at understanding and satisfying their customers while meeting regulatory compliance and preventing fraud. In effect, banks can exploit Big Data using advanced analytics to generate insights required to make smart decisions.

In the education sector, Big Data can be employed to make vital improvements on school systems, quality of education and curriculums. For instance, Big Data can be analyzed to assess students’ progress and to design support systems for professors and tutors.

Healthcare providers, on the other hand, collect patients’ records and design various treatment plans. In the healthcare sector, practitioners and service providers are required to offer accurate and timely treatment that is transparent to meet the stringent regulations in the industry and to enhance the quality of life. In this case, Big Data can be managed to uncover insights that can be used to improve the quality of service.

Governments and different authorities can apply analytics to Big Data to create the understanding required to manage social utilities and to develop solutions necessary to solve common problems, such as city congestion, crime, and drug use. However, governments must also consider other issues such as privacy and confidentiality while dealing with Big Data.

In manufacturing and processing, Big Data offers insights that stakeholders can use to efficiently use raw materials to output quality products. Manufacturers can perform analytics on big data to generate ideas that can be used to increase market share, enhance safety, minimize wastage, and solve other challenges faster.

In the retail sector, companies rely heavily on customer loyalty to maintain market share in a highly competitive market. In this case, managing big data can help retailers to understand the best methods to utilize in marketing their products to existing and potential consumers, and also to sustain relationships.

Challenges Handling Big Data

With the introduction of Big Data, the challenge of consolidating and creating value on data assets becomes magnified. Today, organizations are expected to handle increased data velocity, variety, and volume. It is now a business necessity to deal with traditional enterprise data and Big Data. Traditional relational databases are suitable for storing, processing, and managing low-latency data. Big Data has increased volume, variety, and velocity, making it difficult for legacy database systems to efficiently handle it.

Failing to act on this challenge implies that enterprises cannot tap the opportunities presented by data generated from diverse sources, such as machine sensors, weblogs, social media, and so on. On the contrary, organizations that will explore Big Data capabilities amidst its challenges will remain competitive. It is necessary for businesses to integrate diverse systems with Big Data platforms in a meaningful manner, as heterogeneity of data environments continue to increase.

Virtualization

Virtualization involves turning physical computing resources, such as databases and servers, into multiple systems. The concept consists of simulating the function of an IT resource in software, making it identical to the corresponding physical object. The virtualization technique uses abstraction to make a software application appear and operate like hardware, providing a series of benefits ranging from flexibility and scalability to performance and reliability.

Typically, virtualization is made possible using virtual machines (VMs) implemented on microprocessors with the necessary hardware support and OS-level implementations to enhance computational productivity. VMs offer additional convenience, security, and integrity with little resource overhead.

Benefits of Virtualization

The economics of wide-scale functional virtualization are achievable with available technologies, and reliability can be improved by employing virtualization offered by cloud service providers on a fully redundant, standby basis. Traditionally, organizations would deploy several servers to operate at a fraction of their capacity to meet increased processing and storage demands. These requirements resulted in increased operating costs and inefficiencies. With the introduction of virtualization, software can be used to simulate the functionality of hardware. In effect, businesses can significantly reduce the likelihood of system failures. At the same time, the technology significantly reduces the capital expense components of IT budgets. In the future, more resources will be spent on operating expenses than on acquisition expenses, and company funds will be channeled to service providers instead of purchasing expensive equipment and hiring local personnel.

Overall, virtualization enables IT functions across business divisions and industries to be performed more efficiently, flexibly, inexpensively, and productively. The technology meaningfully eliminates expensive traditional implementations.

Apart from reducing capital and operating costs for organizations, virtualization minimizes and eliminates downtime. It also increases IT productivity, responsiveness, and agility. The technology provides faster provisioning of resources and applications. In case of incidents, virtualization allows fast disaster recovery that maintains business continuity.

Types of Virtualization

There are various types of virtualization, such as a server, network, and desktop virtualization.

In server virtualization, more than one operating system runs on a single physical server to increase IT efficiency, reduce costs, achieve timely workload deployment, improve availability and enhance performance.

Network virtualization involves reproducing a physical network to allow applications to run on a virtual system. This type of virtualization provides operational benefits and hardware independence.

In desktop virtualization, desktops and applications are virtualized and delivered to different divisions and branches in a company. Desktop virtualization supports outsourced, offshore, and mobile workers, who can access simulated desktops on tablets and iPads.

Characteristics of Virtualization

Some of the features of virtualization that support the efficiency and performance of the technology include:

Partitioning: In virtualization, several applications, database systems, and operating systems are supported by a single physical system since the technology allows partitioning of limited IT resources.

Isolation: Virtual machines can be isolated from the physical systems hosting them. In effect, if a single virtual instance breaks down, the other machine, as well as the host hardware components, will not be affected.

Encapsulation: A virtual machine can be presented as a single file while abstracting other features. This makes it possible for users to identify the VM based on a role it plays.

Data Virtualization – A Solution for Big Data Challenges

Virtualization can be viewed as a strategy that helps derive information value when needed. The technology can be used to add a level of efficiency that makes big data applications a reality. To enjoy the benefits of big data, organizations need to abstract data from its different repositories. In other words, virtualization can be deployed to provide the partitioning, encapsulation, and isolation that abstracts the complexities of Big Data stores and makes it easy to integrate data from multiple stores with other data from systems used in an enterprise.

Virtualization enables ease of access to Big Data. The two technologies can be combined and configured using the software. As a result, the approach makes it possible to present an extensive collection of disassociated and structured and unstructured data ranging from application and weblogs, operating system configuration, network flows, security events, to storage metrics.

Virtualization improves storage and analysis capabilities on Big Data. As mentioned earlier, the current traditional relational databases are incapable of addressing growing needs inherent to Big Data. Today, there is an increase in special purpose applications for processing varied and unstructured big data. The tools can be used to extract value from Big Data efficiently while minimizing unnecessary data replication. Virtualization tools also make it possible for enterprises to access numerous data sources by integrating them with legacy relational data centers, data warehouses, and other files that can be used in business intelligence. Ultimately, companies can deploy virtualization to achieve a reliable way to handle complexity, volume, and heterogeneity of information collected from diverse sources. The integrated solutions will also meet other business needs for near-real-time information processing and agility.

In conclusion, it is evident that the value of Big Data comes from processing information gathered from diverse sources in an enterprise. Virtualizing big data offers numerous benefits that cannot be realized while using physical infrastructure and traditional database systems. It provides simplification of Big Data infrastructure that reduces operational costs and time to results. Shortly, Big Data use cases will shift from theoretical possibilities to multiple use patterns that feature powerful analytics and affordable archival of vast datasets. Virtualization will be crucial in exploiting Big Data presented as abstracted data services.

 

Netezza / PureData – How to replace or trim CHAR(0), i.e. NULL, characters in a field

PureData Powered by Netezza

Occasionally, one runs into the problem of hidden field values breaking join criteria.  I have had to clean up bad archive and conversion data with hidden characters several times over the last couple of weeks, so I thought I might as well capture this note for future use.

I tried the Replace command, which is the prevalent answer on the web for this Netezza issue, but my client’s version does not support that command.  So, I needed to use the Translate command instead to accomplish it.  It took a couple of searches of the usual bad actors to find the character causing the issue, which on this day was chr(0).  Here is a quick mockup of the command I used to solve this issue.

Example Select Statement

Here is a quick example select SQL to identify problem rows.

SELECT TRANSLATE(F.BLOGTYPE_CODE, CHR(0), '') AS BLOGTYPE_CODE, BT.BLOG_TYP_ID, LENGTH(BT.BLOG_TYP_ID) AS LNGTH_BT, LENGTH(F.BLOGTYPE_CODE) AS LNGTH_BLOGTYPE

FROM BLOGS_TBL F, BLOG_TYPES BT WHERE TRANSLATE(F.BLOGTYPE_CODE, CHR(0), '') = BT.BLOG_TYP_ID AND LENGTH(BT.BLOG_TYP_ID) <> LENGTH(F.BLOGTYPE_CODE);

 

Example Update Statement

Here is a quick shell update statement to remove the Char(0) characters from the problem field.

Update <<Your Table Name>> A

Set A.<<Your Field Name>> = TRANSLATE(A.<<Your Field Name>>, CHR(0), '')

where Length(A.<<Your Field Name>>) <> Length(TRANSLATE(A.<<Your Field Name>>, CHR(0), '')) And <<Additional criteria>>;
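
If it helps, here is a hedged sanity-check query (using the same placeholder convention as above) to confirm that no rows remain whose length still changes when the CHR(0) characters are translated out:

Select count(*)

From <<Your Table Name>> A

Where Length(A.<<Your Field Name>>) <> Length(TRANSLATE(A.<<Your Field Name>>, CHR(0), ''));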

 

 

 

End Of Support For IBM InfoSphere 9.1.0

IBM Information Server (IIS)

End of Support for IBM InfoSphere Information Server 9.1.0

IBM InfoSphere Information Server 9.1.0 will reach End of Support on 2018-09-30.  If you are still on the InfoSphere Information Server (IIS) 9.1.0, I hope you have a plan to migrate to an 11-series version soon.  InfoSphere Information Server (IIS) 11.7 would be worth considering if you don’t already own an 11-series license. InfoSphere Information Server (IIS) 11.7 will allow you to take advantage of the evolving thin client tools and other capabilities in the 2018 release pipeline without needing to perform another upgrade.

Related References

IBM Support, End of support notification: InfoSphere Information Server 9.1.0

IBM Support, Software lifecycle, InfoSphere Information Server 9.1.0

IBM Knowledge Center, Home, InfoSphere Information Server 11.7.0, IBM InfoSphere Information Server Version 11.7.0 documentation

What Is Machine Learning?

Machine Learning

Machine learning is a form of Artificial Intelligence (AI) which enables a system to learn from data rather than through explicit programming.  Machine learning uses algorithms that iteratively learn from data to improve, to describe data, and to predict outcomes.  As the algorithms ingest training data, they produce a more precise machine learning model. Once trained, the machine learning model, when provided with data, will generate predictions based on the data that taught the model.  Machine learning is a crucial ingredient for creating modern analytics models.

Netezza / PureData – How to Substring on a Character

PureData Powered by Netezza

 

I had a reason this week to perform a substring on a character in Netezza, something I have not had a need to do before.  The process was not as straightforward as I would have thought, since the command is explained as a static-position command, and the IBM documentation, honestly, wasn’t much help.  Knowing full well that text strings are variable in length, having to provide a static position is not terribly useful in and of itself.  So, we need to use an expression to make the substring command flexible and dynamic.

I did get it to work the way I needed, but it took two commands to make it happen:

  • The first was the ‘instr’ command, to identify the field and character I wanted to substring on: instr(<<FIELD_NAME>>,'~'). This provides the position number of the tilde (~).
  • The second was the ‘substr’ command, in which I embedded the ‘instr’ command: substr(<<FIELD_NAME>>,0,instr(<<FIELD_NAME>>,'~'))

This worked nicely for what I needed, which was to pick out a file name from the beginning of a string delimited with a tilde (~).

Substring on a Character Command Format

  • This format example starts with position zero (0) as the start position of the substr command and goes to the first tilde (~) as the end position of the substring.
Select  <<FIELD_NAME>>

, instr(<<FIELD_NAME>>,'~') as pos2

, substr(<<FIELD_NAME>>,0,instr(<<FIELD_NAME>>,'~')) as Results

From <<Table_Name>>

where  <<Where_Clause>>;
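
For illustration, here is a hedged mock-up of the same pattern against a literal value (the file name is made up); as with the example above, it is worth verifying the boundary behavior of the zero start position on your own appliance version:

Select substr('LOADFILE.CSV~2018-06-01', 0, instr('LOADFILE.CSV~2018-06-01', '~')) as FILE_NAME;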

 

 

Related references

IBM Knowledge Center, Home, PureData System for Analytics 7.2.1, IBM Netezza database user documentation, Netezza SQL basics, Netezza SQL extensions, Character string functions

IBM Knowledge Center, Home PureData System for Analytics 7.0.3, IBM Netezza Database User’s Guide, Netezza SQL basics, Netezza SQL extensions, Character functions

Essbase Connector Error – Client Commands are Currently Not Being Accepted

DataStage Essbase Connector

While investigating a recent InfoSphere Information Server (IIS) DataStage Essbase Connector error, I found the explanations of the probable causes of the error not to be terribly meaningful.  So, now that I have run our error to ground, I thought it might be nice to jot down a quick note on the potential causes of the ‘Client Commands are Currently Not Being Accepted’ error, which I gleaned from the process.

Error Message Id

  • IIS-CONN-ESSBASE-01010

Error Message

An error occurred while processing the request on the server. The error information is 1051544 (message on contacting or from application:[<<DateTimeStamp>>]Local////3544/Error(1013204) Client Commands are Currently Not Being Accepted.

Possible Causes of The Error

This error is a problem with access to the Essbase object or with the security within the Essbase object.  This can be a result of multiple issues, such as:

  • Object doesn’t exist – The Essbase object didn’t exist in the location specified,
  • Communications – the location is unavailable or cannot be reached,
  • Path Security – Security gets in the way to access the Essbase object location
  • Essbase Security – Security within the Essbase object does not support the user or filter being submitted. Also, the Essbase object security may be corrupted or incomplete.
  • Essbase Object Structure –  the Essbase object was not properly structured to support the filter or the Essbase filter is malformed for the current structure.

Related References

IBM Knowledge Center, InfoSphere Information Server 11.7.0, Connecting to data sources, Enterprise applications, IBM InfoSphere Information Server Pack for Hyperion Essbase


My Most Used Windows 10 Keyboard Shortcuts

Shortcut Keystrokes

While there are a great number of useful Windows 10 shortcuts, the list below contains the combinations which I use daily.  Many of the shortcuts can be used across multiple applications (e.g. Notepad++, MS Word, SQL Server, Aginity, etc.) and save a considerable amount of mouse work.  Overall, these shortcut keys are more efficient and faster than using the mouse to perform the same task on a repetitive basis.

You may want to investigate the numerous other Windows 10 shortcut keys which best apply to your daily activities, but these are the ones which I have found most useful and have committed to memory.

Table of My Most Used Windows Shortcuts

Key Strokes         Behavior
Alt + Tab           Switch between open apps
Ctrl + A            Select all items in a document or window
Ctrl + Alt + Tab    Use the arrow keys to switch between all open apps
Ctrl + C            Copy the selected item
Ctrl + D            Delete the selected item and move it to the Recycle Bin
Ctrl + F            Select the search box
Ctrl + V            Paste the selected item
Ctrl + X            Cut the selected item
Esc                 Stop or leave the current task
F5                  Refresh the active window
F11                 Maximize Window

Related References

Microsoft > Windows Support > Keyboard shortcuts in Windows

Oracle TO_CHAR to SQL Server CONVERT Equivalents to change Date to String

Transact SQL (T-SQL)

When it comes to SQL, I tend to lean on the SQL I have used the most over the years, which is Oracle.  Today was no exception; I found myself trying to use the TO_CHAR command in SQL Server to format a date, which of course does not work. So, after a little thought, here are some examples of how you can use the SQL Server CONVERT command to achieve the equivalent result.

Example SQL Server Date Conversion SQL

Example SQL Server Date Conversion SQL Code

This example SQL runs as-is; no FROM table is required.

 

Select

CONVERT(VARCHAR(10), GETDATE(), 20) as 'YYYY-MM-DD'

,CONVERT(VARCHAR(19), GETDATE(), 20) as 'YYYY-MM-DD HH24:MI:SS'

,CONVERT(VARCHAR(8), GETDATE(), 112) as YYYYMMDD

,CONVERT(VARCHAR(6), GETDATE(), 112) as YYYYMM

,CONVERT(VARCHAR(12), DATEPART(YEAR, GETDATE())) + RIGHT('0'+CAST(MONTH(GETDATE()) AS VARCHAR(2)),2) as YYYYMM_Method_2

,CONVERT(VARCHAR(4), GETDATE(), 12) as YYMM

,CONVERT(VARCHAR(4), GETDATE(), 112) as YYYY

,CONVERT(VARCHAR(4), DATEPART(YEAR, GETDATE())) as YYYY_Method_2

,CONVERT(VARCHAR(4), YEAR(GETDATE())) as YYYY_Method_3

,RIGHT('0'+CAST(MONTH(GETDATE()) AS VARCHAR(2)),2) as Two_Digit_Month

,SUBSTRING(ltrim(CONVERT(VARCHAR(4), GETDATE(), 12)),3,2) as Two_Digit_Month_2

,CONVERT(VARCHAR(10), GETDATE(), 111) as 'YYYY/MM/DD'

,CONVERT(VARCHAR(5), GETDATE(), 8) as 'HH24:MI'

,CONVERT(VARCHAR(8), GETDATE(), 8) as 'HH24:MI:SS';
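
For a quick side-by-side sense of the translation (the Oracle statement below is shown only for comparison), the following Oracle SQL and the first CONVERT example above should both return the current date as a YYYY-MM-DD string:

Select TO_CHAR(SYSDATE, 'YYYY-MM-DD') From DUAL;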

Map TO_CHAR formats to SQL Server

You can map Oracle TO_CHAR formats to SQL Server alternative commands as follows:

TO_CHAR String           VARCHAR Length    SQL Server Convert Style
YYYY-MM-DD               VARCHAR(10)       20, 21, 120, 121, 126 and 127
YYYY-MM-DD HH24:MI:SS    VARCHAR(19)       20, 21, 120 and 121
YYYYMMDD                 VARCHAR(8)        112
YYYYMM                   VARCHAR(6)        112
YYMM                     VARCHAR(4)        12
YYYY                     VARCHAR(4)        112
MM                       VARCHAR(2)        12
YYYY/MM/DD               VARCHAR(10)       111
HH24:MI                  VARCHAR(5)        8, 108, 14 and 114
HH24:MI:SS               VARCHAR(8)        8, 108, 14 and 114

Translating the format commands

Here are some examples of translating the format commands.

Format                   SQL Server
YYYY-MM-DD               CONVERT(VARCHAR(10), GETDATE(), 20)
YYYY-MM-DD HH24:MI:SS    CONVERT(VARCHAR(19), GETDATE(), 20)
YYYYMMDD                 CONVERT(VARCHAR(8), GETDATE(), 112)
YYYYMM                   CONVERT(VARCHAR(6), GETDATE(), 112)
YYMM                     CONVERT(VARCHAR(4), GETDATE(), 12)
YYYY                     CONVERT(VARCHAR(4), GETDATE(), 112)
YYYY                     CONVERT(VARCHAR(4), DATEPART(YEAR, GETDATE()))
YYYY                     CONVERT(VARCHAR(4), YEAR(GETDATE()))
MM                       RIGHT('0'+CAST(MONTH(GETDATE()) AS VARCHAR(2)),2)
MM                       SUBSTRING(ltrim(CONVERT(VARCHAR(4), GETDATE(), 12)),3,2)
YYYY/MM/DD               CONVERT(VARCHAR(10), GETDATE(), 111)
HH24:MI                  CONVERT(VARCHAR(5), GETDATE(), 8)
HH24:MI:SS               CONVERT(VARCHAR(8), GETDATE(), 8)

Related Reference

Microsoft Docs, SQL, T-SQL Functions, GETDATE (Transact-SQL)

Microsoft Docs, SQL, T-SQL Functions, Date and Time Data Types and Functions (Transact-SQL)

Microsoft Docs, SQL, T-SQL Functions, DATEPART (Transact-SQL)

 

 

What Are The DataStage / QualityStage Join Stages?

Three Stages Which Join Records

While chasing an error which only applied to join-type stages, I thought it might be nice to identify what the InfoSphere Information Server DataStage / QualityStage join stages are.  There are three of them, as you can see from the picture above, which are the:

  • Join Stage,
  • Lookup Stage,
  • And, Merge Stage.

All three stages join data based on the values of identified key columns.

Related References

IBM Knowledge Center, InfoSphere Information Server 11.7.0, InfoSphere DataStage and QualityStage, Developing parallel jobs, Processing Data, Lookup Stage

IBM Knowledge Center, InfoSphere Information Server 11.7.0, InfoSphere DataStage and QualityStage, Developing parallel jobs, Processing Data, Join Stage

IBM Knowledge Center, InfoSphere Information Server 11.7.0, InfoSphere DataStage and QualityStage, Developing parallel jobs, Processing Data, Merge Stage


Parallel jobs on Windows fail with APT_IOPort::readBlkVirt;error

APT_IOPort::readBlkVirt Error Screenshot

This is a known error for Windows systems and applies to DataStage and DataQuality jobs using any or all of the three join-type stages (Join, Merge, and Lookup).

Error Message

  • <<Link name>>,0: APT_IOPort::readBlkVirt: read for block header, partition 0, [fd 4], returned -1 with errno 10,054 (Unknown error)

Message ID

  • IIS-DSEE-TFIO-00223

Applies To

  • Windows systems only
  • Parallel Engine Jobs the three join type stages (Join, Merge, and Lookup). It does not apply to Server Engine jobs.
  • Infosphere Information Server (IIS), Datastage and DataQuality 9.1 and higher

The Fix

  • Add the APT_NO_IOCOMM_OPTIMIZATION environment variable in the project administrator and set it to blank or 0. I left it blank so it would not impact other jobs
  • Add the environment variable to the job producing the error and set it to 1

What APT_NO_IOCOMM_OPTIMIZATION Does

  • Sets the use of shared memory as the transport type, rather than using the default sockets transport type.
  • Note that in most cases the sockets transport type is faster, so you likely will not want to set this across the project as the default for all jobs. It is best to apply it as necessary for problematic jobs.

Related References

InfoSphere DataStage and QualityStage, Version 9.1 Job Compatibility

IBM Support, JR54078: PARALLEL JOBS ON WINDOWS FAIL WITH APT_IOPORT::READBLKVIRT; ERROR

IBM Support, Information Server DataStage job fails with unknown error 10,054.

 

SQL server table Describe (DESC) equivalent

 

Transact SQL (T-SQL)

Microsoft SQL Server doesn’t seem to have a describe command, and usually folks seem to want to build a stored procedure to get the describe behavior.  However, this is not always practical based on your permissions. So, the simple SQL below will provide describe-like information in a pinch.  You may want to dress it up a bit, but I usually just use it raw, as shown below, by adding the table name.

Describe T-SQL Equivalent

Select *

From INFORMATION_SCHEMA.COLUMNS

Where TABLE_NAME = '<<TABLENAME>>';
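
If you want output a bit closer to a classic describe, a narrower variant (still only needing read access to INFORMATION_SCHEMA; the table name remains a placeholder) is to select just the descriptive columns:

Select COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE

From INFORMATION_SCHEMA.COLUMNS

Where TABLE_NAME = '<<TABLENAME>>'

Order by ORDINAL_POSITION;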

Microsoft SQL Server – Useful links

Microsoft SQL Server 2017

Here are a few references for the Microsoft SQL Server 2017 database, which may be helpful.

Table Of Useful Microsoft SQL Server Database References

Reference Type

Link

SQL Server 2017 Download Page

https://www.microsoft.com/en-us/sql-server/sql-server-downloads

SQL SERVER version, edition, and update level

https://support.microsoft.com/en-us/help/321185/how-to-determine-the-version–edition-and-update-level-of-sql-server-a

SQL Server 2017 Release Notes

https://docs.microsoft.com/en-us/sql/sql-server/sql-server-2017-release-notes

SQL Server Transact SQL Commands

https://technet.microsoft.com/en-us/library/ms189826(v=sql.90).aspx

What is Source Control?

Acronyms, Abbreviations, Terms, And Definitions

Source Control is an information technology environment management system for storing, tracking, and managing changes to software. This is commonly done through a process of creating branches (copies for safely creating new features) off of the stable master version of the software, then merging stable feature branches back into the master version. This is also known as version control or revision control.

DataStage – How to Pass the Invocation ID from one Sequence to another

DataStage Invocation ID Passing Pattern Overview

When you are controlling a chain of sequences in the job stream and taking advantage of reusable (multiple-instance) jobs, it is useful to be able to pass the Invocation ID from the master controlling sequence and have it passed down and assigned to the job run.  This can easily be done, without needing to manually enter the values in each of the sequences, by leveraging the DSJobInvocationId variable.  For this to work:

  • The job must have ‘Allow Multiple Instance’ enabled
  • The parent sequence must provide the Invocation Id, entered as the Invocation Name on its job activity
  • The receiving child sequence must have the invocation variable (DSJobInvocationId) entered
  • At runtime, a DataStage invocation id instance of the multi-instance job will be generated, with its own logs.

Variable Name

  • DSJobInvocationId

Note

This approach allows for the reuse of jobs and the assignment of meaningful instance extension names, which are managed from a single point of entry in the object tree.

Related References: 

IBM Knowledge Center > InfoSphere Information Server 11.5.0

InfoSphere DataStage and QualityStage > Designing DataStage and QualityStage jobs > Building sequence jobs > Sequence job activities > Job Activity properties

Why Consilience Is Important?

Tree of knowledge

What is Consilience?

Consilience is the confluence of concepts and/or principles from different disciplines, especially, when forming a comprehensive unifying theory.

Independent Confirmation

Why are some inventions discovered at the same time in different parts of the world? Does this have something to do with the scientific process of “sharing important discoveries?” Generally, scientists believe that they are part of a community of knowledge. Their discoveries do not occur in a vacuum. They must give credit to those who went before and created the foundation for their work. Therefore, when they discover something new, they are required to share it with the entire world. This sharing is part of knowledge evolution. Interestingly enough, it is also key to the World Wide Web. Collaboration is one of the key strengths of the Internet. It is a way to increase overall knowledge of Planet Earth. Science can also increase the strength of their theories through independent confirmation.

Result Conciliation

There are oftentimes prescriptions for the types and numbers of witnesses needed to accomplish certain legal requirements. Anyone who has completed an experiment understands the importance of result conciliation. A hypothesis is not proven to be true unless it can be repeated by independent sources. This shows that the reality is objective. The word Consilience was formed from two Latin roots: “com” meaning “together” and the suffix “-silience” meaning “jumping.” Therefore, Consilience means “jumping together” or a “convergence of proof from independent sources.” Scientists should use different methods to reach the same conclusion. Business and economics have a similar concept. Just think of the concept of a Recession or Depression. These are officially declared when a variety of indicators are in agreement – stock market, employment, inflation, money supply and so forth.

Knowledge Evolution

Consulting can use the concept of Consilience to teach firms how to follow objective norms. Technology consulting can compare a subjective company’s practices to objective industry norms. The best career development is successful based on objective, independent analysis. The concordance of evidence can help a business create a successful strategy. Consilience is the convergence of evidence from independent sources to prove the validity of a conclusion. Objective corporate success can be achieved by satisfying objective needs of your customers. Business intelligence requires an objective standard, such as Consilience to be useful.

Conclusion

Consilience is important to you because the answer to any given problem may not necessarily come from within your field of expertise and experience. Rather, to be truly competitive in an ever-increasing world of knowledge, we need to adopt a broad-scoped renaissance approach to learning and thinking, which folds in other sets of concepts and principles to create durable solutions for today and tomorrow.