How the IBM Common SQL Engine (CSE) Improves DB2

Common SQL Engine (CSE)

Today, newfound efficiencies and innovation are key to any business's success, whether small, medium, or large. In the rapidly evolving field of data analytics, innovative approaches to handling data are particularly important, since data is the most valuable resource any business can have. The IBM Common SQL Engine delivers application and query compatibility that lets companies turn their data into actionable insights, unleashing the power of their databases without constraints.

But, is this really important?

Yes. Many businesses have accumulated enormous amounts of data over the years. This data resides in higher volumes, in more locations throughout the enterprise, on premises and in the cloud, and in greater variety. Such data should be a huge advantage, providing the enterprise with actionable insights. But, often, this doesn't happen.

IBM Hybrid Data Management

With such a massive store of complex legacy data, many organizations find it hard to decide what to do with it, or where to start. Migrating all of that data into new systems is simply a non-starter. As a solution, enterprises are turning to IBM Db2, a hybrid, intuitive data approach that marries data and analytics seamlessly. IBM Db2 hybrid data management allows flexible cloud and on-premises deployment of data.

However, such levels of flexibility typically require organizations to rewrite or restructure the queries and applications that use this diverse, ever-changing data. These changes may even require licensing new software, which is costly and often unfeasible. To bridge this gap, the Common SQL Engine (CSE) comes into play.

How Is the IBM Common SQL Engine Positioning Db2 for the Future?

The IBM Common SQL Engine inserts a single layer of data abstraction at the data source itself. This means that, instead of migrating the data all at once, you can apply data analytics wherever the data resides, whether on a private, public, or hybrid cloud, by using the Common SQL Engine as a bridge.

IBM's Common SQL Engine provides portability and consistency of SQL commands, meaning the same SQL is functionally portable across multiple implementations. It allows seamless movement of workloads to the cloud and supports multiplatform integrations and configurations regardless of programming language.

Ideally, the Common SQL Engine is the heart of query compatibility and the foundation of application compatibility. But it does much more!

Its compatibility extends beyond data analytics applications to include security, governance, data management, and other functionality as well.

How does this improve the quality, flexibility, and portability of Db2?

By allowing integration across multiple platforms, workloads, and programming languages, the Common SQL Engine ultimately creates a "data without limits" environment for the Db2 hybrid data management family through:

  1. Query and application compatibility

The Common SQL Engine (CSE) ensures that users can write a query once and be confident that it will work across the Db2 hybrid data management family of offerings. With the CSE, you can change your data infrastructure and location, on the cloud or on premises, without having to worry about license costs and application compatibility.
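
To make this concrete, here is a minimal sketch of a single query that, thanks to the CSE, runs unchanged whether the target is Db2, Db2 Warehouse, Db2 on Cloud, or Db2 Big SQL. The schema, table, and column names are hypothetical.

    -- Hypothetical sales rollup; with the CSE, the same statement runs
    -- unchanged across the Db2 family of offerings.
    SELECT region,
           SUM(order_total) AS total_sales
    FROM sales.orders
    WHERE order_date >= CURRENT DATE - 90 DAYS
    GROUP BY region
    ORDER BY total_sales DESC;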

  2. Data virtualization and integration

The Common SQL Engine has built-in data virtualization services that let you access your data from all of your sources. These services span the Db2 family of offerings, including IBM Db2, IBM Db2 Warehouse, and IBM Db2 Big SQL, among others.

These services also apply to the IBM Integrated Analytics System, Teradata, Oracle, PureData, and Microsoft SQL Server. You can likewise work seamlessly with open-source solutions such as Hive, and with cloud sources such as Amazon Redshift. Such levels of integration are unprecedented!

By allowing users to pull data from Db2 data stores and integrate it with data from non-IBM stores using a single query, the Common SQL Engine gives Db2 an authoritative position compared to other data stores.
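
As an illustration, here is a minimal sketch of such a single federated query. It assumes, hypothetically, that customers is a local Db2 table and that ora_orders is a nickname already defined over a table in a remote Oracle database.

    -- Hypothetical federated join: customers lives in local Db2, while
    -- ora_orders is a nickname pointing at a remote Oracle table.
    SELECT c.customer_name,
           o.order_total
    FROM customers AS c
    JOIN ora_orders AS o
      ON o.customer_id = c.customer_id
    WHERE o.order_date >= '2018-01-01';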

  3. Flexible licensing

Licensing is one of the hardest nuts to crack, especially for organizations that rely on technologies such as the cloud to deliver their services. While application compatibility and data integration save you time, flexible licensing saves you money outright.

IBM's Common SQL Engine allows flexible licensing, meaning you can purchase one license model and deploy it wherever needed, or as your data architecture evolves. Using IBM's FlexPoint licensing, you can purchase FlexPoints and use them across all Db2 data management offerings. It puts all of this convenience in one place.

Flexible licensing not only simplifies the adoption and exchange of platform capabilities, it also positions your business strategically by making it more agile. Your data managers can access the tools they need on the fly, without going through a lethargic, tedious procurement process.

The IBM Db2 Data Management Family Is Supported by the Common SQL Engine (CSE)

IBM Db2 is a family of customizable, deployable databases that allows enterprises to leverage existing investments. IBM Db2 allows businesses to use any type of data, structured or unstructured, from a database or data warehouse. It provides the right data foundation with industry-leading data compression, on-premises and cloud deployment options, modern data security, robust performance for mixed workloads, and the ability to adjust and scale without redesigning.

The IBM Db2 family enables businesses to adapt, scale quickly, and remain competitive without compromising security, risk levels, or privacy. It features:

  • Always-on availability
  • Deployment flexibility: on-premises, scale-on-demand, and private or public cloud deployments
  • Compression and performance
  • Embedded IoT technology that lets businesses act fast on the fly

Some of these Db2 family offerings that are supported by the common SQL engine include:

  • Db2 Database
  • Db2 Hosted
  • Db2 Big SQL
  • Db2 on Cloud
  • Db2 Warehouse
  • Db2 Warehouse on Cloud
  • IBM Integrated Analytics System (IIAS)

Db2 Family Offerings and Beyond

Since the Common SQL Engine focuses mainly on data federation and portability, other non-IBM databases can also plug into the engine for SQL processing. These third-party offerings include:

  • Watson Data Platform
  • Oracle
  • Hadoop
  • Microsoft SQL Server
  • Teradata
  • Hive

Conclusion

The IBM Common SQL Engine lets organizations fully use data analytics to future-proof their business while remaining agile and competitive. Besides the benefits of the robust tools woven into the CSE, the engine offers superior analytics and machine-learning positioning, with data processing 2X to 5X faster. The IBM Common SQL Engine adds important capabilities to Db2, including freedom of location, freedom of use, and freedom of assembly.

Related References

An Overview of DB2 Federation

DB2 Federation

Data analytics has changed: data is no longer manageable in relational databases alone. Data flows in from various sources and in different formats, which means it is not possible to store all of it in the same repository. Some data is best suited to relational databases, some to Apache Hadoop, and some to NoSQL databases.

During analysis, much time is spent trying to bring this distributed data together instead of obtaining insights. Db2 Federation comes to the rescue of data analysts: the federation concept in Db2 removes the need to physically consolidate data from its different repositories and reduces the hassle of getting insights.

What is DB2 Federation?

DB2 federation is a data integration technology that permits remote database objects to be accessed as if they were local DB2 database objects. It connects multiple databases and makes them appear as one database.

How does DB2 federation work?

Federation allows you to access all of your data across multiple distributed databases with a single query. When implemented in an organization, the technology can be used to access data in any of the organization's Db2 systems, whether local or in the cloud.
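
For a feel of how this is set up, here is a minimal Db2-to-Db2 federation sketch. The server, schema, and credential values are hypothetical placeholders, and it assumes federation is enabled on the local Db2 instance.

    -- Register the built-in DRDA wrapper for Db2 data sources.
    CREATE WRAPPER drda;

    -- Define the remote Db2 database (hypothetical name and credentials).
    CREATE SERVER cloud_db2 TYPE db2/udb VERSION 11 WRAPPER drda
           AUTHORIZATION "fedadmin" PASSWORD "secret"
           OPTIONS (DBNAME 'BLUDB');

    -- Map the local user to the remote credentials.
    CREATE USER MAPPING FOR USER SERVER cloud_db2
           OPTIONS (REMOTE_AUTHID 'fedadmin', REMOTE_PASSWORD 'secret');

    -- Expose a remote table as a local nickname.
    CREATE NICKNAME cloud_sales FOR cloud_db2.sales.orders;

    -- The remote table can now be queried as if it were local.
    SELECT COUNT(*) FROM cloud_sales;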

Why use DB2 federation?

So, why should you use federation? This concept brings data of all formats into one virtual source. With data being retrieved from one virtual source, analyzing it becomes cost-effective and efficient.

What are the primary use cases for DB2 federation?

Merging various sources of data

DB2 federation facilitates consolidating data from local and cloud sources into one virtual data source. This eliminates the need to migrate data, which can be expensive and troublesome.

Increase the capacity of a repository beyond its fixed limits

Physical storage capacity is bound to have a limit, which is one reason an organization may distribute its data across various repositories. With federation, the storage is virtual and therefore not bound by any single system's limit. This technology can greatly help if your physical data store is running low on space.

Linking up to Db2 Warehouse on Cloud

People who use Db2 products can federate data from Db2 on Cloud and Db2 Warehouse on Cloud. This gives them a joint interface where they can access, add, query, and analyze data without complex ETL processes. Better still, no additional code is required to execute these processes, which makes it easy for people with limited technical know-how to use the products smoothly.

Split data across different servers

At times, you might choose to partition your data. With federation, partitioned data can be queried through a unified interface, as the sketch below shows. Federation allows you to better balance your workloads, scale specific parts of an application, and create micro-services that work harmoniously.
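
Here is a minimal sketch of that unified interface. It assumes, hypothetically, that nicknames orders_east and orders_west have already been defined over the two partitions.

    -- Combine two partitions, each exposed through a nickname, into one view.
    CREATE VIEW all_orders AS
        SELECT * FROM orders_east
        UNION ALL
        SELECT * FROM orders_west;

    -- Applications then query the view as a single table.
    SELECT customer_id, SUM(order_total) AS lifetime_total
    FROM all_orders
    GROUP BY customer_id;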

Generally, Db2 federation makes it easy to access data by bringing it together into a single virtual source, which saves both cost and time. When you want to analyze data, you can get insights immediately instead of spending time querying repository after repository.

Related References

A Road Map For Migrating To A Public Cloud

Cloud Migration

A Road Map for Migrating to A Public Cloud Environment

Today, most organizations are looking for ways to cut down their sprawling IT budgets and define efficient paths for new development. Moving to the cloud is seen as a strategic and economically viable choice, primarily because it gives organizations quick access to new platforms, services, and toolsets. But migrating applications to a cloud environment requires a clear, well-thought-out cloud migration strategy.

We are past the years of confusion and fear about the cloud. In fact, almost everyone now agrees that the cloud is a key element of any company's IT investment. What is not yet clear is what to move, how to move it, and the industry best practices for protecting your investment in a public cloud environment. A solid migration plan is therefore an essential part of any cloud migration process.

Here are a few things you should pay close attention to when preparing a cloud migration planning template:

  • Data Protection

When planning to migrate to the cloud, it is paramount to remember that it is not a good idea to migrate every application. As you take the first baby steps, keep your legacy apps and other sensitive data, such as private banking info, off the cloud. This ensures that, in case of a breach of your public cloud, your sensitive data and legacy systems will not fall into the hands of unsavory individuals.

  • Security

The security of data being migrated to the cloud should be just as important as its security once in the cloud. Any temporary storage locations used during the migration process should be secured against unauthorized intrusion.

Although security can be hard to quantify, it is one of the key components and considerations of any cloud service. The most basic security responsibility is getting password security right. Remember that, while you can massively increase the security around your applications, dealing with on-cloud threats and breaches is practically very different, since you do not technically own any of the cloud software.

Some of the security concerns that you’ll need to look into include:

  • Is your data securely transferred and stored in the cloud?
  • Besides passwords, does your cloud provider offer some form of two-factor authentication?
  • How else are users authenticated?
  • Does your provider meet the industry's regulatory requirements?

  • Backup and Disaster Recovery Strategies

A backup and disaster recovery strategy ensures that your data is protected in case of a disaster. These strategies are unique to every organization, depending on its application needs and the importance of those applications to the organization.

To devise a foolproof DR strategy, it is important to identify and prioritize applications and to determine the acceptable downtime for each application, service, and dataset.

Some of the things to consider when engineering your backup and disaster recovery blueprint include:

  • Availability of sufficient bandwidth and network capacity to redirect all users in case of a disaster
  • The amount of data that may require backup
  • The type of data to be protected
  • How long it will take to restore your systems from the cloud

  • Communications Capacity Enablement

Migrating to a cloud environment should make your business more agile and responsive to the market, so robust communications enablement should be provided. Ideally, your cloud provider should offer a contact center, unified messaging, mobility, presence, and integration with other business applications.

While the sophistication and efficiency of on-premises communications platforms depend on the capabilities of the company's IT staff, cloud environments should offer communication tools with deeper customization to increase productivity.

Highly customized remote communications enablement will allow your company to refocus its IT resources on innovation, spur agility, cut hardware costs, and enable more engagement with partners and customers.

Simply put, cloud communications:

  • Increases efficiency and productivity
  • Enables a reimagined experience
  • Is designed for seamless interaction

  • Legal Liability and Protection

Other important considerations when developing your cloud migration planning template are compliance with regulatory requirements and software licensing. For many businesses, data protection and regulatory compliance with HIPAA and GDPR are constant concerns, especially when dealing with identifiable data. Getting this right the first time will let you move past compliance issues blissfully.

When migrating, look for a cloud provider with comprehensive security assurance programs and governance-focused features. This will help your business operate more securely and in line with industry standards.

Ready to migrate your processes to a public cloud environment? Follow these pointers to develop a comprehensive cloud migration planning template.

Related References

Public Cloud Versus Private Cloud

Cloud Computing

A public cloud strategy refers to a situation where you utilize cloud resources on a shared platform. Examples of shared, or public, cloud solutions include Microsoft Azure, Amazon Web Services, and Google Cloud. A private cloud strategy, on the other hand, refers to a situation where you run an infrastructure dedicated to serving your business. It is sometimes called homegrown: you employ experts to run the services so that your business can access different features.

There are several advantages of a public cloud over a private cloud that you should know before making an informed decision on the right platform to invest in. Some of the benefits of a public cloud strategy include the following:

Availability and Scale of Expertise

If you compare public and private cloud services, the public cloud gives you access to more experts. The companies that offer cloud services have enough employees ready to help several clients, and in most cases those clients will not all experience problems at the same time, so human resources can be directed toward solving your urgent issue. You can also scale up or down at any given time as the need arises, unlike private cloud solutions, where you must invest in infrastructure each time you want to upgrade.

Downgrading a private cloud system can expose you to losses, because it leaves some resources underutilized.

The Volume of Technical Resources to Apply

You also get access to more technical resources on a public cloud platform. The companies that offer public cloud solutions are fully equipped with highly experienced experts, and they have the tools and resources to deliver the best technical solutions whenever you need them. This is unlike a private arrangement, where you will incur more costs whenever a technical challenge demands advanced tools and highly qualified experts.

Price Point

The price of a private cloud is high compared to a public arrangement. If you are looking for ways to save money, the best route is a public cloud solution. On a shared platform, you pay only for what you need; if you do not need many resources at a given time, you can downgrade the services and enjoy fair prices. Services such as AWS offer strong cost containment over time, which keeps the services affordable.

For any business to grow, it should invest in the package that brings a return on investment, and the services offered by public cloud systems allow businesses to save and grow. You should also take other factors, such as ecosystems for cloud relationships, into consideration before making an informed decision. Some business models prefer private cloud solutions, while others work well with public cloud-based solutions.

Related References

Business Linux Operating Systems

Linux

Unix and Linux are different operating systems that share some common commands. The source code of Linux is freely available to the public; Unix source code is not. Linux is free and open source, while some versions of Unix are proprietary and others are free/open source. The Linux operating system can be used on desktop systems and on servers, but Unix is mainly used on servers, mainframes, and high-end computers.

AIX is IBM's Unix-based operating system, designed mainly for IBM workstations and server hardware platforms, and HP-UX is the Unix-based operating system from HP (Hewlett-Packard). HP-UX and AIX are stable operating systems compared with Linux. They are platform dependent and limited to their own hardware, whereas Linux is platform independent and can be used with almost any hardware. Because HP-UX and AIX are platform dependent, they are optimised for their hardware, and their performance is better than that of Linux; AIX can outperform Linux by 5 to 10 percent.

Unix

AT&T Unix started in the 1970s at Bell Labs, and newer versions of Unix have been developed since. In the 1980s, AT&T licensed Unix to third-party vendors, leading to the development of different variants. Some of them are:

  • Berkeley Unix (BSD), FreeBSD, and their variants
  • Solaris from Sun Microsystems
  • HP-UX from Hewlett-Packard
  • AIX from IBM
  • macOS from Apple
  • Microsoft's Xenix

Unix installations are costlier since they require special hardware: macOS needs Apple computers, AIX needs IBM hardware, HP-UX needs HP hardware, and so on.

Linux

Linux is a free and open-source operating system based on Unix. The Linux kernel was first developed by Linus Torvalds in 1991. Linux was originally developed for personal computers, but nowadays it is used on personal computers as well as server systems. Since it is very flexible, it can be installed on almost any hardware; the Linux operating system is available for mobile phones, tablets, video game consoles, mainframes, and supercomputers. Some of the best distros for small business are:

  • CentOS
  • ClearOS
  • openSUSE
  • IPFire
  • Ubuntu
  • Manjaro
  • Slackware

Linux vs. Unix

Linux | Unix
The source code of Linux is freely available to its users. | The source code of Unix is not available to the general public.
Linux primarily uses a graphical user interface, with an optional command-line interface. | Unix primarily uses a command-line interface.
Linux is portable and can be executed from different hard drives. | Unix is not portable.
Linux is very flexible and can be installed on most home PCs. | Unix has rigid hardware requirements and cannot be installed on every machine.
Linux is mainly used on home PCs, mobile phones, desktops, etc. | Unix is mainly used on server systems, mainframes, and high-end computers.
Different versions of Linux include Ubuntu, Debian, openSUSE, Red Hat, etc. | Different versions of Unix include AIX, HP-UX, BSD, IRIX, etc.
Linux installation is economical and doesn't require specific high-end hardware. | Unix installation is comparatively costlier, as it requires more specific hardware circuitry.
Filesystems supported by Linux include xfs, ramfs, nfs, vfat, cramfs, ext, ext2, ext3, ext4, ufs, autofs, devpts, and ntfs. | Filesystems supported by Unix include zfs, jfs, hfs, gpfs, xfs, and vxfs.
Linux is developed by an active worldwide community. | Unix was developed by AT&T and licensed vendors.

Hardware architecture

Most commercial versions of Unix are coded for specific hardware: HP-UX for PA-RISC (Hewlett-Packard) and Itanium (Intel) machines, and AIX for Power processors (IBM). Because these targets are limited, developers can optimise their code for those architectures to get maximum utilisation of resources. Since they use proprietary hardware, Unix distributions are not cost effective.

  • HP-UX needs HP or Intel hardware
  • AIX needs IBM Hardware

The Linux operating system is not dependent on particular hardware, so it can be installed on any server system with a supported processor. Since developers cannot assume a hardware architecture, they must write code for general hardware specifications, which is why Linux tends to perform below the commercial Unix variants on comparable hardware.

  • Linux is open to all hardware

Licensing

The GNU General Public License (GPL), a form of copyleft, is used for the Linux kernel and many components of the GNU Project. Free software projects, although developed through collaboration, are often produced independently of each other. AIX and HP-UX use proprietary licenses.

HP-UX

Developer: Hewlett-Packard Enterprise
Written in: C
OS family: Unix (System V)
Initial release: 1982
Kernel type: Monolithic with dynamically loadable modules
License: Proprietary

IBM AIX

Developer: IBM
Written in: C
OS family: Unix
Initial release: 1986
Kernel type: Monolithic with dynamically loadable modules
License: Proprietary

Linux

Developer: Community, Linus Torvalds
Written in: Primarily C and assembly
OS family: Unix-like
Initial release: September 17, 1991
Kernel type: Monolithic (Linux kernel)
License: GPLv2 and other free and open-source licenses (the name "Linux" is a trademark)

Software and Tools

Software and tools for Linux are general to all hardware. Unix, by contrast, uses separate tools and software that leverage its specific hardware for maximum performance, so a Unix system tends to perform better than a Linux system on a comparable hardware configuration. On cost, however, Linux gets more votes.

The System Management Interface Tool (SMIT) is used for package management on AIX, and the System Administration Manager (SAM) on HP-UX. Linux distributions use rpm, dpkg, and similar tools, depending on the variant.

Software Installation and Patch Management

Task | RH Linux | HP-UX | AIX
Install | rpm -i file | swinstall -s depot software | installp -a [-c] FileSet
Update | rpm -U/F file | swinstall -s depot software | installp -a FileSet
List | rpm -q | swlist -l product | lslpp -L all
Remove | rpm -e | swremove software | installp -u FileSet
Patches | rpm -u | swinstall | installp
List patches | rpm -q -a | swlist -l product | lslpp -L all
Patch check | up2date/yum | security_patch_check | compare_report

File system

When it comes to file systems, Linux scores higher than the Unix variants. A given Unix typically supports two or three file systems natively, while Linux supports almost every file system available on any operating system.

System | Filesystems
AIX | jfs, gpfs
HP-UX | hfs, vxfs

Kernel

The kernel is the core of the operating system, and the source code of the kernel is not freely available for the commercial versions of Unix. For the Linux operating system, users can check and verify the code, and even modify it if required.

Support

The commercial versions of Unix come with a license cost. Since these operating systems are purchased, the vendor provides technical support to end users to keep the operating system running smoothly.

With the Linux operating system, support comes from open-source forums and communities of users and developers around the world, or from hired freelancers who fix issues.

Related References

Major Cloud Computing Models

Cloud Computing

Cloud computing enables convenient, ubiquitous, measured, on-demand access to a shared pool of scalable and configurable resources, such as servers, applications, databases, networks, and other services. These resources can be provisioned and released rapidly, with minimal management effort or interaction from the provider.

The rapidly expanding technology is rife with obscure acronyms, the major ones being SaaS, PaaS, and IaaS. These acronyms distinguish the three major cloud computing models discussed in this article. Notably, cloud computing meets virtually any imaginable IT need in diverse ways. The cloud computing models describe the role that a cloud service provides and how that function is accomplished. The three main cloud computing paradigms are shown in the diagram below.

The three major cloud computing models

Infrastructure as a Service (IaaS)

In the infrastructure as a service model, the cloud provider offers a service that allows users to process, store, share, and use other fundamental computing resources to run their software, which can include operating systems and applications. The consumer has minimal control over the underlying cloud infrastructure but significant control over operating systems, deployed applications, storage, and some networking components, such as host firewalls.

Based on this description, IaaS can be regarded as the lowest-level cloud service paradigm, and possibly the most crucial one. With this paradigm, a cloud vendor provides pre-configured computing resources to consumers via a virtual interface. IaaS pertains to the underlying cloud infrastructure and does not include applications or an operating system; implementing the applications, the operating system, and some network components, such as host firewalls, is left to the end user. In other words, the cloud provider's role is to supply access to the computing infrastructure necessary to drive and support the consumer's operating systems and application solutions.

In some cases, the IaaS model can provide extra storage for data backups or network bandwidth, or it can provide access to the kind of high-performance computing that was traditionally available only on supercomputers. IaaS services are typically provided to users through an API or a dashboard.

Features of IaaS

  • Users transfer the cost of purchasing IT infrastructure to a cloud provider
  • Infrastructure offered to a consumer can be increased or reduced depending on storage and processing needs
  • The consumer is saved from the challenges and costs of maintaining hardware
  • Data is highly available in the cloud
  • Administrative tasks are virtualized
  • IaaS is highly flexible compared to the other models
  • Highly scalable and available
  • Permits consumers to focus on their core business and transfer critical IT roles to a cloud provider

Infrastructure as a Service (IaaS)

IaaS Use Cases

A series of use cases can exploit the benefits and features afforded by IaaS. For instance, an organization that lacks the capital to own and manage its own data centers can purchase an IaaS offering to achieve fast and affordable IT infrastructure, which can be expanded or terminated as needs change. IaaS also suits traditional organizations seeking large computing power with low expenditure to run their workloads, as well as rapidly growing enterprises that want to avoid committing to specific hardware or software because their business needs are likely to evolve.

Popular IaaS Services

Major IT companies are offering popular IaaS services that are powering a significant portion of the Internet even without users realizing it.

Amazon EC2: Offers scalable and highly available computing capacity in the cloud. Allows users to develop and deploy applications rapidly without upfront investment in hardware

IBM’s SoftLayer: Cloud computing services offering a series of capabilities, such as computing, networking, security, storage, and so on, to enable faster and reliable application development. The solution features bare-metal, hypervisors, operating systems, database systems, and virtual servers for software developers.

NaviSite: offers application services, hosting, and managed cloud services for IT infrastructure

ComputeNext: the solution empowers internal business groups and development teams with DevOps productivity from a single API.

Platform as a Service (PaaS)

The platform as a service model involves the provision of capabilities that allow users to create their applications using programming languages, tools, services, and libraries owned and distributed by a cloud provider. The consumer has minimal control over the underlying cloud computing resources, such as servers, storage, and the operating system, but significant control over the applications developed and deployed on the PaaS service.

In PaaS, cloud computing is used to provide a platform on which consumers can develop, deploy, and manage their applications. The offering includes a base operating system and a suite of development tools and solutions. PaaS effectively eliminates the need for consumers to purchase, implement, and maintain the computing resources traditionally required to build useful applications. Some people use the term 'middleware' for the PaaS model, since the offering sits comfortably between SaaS and IaaS.

Features of PaaS

  • PaaS offers a platform with development, testing, and hosting tools for consumer applications
  • PaaS is highly scalable and available
  • Offers a cost-effective and simple way to develop and deploy applications
  • Users can focus on developing quality applications without worrying about the underlying IT infrastructure
  • Business policy automation
  • Many users can access a single development service or tool
  • Offers database and web services integration
  • Consumers have access to powerful and reliable server software, storage, operating systems, and information and application backup
  • Allows remote teams to collaborate, which improves employee productivity

Platform as a Service (PaaS)

PaaS Use Cases

Software development companies and other enterprises that want to implement agile development methods can explore PaaS capabilities in their business models. Many PaaS services can be used in application development. PaaS development tools and services are continuously updated and made available via the Internet, offering a simple way for businesses to develop, test, and prototype their software solutions. Because developer productivity is enhanced by letting remote workers collaborate, PaaS consumers can release applications rapidly and get feedback for improvement. PaaS has also led to the emergence of the API economy in application development.

Popular PaaS Offerings

Several major PaaS services are helping organizations streamline application development. A PaaS offering is delivered over the Internet and allows developers to focus on creating quality, highly functional applications without worrying about the operating system, storage, and other infrastructure.

Google’s App Engine: the solution allows developers to build scalable mobile and web backends in any language in the cloud. Users can bring their own language runtimes, third-party libraries, and frameworks

IBM BlueMix: this PaaS solution from IBM allows developers to avoid vendor lock-in and leverage the flexible and open cloud environment using diverse IBM tools, open technologies, and third-party libraries and frameworks.

Heroku: the solution provides companies with a platform where they can build, deliver, manage, and scale their applications while abstracting and bypassing computing infrastructure hassles

Apache Stratos: this PaaS offering provides enterprise-ready quality of service, security, governance, and performance, allowing development, modification, deployment, and distribution of applications.

Red Hat’s OpenShift: a container application platform that offers operations and development-centric tools for rapid application development, easy deployment, scalability, and long-term maintenance of applications

Software as a Service (SaaS)

The software as a service model involves capabilities provided to users through a cloud vendor's application hosted and running on a cloud infrastructure. Such applications are conveniently accessible from different platforms and devices through a web browser, a thin-client interface, or a program interface. The end user has minimal control over the underlying cloud computing resources, such as servers, the operating system, or the application's capabilities.

SaaS can be described as a software licensing and delivery paradigm featuring a complete, functional software solution provided to users on a metered, subscription basis. Since users access the application via browsers or thin-client and program interfaces, SaaS makes the host operating system insignificant to the operation of the product. As mentioned, the service is metered: some SaaS customers are billed based on their consumption, while others pay a flat monthly fee.

Features of SaaS

  • SaaS providers offer applications via a subscription structure
  • Users transfer the need to develop, install, manage, or upgrade applications to the SaaS vendor
  • Applications and data are securely stored in the cloud
  • SaaS is easily managed from a central location
  • Remote servers are deployed to host the application
  • Users can access SaaS offerings from any location with Internet access
  • On-premises hardware failure does not interfere with the application or cause data loss
  • Users can reduce or increase their use of cloud-based resources depending on processing and storage needs
  • Applications offered via the SaaS model are accessible from almost any Internet-enabled device and location

Software as a Service (SaaS)

SaaS Use Cases

SaaS is the typical choice for companies seeking the benefits of quality applications without the need to develop, maintain, and upgrade the required components. Companies can acquire SaaS solutions for ERP, mail, office applications, collaboration tools, and more. SaaS is also crucial for small companies and startups that wish to launch an e-commerce service rapidly but lack the time and resources to develop and maintain the software or buy servers to host the platform. SaaS likewise suits companies with short-term projects that require collaboration among members located remotely.

Popular SaaS Services

SaaS offerings are more widespread as compared to IaaS and PaaS. In fact, a majority of consumers use SaaS services without realizing it.

Office 365: the cloud-based solution provides productivity software for subscribed consumers, allowing users to access Microsoft Office tools on various platforms, such as Android, macOS, and Windows.

Box: the SaaS offers secure file storage, sharing, and collaboration from any location and platform

Dropbox: modern application designed for collaboration and for creating, storing, and accessing files, docs, and folders.

Salesforce: the SaaS is among the leading customer relationship management platform that offers a series of capabilities for sales, marketing, service, and more.

Today, cloud computing models have revolutionized the way businesses deploy and manage computing resources and infrastructure. With the advent and evolution of the three major cloud computing models, that is, IaaS, PaaS, and SaaS, consumers can find a suitable cloud offering that satisfies virtually every IT need. These models' capabilities, coupled with competition among popular cloud service providers, will keep delivering IT solutions for consumers demanding availability, enhanced performance, quality services, better coverage, and secure applications.

Consumers should review their business needs and perform a cost-benefit analysis to choose the best model for their business. They should also conduct a thorough workload assessment when migrating to a cloud service.

Big Data vs. Virtualization

Big Data Information Approaches

Globally, organizations face challenges emanating from data issues, including data consolidation, value, heterogeneity, and quality, and at the same time they have to deal with Big Data. In other words, consolidating, organizing, and realizing the value of data in an organization has been a challenge over the years. To overcome these challenges, a series of strategies has been devised. For instance, organizations actively leverage methods such as data warehouses, data marts, and data stores to meet their data asset requirements. Unfortunately, the time and resources required to deliver value using these legacy methods is a distressing issue: typical data warehouses applied for business intelligence (BI) rely on batch processing to consolidate and present data assets, and this traditional approach suffers from information latency.

Big Data

As the name suggests, Big Data describes a large volume of data that can either be structured or unstructured. It originates from business processes among other sources. Presently, artificial intelligence, mobile technology, social media, and the Internet of Things (IoT) have become new sources of vast amounts of data. In Big Data, the organization and consolidation matter more than the volume of the data. Ultimately, big data can be analyzed to generate insights that can be crucial in strategic decision making for a business.

Features of Big Data

The term Big Data is relatively new. However, the process of collecting and preserving vast amounts of information for different purposes has existed for decades. Big Data gained momentum recently with the three V's: volume, velocity, and variety.

Volume: First, businesses gather information from a set of sources, such as social media, day-to-day operations, machine to machine data, weblogs, sensors, and so on. Traditionally, storing the data was a challenge. However, the requirement has been made possible by new technologies such as Hadoop.

Velocity: Another defining nature of Big Data is that it flows at an unprecedented rate that requires real-time processing. Organizations are gathering information from RFID tags, sensors, and other objects that need timely processing of data torrents.

Variety: In modern enterprises, information comes in different formats. For instance, a firm can gather numeric and structured data from traditional databases as well as unstructured emails, video, audio, business transactions, and texts.

Complexity: As mentioned above, Big Data comes from diverse sources and in varying formats. In effect, it becomes a challenge to consolidate, match, link, cleanse, or modify this data across an organizational system. Unfortunately, Big Data opportunities can only be explored when an organization successfully correlates relationships and connects multiple data sets to prevent it from spiraling out of control.

Variability: Big Data can have inconsistent flows within periodic peaks. For instance, in social media, a topic can be trending, which can tremendously increase collected data. Variability is also common while dealing with unstructured data.

Big Data Potential and Importance

The vast amount of data collected and preserved on a global scale will keep growing. This fact implies that there is more potential to generate crucial insights from this information. Unfortunately, due to various issues, only a small fraction of this data actually gets analyzed. There is a significant and untapped potential that businesses can explore to make proper and beneficial use of this information.

Analyzing Big Data allows businesses to make timely and effective decisions using raw data. In reality, organizations can gather data from diverse sources and process it to develop insights that can aid in reducing operational costs, production time, innovating new products, and making smarter decisions. Such benefits can be achieved when enterprises combine Big Data with analytic techniques, such as text analytics, predictive analytics, machine learning, natural language processing, data mining and so on.

Big Data Application Areas

Practically, Big Data can be used in nearly all industries. In the financial sector, a significant amount of data is gathered from diverse sources, which requires banks and insurance companies to innovate ways to manage Big Data. This industry aims at understanding and satisfying their customers while meeting regulatory compliance and preventing fraud. In effect, banks can exploit Big Data using advanced analytics to generate insights required to make smart decisions.

In the education sector, Big Data can be employed to make vital improvements on school systems, quality of education and curriculums. For instance, Big Data can be analyzed to assess students’ progress and to design support systems for professors and tutors.

Healthcare providers, on the other hand, collect patients’ records and design various treatment plans. In the healthcare sector, practitioners and service providers are required to offer accurate and timely treatment that is transparent to meet the stringent regulations in the industry and to enhance the quality of life. In this case, Big Data can be managed to uncover insights that can be used to improve the quality of service.

Governments and different authorities can apply analytics to Big Data to create the understanding required to manage social utilities and to develop solutions necessary to solve common problems, such as city congestion, crime, and drug use. However, governments must also consider other issues such as privacy and confidentiality while dealing with Big Data.

In manufacturing and processing, Big Data offers insights that stakeholders can use to efficiently use raw materials to output quality products. Manufacturers can perform analytics on big data to generate ideas that can be used to increase market share, enhance safety, minimize wastage, and solve other challenges faster.

In the retail sector, companies rely heavily on customer loyalty to maintain market share in a highly competitive market. In this case, managing big data can help retailers to understand the best methods to utilize in marketing their products to existing and potential consumers, and also to sustain relationships.

Challenges Handling Big Data

With the introduction of Big Data, the challenge of consolidating and creating value on data assets becomes magnified. Today, organizations are expected to handle increased data velocity, variety, and volume. It is now a business necessity to deal with traditional enterprise data and Big Data. Traditional relational databases are suitable for storing, processing, and managing low-latency data. Big Data has increased volume, variety, and velocity, making it difficult for legacy database systems to efficiently handle it.

Failing to act on this challenge implies that enterprises cannot tap the opportunities presented by data generated from diverse sources, such as machine sensors, weblogs, social media, and so on. On the contrary, organizations that will explore Big Data capabilities amidst its challenges will remain competitive. It is necessary for businesses to integrate diverse systems with Big Data platforms in a meaningful manner, as heterogeneity of data environments continue to increase.

Virtualization

Virtualization involves turning physical computing resources, such as databases and servers, into multiple virtual systems. The concept consists of simulating the function of an IT resource in software, making it identical to the corresponding physical object. Virtualization uses abstraction to make a software application appear and operate like hardware, providing benefits ranging from flexibility and scalability to performance and reliability.

Typically, virtualization is made possible using virtual machines (VMs) implemented on microprocessors with the necessary hardware support and OS-level implementations to enhance computational productivity. VMs offer additional convenience, security, and integrity with little resource overhead.

Benefits of Virtualization

With available technologies, wide-scale virtualization is economical, and reliability can be improved by employing virtualization offered by cloud service providers on a fully redundant, standby basis. Traditionally, organizations would deploy several servers operating at a fraction of their capacity to meet increased processing and storage demands, which resulted in increased operating costs and inefficiencies. With virtualization, software can simulate the functionality of hardware, so businesses can largely eliminate the possibility of system failures. At the same time, the technology significantly reduces the capital expense components of IT budgets. In the future, more resources will be spent on operating expenses than on acquisition: company funds will be channeled to service providers instead of to purchasing expensive equipment and hiring local personnel.

Overall, virtualization enables IT functions across business divisions and industries to be performed more efficiently, flexibly, inexpensively, and productively. The technology meaningfully eliminates expensive traditional implementations.

Apart from reducing capital and operating costs for organizations, virtualization minimizes and eliminates downtime. It also increases IT productivity, responsiveness, and agility. The technology provides faster provisioning of resources and applications. In case of incidents, virtualization allows fast disaster recovery that maintains business continuity.

Types of Virtualization

There are various types of virtualization, such as a server, network, and desktop virtualization.

In server virtualization, more than one operating system runs on a single physical server to increase IT efficiency, reduce costs, achieve timely workload deployment, improve availability and enhance performance.

Network virtualization involves reproducing a physical network to allow applications to run on a virtual system. This type of virtualization provides operational benefits and hardware independence.

In desktop virtualization, desktops and applications are virtualized and delivered to different divisions and branches in a company. Desktop virtualization supports outsourced, offshore, and mobile workers, who can access simulated desktops on tablets and iPads.

Characteristics of Virtualization

Some of the features of virtualization that support the efficiency and performance of the technology include:

Partitioning: In virtualization, several applications, database systems, and operating systems are supported by a single physical system since the technology allows partitioning of limited IT resources.

Isolation: Virtual machines are isolated from the physical systems hosting them. If a single virtual instance breaks down, the other machines, as well as the host hardware components, are not affected.

Encapsulation: A virtual machine can be presented as a single file while abstracting other features. This makes it possible for users to identify the VM based on a role it plays.

Data Virtualization – A Solution for Big Data Challenges

Virtualization can be viewed as a strategy that helps derive information value when needed. The technology adds a level of efficiency that makes big data applications a reality. To enjoy the benefits of big data, organizations need to abstract data from its different sources. In other words, virtualization can provide the partitioning, encapsulation, and isolation that abstract the complexities of Big Data stores, making it easy to integrate data from multiple stores with data from other systems used in an enterprise.

Virtualization enables ease of access to Big Data. The two technologies can be combined and configured using the software. As a result, the approach makes it possible to present an extensive collection of disassociated and structured and unstructured data ranging from application and weblogs, operating system configuration, network flows, security events, to storage metrics.

Virtualization improves storage and analysis capabilities on Big Data. As mentioned earlier, the current traditional relational databases are incapable of addressing growing needs inherent to Big Data. Today, there is an increase in special purpose applications for processing varied and unstructured big data. The tools can be used to extract value from Big Data efficiently while minimizing unnecessary data replication. Virtualization tools also make it possible for enterprises to access numerous data sources by integrating them with legacy relational data centers, data warehouses, and other files that can be used in business intelligence. Ultimately, companies can deploy virtualization to achieve a reliable way to handle complexity, volume, and heterogeneity of information collected from diverse sources. The integrated solutions will also meet other business needs for near-real-time information processing and agility.

In conclusion, it is evident that the value of Big Data comes from processing information gathered from diverse sources in an enterprise. Virtualizing big data offers numerous benefits that cannot be realized using physical infrastructure and traditional database systems, providing a simplification of Big Data infrastructure that reduces operational costs and time to results. Soon, Big Data use cases will shift from theoretical possibilities to multiple usage patterns featuring powerful analytics and affordable archival of vast datasets. Virtualization will be crucial in exploiting Big Data presented as abstracted data services.

Data Warehousing vs. Data Virtualization

Information Management

Today, businesses heavily depend on data to gain insights into their processes and operations and to develop new ways to increase market share and profits. In most cases, the data required to generate those insights is sourced and located in diverse places, which requires a reliable access mechanism. Currently, data warehousing and data virtualization are the two principal techniques used to store and access the sources of critical data in a company. Each approach offers different capabilities and suits particular use cases, as described in this article.

Data Warehousing

A data warehouse is designed and developed to securely host historical data from different sources, protecting those sources from the performance degradation caused by sophisticated analytics and heavy reporting demands. Today, various tools and platforms have been developed for data warehouse automation; they can be deployed to quicken development and to automate testing, maintenance, and other steps involved in data warehousing. In a data warehouse, data is stored as a series of snapshots, where each record represents data at a particular time. Companies can therefore analyze data warehouse snapshots to compare data between different periods and convert the results into insights required to make crucial business decisions.
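
As a minimal sketch of such a period comparison, assume a hypothetical sales_snapshot table keyed by region and snapshot date:

    -- Hypothetical period-over-period comparison of two warehouse snapshots.
    SELECT curr.region,
           curr.total_sales - prev.total_sales AS sales_change
    FROM sales_snapshot AS curr
    JOIN sales_snapshot AS prev
      ON prev.region = curr.region
    WHERE curr.snapshot_date = '2018-06-30'
      AND prev.snapshot_date = '2018-03-31';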

Moreover, a data warehouse is optimized for functions such as data retrieval. The technology duplicates data to allow the database denormalization that enhances query performance. The solution can further be deployed to create an enterprise data warehouse (EDW) that services the entire organization.

Data Warehouse Information Architecture

Features of a Data Warehouse

A data warehouse is subject-oriented; it is designed to help entities analyze data. For instance, a company can build a data warehouse focused on sales to learn more about its sales data, and analytics on that warehouse can establish insights such as the best customer for a period. The warehouse is subject-oriented because it is defined around a subject matter.

A data warehouse is integrated. Data from various sources is first put into a consistent format, which requires the firm to resolve challenges such as naming conflicts and inconsistent units of measure.
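
As a minimal sketch of this integration step, assume two hypothetical source systems that encode gender and height differently; the load normalizes both into the warehouse's single convention:

    -- Hypothetical load that unifies gender codes and converts
    -- centimeters to inches while integrating two sources.
    INSERT INTO dw.customer (customer_id, gender, height_in)
        SELECT id, gender, height_inches
        FROM src_a.customer
        UNION ALL
        SELECT cust_no,
               CASE sex WHEN 'M' THEN 'MALE' WHEN 'F' THEN 'FEMALE' END,
               height_cm / 2.54
        FROM src_b.customer;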

A data warehouse is nonvolatile: data entered into the warehouse should not change after it is stored. This property increases accuracy and integrity in data warehousing.

A data warehouse is time variant since it focuses on data changes over time. Data warehousing discovers trends in business by using large amounts of historical data. In effect, a typical operation in a data warehouse scans millions of rows to return an output.

A data warehouse is designed and developed to handle ad hoc queries. In most cases, organizations may not predict the amount of workload of a data warehouse. Therefore, it is recommendable to optimize the data warehouse to perform optimally over any possible query operation.

A data warehouse is regularly updated by the ETL process using bulk data modification techniques. Therefore, end users cannot directly update the data warehouse.

Advantages of Data Warehousing

The primary motivation for developing a data warehouse is to provide timely information required for decision making in an organization. A business intelligence data warehouse serves as an initial checkpoint for crucial business data. When a company stores its data in a data warehouse, tracking it becomes natural. The technology allows users to perform quick searches to be able to retrieve and analyze static data.

Another driver for companies investing in data warehouses involves integrating data from disparate sources. This capability adds value to operational applications like customer relationship management systems. A well-integrated warehouse allows the solution to translate information to a more usable and straightforward format, making it easy for users to understand the business data.

The technology also allows organizations to perform a series of analyses on data.

A data warehouse reduces the cost of accessing historical data in an organization.

Data warehousing provides standardization of data across an organization. Moreover, it helps identify and eliminate errors: before loading data, the solution surfaces inconsistencies to users so they can be corrected.

A data warehouse also improves the turnaround time for analysis and report generation.

The technology makes it easy for users to access and share data. A user can conduct a quick search on a data warehouse to find and analyze static data without wasting time.

Data warehousing removes the informational processing load from transaction-oriented databases.

Disadvantages of Data Warehousing

While data warehousing technology is undoubtedly beneficial to many organizations, not every data warehouse is relevant to a business, and in some cases a data warehouse can be expensive to scale and maintain.

Preparing a data warehouse is time-consuming, since raw data must be prepared and loaded, which is often a manual effort.

A data warehouse is not a perfect choice for handling unstructured and complex raw data. Moreover, it faces compatibility difficulties: depending on the data sources, companies may require a business intelligence team to ensure compatibility for data coming from sources running distinct operating systems and programs.

The technology requires ongoing maintenance to continue working correctly, and keeping the solution updated with the latest features can be costly. Regularly maintaining a data warehouse requires a business to spend more on top of the initial investment.

A data warehouse's use can be limited by information privacy and confidentiality issues. In most cases, businesses collect and store sensitive data belonging to their clients, and viewing it is restricted to particular employees, which limits the benefits offered by a data warehouse.

Data Warehousing Use Case

There are a number of ways organizations use data warehouses, and businesses can optimize the technology for performance by identifying the type of data warehouse they have.

  1. A data warehouse can be used by an organization that is struggling to report efficiently on business operations and activities; the solution makes it possible to access the required data.
  2. A data warehouse is necessary for an organization where data is copied separately by different divisions for analysis in spreadsheets that are not consistent with one another.
  3. Data warehousing is crucial in organizations where uncertainties about data accuracy are causing executives to question the veracity of reports.
  4. A data warehouse is crucial for business intelligence acceleration. The technology delivers rapid data insights to analysts at scale and with concurrency, without requiring manual tuning or optimization of a database.
Data Virtualization Information Architecture

Data Virtualization

Data virtualization technology does not require the transfer or storage of data. Instead, it uses a combination of application programming interfaces (APIs) and metadata (data about data) to interface with data in different sources, and users run joined queries to access the original data sources. In other words, data virtualization offers a simplified, integrated view of business data in real time, as requested by business users, applications, and analytics. The technology makes it possible to integrate data from distinct sources, formats, and locations without replication, creating a unified virtual data layer that delivers data services to support users and various business applications.

Data virtualization performs many of the same data integration functions as extract, transform, and load (ETL), data replication, and federation, but it leverages modern technology to deliver real-time data integration with agility, low cost, and high speed. In effect, data virtualization can eliminate traditional data integration work and reduce the need for replicated data warehouses and data marts in many cases.
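Federation syntax varies by product, but the idea can be sketched as a virtual view over two remote sources; the nicknames crm_customers and erp_orders and the customer_360 view are hypothetical:

create view customer_360 as
select c.customer_id
, c.customer_name -- Served live from the CRM source
, o.order_total -- Served live from the ERP source
from crm_customers c -- Nickname over a remote CRM database
, erp_orders o -- Nickname over a remote ERP database
where o.customer_id = c.customer_id;

-- Consumers then query the view as if it were one local table:
select * from customer_360 where order_total > 10000;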

Capabilities and Benefits of Data Virtualization

There are various benefits of implementing data virtualization in an organization.

Firstly, data virtualization allows a firm to access and leverage all of its information, which helps it achieve a competitive advantage. The solution offers a unified virtual layer that abstracts the underlying source complexity and presents disparate data sources as a single source.

Data virtualization is cheaper since it does not require additional hardware to be installed. In other words, organizations no longer need to dedicate extensive IT resources and additional monetary investment to create on-site infrastructure similar to that used by a data warehouse.

Data virtualization allows speedy deployment of resources: provisioning is fast and straightforward, since organizations are not required to set up physical machines, create local networks, or install other IT components. Users have a single point of access to a virtual environment that can be distributed to the entire company.

Data virtualization is energy efficient, since the solution does not require additional local hardware and software; consequently, an organization is not required to install extra cooling systems.

Disadvantages of Data Virtualization

Data virtualization creates a security risk. In the modern world, information is a cheap way to make money, and company data is frequently targeted by hackers. Virtualizing data from disparate sources may give malicious users an opportunity to steal critical information and use it for monetary gain.

Data virtualization requires a series of channels or links that must work in cohesion to perform the intended task. In this case, all data sources must be available for virtualization to work effectively.

Data Virtualization Use Cases

  • Companies that rely on business intelligence use data virtualization for rapid prototyping to meet immediate business needs; data virtualization can create a real-time reporting solution that unifies access to multiple internal databases.
  • Provisioning data services for single-view applications, such as customer service and call center applications, also requires data virtualization.

 

Common Information Technology Architectures

Overview Of Common Information Technology Architectures

The world is currently in the Information Technology era, even as many experts are of the opinion that the Silicon Valley days are beginning to come to an end. Information technology is what much of the world revolves around today, which makes it worth considering a technical overview of how IT architectures are used. The term information technology is often used in place of computer networks, but it also covers other information-related technologies such as television and cell phones, showing the connection between IT and ICT (though IT and ICT are often used interchangeably, they are technically different). An IT architecture is the framework or basis that supports an organization or system. In computing, an information technology architecture involves the virtual and physical resources supporting the collection, processing, analysis, and storage of data. The architecture can be integrated into a single data center or, in other instances, decentralized into multiple data centers, managed and controlled by the IT department or by a third-party IT firm such as a cloud provider or colocation facility. IT architectures usually come into play when we consider computer hardware (Big Iron: mainframes and supercomputers), software, the Internet (LAN/WAN server-based), e-commerce, telecom equipment, storage (cloud), and so on.

Information Technology Industry Overview

Human beings have been able to manipulate, store, and retrieve data since 3000 BC, but the modern sense of information technology first appeared in a 1958 article in the Harvard Business Review. The authors, Harold J. Leavitt and Thomas L. Whisler, commented that the new technology lacked an established name; it would be called information technology (IT). Information technology is used in virtually all sectors and industries: education, agriculture, marketing, health, governance, finance, and so on. Whatever you do, it is useful to have a basic overview of the architectural uses of information technology. Below we take a look at some standard information technology architectures with regard to technology environment patterns such as Big Iron (mainframes and supercomputers); cloud; LAN/WAN server-based; and storage (cloud).

Big Iron (Mainframe & Supercomputers)

Big iron is a term used by hackers; the hacker's dictionary, the Jargon File, defines it as "large, expensive, ultra-fast computers. It is used for number crunching supercomputers such as Crays, but can include more conventional big commercial mainframes". The term is often used of IBM mainframes when discussing their survival after the invention of lower-cost Unix computing systems. More recently it has also been applied to high-capacity computer servers and server ranches, whose steel racks work in much the same manner.

Supercomputers are known to be the world's fastest and largest computers, and they are primarily used for complex scientific calculations. A supercomputer and a desktop computer share similar components: both have memory, processors, and hard drives. Although similarities exist, the speeds are significantly different: supercomputers are far faster and larger, and their large disk storage, high memory, and many processors increase the speed and power of the machine. While desktop computers can perform thousands or millions of floating-point operations per second (megaflops), supercomputers perform billions of operations per second (gigaflops) and even up to trillions of operations per second (teraflops).

Mainframe Computers

Evolution Of Mainframe and Supercomputers

Many of today's computers are indeed faster than the very first supercomputer, the Cray-1, which was designed and developed by the Cray Research team during the mid-70s. The Cray-1 could compute at a rate of 167 megaflops using a rapid form of computing called vector processing, which consists of the quick execution of instructions in a pipelined fashion. In the mid-80s a faster method of supercomputing originated, called parallel processing. Applications that use parallel processing are able to solve computational problems by using multiple processors. As an example, suppose you were going to prepare ice cream sundaes for nine of your friends: you would need ten scoops of ice cream, ten bowls, ten drizzles of chocolate syrup, and ten cherries. Working alone, you would put one scoop of ice cream in each bowl and drizzle the syrup on each one; this method of preparing sundaes resembles vector processing. To get the job done more quickly, you would ask some friends to help in a parallel-processing fashion: if five people prepare the mixture, it is roughly five times as fast.

Parallel Processing

Application Of Mainframe and Supercomputers

Supercomputers are so powerful that they can provide researchers with insight into phenomena that are too small, too fast, too big, or too slow to observe in laboratories. Astrophysicists use supercomputers as time machines to explore the past and the future of the universe. A fascinating supercomputer simulation created in the year 2000 depicted the collision of two galaxies, Andromeda and our own Milky Way, even though this collision will not happen for another 3 billion years.

This particular simulation allowed scientists to experiment and view the result now. The simulation was conducted by Blue Horizon, a parallel supercomputer at the San Diego Supercomputer Center. Using 256 of Blue Horizon's 1,152 processors, the simulation showed what would happen to millions of stars if the galaxies collided. Another example is molecular dynamics, the study of how molecules interact with each other. Simulations done on supercomputers allow scientists to study the interactions when two molecules are docked. Researchers can generate an atom-by-atom picture of the molecular geometry by determining the shape of a molecule's surface. Atomic experimentation at this level is extremely difficult or impossible to perform in a laboratory environment, but supercomputers have paved the way for scientists to simulate such behaviors with ease.

Supercomputers Of The Future

Various research centers are always diving into new applications, such as data mining, to explore additional uses of supercomputing. Data mining allows scientists to find previously unknown relationships among data; for example, the Protein Data Bank at the San Diego Supercomputer Center collects scientific data that provides scientists all around the world with a greater understanding of biological systems, giving researchers new insights into the effects, causes, and treatments of many diseases. The capabilities and applications of supercomputers will continue to grow as institutions all over the world share their discoveries, making researchers more proficient at parallel processing.

Information Technology Data Storage

Electronic data storage, which is widely used in modern computers, dates from World War II, when delay-line memory was developed to remove interference from radar signals. The Williams tube was the first random-access digital storage device; it was based on the cathode-ray tube and consumed considerable electrical power. The problem was that the information stored in delay-line memory and the Williams tube was volatile: it had to be continuously refreshed, and it was lost whenever power was removed. The first form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the first commercially available general-purpose electronic computer.

IBM introduced the first hard disk drive in 1956, as a component of its 305 RAMAC computer system. Most digital data today is stored magnetically on hard disks, or optically on media such as CD-ROMs. In 2002, digital storage capacity exceeded analog capacity for the first time; by 2007, almost 94% of the data stored in the world was held digitally: 52% on hard disks, 28% on optical devices, and 11% on digital magnetic tape. The worldwide capacity to store information on electronic devices grew from 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every three years.
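As a rough sanity check on that doubling rate (my arithmetic, not from the original source): 295 / 3 ≈ 98 ≈ 2^6.6, so capacity doubled about 6.6 times in the 21 years from 1986 to 2007, or once every 21 / 6.6 ≈ 3.2 years, which is consistent with the quoted figure of roughly every three years.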

Cloud Computing

Cloud Storage

Cloud storage is a modern data storage model in which digital data is stored in logical pools, while the physical storage spans multiple servers and often multiple locations; the environment is usually owned and managed by a hosting company. Cloud storage providers are responsible for keeping the data available and accessible, and individuals and organizations lease or buy storage capacity from them to store user, organization, or application data. Cloud storage originally referred to hosted object-storage services, but over time the term has broadened to include other types of data storage available as a service, such as block storage. Examples of object-storage services are Amazon S3 and Microsoft Azure Storage, while OceanStore and VISION Cloud are storage systems that can be hosted and deployed with cloud characteristics.

Cloud computing is changing the implementation and design of IT infrastructures. Traditional business-owned data centers are mostly private, capital-intensive resources (Big Iron: mainframes and supercomputers); cloud computing, on the other hand, enables organizations to access a cloud service provider's credible data center infrastructure for a fee, avoiding most of that capital expense. In the infrastructure-as-a-service model, cloud computing allows flexible data storage on demand: consumers can ask a cloud service provider to store data, provide compute, and offer other IT-related services without installing equipment and other resources locally, saving space and money, while users can quickly adjust cloud usage to the required workload.

Networks

On a typical day, people tend to use several different IT networks. The process of checking your email over a Wi-Fi connection on your PC at home happens across one typical network.

Logging on to your computer at your place of work to access files from the company's database uses another. And when you are out for coffee, the Wi-Fi hotspot at the coffee shop is yet another type of network-based communications.

All of these networks are set up differently. Networks are mainly categorized according to the geographic area they cover and the requirements of the network within that area. Networks can serve anyone from a single user with one device to millions of people and devices anywhere on the planet.

Some common networks we will consider include:

  • WAN (Wide Area Network)
  • LAN (Local Area Network)
  • PAN (Personal Area Network)
  • MAN (Metropolitan Area Network)

Let’s go into some detail on these networks.

Area Net Relative Size Relationship

PAN (Personal Area Network)

A PAN (personal area network) is a network organized around a single person within a building or nearby, such as a small office or a home. A PAN can incorporate one or more PCs, phones, small gadgets, video game consoles, and other devices. If multiple people use the same network within a home, the network is sometimes known as a HAN (home area network).

In a very common setup, a home has a single wired Internet connection attached to a modem. The modem then provides both wired and wireless service to numerous devices. The network is typically managed from a PC but can be accessed from other electronic devices.

This kind of network provides great flexibility. For instance, it enables you to:

  • Send a report to the printer in the office upstairs while you're sitting in another room with your laptop
  • Upload pictures from your mobile phone to cloud storage associated with your desktop PC
  • View movies from an Internet streaming platform on your TV

If this sounds familiar, you likely have a PAN in your home without knowing what it's called.

LAN (Local Area Network)

A LAN (local area network) consists of a computer network at a single location, typically an individual office building. A LAN is useful for sharing resources such as data storage and printers. LANs can be built with relatively inexpensive hardware such as network adapters, hubs, and Ethernet cables.

A small LAN may connect just two PCs, while a larger LAN can accommodate many more. A LAN typically relies on wired connections for speed and security; however, wireless devices can also be attached to a LAN. High speed and relatively low cost are the defining characteristics of LANs.

LANs are typically used where people need to share resources and information among themselves but not with the outside world. Think of an office building where everyone should be able to access files on a server or print a document on one or more shared printers. Those tasks should be easy for everyone working in the same office, but you would not want a stranger strolling into the office to have the same access.

 

MAN (Metropolitan Area Network)

A MAN (metropolitan area network) consists of a computer network across an entire city, college campus, or small region. Depending on the configuration, this type of network can cover an area from 5 to about 50 kilometers across. A MAN is often used to connect a group of LANs together to form a broader network. When this kind of network is designed specifically for a campus, it is sometimes called a CAN (campus area network).

WAN (Wide Area Network)

A WAN (wide area network) covers a vast area, such as a whole nation or the entire world. A WAN can contain multiple smaller networks, such as LANs or MANs. The Internet is the best-known example of a public WAN.

Conclusion

The world is changing rapidly as the modern world continues its unstoppable growth. With so much change happening, it is good that education is able to reach students in various ways. Students today are the leaders, teachers, inventors, and businesspeople of tomorrow. Information technology plays a crucial role in students being able to hold a job and go to school, especially now that most schools offer various online courses and classes that can be accessed on tablets, laptops, and mobile phones.

Information technology is reshaping many aspects of the world's economies, governments, and societies. IT provides more efficient services, catalyzes economic growth, and strengthens social networks, with about 95% of the world's population now living in an area with some form of IT coverage. IT is diversified: whatever you are using to access this article is itself based on IT architecture. Technological advancement is a positive force behind growth in national economies, citizen engagement, and job creation.

Information Technology (IT) Requirements Management (REQM) For Development

Requirement Management Process

Information Technology Requirements Management

Information technology requirements management (IT management) is the process whereby all resources related to information technology are managed according to an organization's priorities and needs. This includes tangible resources like networking hardware, computers, and people, as well as intangible resources like software and data. The central aim of IT management is to generate value through the use of technology; to achieve this, business strategies and technology must be aligned. Information technology management includes many of the basic functions of management, such as staffing, organizing, budgeting, and control, but it also has functions that are unique to IT, such as software development, change management, network planning, and tech support. Generally, IT is used by organizations to support and complement their business operations. The advantages brought about by having a dedicated IT department are too great for most organizations to pass up, and some organizations actually use IT as the center of their business.

The purpose of requirements management is to ensure that an organization documents, verifies, and meets the needs and expectations of its customers and internal or external stakeholders. Requirements management begins with the analysis and elicitation of the objectives and constraints of the organization. It further includes planning for requirements, integrating requirements, organizing the attributes for working with them, and managing relationships with other information delivered against requirements, as well as changes to all of these. The traceability thus established is used to report back fulfillment of company and stakeholder interests in terms of compliance, completeness, coverage, and consistency. Traceabilities also support change management as part of requirements management, both by clarifying the impacts of changes through requirements or other related elements (e.g., functional impacts through relations to functional architecture) and by facilitating the introduction of those changes.

Requirements management involves communication between the project team members and stakeholders, and adjustment to requirements changes throughout the course of the project. To prevent one class of requirements from overriding another, constant communication among members of the development team is critical. For example, in software development for internal applications, the business has such strong needs that it may ignore user requirements, or believe that in creating use cases, the user requirements are being taken care of.

The major IT Requirement Management Phases

Investigation

  • In Investigation, the first three classes of requirements are gathered from the users, from the business, and from the development team. In each area, similar questions are asked: what are the goals, what are the constraints, what are the current tools or processes in place, and so on. Only when these requirements are well understood can functional requirements be developed. In the common case, requirements cannot be fully defined at the beginning of the project. Some requirements will change, either because they simply weren't extracted, or because internal or external forces at work affect the project in mid-cycle. The deliverable from the Investigation stage is a requirements document that has been approved by all members of the team. Later, in the thick of development, this document will be critical in preventing scope creep or unnecessary changes. As the system develops, each new feature opens a world of new possibilities, so the requirements specification anchors the team to the original vision and permits a controlled discussion of scope change. While many organizations still use only documents to manage requirements, others manage their requirements baselines using software tools. These tools allow requirements to be managed in a database, and usually have functions to automate traceability (e.g., by enabling electronic links to be created between parent and child requirements, or between test cases and requirements), electronic baseline creation, version control, and change management (a minimal sketch of such a traceability store appears just below). Usually such tools contain an export function that allows a specification document to be created by exporting the requirements data into a standard document application.
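As a loose illustration of the kind of database such tools maintain (all table and column names here are hypothetical), parent-child links and requirement-to-test links are enough to support basic automated traceability queries:

create table requirement (
req_id integer primary key -- Unique requirement identifier
, parent_id integer -- Parent requirement, if any (null for root requirements)
, req_text varchar(4000) -- The requirement statement itself
, baseline_no integer -- Baseline in which this version was approved
);

create table req_test_link (
req_id integer -- Requirement under test
, test_id integer -- Test case that verifies it
);

-- Impact analysis: which child requirements inherit from requirement 42?
select req_id, req_text
from requirement
where parent_id = 42;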

Feasibility

  • In the Feasibility stage, costs of the requirements are determined. For user requirements, the current cost of work is compared to the future projected costs once the new system is in place. Questions such as these are asked: “What are data entry errors costing us now?” or “What is the cost of scrap due to operator error with the current interface?” In fact, the need for the new tool is often recognized as these questions come to the attention of financial people in the organization. Business costs would include, “What department has the budget for this?” “What is the expected rate of return on the new product in the marketplace?” “What’s the internal rate of return in reducing costs of training and support if we make a new, easier-to-use system?” Technical costs are related to software development costs and hardware costs: “Do we have the right people to create the tool?” “Do we need new equipment to support expanded software roles?” This last question is an important type. The team must inquire into whether the newest automated tools will add sufficient processing power to shift some of the burden from the user to the system in order to save people time. The question also points out a fundamental point about requirements management: a human and a tool form a system, and this realization is especially important if the tool is a computer or a new application on a computer. The human mind excels in parallel processing and interpretation of trends with insufficient data. The CPU excels in serial processing and accurate mathematical computation. The overarching goal of the requirements management effort for a software project would thus be to make sure the work being automated gets assigned to the proper processor. For instance, “Don’t make the human remember where she is in the interface. Make the interface report the human’s location in the system at all times.” Or “Don’t make the human enter the same data in two screens. Make the system store the data and fill in the second screen as needed.” The deliverable from the Feasibility stage is the budget and schedule for the project.

Design

  • Assuming that costs are accurately determined and the benefits to be gained are sufficiently large, the project can proceed to the Design stage. In Design, the main requirements management activity is comparing the results of the design against the requirements document to make sure that work is staying in scope. Again, flexibility is paramount to success. Here’s a classic story of scope change in mid-stream that actually worked well. Ford auto designers in the early ’80s were expecting gasoline prices to hit $3.18 per gallon by the end of the decade. Midway through the design of the Ford Taurus, prices had settled at around $1.50 a gallon. The design team decided they could build a larger, more comfortable, and more powerful car if gas prices stayed low, so they redesigned the car. The Taurus launch set nationwide sales records when the new car came out, primarily because it was so roomy and comfortable to drive. In most cases, however, departing from the original requirements to that degree does not work. So the requirements document becomes a critical tool that helps the team make decisions about design changes.

Construction and test

  • In the construction and testing stage, the main activity of requirements management is to make sure that work and cost stay within schedule and budget, and that the emerging tool does in fact meet requirements. A main tool used in this stage is prototype construction and iterative testing. For a software application, the user interface can be created on paper and tested with potential users while the framework of the software is being built. Results of these tests are recorded in a user interface design guide and handed off to the design team when they are ready to develop the interface. This saves their time and makes their jobs much easier.

Requirements change management

  • Hardly any software development project is completed without some changes being asked of the project. The changes can stem from changes in the environment in which the finished product is envisaged to be used, business changes, regulation changes, errors in the original definition of requirements, limitations in technology, changes in the security environment, and so on. The activities of requirements change management include receiving the change requests from the stakeholders, recording the received change requests, analyzing and determining the desirability and process of implementation, implementing the change request, quality assurance for the implementation, and closing the change request. Finally, the change request data is compiled and analyzed, and appropriate metrics are derived and dovetailed into the organizational knowledge repository.

Release

  • Requirements management does not end with product release. From that point on, the data coming in about the application’s acceptability is gathered and fed into the Investigation phase of the next generation or release. Thus the process begins again.

The relationship of the requirements management process to the Software Development Lifecycle (SDLC) phases

Planning

  • Planning, the first stage of the systems development process, identifies whether there is a need for a new system to achieve a business's strategic objectives. It produces a preliminary plan (or feasibility study) for a company's business initiative to acquire the resources to build an infrastructure or to modify or improve a service. The purpose of the planning step is to define the scope of the problem and determine possible solutions, resources, costs, time, and benefits, any of which may constrain the project and need additional consideration.

Systems Analysis and Requirements

  • Systems Analysis and Requirements is the second phase, where businesses work on the source of their problem or the need for a change. In the event of a problem, possible solutions are submitted and analyzed to identify the best fit for the ultimate goal(s) of the project. This is where teams consider the functional requirements of the project or solution. It is also where system analysis takes place: analyzing the needs of the end users to ensure the new system can meet their expectations. The systems analysis is vital in determining what a business needs, as well as how those needs can be met, who will be responsible for individual pieces of the project, and what sort of timeline should be expected. There are several tools businesses can use that are specific to the second phase. They include:
  • CASE (Computer Aided Systems/Software Engineering)
  • Requirements gathering
  • Structured analysis

Systems Design

  • Systems design describes, in detail, the necessary specifications, features, and operations that will satisfy the functional requirements of the proposed system. This is the step for end users to discuss and determine their specific business information needs for the proposed system. It is during this phase that they will consider the essential components (hardware and/or software), structure (networking capabilities), processing, and procedures for the system to accomplish its objectives.

Development

  • Development is when the real work begins: in particular, when a programmer, network engineer, and/or database developer are brought on to do the significant work on the project. This work includes using a flow chart to ensure that the process of the system is organized correctly. The development phase marks the end of the initial section of the process and also signifies the start of production. The development stage is further characterized by installation and change. Focusing on training can be a considerable benefit during this phase.

Integration and Testing

  • The Integration and Testing phase involves systems integration and system testing (of programs and procedures), normally carried out by a Quality Assurance (QA) professional, to determine if the proposed design meets the initial set of business goals. Testing may be repeated, specifically to check for errors, bugs, and interoperability, and will be performed until the end user finds it acceptable. Another part of this phase is verification and validation, both of which help ensure the program is completed correctly.

Implementation

  • The Implementation phase is when the majority of the code for the program is written. Additionally, this phase involves the actual installation of the newly developed system. This step puts the project into production by moving the data and components from the old system and placing them in the new system via a direct cutover. While this can be a risky (and complicated) move, the cutover typically happens during off-peak hours, thus minimizing the risk. Both systems analysts and end users should now see the realization of the project that has implemented the changes.

Operations and Maintenance

  • The seventh and final phase involves maintenance and regularly required updates. This step is when end users can fine-tune the system, if they wish, to boost performance, add new capabilities, or meet additional user requirements.

Interaction of the Requirements Management Process with Change Management

Every IT landscape must change over time. Old technologies need to be replaced, while existing solutions require upgrades to address more demanding regulations. Finally, IT needs to roll out new solutions to meet business demands. As the Digital Age transforms many industries, the rate of change is ever-increasing and difficult for IT to manage if ill-prepared.

Requirements baseline management

Requirements baseline management can be the single most effective method used to guide system development and test. This section presents a proven approach to requirements baseline management, requirements traceability, and processes for major system development programs. Effective baseline management can be achieved by providing: effective team leadership to guide and monitor development efforts; efficient processes to define what tasks need to be done and how to accomplish them; and adequate tools to implement and support the selected processes. As in any but the smallest organization, capable engineering leadership is essential to provide a framework within which the rest of the program's engineering staff can function to manage the requirements baseline. Once a leadership team is in place, the next task is to establish processes that cover the scope of establishing and maintaining the requirements baseline. These processes will form the basis for consistent execution across the engineering staff. Finally, given an appropriate leadership model with a forward plan, and a collection of processes that correctly identify what steps to take and how to accomplish them, consideration must be given to selecting a toolset appropriate to the program's needs.

Use Cases vs. Requirements

  • Use cases attempt to bridge the problem of requirements not being tied to user interaction. A use case is written as a series of interactions between the user and the system, similar to a call and response, where the focus is on how the user will use the system. In many ways, use cases are better than a traditional requirement because they emphasize user-oriented context. The value of the use case to the user can be divined, and tests based on the system response can be figured out from the interactions. Use cases usually have two main components: use case diagrams, which graphically describe actors and their use cases, and the text of the use case itself.
  • Use cases are sometimes used in heavyweight, control-oriented processes much like traditional requirements. The system is specified to a high level of completion via the use cases and then locked down with change control on the assumption that the use cases capture everything.
  • Both use cases and traditional requirements can be used in agile software development, but they may encourage leaning heavily on documented specification of the system rather than collaboration. I have seen some clever people who could put use cases to work in agile situations. Since there is no built-in focus on collaboration, it can be tempting to delve into a detailed specification, where the use case becomes the source of record.

Definitions of types of requirements

Requirements types are logical groupings of requirements by common functions, features, and attributes. There are four requirement types:

Business Requirement Type

  • The business requirement is written from the sponsor's point of view. It defines the objective of the project (goal) and the measurable business benefits for doing the project. The following sentence format is used to represent the business requirement and helps to increase consistency across project definitions:
    • “The purpose of the [project name] is to [project goal — that is, what is the team expected to implement or deliver] so that [measurable business benefit(s) — the sponsor’s goal].”

Regression Test requirements

  • Regression Testing is a type of software testing that is carried out by software testers as functional regression tests and by developers as unit regression tests. The objective of regression tests is to find defects that were introduced by defect fixes or by the introduction of new features. Regression tests are ideal candidates for automation.

Reusable requirements

  • Requirements reusability is defined as the capability to use in a project requirements that have already been used before in other projects. This allows optimizing resources during development and reduces errors. Most requirements in today's projects have already been written before. In some cases, reusable requirements refer to standards, norms, and laws that all the projects in a company need to comply with; in others, projects belong to a family of products that share a common set of features, or variants of them.

System requirements

  • There are two types of system requirements:

Functional Requirement Type

  • The functional requirements define what the system must do to process the user inputs (information or material) and provide the user with their desired outputs (information or material). Processing the inputs includes storing the inputs for use in calculations or for retrieval by the user at a later time, editing the inputs to ensure accuracy, properly handling erroneous inputs, and using the inputs to perform the calculations necessary for providing expected outputs. The following sentence format is used to represent the functional requirement: “The [specific system domain] shall [describe what the system does to process the user inputs and provide the expected user outputs].” Or “The [specific system domain/business process] shall (do) when (event/condition).”

Nonfunctional Requirement Type

  • The nonfunctional requirements define the attributes of the user and the system environment. Nonfunctional requirements identify standards, for example, business rules, that the system must conform to, and attributes that refine the system’s functionality regarding use. Because of the standards and attributes that must be applied, nonfunctional requirements often appear to be limitations for designing an optimal solution. Nonfunctional requirements are also at the system level in the requirements hierarchy and follow a similar sentence format for representation as the functional requirements: “The [specific system domain] shall [describe the standards or attributes that the system must conform to].”

Netezza / PureData – How To Get a SQL List of When a View Was Last Changed or Created

Netezza / PureData SQL (Structured Query Language)

Sometimes it is handy to be able to get a quick list of when views were last changed. It could be for any number of reasons; sometimes folks just lose track of when a view was last updated or need to verify that it hasn't been changed recently. So here is a quick SQL, which can be dropped into Aginity Workbench for Netezza, to create a list of when each view was created or last updated. Update the database name in the SQL and run it.

SQL List of When a View Was Last Changed or Created

select t.database -- Database
, t.OWNER -- Object Owner
, t.VIEWNAME -- View Name
, o.objmodified -- The Last Modified Datetime
, o.objcreated -- Created Datetime

from _V_OBJECT o
, _V_VIEW_XDB t
where
o.objid = t.objid
and DATABASE = '<<Database Name>>'
order by o.objcreated Desc, o.objmodified Desc;
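If you only care about recently changed views, a variant of the same query (assuming the same Netezza catalog views) can filter on the modification timestamp:

select t.database -- Database
, t.VIEWNAME -- View Name
, o.objmodified -- The Last Modified Datetime
from _V_OBJECT o
, _V_VIEW_XDB t
where o.objid = t.objid
and DATABASE = '<<Database Name>>'
and o.objmodified > current_date - 30 -- Only views changed in the last 30 days
order by o.objmodified desc;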


Time is a Resource

Time Management

 

Time is a resource

….Many people refer to time as a resource. A resource is something ready for use, or something that can be drawn upon for aid; time fits this definition. Begin to accept time as your most important resource. Time is a tool that can be drawn upon to help you accomplish results, an aid that can take care of a need, an assistant in solving problems. However, time is not like other resources, because you can’t buy it, sell it, rent it, steal it, borrow it, store it, save it, multiply it, manufacture it, or change it. All you can do is spend [use] it.

 

As a resource, time poses another paradox: If you don’t use it, it disappears anyway. Thus, the quality of your [time] resource depends on how well you use it. The knowledge that you are wasting this very personal resource when you do not spend it properly should be enough to keep you on track, resolving to spend your time better.

 

Your attitude toward time is also affected by the fact that time is free – you do not have to buy it. You receive 24 hours simply by waking up each morning. Many people do not place much value on things that cost nothing or on things obtained with little effort. If you had to “buy” your time, you’d probably spend it much differently than you do now.

 

Not only is time free; it is equitable. Everyone receives exactly the same amount each day. But this is a deceptive equality, since some people always manage to get more out of their 24 hours than others. Still, time is one of the truly democratic aspects of our lives…..

 

–Merrill E. Douglass and Donna N. Douglass, 1980, “Manage Your Time, Manage Your Work, Manage Yourself”; ISBN 0-8144-7632-5

 


IBM InfoSphere DataStage Migration Checklist


IBM Infosphere Information Server (IIS)

Assuming that your InfoSphere instance has been installed and configured, here is a quick migration checklist to assist in making sure that you have performed the essential tasks.

 

Major tasks, parent tasks, and child tasks (track a completion status against each):

• Create Migration Package
  • Create database scripts
  • Export DataStage components
  • Gather support files
  • Compress migration package
  • Baseline migration package in CM tool
  • Upload package to target environment
• Deploy Database Components
  • Backup target databases
  • Deploy database components
  • Resolve script errors
  • Create JDBC, ODBC, and/or TNSNAMES entries
  • Install and configure RDBMS client on InfoSphere server
  • Load configuration and conversion data (if not loaded by ETL)
• Deploy Support Files
  • Create file structures
    • WSDLs
    • Certificates
    • Surrogate key files
    • System administration scripts
    • Job scripts
    • Node configuration files
• Deploy DataStage Components
  • Create project (if required)
  • Configure project and/or project parameters (if required)
  • Import ETLs into DataStage
  • Update parameters and parameter sets (if required)
    • File paths
    • Database names
    • Database credentials
  • Update job properties
    • File paths
  • Compile ETLs using Multiple Job Compile
  • Resolve compilation errors
  • Smoke test
• Finalize CM Baseline

 

 

What is an information technology artifact?

Acronyms, Abbreviations, Terms, And Definitions

 

An information technology artifact is software, metadata, configuration values, or documentation whose purpose is to collect, organize, and document designs, specifications, and other key facts needed to implement an information technology system and to provide for its future support and maintenance.