Technical Debt

During the software engineering process, issues arise that should be dealt with promptly, or they will subject the project to unnecessary costs later. Technical debt should be considered at each step of software development. For instance, when analyzing the cost of cloud approaches, you need to take technical debt into consideration. You should also factor in the engineering implications when making technical decisions, such as choosing between cloud services and homegrown solutions.

What is technical debt?

Technical debt refers to the implied cost of the additional rework a system will need after the engineering process is done. For example, engineers may choose an easy option to save time during product design. The steps they skip will later need to be implemented, which means the product must be recalled or fixed after it has reached the market, costing more in resources and manpower.

What are the most common types/causes of technical debt?

Deliberate tech debt

In this case, engineers are aware of a step that is necessary during project implementation, but they ignore it in favor of a shortcut that saves cost and gets the product to market sooner. For instance, when analyzing the advantages of the public cloud, some engineers may assume certain capabilities come for free, only to realize later that those capabilities are essential, forcing them to go back and procure them, which leads to waste for the company. Similarly, some engineers dislike repeating the same process and will skip it, exposing the final product to flaws that require re-engineering.

Accidental/outdated design tech debt

After a product or piece of software is designed, technology advances over time and renders the design less effective at solving certain needs. For instance, the tools you incorporated into a given piece of software may turn out to be flawed, making the product less effective and necessitating re-engineering. Engineers may try their level best to come up with great designs, but advances in technology can still make those designs obsolete.

Bit rot tech debt

Bit rot is a situation where complexity accumulates over time. For example, a system or component can develop unnecessary complexity through the many changes incorporated into it over the years. As engineers try to address emerging needs, they can end up introducing complications that become costly in the long run.

Strategies for minimizing technical debt

How to minimize deliberate tech debt

To avoid this debt, track the backlog from the time the engineers started the work. If you can review the backlog and identify areas where engineers are cutting corners to save time, you can head off the debt.

Minimizing Accidental/outdated design tech debt

You need to refactor the subsystem periodically so that you can identify the technical debt and fix it. For example, if the software is causing unnecessary slowdowns, fix the underlying errors and bring it up to industry standards.
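To make the idea concrete, here is a hypothetical Python illustration of paying down this kind of debt: a quick-hack duplicate check that works but scales poorly, refactored into an equivalent version that removes the slowdown. The functions and scenario are invented for illustration.

```python
# Hypothetical example of refactoring away a slowdown (a form of tech debt).

# The shortcut version: correct, but O(n^2), so it degrades as data grows.
def has_duplicates_quick_hack(items):
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

# The refactored version: same behavior, O(n), paying down the debt.
def has_duplicates(items):
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicates([1, 2, 3, 2]))  # True, with either version
```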

Addressing Bit rot tech debt

Engineers should take time to understand the systems they run and clean out any bad code.

What Is Machine Learning?

Machine learning is a branch of Artificial Intelligence (AI) that enables a system to learn from data rather than through explicit programming. Machine learning uses algorithms that iteratively learn from data to improve, describe the data, and predict outcomes. As the algorithms ingest training data, they produce a more precise machine learning model. Once trained, the model generates predictions from new data based on the data that taught it. Machine learning is a crucial ingredient for creating modern analytics models.
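As a minimal sketch of that train-then-predict cycle, the toy example below fits a model to a handful of data points and asks it for a prediction. The scikit-learn library and the data are illustrative assumptions; the text above prescribes no particular tool.

```python
# Toy illustration of learning from data rather than explicit programming.
from sklearn.linear_model import LinearRegression

# Hypothetical training data: hours studied -> exam score.
X_train = [[1], [2], [3], [4], [5]]
y_train = [52, 61, 70, 78, 90]

model = LinearRegression()
model.fit(X_train, y_train)   # the algorithm learns a model from the training data

print(model.predict([[6]]))   # the trained model predicts from new input
```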

Why Is Consilience Important?

Tree of knowledge

What is Consilience?

Consilience is the confluence of concepts and/or principles from different disciplines, especially when forming a comprehensive unifying theory.

Independent Confirmation

Why are some inventions discovered at the same time in different parts of the world? Does this have something to do with the scientific practice of sharing important discoveries? Generally, scientists believe that they are part of a community of knowledge. Their discoveries do not occur in a vacuum. They must give credit to those who went before and created the foundation for their work. Therefore, when they discover something new, they are expected to share it with the entire world. This sharing is part of knowledge evolution. Interestingly enough, it is also key to the World Wide Web: collaboration is one of the key strengths of the Internet and a way to increase the overall knowledge of planet Earth. Scientists can also increase the strength of their theories through independent confirmation.

Result Conciliation

There are oftentimes prescriptions for the types and numbers of witnesses needed to satisfy certain legal requirements, and anyone who has completed an experiment understands the importance of result conciliation. A hypothesis is not considered proven until its results can be repeated by independent sources; this shows that the underlying reality is objective. The word "consilience" was formed from two Latin roots: "com-" meaning "together" and "salire" meaning "to jump." Therefore, consilience means "jumping together," a convergence of proof from independent sources. Scientists should use different methods to reach the same conclusion. Business and economics have a similar concept: just think of a recession or depression, which is officially declared when a variety of indicators are in agreement, such as the stock market, employment, inflation, and money supply.

Knowledge Evolution

Consulting can use the concept of consilience to teach firms how to follow objective norms. Technology consulting can compare a company's subjective practices to objective industry norms. The best career development is grounded in objective, independent analysis. The concordance of evidence can help a business create a successful strategy: consilience is the convergence of evidence from independent sources to prove the validity of a conclusion. Objective corporate success can be achieved by satisfying the objective needs of your customers, and business intelligence requires an objective standard, such as consilience, to be useful.

Conclusion

Consilience is important to you because the answer to any given problem may not necessarily come from within your field of expertise and experience. Rather, to be truly competitive in an ever-increasing world of knowledge, we need to adopt a broad-scoped, renaissance approach to learning and thinking, one which folds in other sets of concepts and principles to create durable solutions for today and tomorrow.

 

Common Information Technology Architectures

Overview Of Common Information Technology Architectures

The world is currently in the information technology era, even as many experts are of the opinion that the Silicon Valley days are beginning to come to an end. Information technology is what much of the world revolves around today, which makes it worth taking a technical overview of how IT architectures are used. The term information technology is often used in place of computer networks, but it also covers other information-related technologies such as television and cell phones, showing the connection between IT and ICT (though IT and ICT are often used interchangeably, they are technically different). An IT architecture is the framework that supports an organization or system. In computing, IT architecture involves the virtual and physical resources supporting the collection, processing, analysis, and storage of data. The architecture can be integrated into a single data center or, in other instances, decentralized across multiple data centers, managed and controlled by the IT department or by a third-party IT firm such as a cloud provider or colocation facility. IT architectures come into play when we consider computer hardware (Big Iron: mainframes and supercomputers), software, networks (LAN/WAN server-based), e-commerce, telecom equipment, storage (cloud), and so on.

Information Technology Industry Overview

Human beings have been able to manipulate, store, and retrieve data since 3000 BC, but the modern sense of information technology first appeared in a 1958 article in the Harvard Business Review. The authors, Harold J. Leavitt and Thomas L. Whisler, commented that the new technology lacked an established name; it would be called information technology (IT). Information technology is used in virtually all sectors and industries: education, agriculture, marketing, health, governance, finance, and so on. Whatever you do, it is helpful to have a basic overview of the architectural uses of information technology. Below we take a look at some standard information technology architectures with regard to common technology environment patterns: Big Iron (mainframes and supercomputers), the cloud, LAN/WAN server-based networks, and cloud storage.

Big Iron (Mainframe & Supercomputers)

Big iron is a term coined by hackers; the Jargon File (the hacker's dictionary) defines it as "large, expensive, ultra-fast computers. It is used for number crunching supercomputers such as Crays, but can include more conventional big commercial mainframes". The term is often used of IBM mainframes when discussing their survival after the rise of lower-cost Unix computing systems. More recently, the term has also been applied to highly efficient computer servers and server ranches, whose rack-mounted hardware plays much the same role.

Supercomputers are known to be the world's fastest and largest computers, and they are primarily used for complex scientific calculations. A supercomputer and a desktop computer share similar components: both have memory, processors, and hard drives. Although similarities exist, the speeds are significantly different; supercomputers are far faster and more extensive. A supercomputer's large disk storage, high memory capacity, and many processors increase its speed and power. While desktop computers perform millions of floating-point operations per second (megaflops), supercomputers perform billions of operations per second (gigaflops) and even trillions of operations per second (teraflops).

Mainframe Computers

Evolution Of Mainframes and Supercomputers

Many computers today are indeed faster than the very first supercomputer, the Cray-1, which was designed and developed by the Cray Research team in the mid-70s. The Cray-1 could compute at a rate of 167 megaflops using a rapid technique called vector processing, in which instructions execute in quick succession in a pipelined fashion. In the mid-80s a faster method of supercomputing originated: parallel processing. Applications that use parallel processing solve computational problems by employing multiple processors. For example, suppose you were preparing ice cream sundaes for nine of your friends. You would need ten scoops of ice cream, ten bowls, ten drizzles of chocolate syrup, and ten cherries. Working alone, you would put one scoop of ice cream in each bowl, drizzle the syrup on each one, and so on; this method of preparing sundaes corresponds to vector processing. To get the job done more quickly, you would recruit friends to help in parallel-processing fashion: if five people prepare the sundaes, the work goes roughly five times as fast.
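The sundae analogy translates directly into code. The sketch below is an illustration only (the half-second task time and five-worker count are arbitrary): it runs the same work serially and then across five processes using Python's multiprocessing module.

```python
# Illustrative sketch of serial vs. parallel execution of the same work,
# in the spirit of the sundae example above (task and timings are made up).
from multiprocessing import Pool
import time

def make_sundae(n):
    time.sleep(0.5)              # pretend each sundae takes half a second
    return f"sundae {n} done"

if __name__ == "__main__":
    orders = range(10)

    start = time.time()
    serial = [make_sundae(n) for n in orders]        # one worker, one at a time
    print(f"serial:   {time.time() - start:.1f}s")   # about 5 seconds

    start = time.time()
    with Pool(5) as pool:                            # five workers in parallel
        parallel = pool.map(make_sundae, orders)
    print(f"parallel: {time.time() - start:.1f}s")   # roughly five times faster
```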

Parallel Processing

Applications Of Mainframes and Supercomputers

Supercomputers are so powerful that they can give researchers insight into phenomena that are too small, too fast, too big, or too slow to observe in laboratories. Astrophysicists use supercomputers as time machines to explore the past and the future of the universe. In 2000, a fascinating supercomputer simulation was created that depicted the collision of two galaxies, Andromeda and our very own Milky Way, even though this collision will not happen for another 3 billion years.

This particular simulation allowed scientists to run the experiment and view the results now. It was conducted on Blue Horizon, a parallel supercomputer at the San Diego Supercomputer Center. Using 256 of Blue Horizon's 1,152 processors, the simulation showed what would happen to millions of stars if the galaxies collided. Another example is molecular dynamics (how molecules interact with each other). Supercomputer simulations allow scientists to study the interactions that occur when two molecules dock. By determining the shape of a molecule's surface, researchers can generate an atom-by-atom picture of the molecular geometry. Experimentation at this atomic level is extremely difficult or impossible to perform in a laboratory, but supercomputers have paved the way for scientists to simulate such behavior with ease.

Supercomputers Of The Future

Various research centers are always diving into new applications such as data mining to explore additional uses of supercomputing. Data mining allows scientists to find previously unknown relationships among data; for example, the Protein Data Bank at the San Diego Supercomputer Center collects scientific data that gives scientists all around the world better ways of understanding biological systems, providing researchers with new insights into the causes, effects, and treatments of many diseases. The capabilities and applications of supercomputers will continue to grow as institutions all over the world share their discoveries, making researchers more proficient at parallel processing.

Information Technology Data Storage

Electronic data storage, which is widely used in modern computers, dates from World War II, when delay-line memory was developed to remove clutter from radar signals. There was also the Williams tube, the first random-access digital storage device, based on the power-hungry cathode ray tube. The information stored in delay-line and Williams-tube memory was volatile: it had to be continuously refreshed, and it was lost whenever power was removed. The first form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the first commercially available general-purpose electronic computer.

IBM introduced the very first hard disk drive in 1956, as an added component to their 305 RAMAC computer system. Most digital data today is stored magnetically on hard disks, or optically on media such as CD-ROMs. In 2002, digital storage capacity exceeded analog capacity for the first time. By 2007, almost 94% of the data stored in the world was held digitally: 52% on hard disks, 28% on optical devices, and 11% on digital magnetic tape. The worldwide capacity for storing information on electronic devices grew from 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every three years.
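Those two endpoints are enough to check the doubling claim, as the short calculation below shows.

```python
# Sanity-check the "doubling every three years" claim from the figures above.
import math

capacity_1986, capacity_2007 = 3, 295      # exabytes
years = 2007 - 1986                        # 21 years

doublings = math.log2(capacity_2007 / capacity_1986)        # ~6.6 doublings
print(f"one doubling every {years / doublings:.1f} years")  # ~3.2 years
```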

Cloud Computing

Cloud Storage

Cloud storage is a modern data storage model in which digital data is stored in logical pools, while the physical storage spans multiple servers, and often multiple locations, in an environment usually owned and managed by a hosting company. Cloud storage providers are responsible for keeping the data available and accessible; individuals and organizations lease or buy storage capacity from the providers to store user, application, or organizational data. Cloud storage originally referred to hosted object-storage services, but in the long run the term has broadened to include other types of data storage available as a service, such as block storage. Examples of object storage services are Amazon S3 and Microsoft Azure Storage, while OceanStore and VISION Cloud are storage systems that can be hosted and deployed with cloud characteristics.

Cloud computing is changing the design and implementation of IT infrastructures. Traditional business-owned data centers are mostly private, capital-intensive resources (Big Iron: mainframes and supercomputers); cloud computing, on the other hand, enables organizations to access a cloud provider's data center infrastructure for a fee, avoiding most of the capital expense. In the infrastructure-as-a-service (IaaS) model, cloud computing allows flexible data storage on demand. Consumers can ask a cloud service provider to store data, provide compute, and offer other IT-related services without installing equipment and other resources locally, saving space and money, while quickly adjusting their cloud usage to the required workload.
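As an illustration of storage as a service, the sketch below writes and reads an object through Amazon S3 (one of the services named above) using the boto3 library. The bucket name and key are hypothetical placeholders, and credentials are assumed to be configured in the environment.

```python
# Minimal sketch of on-demand object storage through a cloud provider's API,
# here AWS S3 via boto3. Bucket name and key are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")   # credentials assumed to be configured locally

# Store application data without installing any storage hardware locally...
s3.put_object(Bucket="example-bucket", Key="reports/2007.txt",
              Body=b"295 exabytes and counting")

# ...and retrieve it from anywhere on demand.
response = s3.get_object(Bucket="example-bucket", Key="reports/2007.txt")
print(response["Body"].read())
```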

Networks

On a typical day, people use several different IT-based networks. Checking your email over a Wi-Fi connection on your PC at home relies on one typical network.

Logging on to your computer at your place of work to access files from the company's database relies on another. And when you are out for coffee, the Wi-Fi hotspot at the coffee shop is yet another type of server-based network.

All of these networks are set up differently. Networks are mainly categorized by the geographic area they cover and the requirements within that area. A network can serve anyone from a single person with one device to millions of people and devices anywhere on the planet.

Some common network types we will consider include:

  • WAN (Wide Area Network)
  • LAN (Local Area Network)
  • PAN (Personal Area Network)
  • MAN (Metropolitan Area Network)

Let’s go into some detail on these networks.

Area Net Relative Size Relationship

PAN (Personal Area Network)

A PAN (personal area network) is a network serving a single person within a building or nearby, such as a small office or a home. A PAN can include one or more PCs, phones, small gadgets, video game consoles, and other devices. If several people use the same network within a home, the network is sometimes known as a HAN (home area network).

In a very common setup, a home has a single wired Internet connection attached to a modem. The modem then provides both wired and wireless service for multiple devices. The network is typically managed from a PC but can be accessed from other devices as well.

This kind of network offers great flexibility. For instance, it enables you to:

  • Send a report to the printer in the office upstairs while you sit in another room with your laptop
  • Upload pictures from your mobile phone to cloud storage associated with your desktop PC
  • View movies from an Internet streaming platform on your TV

If this sounds familiar, you likely have a PAN in your home without knowing what it is called.

LAN (Local Area Network)

A LAN (local area network) consists of a computer network at a single site, typically an individual office building. A LAN is useful for sharing resources such as data storage and printers. LANs can be built with relatively inexpensive hardware such as network adapters, hubs, and Ethernet cables.

A small LAN may connect just two PCs, while larger LANs can accommodate a much higher number. A LAN often relies on wired connections for speed and security, though wireless devices can also join a LAN. High speed and relatively low cost are the defining characteristics of LANs.

LANs are typically used where people need to share resources and data among themselves but not with the outside world. Think of an office building where everyone should be able to access files on a server or print a document to one or more shared printers. Those tasks should be easy for everyone working in the same office, yet you would not want a stranger to stroll into the office and gain access.
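Here is a minimal sketch of that LAN client/server pattern using Python's standard socket module. The loopback address stands in for a machine on the office network; the host, port, and message are illustrative placeholders.

```python
# Minimal client/server exchange, the basic pattern behind LAN resource sharing.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # on a real LAN, the file/print server's address

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"report queued for the office printer")

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.2)                  # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    print(client.recv(1024).decode())
```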

 

MAN (Metropolitan Area Network)

A MAN (metropolitan area network) consists of a computer network spanning an entire city, a school campus, or a small region. Depending on the configuration, this kind of network can cover a range from 5 to around 50 kilometers across. A MAN is often used to connect a group of LANs together to form a broader network. When this kind of network is designed specifically for a campus, it is often called a CAN (campus area network).

WAN (Wide Area Network)

A WAN (wide area network) covers a vast region, such as a whole nation or the entire world. A WAN can contain various smaller networks, such as LANs or MANs. The Internet is the best-known example of a public WAN.

Conclusion

The world is changing rapidly as the modern world continues its unstoppable growth, and with so much change happening, it is good that education can reach students in various ways. Students today are the leaders, teachers, inventors, and businesspeople of tomorrow. Information technology plays a crucial role in students being able to keep their jobs while going to school, especially now that most schools offer online courses and classes that can be accessed on tablets, laptops, and mobile phones.

Information technology is reshaping many aspects of the world's economies, governments, and societies. IT provides more efficient services, catalyzes economic growth, and strengthens social networks, and about 95% of the world's population now lives in an area with some form of IT infrastructure. IT is diversified; whatever you are using to access this article is built on IT architectural features. Technological advancement is a positive force behind economic growth, citizen engagement, and job creation.

What Is A Code Snippet?


A code snippet is a term used in programming to refer to a small part of reusable source code. Such code is available in both binary and text form. Code snippets are commonly defined as units or functional methods that can be readily integrated into larger modules to provide functionality. The term is also used to refer to the practice of minimizing the use of repeated code that is common to many applications.

Java programmers use code snippets as an informative means to support the process of encoding. Typically, a snippet shows an entire functional unit, corresponding to a small program, a single function, a class, a template, or a bunch of related functions.

Programmers use snippets for the same purposes as in an application. For example, they use a snippet to show the code as a proven solution to a given problem. They may also use snippets to illustrate programming "tricks" of non-trivial implementation, to highlight the peculiarities of a given compiler, to provide an example of code portability, or simply to lower programming time in Java. Organic and thematic collections of code snippets, such as digital collections of tips and tricks, act as a source for learning and refining programming.

A snippet is short and fulfills a particular task well; it does not need any extra code beyond the standard library and system-dependent code. A snippet is not a complete program; for that, you would submit the code to a source code repository, which is the best place to handle larger programs. Ideally, a snippet should be a section of code that you can snip out of a larger program and easily reuse in another. To make snippets simple to use, it is good to encapsulate them in a function or class, or potentially as a framework to start a new program.

For a programmer, having good code snippets is very important, and people use many different ways to keep their code with them, including online solutions. Having good code on hand is very important to delivering a best-in-class product. Snippets should always be modular and portable, so that they can be plugged into your code easily. Many people use GitHub Gist to keep their snippets, and Ruby programmers use modules to create code snippets.
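For instance, here is a small Python snippet in exactly that spirit: short, self-contained, encapsulated as a function, and easy to plug into a larger program (the function itself is just an illustrative example).

```python
# An example snippet in the sense described above: short, self-contained,
# encapsulated as a function, and easy to drop into a larger program.
def chunk(items, size):
    """Split a list into consecutive sublists of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Reuse it anywhere a list needs batching:
print(chunk([1, 2, 3, 4, 5, 6, 7], 3))   # [[1, 2, 3], [4, 5, 6], [7]]
```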

What is Time Management?


Why Time Management is Important

Time management is a habit, a process, and a mindset that working professionals use to get things done. Time management is our personalized tactical plan for handling today, tomorrow, and the days to come. Good time management is about working smarter, not harder, to get the most done in the same 24 hours a day and seven days a week.

Time management is also how we ensure that we:

  • Know what is happening,
  • Know what needs to happen in the future,
  • Are properly focused on the important tasks, and
  • Achieve work-life balance.

Definition of Time Management

Time management is the process of organizing, planning, and working to increase efficiency and productivity, both professionally and personally.


 

Databases – What is ACID?


What does ACID mean in database technologies?

  • Concerning databases, the acronym ACID means: Atomicity, Consistency, Isolation, and Durability.

Why is ACID important?

  • Atomicity, Consistency, Isolation, and Durability (ACID) are important to databases because ACID is a set of properties that guarantee that database transactions are processed reliably.

Where is the ACID Concept described?

  • Originally described by Theo Haerder and Andreas Reuter in 1983, in ‘Principles of Transaction-Oriented Database Recovery’, the ACID concept has since been codified in ISO/IEC 10026-1:1992, Section 4.

What is Atomicity?

  • Atomicity ensures that there are only two possible results from a transaction that changes multiple data sets (see the sketch below):
  • either the entire transaction completes successfully and is committed as a work unit,
  • or, if part of the transaction fails, all transaction data is rolled back and the database returns to its previously unchanged state.
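The sketch below demonstrates both outcomes with SQLite through Python's built-in sqlite3 module. SQLite is chosen only because it needs no setup; the property itself is general, and the account data is invented.

```python
# Illustrative demonstration of atomicity using SQLite (chosen only because
# it ships with Python; no particular database is prescribed above).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
con.commit()

try:
    con.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    raise RuntimeError("failure mid-transaction")   # simulate a crash here
    con.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
    con.commit()                                    # never reached
except RuntimeError:
    con.rollback()                                  # undo the partial work

# Both rows are unchanged: the transaction applied as a unit or not at all.
print(con.execute("SELECT * FROM accounts ORDER BY name").fetchall())
```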

What is Consistency?

  • To provide consistency, a transaction either creates a new valid data state or, if any failure occurs, returns all data to the state that existed before the transaction started. Also, if a transaction is successful, then all changes to the system have been properly completed, the data is saved, and the system is in a valid state.

What is Isolation?

  • Isolation keeps each transaction’s view of the database consistent while that transaction is running, regardless of any changes performed by other transactions, allowing each transaction to operate as if it were the only transaction in the system.

What is Durability?

  • Durability ensures that the database keeps track of pending changes in such a way that the state of the database is not affected if transaction processing is interrupted. When restarted, the database returns to a consistent state providing all previously saved/committed transaction data.

 


Database – What is TCL?


TCL (Transaction Control Language) statements are used to manage the changes made by DML statements, allowing statements to be grouped together into logical transactions. The main TCL commands, sketched below, are:

  • COMMIT
  • SAVEPOINT
  • ROLLBACK
  • SET TRANSACTION
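Here is a short sketch of these commands in action, issued through Python's sqlite3 module. SQLite supports COMMIT, SAVEPOINT, and ROLLBACK; SET TRANSACTION syntax varies by vendor, so it is omitted here. Table and savepoint names are hypothetical.

```python
# The main TCL commands driven through Python's built-in sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions by hand
con.execute("CREATE TABLE t (v TEXT)")

con.execute("BEGIN")
con.execute("INSERT INTO t VALUES ('kept')")
con.execute("SAVEPOINT before_experiment")        # mark a point within the transaction
con.execute("INSERT INTO t VALUES ('discarded')")
con.execute("ROLLBACK TO before_experiment")      # undo back to the savepoint
con.execute("COMMIT")                             # make the surviving work permanent

print(con.execute("SELECT v FROM t").fetchall())  # [('kept',)]
```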


 

What is Process Asset Library?


 

What is Process Asset Library (PAL)?

A Process Asset Library (PAL) is a centralized repository within an organization, which contains essential artifacts that document processes or are process assets (e.g., configuration items and designs) used by an organization, project, team, and/or work group.  The assets may also be leveraged to achieve process improvement, which is the intent of a lessons learned document, for example.

What is in the Process Asset Library (PAL)?

A Process Asset Library (PAL) usually houses the following types of artifacts:

  • Organizational policies
  • Process descriptions
  • Procedures
  • Development plans
  • Acquisition plans
  • Quality assurance plans
  • Training materials
  • Process aids (e.g., templates, checklists, job aids, and forms)
  • Lessons learned reports

 

Related References

CMMI Institute

What Is Capability Maturity Model Integration (CMMI)?

Building Organizational Capability

 

What is Confluence?


Confluence

Confluence is the place or process of merging two things.

Business Perspective of Confluence

From a business perspective, confluence is the merging of processes, concepts, principles, and/or technologies.


 

What is Information Management?


Information Management Definition

The definition of Information Management (IM) tends to vary based on your business perspective, but broadly it is all the systems, processes, and practices (business and technical) within an organization for the creation, use, and disposal of business information to support business operations.

Information Management (IM) Activities

Information Management activities may include, but are not limited to:

  • Information creation, capture, storage, and disposal
  • The governance of information, practices, meaning and usage
  • Information protection, regulatory compliance, privacy, and limiting legal liability
  • Technological infrastructure, such as architecture, strategies, and delivery enablement


 

What is an ERP?


What does ERP mean?

  • ERP means “Enterprise Resource Planning”

What is an ERP?

  • An ERP is a business software application, or series of applications, which facilitates the daily operations of a business. An ERP can be a commercial-off-the-shelf (COTS) application (which may or may not be customized), custom built (home grown) by the business, and/or an assemblage of applications and/or modules from a variety of vendors.

Common ERP Major Functions

  • ERP application software typically supports these major business operations:

Financials Management system (FMS)

  • FMS supports accounting, consolidation, planning, and procurement.

Customer Relationship Management (CRM)

  • CRM facilitates customer interactions and data throughout the customer lifecycle, with the goal of improving business relationships with customers, assisting in customer retention and sales growth.

Human Resources Management System (HRMS)

  • HRMS supports workforce acquisition, workforce management, workforce optimization, and benefits administration.

Enterprise Learning Management (ELM)

  • ELM is an integrated application which increases workforce knowledge, skills, and competencies to achieve critical organizational objectives.

Asset Management (AM)

  • AM supports activities for deploying, operating, maintaining, upgrading, and disposing of assets cost-effectively.

Supply Chain management (SCM)

  • SCM is the oversight of materials, information, and finances as they move in a process from supplier to manufacturer to wholesaler to retailer to consumer.


What is TQM?


 

What is TQM?

TQM means “Total Quality Management”.

What is Total Quality Management?

Total Quality Management (TQM) is a management philosophy, which promotes total customer satisfaction through continuous improvement of products and processes, enabled by employee empowerment.

What does CRM Mean?


 

What is CRM?

CRM (customer relationship management) is a type of ERP application used to facilitate sales, marketing, and business development interactions throughout the customer life cycle.

What does a CRM Application do?

A CRM application’s capabilities broadly encompass:

Marketing Integration

  • Lead management, email marketing, and campaign management

Sales Force Automation

  • Contact management, pipeline analysis, sales forecasting, and more

Customer Service & Support

  • Ticketing, knowledge management systems, self-service, and live chat

Field Service Management

  • Scheduling, dispatching, invoicing, and more

Call Center Automation

  • Call routing, monitoring, computer telephony integration (CTI), and interactive voice response (IVR)

Help Desk Automation

  • Ticketing, IT asset management, self-service and more

Channel Management

  • Contact and lead management, partner relationship management, and market development funds management

Business analytics integration

  • Analytics applications, business intelligence, and reporting integration, which may include internal reporting capabilities.


Information Technology – What is Greer’s Third Law?


A computer program does what you tell it to do, not what you want it to do
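A two-line illustration, invented for this note: the programmer wants the average as a fraction, but tells the machine to use integer division, and the program obliges.

```python
scores = [90, 92, 95]

# Wanted: the mean, 92.33...  Told: integer division, which discards the fraction.
print(sum(scores) // len(scores))   # 92, exactly what was asked, not what was wanted
```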


Business Communications – Do not listen to reply, listen to understand

Maxims And Truisms

“Do not listen to reply, listen to understand”

Meaning

  • If you are busy formulating your own response in your head, you cannot possibly be fully hearing or understanding what the other person is saying. You are more likely to make mistaken assumptions about what they are trying to say, which makes your arguments weaker and less relevant. If you listen to understand first, then you will know how to respond more appropriately and powerfully.

 

 

Business Communications – Seek First to Understand, Then to be Understood

Maxims And Truisms

 

“Seek First to Understand, Then to be Understood”
Stephen Covey – The 7 Habits of Highly Effective People (Habit 5)

 

 

Meaning:

  • From a communications standpoint, you always need to make the other person feel “heard” before they are willing to “hear” you.
  • If you are not listening to understand, then you are probably missing the point of what the person is actually trying to say or what they actually want, which puts you in a weaker bargaining position.

What is Crayne’s Law?


All computers wait at the same speed

 

“The Serious Assembler” by Charles A. Crayne and Dian Girand, 1985


Database – What is a Primary Key?

Database Table

What is a Primary Key?

What a primary key is depends somewhat on the database.  However, in its simplest form a primary key:

  • Is a field (column) or combination of fields (columns) which uniquely identifies every row,
  • Is an index in database systems which use indexes for optimization,
  • Is a type of table constraint,
  • Is applied with a data definition language (DDL) create or alter command,
  • And, depending on the data model, can define a parent-child relationship between tables, as sketched below.
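The sketch below shows several of these points at once using SQLite via Python: a single-column primary key, the uniqueness guarantee, and a parent-child relationship keyed against it. Table and column names are hypothetical.

```python
# Primary key behavior illustrated with SQLite (table names are hypothetical).
import sqlite3

con = sqlite3.connect(":memory:")

# A single-column primary key uniquely identifying every row...
con.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")

# ...and a parent-child relationship defined against it.
con.execute("""
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer (customer_id)
    )
""")

con.execute("INSERT INTO customer VALUES (1, 'Alice')")
try:
    con.execute("INSERT INTO customer VALUES (1, 'Bob')")  # duplicate key value
except sqlite3.IntegrityError as e:
    print("rejected:", e)   # UNIQUE constraint failed: customer.customer_id
```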


Database – What is DDL?


What is DDL (Data Definition Language)?

DDL (Data Definition Language) statements are used to manage tables, schemas, domains, indexes, views, and privileges.  The major actions performed by DDL commands are: create, alter, drop, grant, and revoke.
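A minimal sketch of the first three actions, driven through Python's sqlite3 module. SQLite has no user accounts, so grant and revoke do not apply to it; they are standard SQL in server databases. Table and view names are hypothetical.

```python
# Create, alter, and drop, issued through Python's built-in sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")   # create
con.execute("ALTER TABLE employee ADD COLUMN hired_on TEXT")               # alter
con.execute("CREATE VIEW roster AS SELECT name FROM employee")             # views too
con.execute("DROP VIEW roster")                                            # drop
con.execute("DROP TABLE employee")
```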

 
