Topics Covered
Software crisis & myths, Software engineering. Software process & process models: Linear sequential, prototyping, RAD, Evolutionary Product & Process. Project management concepts: People, Product, Process, Project. W5HH principles, critical practice.
Questions Covered
Que 1. Discuss software in terms of software engineering. State the characteristics of software.
A software product is judged to be good software by several factors: what it offers and how well it can be used.
Software engineering is the process of designing, developing, and maintaining software systems. A good software is one that meets the needs of its users, performs its intended functions reliably, and is easy to maintain. There are several characteristics of good software that are commonly recognized by software engineers, which are important to consider when developing a software system. These characteristics include functionality, usability, reliability, performance, security, maintainability, reusability, scalability, and testability.
Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware :
1. Software is developed or engineered; it is not manufactured in the classical sense.
Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities are dependent on people, but the relationship between people applied and work accomplished is entirely different. Both activities require the construction of a “product,” but the approaches are different.

2. Software doesn’t “wear out.”
The relationship, often called the “bathtub curve,” indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental maladies. Stated simply, the hardware begins to wear out.

Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form of the “idealized curve” shown in Figure. Undiscovered defects will cause high failure rates early in the life of a program. However, these are corrected (ideally, without introducing other errors) and the curve flattens as shown. The idealized curve is a gross oversimplification of actual failure models for software. However, the implication is clear—software doesn’t wear out. But it does deteriorate!

This seeming contradiction can best be explained by considering the “actual curve” shown in Figure. During its life, software will undergo change (maintenance). As changes are made, it is likely that some new defects will be introduced, causing the failure rate curve to spike. Before the curve can return to the original steady-state failure rate, another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level begins to rise—the software is deteriorating due to change.
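The contrast between the two curves can be sketched numerically. The following is a minimal illustration only; all rate constants below are invented for the sketch and are not measured data:

```python
import math

def hardware_failure_rate(t, steady=0.5, wearout_start=60):
    """Bathtub curve: high infant-mortality rate, flat middle, wear-out rise."""
    rate = steady + 5.0 * math.exp(-t / 5)         # early defects decay away
    if t > wearout_start:                          # components begin to wear out
        rate += 0.2 * (t - wearout_start)
    return rate

def software_failure_rate(t, changes=(30, 50, 70), steady=0.5):
    """Idealized curve, plus a spike and a raised floor after each change."""
    rate = steady + 5.0 * math.exp(-t / 5)         # undiscovered defects fixed early
    for c in changes:
        if t >= c:
            rate += 0.2                            # each change raises the minimum rate
            rate += 3.0 * math.exp(-(t - c) / 2)   # short-lived spike after the change
    return rate

# Software never wears out, but it deteriorates: the floor rises with each change.
assert software_failure_rate(31) > software_failure_rate(25)   # spike after a change
assert software_failure_rate(100) > software_failure_rate(25)  # raised minimum rate
assert hardware_failure_rate(80) > hardware_failure_rate(40)   # hardware wears out
```

The key design point of the model: hardware's rate eventually climbs without bound (wear-out), while software's rate climbs only through the accumulated `+0.2` floor increments that each maintenance change leaves behind.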

3. Although the industry is moving toward component-based assembly, most software continues to be custom built.
Consider the manner in which the control hardware for a computer-based product is designed and built. The design engineer draws a simple schematic of the digital circuitry, does some fundamental analysis to assure that proper function will be achieved, and then goes to the shelf where catalogs of digital components exist. Each integrated circuit (called an IC or a chip) has a part number, a defined and validated function, a well-defined interface, and a standard set of integration guidelines. After each component is selected, it can be ordered off the shelf.

As an engineering discipline evolves, a collection of standard design components is created. Standard screws and off-the-shelf integrated circuits are only two of thousands of standard components that are used by mechanical and electrical engineers as they design new systems. The reusable components have been created so that the engineer can concentrate on the truly innovative elements of a design, that is, the parts of the design that represent something new.

In the hardware world, component reuse is a natural part of the engineering process. In the software world, it is something that has only begun to be achieved on a broad scale.
Que 2. Explain different categories of software.
Software is a program or set of programs containing instructions that provide desired functionality.
It is somewhat difficult to develop meaningful generic categories for software applications. As software complexity grows, neat compartmentalization disappears. The following software areas indicate the breadth of potential applications :
System software : System software is a collection of programs written to service other programs. Some system software (e.g., compilers, editors, and file management utilities) process complex, but determinate, information structures. Other systems applications (e.g., operating system components, drivers, telecommunications processors) process largely indeterminate data. In either case, the system software area is characterized by heavy interaction with computer hardware; heavy usage by multiple users; concurrent operation that requires scheduling, resource sharing, and sophisticated process management; complex data structures; and multiple external interfaces.
Real-time software : Software that monitors/analyzes/controls real-world events as they occur is called real time. Elements of real-time software include a data gathering component that collects and formats information from an external environment, an analysis component that transforms information as required by the application, a control/output component that responds to the external environment, and a monitoring component that coordinates all other components so that real-time response (typically ranging from 1 millisecond to 1 second) can be maintained.
Business software : Business information processing is the largest single software application area. Discrete “systems” (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS) software that accesses one or more large databases containing business information. Applications in this area restructure existing data in a way that facilitates business operations or management decision making. In addition to conventional data processing applications, business software applications also encompass interactive computing (e.g., point-of-sale transaction processing).
Engineering and scientific software : Engineering and scientific software have been characterized by “number crunching” algorithms. Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing. However, modern applications within the engineering/scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
Embedded software: Intelligent products have become commonplace in nearly every consumer and industrial market. Embedded software resides in read-only memory and is used to control products and systems for the consumer and industrial markets. Embedded software can perform very limited and esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, and braking systems).
Personal computer software : The personal computer software market has burgeoned over the past two decades. Word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business financial applications, external network, and database access are only a few of hundreds of applications.
Web-based software : The Web pages retrieved by a browser are software that incorporates executable instructions (e.g., CGI, HTML, Perl, or Java), and data (e.g., hypertext and a variety of visual and audio formats). In essence, the network becomes a massive computer providing an almost unlimited software resource that can be accessed by anyone with a modem.
Artificial intelligence software : Artificial intelligence (AI) software makes use of nonnumerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Expert systems, also called knowledge-based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing are representative of applications within this category.
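The component structure of real-time software described above (data gathering, analysis, control/output, and monitoring) can be sketched as a simple control loop. This is a toy illustration only; the sensor readings, the over-temperature threshold, and the component names are all hypothetical:

```python
import time

def gather(reading):
    """Data-gathering component: collect and format external information."""
    return {"value": float(reading)}

def analyze(sample):
    """Analysis component: transform the information as the application requires."""
    return sample["value"] > 100.0           # e.g., an over-temperature condition

def control(alarm):
    """Control/output component: respond to the external environment."""
    return "SHUTDOWN" if alarm else "OK"

def monitor(readings, deadline_s=1.0):
    """Monitoring component: coordinate the others within the response deadline."""
    actions = []
    for r in readings:
        start = time.monotonic()
        actions.append(control(analyze(gather(r))))
        # real-time response (here, within 1 second) must be maintained
        assert time.monotonic() - start <= deadline_s
    return actions

print(monitor([98.6, 120.0, 99.1]))  # ['OK', 'SHUTDOWN', 'OK']
```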
Que 3. State and explain crises and myths regarding software.
Software Crisis :
The word crisis is defined in Webster’s Dictionary as “a turning point in the course of anything; decisive or crucial time, stage or event.” Yet, in terms of overall software quality and the speed with which computer-based systems and products are developed, there has been no “turning point,” no “decisive time,” only slow, evolutionary change, punctuated by explosive technological changes in disciplines associated with software.
Problems and causes of the software crisis :
1. Schedule and cost estimates are often inaccurate.
2. Quality of software is sometimes less than adequate.
3. Software maintenance tasks are not up to the mark.
4. Many errors are detected after delivery.
5. Poor communication between software developers and customers.
Software Myths :
Most experienced experts have seen myths or superstitions (false beliefs or interpretations) or misleading attitudes that create major problems for management and technical people. The types of software-related myths are listed below.
1. Management myths : Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a drowning person who grasps at a straw, a software manager often grasps at belief in a software myth, if that belief will lessen the pressure (even temporarily).
Myth: We already have a book that’s full of standards and procedures for building software, won’t that provide my people with everything they need to know?
Reality: The book of standards may very well exist, but is it used? Are software practitioners aware of its existence? Does it reflect modern software engineering practice? Is it complete? Is it streamlined to improve time to delivery while still maintaining a focus on quality? In many cases, the answer to all of these questions is “no.”
Myth: My people have state-of-the-art software development tools, after all, we buy them the newest computers.
Reality: It takes much more than the latest model mainframe, workstation, or PC to do high-quality software development. Computer-aided software engineering (CASE) tools are more important than hardware for achieving good quality and productivity, yet the majority of software developers still do not use them effectively.
Myth: If we get behind schedule, we can add more programmers and catch up (sometimes called the Mongolian horde concept).
Reality: Software development is not a mechanistic process like manufacturing. In the words of Brooks [BRO75]: “adding people to a late software project makes it later.” At first, this statement may seem counterintuitive. However, as new people
are added, people who were working must spend time educating the newcomers, thereby reducing the amount of time spent on productive development effort. People can be added but only in a planned and well-coordinated manner.
Myth: If I decide to outsource the software project to a third party, I can just relax and let that firm build it.
Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.
2. Customer myths : A customer who requests computer software may be a person at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has requested software under contract. In many cases, the customer believes myths about software because software managers and practitioners do little to correct misinformation. Myths lead to false expectations (by the customer) and ultimately, dissatisfaction with the developer.
Myth: A general statement of objectives is sufficient to begin writing programs— we can fill in the details later.
Reality: A poor up-front definition is the major cause of failed software efforts. A formal and detailed description of the information domain, function, behavior, performance, interfaces, design constraints, and validation criteria is essential. These characteristics can be determined only after thorough communication between customer and developer.
Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.
Reality: It is true that software requirements change, but the impact of change varies with the time at which it is introduced. Figure illustrates the impact of change. If serious attention is given to up-front definition, early requests for change can be accommodated easily. The customer can review requirements and recommend modifications with relatively little impact on cost. When changes are requested during software design, the cost impact grows rapidly. Resources have been committed and a design framework has been established. Change can cause upheaval that requires additional resources and major design modification, that is, additional cost. Changes in function, performance, interface, or other characteristics during implementation (code and test) have a severe impact on cost. Change, when requested
after software is in production, can be over an order of magnitude more expensive than the same change requested earlier.
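The growth in the cost of change can be illustrated with multipliers. Only the final ratio (more than an order of magnitude between early definition and production) comes from the discussion above; the individual numbers below are invented for illustration:

```python
# Hypothetical relative costs of making the same change at different stages.
COST_OF_CHANGE = {
    "definition": 1.0,       # up-front: requirements revised with little impact
    "design": 2.5,           # resources committed, design framework established
    "implementation": 6.0,   # code and test: severe impact on cost
    "production": 60.0,      # after release: over an order of magnitude higher
}

def cost_ratio(late_stage, early_stage="definition"):
    """How many times more expensive the same change is at a later stage."""
    return COST_OF_CHANGE[late_stage] / COST_OF_CHANGE[early_stage]

assert cost_ratio("production") > 10        # "over an order of magnitude"
assert cost_ratio("design") < cost_ratio("implementation") < cost_ratio("production")
```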

3. Practitioner’s myths : Myths that are still believed by software practitioners have been fostered by 50 years of programming culture. During the early days of software, programming was viewed as an art form. Old ways and attitudes die hard.
Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that “the sooner you begin ‘writing code’, the longer it’ll take you to get done.” Industry data ([LIE80], [JON91], [PUT97]) indicate that between 60 and 80 percent of all effort expended on software will be expended after
it is delivered to the customer for the first time.
Myth: Until I get the program “running” I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied from the inception of a project—the formal technical review. Software reviews (described in Chapter 8) are a “quality filter” that have been found to be more effective than testing for finding certain classes of software defects.
Myth: The only deliverable work product for a successful project is the working program.
Reality: A working program is only one part of a software configuration that includes many elements. Documentation provides a foundation for successful engineering and, more important, guidance for software support.
Myth: Software engineering will make us create voluminous and unnecessary documentation and will invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating quality. Better quality leads to reduced rework. And reduced rework results in faster delivery times.
Que 4. Describe evolving role of software?
Software delivers the most important product of our time—information. Software transforms personal data (e.g., an individual’s financial transactions) so that the data can be more useful in a local context; it manages business information to enhance competitiveness; it provides a gateway to worldwide information networks (e.g., Internet) and provides the means for acquiring information in all of its forms.
The role of computer software has undergone significant change over a time span of little more than 50 years. Dramatic improvements in hardware performance, profound changes in computing architectures, vast increases in memory and storage capacity, and a wide variety of exotic input and output options have all precipitated more sophisticated and complex computer-based systems. Sophistication and complexity can produce dazzling results when a system succeeds, but they can also pose huge problems for those who must build complex systems.
Popular books published during the 1970s and 1980s provide useful historical insight into the changing perception of computers and software and their impact on our culture. Osborne [OSB79] characterized a “new industrial revolution.” Toffler
[TOF80] called the advent of microelectronics part of “the third wave of change” in human history, and Naisbitt [NAI82] predicted a transformation from an industrial society to an “information society.” Feigenbaum and McCorduck [FEI83] suggested that information and knowledge (controlled by computers) would be the focal point for power in the twenty-first century, and Stoll [STO89] argued that the “electronic community” created by networks and software was the key to knowledge interchange throughout the world.
As the 1990s began, Toffler [TOF90] described a “power shift” in which old power structures (governmental, educational, industrial, economic, and military) disintegrate as computers and software lead to a “democratization of knowledge.” Yourdon [YOU92] worried that U.S. companies might lose their competitive edge in software related businesses and predicted “the decline and fall of the American programmer.” Hammer and Champy [HAM93] argued that information technologies were to play a pivotal role in the “reengineering of the corporation.” During the mid-1990s, the pervasiveness of computers and software spawned a rash of books by “neo-Luddites” (e.g., Resisting the Virtual Life, edited by James Brook and Iain Boal and The Future Does Not Compute by Stephen Talbot). These authors demonized the computer, emphasizing legitimate concerns but ignoring the profound benefits that have already been realized. [LEV95]
During the later 1990s, Yourdon [YOU96] re-evaluated the prospects for the software professional and suggested “the rise and resurrection” of the American programmer. As the Internet grew in importance, his change of heart proved to be correct. As the twentieth century closed, the focus shifted once more, this time to the impact of the Y2K “time bomb” (e.g., [YOU98b], [DEJ98], [KAR99]). Although the predictions of the Y2K doomsayers were incorrect, their popular writings drove home the pervasiveness of software in our lives. Today, “ubiquitous computing” [NOR98] has spawned a generation of information appliances that have broadband connectivity to the Web to provide “a blanket of connectedness over our homes, offices and motorways” [LEV99]. Software’s role continues to expand.
The lone programmer of an earlier era has been replaced by a team of software specialists, each focusing on one part of the technology required to deliver a complex application. And yet, the same questions asked of the lone programmer are being
asked when modern computer-based systems are built:
• Why does it take so long to get software finished?
• Why are development costs so high?
• Why can’t we find all the errors before we give the software to customers?
• Why do we continue to have difficulty in measuring progress as software is
being developed?
These, and many other questions, are a manifestation of the concern about software and the manner in which it is developed—a concern that has led to the adoption of software engineering practice.
Que 5. Interpret Software engineering as Layered approach.
Software engineering is a layered technology; to develop software we need to go from one layer to another. All the layers are connected, and each layer demands the fulfillment of the previous layer.

Layered technology is divided into four parts:
Layer 1 — Tools
Software engineering tools provide automated or semi-automated support for the process and the methods. When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering, is established. CASE combines software, hardware, and a software engineering database (a repository containing important information about analysis, design, program construction, and testing) to create a software engineering environment analogous to CAD/CAE (computer-aided
design/engineering) for hardware.
Layer 2 — Method
The second layer establishes the methods of developing the software. This includes any technical knowledge and resources required for development. Some tasks include choosing methods for:
- Communication
- Analysis
- Modeling
- Program construction
- Testing and support
It’s good to remember that in the Tools layer, your team will choose the tools you will use for the project, but in the Method layer, you will be choosing how to use the tools.
Layer 3 — Process
It is the foundation or base layer of software engineering. It is the key that binds all the layers together, enabling the development of software on time. Process defines a framework that must be established for the effective delivery of software engineering technology. The software process covers all the activities, actions, and tasks required for software development.
Process activities are listed below:
- Communication: It is the first and foremost activity in the development of software. Communication is necessary to learn the actual requirements of the client.
- Planning: It basically means drawing a road map to reduce the complications of development.
- Modeling: In this activity, a model is created according to the client's requirements for better understanding.
- Construction: It includes the coding and testing of the software.
- Deployment: It includes the delivery of the software to the client for evaluation and feedback.
Layer 4 — A Quality Focus
At this point, the software is developed and refined, but it is critical to apply quality control to the finished product. Besides testing the end product to ensure that it meets the client’s specifications, it also needs real-world testing to determine how efficient, usable, and reusable it will be, and how many resources its maintenance will require. If it is replacing older software or a platform, quality control will ensure the new software meets those needs.
Que 6. Define software process. Explain common process framework?
Software Process Framework is an abstraction of the software development process. It details the steps and chronological order of a process. Since it serves as a foundation for software development, it is utilized in most applications. Task sets, umbrella activities, and process framework activities all define the characteristics of the software development process.

The SEI approach provides a measure of the global effectiveness of a company’s software engineering practices and establishes five process maturity levels that are defined in the following manner:
Level 1: Initial. The software process is characterized as ad hoc and occasionally even chaotic. Few processes are defined, and success depends on individual effort.
Level 2: Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
Level 3: Defined. The software process for both management and engineering activities is documented, standardized, and integrated into an organization wide software process. All projects use a documented and approved version of the organization’s process for developing and supporting software. This level includes all characteristics defined for level 2.
Level 4: Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled using detailed measures. This level includes all characteristics defined for level 3.
Level 5: Optimizing. Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies. This level includes all characteristics defined for level 4.
The five levels defined by the SEI were derived as a consequence of evaluating responses to the SEI assessment questionnaire that is based on the CMM. The results of the questionnaire are distilled to a single numerical grade that provides an indication of an organization’s process maturity.
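The five levels can be captured in a small lookup that maps the single numerical grade to its named maturity level. A minimal sketch; the actual SEI scoring scheme is not described above, so the mapping below is only an illustration of the level definitions:

```python
SEI_LEVELS = {
    1: "Initial",      # ad hoc, occasionally chaotic; success depends on individuals
    2: "Repeatable",   # basic project management tracks cost, schedule, functionality
    3: "Defined",      # documented, standardized organization-wide process
    4: "Managed",      # detailed measures collected; quantitatively controlled
    5: "Optimizing",   # continuous improvement from quantitative feedback
}

def maturity_level(grade):
    """Map the single numerical grade to its named maturity level."""
    if grade not in SEI_LEVELS:
        raise ValueError("grade must be 1..5")
    return SEI_LEVELS[grade]

print(maturity_level(1))  # Initial
print(maturity_level(4))  # Managed
```

Note that each level includes all characteristics of the levels below it, so the grade is cumulative rather than a set of independent labels.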
Process Framework Activities:
The process framework is required for representing common process activities. Five framework activities are described in a process framework for software engineering. Communication, planning, modeling, construction, and deployment are all examples of framework activities. Each engineering action defined by a framework activity comprises a list of needed work outputs, project milestones, and software quality assurance (SQA) points.
- Communication: Requirements are gathered through communication with customers and stakeholders to determine the system’s objectives and the software’s requirements.
- Planning: Establishes the engineering work plan, describes technical risks, lists resource requirements and work products, and defines the work schedule.
- Modeling: Architectural models and designs are created to better understand the problem and to work toward the best solution. The software model is prepared by:
o Analysis of requirements
o Design
- Construction: Creating code, testing the system, fixing bugs, and confirming that all criteria are met. The software design is mapped into code by:
o Code generation
o Testing
- Deployment: In this activity, a complete or partial product is presented to the customers to evaluate and give feedback. On the basis of their feedback, the product is modified to deliver a better product.
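The five framework activities above can also be written down as data. A minimal sketch; the task and work-product names are illustrative, not drawn from any standard:

```python
# Each framework activity carries its task set and expected work products.
FRAMEWORK = [
    {"activity": "communication", "tasks": ["requirement gathering"],
     "work_products": ["objectives", "requirements list"]},
    {"activity": "planning", "tasks": ["estimate", "schedule", "risk analysis"],
     "work_products": ["work plan"]},
    {"activity": "modeling", "tasks": ["analysis of requirements", "design"],
     "work_products": ["analysis model", "design model"]},
    {"activity": "construction", "tasks": ["code generation", "testing"],
     "work_products": ["source code", "test results"]},
    {"activity": "deployment", "tasks": ["delivery", "feedback collection"],
     "work_products": ["released increment", "feedback"]},
]

# The framework fixes the order of activities, regardless of the process model:
assert [a["activity"] for a in FRAMEWORK] == [
    "communication", "planning", "modeling", "construction", "deployment"]
```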
Que 7. Discuss the linear sequential model with advantages and disadvantages.
It is also called the classic life cycle or waterfall model. It suggests a systematic, sequential approach to software development that begins at the system level and progresses through communication, planning, modeling, construction, and deployment.

The linear sequential model encompasses the following activities :
Software requirements analysis : The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the software engineer (“analyst”) must understand the information domain (described in Chapter 11) for the software, as well as required function, behavior, performance, and interface. Requirements for both the system and the software are documented and reviewed with the customer.
Design : Software design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representations, and procedural (algorithmic) detail. The design process translates requirements into a representation of the software that can be assessed for quality before coding
begins. Like requirements, the design is documented and becomes part of the software configuration.
Code generation : The design must be translated into a machine-readable form. The code generation step performs this task. If design is performed in a detailed manner, code generation can be accomplished mechanistically.
Testing : Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.
Support : Software will undoubtedly undergo change after it is delivered to the customer (a possible exception is embedded software). Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment (e.g., a change required because of a new operating system or peripheral device), or because the customer requires functional or performance enhancements. Software support/maintenance reapplies each of the preceding phases to an existing program rather than a new one.
Advantages :
- Simple to use and understand.
- Each phase is independent of the other phases and completely separate.
- Suitable for smaller, clearly outlined projects.
Disadvantages :
- Not good for large projects.
- Little flexibility and difficult to go back to an earlier phase.
- No output or working software is produced until late in the life cycle.
Limitations :
- Requirements: suitable only for well-understood problems.
- Risks: risk factors are not addressed, so the risk is high (late feedback).
- User communication: does not require constant support from business users.
- Usefulness: for small software and low-risk projects.
- Changes: difficult to adjust the software to accommodate required changes.
- Availability: working software is available only at the end of the life cycle.
Que 8. Paraphrase Prototype model with advantages and disadvantages.
Prototype model is used when the customers do not know the exact project requirements beforehand. In this model, a prototype of the end product is first developed, tested and refined as per customer feedback repeatedly till a final acceptable prototype is achieved which forms the basis for developing the final product.

The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is
mandatory. A “quick design” then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.
Ideally, the prototype serves as a mechanism for identifying software requirements. If a working prototype is built, the developer attempts to use existing program fragments or applies tools (e.g., report generators, window managers) that enable working programs to be generated quickly.
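The iteration described above (quick design, build, customer evaluation, refine) can be sketched as a loop. A toy illustration; the customer-evaluation function and the feature names are entirely hypothetical:

```python
def build_prototype(requirements):
    """Quick design: only the customer-visible aspects."""
    return {"features": sorted(requirements)}

def refine(requirements, feedback):
    """Tune the requirements to satisfy the customer's feedback."""
    return (requirements | feedback["add"]) - feedback["remove"]

def prototype_until_accepted(initial, evaluate, max_rounds=10):
    requirements = set(initial)
    for _ in range(max_rounds):
        proto = build_prototype(requirements)
        feedback = evaluate(proto)           # customer evaluates the prototype
        if feedback["accepted"]:
            return requirements              # basis for the final product
        requirements = refine(requirements, feedback)
    raise RuntimeError("requirements never converged")

# A toy customer who accepts once "export" is present and "splash" is gone:
def customer(proto):
    ok = "export" in proto["features"] and "splash" not in proto["features"]
    return {"accepted": ok, "add": {"export"}, "remove": {"splash"}}

print(sorted(prototype_until_accepted({"login", "splash"}, customer)))
```

The loop makes the model's purpose explicit: each round exists to sharpen the requirements, not to produce production code.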
Advantages :
- Easy and quick to identify customer requirements.
- Customers can validate the prototype at an early stage and provide their inputs and feedback.
- Users get a feel for the actual system, as a working model of the system is provided.
- Users are actively involved in the development.
- Errors can be detected much earlier.
- Quicker user feedback is available, leading to a better solution.
Disadvantages :
- This model is costly.
- It has poor documentation because of continuously changing customer requirements.
- There may be too much variation in requirements.
- Customers may not be satisfied with or interested in the product after seeing the initial prototype.
Prototyping can also be problematic for the following reasons :
- The developer often makes implementation compromises in order to get a prototype working quickly.
- An inappropriate operating system or programming language may be used simply because it is available and known.
- An inefficient algorithm may be implemented simply to demonstrate capability.
- After a time, the developer may become familiar with these choices and forget all the reasons why they were inappropriate.
- The less-than-ideal choice has now become an integral part of the system.
Que 9. Explain RAD model in detail.
RAD is a “high-speed” adaptation of the linear sequential software development process model that emphasizes an extremely short development cycle using a component-based construction approach. If the requirements are well understood and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period.

The RAD approach encompasses the following phases:
Business modeling : The information flow among business functions is modeled in a way that answers the following questions: What information drives the business process? What information is generated? Who generates it? Where does the information go? Who processes it?
Data modeling : The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified and the relationships between these objects defined.
Process modeling : The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
Application generation : RAD assumes the use of fourth generation techniques. Rather than creating software using conventional third generation programming languages the RAD process works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.
Testing and turnover : Since the RAD process emphasizes reuse, many of the program components have already been tested. This reduces overall testing time. However, new components must be tested and all interfaces must be fully exercised.
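The five phases above run in sequence, and the “testing and turnover” shortcut (reused components are already tested) can be sketched as follows. The phase list follows the text; the library contents and function names are illustrative assumptions.

```python
# Illustrative sketch of the RAD phases and of why reuse shortens testing.
RAD_PHASES = [
    "business modeling",
    "data modeling",
    "process modeling",
    "application generation",
    "testing and turnover",
]

def run_rad(component_library, needed_components):
    """Application generation reuses library components where possible;
    only newly built components (and all interfaces) need full testing."""
    reused, built = [], []
    for name in needed_components:
        (reused if name in component_library else built).append(name)
    # Reused components are already tested, so only new components and
    # the interfaces between components must be fully exercised.
    to_test = built + ["interfaces"]
    return reused, built, to_test

reused, built, to_test = run_rad(
    component_library={"report generator", "window manager"},
    needed_components=["report generator", "invoice module"],
)
```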
Advantages :
- Emphasizes an extremely short development cycle.
- A “high-speed” adaptation of the linear sequential model achieved through component-based construction.
- If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a “fully functional system” within very short time periods (e.g., 60 to 90 days).
Disadvantages :
- For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD teams.
- RAD requires developers and customers who are committed to the rapid-fire activities needed to complete a system in a short time frame.
- If a system cannot be properly modularized, building the components necessary for RAD will be problematic.
- If high performance is an issue and it must be achieved by tuning the interfaces to system components, the RAD approach may not work.
- RAD is not appropriate when technical risks are high.
Que 10. Represent incremental process model of software development with its feature.
The incremental model is a process of software development in which requirements are divided into multiple standalone modules of the software development cycle. In this model, each module goes through the requirements, design, implementation and testing phases. Every subsequent release of a module adds function to the previous release. The process continues until the complete system is achieved.

Requirement analysis: In the first phase of the incremental model, product analysis experts identify the requirements, and the system's functional requirements are understood by the requirement analysis team. This phase plays a crucial role in developing software under the incremental model.
Design & Development: In this phase of the incremental model of the SDLC, the design of the system's functionality is completed and development is carried out. Whenever new functionality is added, the increment passes through the design and development phase again.
Testing : In this phase, once the code is written, it is tested to determine whether it works as expected. Prior to handing over code to the testing team, the developer performs initial testing such as unit testing and/or application integration testing. If all goes well, the code is moved to the testing environment.
From there, the testing team will perform the testing. The testing team performs several types of testing: quality assurance (QA) testing, system integration testing (SIT), user acceptance testing (UAT), and approval testing. Testing is done to determine whether the code and programming meet customer/business requirements. During the testing phase, before the implementation phase begins, companies can identify all bugs and errors in their software. Software bugs can jeopardize a client's business if they aren't fixed before deployment.
Implementation: The implementation phase covers the final coding of the design produced in the design and development phase, whose functionality has been verified in the testing phase. After each increment completes this phase, the working product is enhanced and upgraded until the final system is achieved.
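The four phases above repeat per module, and each release adds to the previous one. A minimal sketch, assuming illustrative module names:

```python
# Minimal sketch of the incremental model: each module passes through
# requirements, design & development, testing, and implementation, and
# each release adds functionality to the previous release.
PHASES = ["requirement analysis", "design & development", "testing", "implementation"]

def deliver_increments(modules):
    product = []                      # functionality delivered so far
    releases = []
    for module in modules:
        for phase in PHASES:          # every module goes through all phases
            pass                      # real work would happen in each phase
        product.append(module)        # this release adds to the previous one
        releases.append(list(product))
    return releases                   # continues until the complete system exists

releases = deliver_increments(["core ledger", "reporting", "audit trail"])
```

Note how the client receives usable functionality (the first release) long before the complete system is finished, which is the model's main selling point.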
Advantage of Incremental Model :
- Errors are easy to recognize.
- Easier to test and debug.
- More flexible.
- Risk is simpler to manage because it is handled during each iteration.
- The client gets important functionality early.
Disadvantage of Incremental Model :
- Needs good planning and design.
- Total cost is high.
- Well-defined module interfaces are needed.
Que 11. With a neat sketch, describe the spiral model in detail. Give its advantages and drawbacks.
The spiral model is also known as the meta model because it subsumes all the other SDLC models. In its diagrammatic representation, it looks like a spiral with many loops, which is why it is called the spiral model. Each loop of the spiral is called a phase of the software development process. This model has the capability to handle risks.

A spiral model is divided into a number of framework activities, also called task regions. Typically, there are between three and six task regions.
- Customer communication—tasks required to establish effective communication between developer and customer.
- Planning—tasks required to define resources, timelines, and other project related information.
- Risk analysis—tasks required to assess both technical and management risks.
- Engineering—tasks required to build one or more representations of the application.
- Construction and release—tasks required to construct, test, install, and provide user support (e.g., documentation and training).
- Customer evaluation—tasks required to obtain customer feedback based on evaluation of the software representations created during the engineering stage and implemented during the installation stage.
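Each loop of the spiral traverses these task regions in order, and the number of loops is not fixed in advance. The sketch below simulates this; the risk-reduction factor and acceptance threshold are purely illustrative assumptions.

```python
# Sketch of the spiral model: every loop (phase) passes through the six
# task regions; looping continues until risk is judged acceptable.
TASK_REGIONS = [
    "customer communication",
    "planning",
    "risk analysis",
    "engineering",
    "construction and release",
    "customer evaluation",
]

def spiral(project_risk, max_loops=10):
    """Loop until residual risk falls below a (hypothetical) threshold."""
    history = []
    for loop in range(1, max_loops + 1):
        for region in TASK_REGIONS:
            history.append((loop, region))   # work done in this region
        project_risk *= 0.5                  # risk analysis/handling every loop
        if project_risk < 0.1:               # acceptance criterion is assumed
            break
    return loop, history

loops, history = spiral(project_risk=0.6)
```

Because the exit condition depends on risk rather than a fixed plan, the number of phases (and hence the schedule) is unknown at the start, which is exactly the time-estimation disadvantage listed below.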
Advantages of Spiral Model :
- Software is produced early in the software life cycle.
- Risk handling is one of the most important advantages of the spiral model; because risk analysis and risk handling occur in every phase, it is among the best models to follow for risky development.
- Flexibility in requirements: requirements can easily be changed at later phases and incorporated accurately, and additional functionality can be added at a later date.
- It is good for large and complex projects.
- It is good for customer satisfaction, since customers can be involved in the development of the product from an early phase.
- Strong approval and documentation control.
- It is suitable for high-risk projects where business needs may be unstable; a highly customized product can be developed using it.
Disadvantages of Spiral Model:
- It is not suitable for small projects as it is expensive.
- It is much more complex than other SDLC models.
- It depends heavily on risk analysis and requires highly specific expertise.
- Time management is difficult: because the number of phases is unknown at the start of the project, time estimation is very hard.
- Spiral may go on indefinitely.
- End of the project may not be known early.
- It is not suitable for low risk projects.
- May be hard to define objective, verifiable milestones. Large numbers of intermediate stages require excessive documentation.
Que 12. Paraphrase the concept of project management in brief.
A software project is the complete procedure of software development from requirement gathering to testing and maintenance, carried out according to the execution methodologies in a specified period of time to achieve the intended software product.
Effective software project management focuses on the four P’s:
People :
The cultivation of motivated, highly skilled software people has been discussed since the 1960s. In fact, the “people factor” is so important that the Software Engineering Institute has developed a people management capability maturity model (PM-CMM), “to enhance the readiness of software organizations to undertake increasingly complex applications by helping to attract, grow, motivate, deploy, and retain the talent needed to improve their software development capability” [CUR94].
The people management maturity model defines the following key practice areas for software people: recruiting, selection, performance management, training, compensation, career development, organization and work design, and team/culture
development. Organizations that achieve high levels of maturity in the people management area have a higher likelihood of implementing effective software engineering practices.
The Product :
Before a project can be planned, product objectives and scope should be established, alternative solutions should be considered, and technical and management constraints should be identified. Without this information, it is impossible to define reasonable (and accurate) estimates of the cost, an effective assessment of risk, a realistic breakdown of project tasks, or a manageable project schedule that provides a meaningful indication of progress.
The software developer and customer must meet to define product objectives and scope. In many cases, this activity begins as part of the system engineering or business process engineering (Chapter 10) and continues as the first step in software
requirements analysis (Chapter 11). Objectives identify the overall goals for the product (from the customer’s point of view) without considering how these goals will be achieved. Scope identifies the primary data, functions and behaviors that characterize the product, and more important, attempts to bound these characteristics in a quantitative manner.
Once the product objectives and scope are understood, alternative solutions are considered. Although very little detail is discussed, the alternatives enable managers and practitioners to select a “best” approach, given the constraints imposed by delivery deadlines, budgetary restrictions, personnel availability, technical interfaces, and myriad other factors.
The Process :
A software process (Chapter 2) provides the framework from which a comprehensive plan for software development can be established. A small number of framework activities are applicable to all software projects, regardless of their size or
complexity. A number of different task sets—tasks, milestones, work products, and quality assurance points—enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team. Finally, umbrella activities—such as software quality assurance, software configuration management, and measurement—overlay the process model. Umbrella activities are independent of any one framework activity and occur throughout the process.
The Project :
We conduct planned and controlled software projects for one primary reason—it is the only known way to manage complexity. And yet, we still struggle. In 1998, industry data indicated that 26 percent of software projects failed outright and 46 percent experienced cost and schedule overruns. Although the success rate for software projects has improved somewhat, our project failure rate remains higher than it should be.
In order to avoid project failure, a software project manager and the software engineers who build the product must avoid a set of common warning signs, understand the critical success factors that lead to good project management, and develop a commonsense approach for planning, monitoring and controlling the project.
Que 13. Explain W5HH principle in detail.
Barry Boehm suggests an approach that addresses project objectives, milestones and schedules, responsibilities, management and technical approaches, and required resources.
He calls it the WWWWWHH principle, after a series of questions that lead to a definition of key project characteristics and the resultant project plan:
Why is the system being developed? The answer to this question enables all parties to assess the validity of business reasons for the software work. Stated in another way, does the business purpose justify the expenditure of people, time, and money?
What will be done, by when? The answers to these questions help the team to establish a project schedule by identifying key project tasks and the milestones that are required by the customer.
Who is responsible for a function? Earlier in this chapter, we noted that the role and responsibility of each member of the software team must be defined. The answer to this question helps accomplish this.
Where are they organizationally located? Not all roles and responsibilities reside within the software team itself. The customer, users, and other stakeholders also have responsibilities.
How will the job be done technically and managerially? Once product scope is established, a management and technical strategy for the project must be defined.
How much of each resource is needed? The answer to this question is derived by developing estimates (Chapter 5) based on answers to earlier questions.
Boehm’s W5HH principle is applicable regardless of the size or complexity of a software project. The questions noted provide an excellent planning outline for the project manager and the software team.
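Because the W5HH questions form a planning checklist, they can be captured directly as data. This is only one possible encoding; the dictionary structure, sample answers, and function name are illustrative, not part of Boehm's formulation.

```python
# The W5HH questions as a simple planning checklist (illustrative).
W5HH = {
    "Why is the system being developed?": "business justification",
    "What will be done, by when?": "tasks and milestones (schedule)",
    "Who is responsible for a function?": "roles and responsibilities",
    "Where are they organizationally located?": "team, customer, stakeholders",
    "How will the job be done technically and managerially?": "strategy",
    "How much of each resource is needed?": "estimates",
}

def unanswered(plan):
    """Return the W5HH questions a draft project plan has not yet answered."""
    return [question for question in W5HH if question not in plan]

draft_plan = {"Why is the system being developed?": "reduce invoicing cost"}
missing = unanswered(draft_plan)
```

A plan with an empty `missing` list has, at minimum, addressed every W5HH question.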
Que 14. Represent component based development model with advantages and disadvantages.
Component-based software engineering (CBSE) can be defined as an approach to software development that relies on software reuse. It aims at reducing the cost of building software by developing different components and integrating them into a well-defined software architecture. These components are language independent and can be developed by different teams of programmers. Each of them should be independent of the whole system and should have some clearly defined functionality. Moreover, they should be assembled in the context of a well-defined architecture and communicate with each other using interfaces. Although components are shared, their implementations are hidden.

The model works in following manner:
- Step-1: First identify all the required candidate components, i.e., classes with the help of application data and algorithms.
- Step-2: If these candidate components are used in previous software projects then they must be present in the library.
- Step-3: Such preexisting components can be extracted from the library and used for further development.
- Step-4: But if the required component is not present in the library then build or create the component as per requirement.
- Step-5: Place this newly created component in the library. This makes one iteration of the system.
- Step-6: Repeat steps 1 to 5 for creating n iterations, where n denotes the number of iterations required to develop the complete application.
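Steps 1 to 5 above amount to a reuse-or-build loop over a component library, which can be sketched as follows. The library contents and component names are illustrative assumptions.

```python
# Sketch of one CBSE iteration (steps 1-5 above).

def cbse_iteration(candidates, library):
    """Reuse components found in the library, build the rest, and place
    newly built components back into the library (step 5)."""
    reused, built = [], []
    for component in candidates:          # step 1: identify candidate components
        if component in library:          # step 2: already in the library?
            reused.append(component)      # step 3: extract and reuse it
        else:
            built.append(component)       # step 4: build it as per requirement
            library.add(component)        # step 5: add it to the library
    return reused, built

library = {"Logger", "AuthService"}
reused, built = cbse_iteration(["Logger", "InvoiceParser"], library)
# Step 6: a later iteration can now reuse InvoiceParser from the library.
reused2, built2 = cbse_iteration(["InvoiceParser"], library)
```

The second call shows why the model reduces cost over time: a component built in one iteration is reused, not rebuilt, in the next.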
Advantages of the component-based software engineering (CBSE) process model:
- Considerably reduces the amount of software to be developed, thereby reducing cost and risk.
- It usually allows for faster delivery of software.
- In principle, more reliable systems, due to using previously tested components.
- Management of complexity.
- Reduced development time.
Disadvantages of the component-based software engineering (CBSE) process model:
- Compromises in requirements are needed, and this may lead to a system that does not meet the real needs of the users.
- Less control over the system's evolution, as new versions of the reusable components are not under the control of the organisation using them.
- Component maintenance may be another issue.