Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.
The term "software engineering" first appeared at the 1968 NATO Software Engineering Conference and was meant to provoke thought regarding the "software crisis" of the time. Since then, it has continued as a profession and field of study dedicated to creating software that is of higher quality, more affordable, maintainable, and quicker to build. Since the field is still relatively young compared to its sister fields of engineering, there is still much debate about what software engineering actually is, and whether it conforms to the classical definition of engineering. It has grown organically out of the limitations of viewing software as just programming. "Software development" is a term sometimes preferred by practitioners in the industry who view software engineering as too heavy-handed and constrictive for the malleable process of creating software. Although software engineering is a young profession, the field's future looks bright: Money Magazine and Salary.com rated software engineering the best job in America in 2006, and software engineers outnumber engineers of every other discipline in the United States.
On any software project of typical size, unforeseen problems are guaranteed to come up. Despite all attempts to prevent it, important details will be overlooked. This is the difference between craft and engineering. Experience can lead us in the right direction. This is craft. Experience will only take us so far into uncharted territory. Then we must take what we started with and make it better through a controlled process of refinement. This is engineering.
In software engineering, we desperately need good design at all levels. In particular, we need good top level design. The better the early design, the easier detailed design will be. Designers should use anything that helps. Structure charts, Booch diagrams, state tables, PDL, etc. -- if it helps, then use it. We must keep in mind, however, that these tools and notations are not a software design. Eventually, we have to create the real software design, and it will be in some programming language. Therefore, we should not be afraid to code our designs as we derive them. We simply must be willing to refine them as necessary.
One final point: the goal of any engineering design project is the production of some documentation. Obviously, the actual design documents are the most important, but they are not the only ones that must be produced. Someone is eventually expected to use the software. It is also likely that the system will have to be modified and enhanced at a later time. This means that auxiliary documentation is as important for a software project as it is for a hardware project. Ignoring for now users manuals, installation guides, and other documents not directly associated with the design process, there are still two important needs that must be solved with auxiliary design documents.
To summarize:
o Real software runs on computers. It is a sequence of ones and zeros that is stored on some magnetic media. It is not a program listing in C++ (or any other programming language).
o A program listing is a document that represents a software design. Compilers and linkers actually build software designs.
o Real software is incredibly cheap to build, and getting cheaper all the time as computers get faster.
o Real software is incredibly expensive to design. This is true because software is incredibly complex and because practically all the steps of a software project are part of the design process.
o Programming is a design activity -- a good software design process recognizes this and does not hesitate to code when coding makes sense.
o Coding actually makes sense more often than believed. Often the process of rendering the design in code will reveal oversights and the need for additional design effort. The earlier this occurs, the better the design will be.
o Since software is so cheap to build, formal engineering validation methods are not of much use in real world software development. It is easier and cheaper to just build the design and test it than to try to prove it.
o Testing and debugging are design activities -- they are the software equivalent of the design validation and refinement processes of other engineering disciplines. A good software design process recognizes this and does not try to shortchange the steps.
o There are other design activities -- call them top level design, module design, structural design, architectural design, or whatever. A good software design process recognizes this and deliberately includes the steps.
o All design activities interact. A good software design process recognizes this and allows the design to change, sometimes radically, as various design steps reveal the need.
o Many different software design notations are potentially useful -- as auxiliary documentation and as tools to help facilitate the design process. They are not a software design.
o Software development is still more a craft than an engineering discipline. This is primarily because of a lack of rigor in the critical processes of validating and improving a design.
o Ultimately, real advances in software development depend upon advances in programming techniques, which in turn mean advances in programming languages. C++ is such an advance. It has exploded in popularity because it is a mainstream programming language that directly supports better software design.
o C++ is a step in the right direction, but still more advances are needed.
Posted by AVINASH at 9:26 AM 0 comments
A
* Adoraview Makakopy
* Allwonders Softwares
* Amazon.com
* America Online
* Ameritech
* Ameritrade Holding Corporation
* Analysts International
* Adoraview Malakopy Graphics
B
* BellSouth
* BMC Software
* Break Through Technologies
* British Telecommunications
C
* Cable & Wireless
* Cadence Design Systems
* Century Telephone Ents.
* China Telecom (Hong Kong)
* CIBER
* Cincinnati Bell
* Cisco Systems
* Compaq
* Compuware Corporation
* ComVision 2000.com
* Cotrantech.com
* CTS Corporation
* Cyrus Multi Media.com
D
* Dell Computer
* DoubleClick
E
* EarthLink Network
* Edwards (J.D.)
* Electronic Arts Online, EA Online
* EMC
* Ericsson
* Everex Technologies
* Excel Communications.
* Excite
G
* Getronics
* GTE Corporation
H
* HBO & Co.
* Hewlett-Packard
* Hong Kong Telecom
I
* IBM Corporation
* Infoseek
* Ingram Micro
* Intel Corporation
* International E-commerce
* Intermedia Communications
* Ivolga
J
* J.D. Edwards
K
* Keane
L
* Lastar Datacomm Solutions
* Lanco Global Systems
* L-3 Communications Corporation
* Lexmark International Group
* Loral Space & Communications
* Lucent Technologies
* Lycos
M
* Maxim Integrated Products
* McKesson HBOC
* Microsoft Corporation
* MindSpring Enterprises
* MISys, Manufacturing Informations System
* Mitel Corporation
N
* National Computer Systems
* Net 400 (email Software Solution)
* Network Solutions
* Neural Soft
* Nextel Communications
* Nidec America Corporation
* Nokia
O
* Oracle Corporation
P
* Pacific Communications
* Pacific Gateway Exchange
* PanAmSat Corporation
* PeopleSoft
* Polaris Software Lab Ltd
Q
* Qualcomm
R
* RealNetworks
* RustyBrick Web Design & Web Development
S
* SAP AG, Sap Solutions
* SBC Communications
* Silex Technologies
* Singapore Telecom
* SK Web Graphic
* Softbank Technology Corp.
* Solectron
* Sony Corporation
* Sprint Communications
* Sterling Software
* Storage Tek
* Sun Microsystems
* SunGard Data Systems
* Symbol Technologies
T
* Tech Data
* Teleglobe Communications Corporation
* Tellabs
* The Internet Lost and Found
U
* Usit Technologies Private Ltd
* UserEase.com
V
* Vanguard Cellular Systems
* VGL softech Ltd
* Vodafone
* Vox Vision
Y
* Yahoo!
Posted by AVINASH at 3:08 AM 0 comments
What is a software process model?
In contrast to software life cycle models, software process models often represent a networked
sequence of activities, objects, transformations, and events that embody strategies for
accomplishing software evolution. Such models can be used to develop more precise and
formalized descriptions of software life cycle activities. Their power emerges from their
utilization of a sufficiently rich notation, syntax, or semantics, often suitable for computational
processing.
Software process networks can be viewed as representing multiple interconnected task chains
(Kling 1982, Garg 1989). Task chains represent a non-linear sequence of actions that structure
and transform available computational objects (resources) into intermediate or finished products.
Non-linearity implies that the sequence of actions may be non-deterministic and iterative, may
accommodate multiple parallel alternatives, and may be only partially ordered to account for
incremental progress. Task actions in turn can be viewed as non-linear sequences of primitive
actions which denote atomic units of computing work, such as a user's selection of a command or
menu entry using a mouse or keyboard. Winograd and others have referred to these units of
cooperative work between people and computers as "structured discourses of work" (Winograd
1986), while task chains have become popularized under the name of "workflow" (Bolcer 1998).
Task chains can be employed to characterize either prescriptive or descriptive action sequences.
Prescriptive task chains are idealized plans of what actions should be accomplished, and in what
order. For example, a task chain for the activity of object-oriented software design might include
the following task actions:
Develop an informal narrative specification of the system.
Identify the objects and their attributes.
Identify the operations on the objects.
Identify the interfaces between objects, attributes, or operations.
Implement the operations.
Clearly, this sequence of actions could entail multiple iterations and non-procedural primitive
action invocations in the course of incrementally progressing toward an object-oriented software
design.
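The partial ordering described above can be sketched in code. The sketch below is illustrative only: the action names are paraphrases of the five task actions listed, and the dependency structure among them is an assumption. The task chain is modeled as a dependency graph whose topological orderings are the valid linearizations, which is one way to capture a non-linear, partially ordered sequence of actions.

```python
# A minimal sketch of a prescriptive task chain for object-oriented design.
# Action names and dependencies are illustrative assumptions.
from graphlib import TopologicalSorter

# Each action maps to the set of actions it depends on (a partial order).
task_chain = {
    "narrative_spec": set(),
    "identify_objects": {"narrative_spec"},
    "identify_operations": {"narrative_spec"},
    "identify_interfaces": {"identify_objects", "identify_operations"},
    "implement_operations": {"identify_interfaces"},
}

# One valid linearization of the partial order; other orders are possible,
# which is part of what makes the task chain non-linear.
order = list(TopologicalSorter(task_chain).static_order())
print(order)
```

Iteration, the other source of non-linearity, would correspond to re-entering earlier nodes of the graph; a real workflow engine would allow that, where this sketch produces a single pass.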
Task chains join or split into other task chains resulting in an overall production network or web
(Kling 1982). The production web represents the "organizational production system" that
transforms raw computational, cognitive, and other organizational resources into assembled,
integrated and usable software systems. The production lattice therefore structures how a
software system is developed, used, and maintained. However, prescriptive task chains and
actions cannot be formally guaranteed to anticipate all possible circumstances or idiosyncratic
foul-ups that can emerge in the real world of software development (Bendifallah 1989, Mi 1990).
Thus, any software production web will in some way realize only an approximate or incomplete
description of software development.
Articulation work is a kind of unanticipated task that is performed when a planned task chain is
inadequate or breaks down. It is work that represents an open-ended non-deterministic sequence
of actions taken to restore progress on the disarticulated task chain, or else to shift the flow of
productive work onto some other task chain (Bendifallah 1987, Grinter 1996, Mi 1990, Mi 1996,
Scacchi and Mi 1997). Thus, descriptive task chains are employed to characterize the observed
course of events and situations that emerge when people try to follow a planned task sequence.
Articulation work in the context of software evolution includes actions people take that entail
either their accommodation to the contingent or anomalous behavior of a software system, or
negotiation with others who may be able to affect a system modification or otherwise alter
current circumstances (Bendifallah 1987, Grinter 1996, Mi 1990, Mi 1996, Scacchi and Mi
1997). This notion of articulation work has also been referred to as software process dynamism.
Traditional Software Life Cycle Models
Traditional models of software evolution have been with us since the earliest days of software
engineering. In this section, we identify four. The classic software life cycle (or "waterfall chart")
and stepwise refinement models are widely instantiated in just about all books on modern
programming practices and software engineering. The incremental release model is closely
related to industrial practices where it most often occurs. Military standards based models have
also reified certain forms of the classic life cycle model into required practice for government
contractors. Each of these four models uses coarse-grain or macroscopic characterizations when
describing software evolution. The progressive steps of software evolution are often described as
stages, such as requirements specification, preliminary design, and implementation; these usually
have little or no further characterization other than a list of attributes that the product of such a
stage should possess. Further, these models are independent of any organizational development
setting, choice of programming language, software application domain, etc. In short, the
traditional models are context-free rather than context-sensitive. But as all of these life cycle
models have been in use for some time, we refer to them as the traditional models, and
characterize each in turn.
Classic Software Life Cycle
The classic software life cycle is often represented as a simple prescriptive waterfall software
phase model, where software evolution proceeds through an orderly sequence of transitions from
one phase to the next in order (Royce 1970). Such models resemble finite state machine
descriptions of software evolution. However, these models have been perhaps most useful in
helping to structure, staff, and manage large software development projects in complex
organizational settings, which was one of the primary purposes (Royce 1970, Boehm 1976).
Alternatively, these classic models have been widely characterized as both poor descriptive and
prescriptive models of how software development "in-the-small" or "in-the-large" can or should
occur. Figure 1 provides a common view of the waterfall model for software development
attributed to Royce (1970).
Posted by AVINASH at 3:04 AM 0 comments
Adaptable Process Model - Product Description
The intent of RSP&A's Adaptable Process Model (APM) is to provide you with a software process that you can customize and adapt to local needs. The APM includes a detailed process flow implemented as a hypertext document, descriptions of many key software engineering tasks, document templates, and checklists. Acquiring the APM can significantly reduce the time required to develop your company's software process description.
Because the complete APM is provided in hypertext format within the RSP&A Web site, you and your colleagues can review the complete generic process. If you think it has merit for your organization, the complete hypertext version can be acquired for an extremely reasonable price. You can then build a local website for Internet or intranet application, while at the same time making the adaptations necessary to mold the APM to local requirements. In most cases, large portions of the APM can be used as is, but in every case, you have the capability to modify terminology and process content to meet your needs and better reflect your local information technologies or engineering environment.
Posted by AVINASH at 2:56 AM 0 comments
Test Execution
In the butterfly model of software test development, test execution is a separate piece of the overall approach. In fact, it is the smallest piece – the slender insect’s body – but it also provides the muscle that makes the wings work. It is important to note, however, that test execution (as defined for this model) includes only the formal running of the designed tests. Informal test execution is a normal part of test design, and in fact is also a normal part of software design and development.
Formal test execution marks the moment in the software development process where the developer and the tester join forces. In a way, formal execution is the moment when the developer gets to take credit for the tester’s work – by demonstrating that the software works as advertised. The tester, on the other hand, should already have proactively identified bugs (in both the software and the tests) and helped to eliminate them – well before the commencement of formal test execution!
Formal test execution should (almost) never reveal bugs. I hope this plain statement raises some eyebrows – although it is very much true. The only reasonable cause of unexpected failure in a formal test execution is hardware failure. The software, along with the test itself, should have been through the wringer enough to be bone-dry. Note, however, that unexpected failure is singled out in the above paragraph. That implies that some software tests will have expected failures, doesn’t it? Yes, it surely does!
The reasons behind expected failure vary, but allow me to relate a case in point:
In the commercial jet engine control business, systems engineers prepare a wide variety of tests against the system (being the FADEC – or Full Authority Digital Engine Control) requirements. One such commonly employed test is the “flight envelope” test. The flight envelope test essentially begins with the simulated engine either off or at idle with the real controller (both hardware and software) commanding the situation. Then the engine is spooled up and taken for a simulated ride throughout its defined operational domain – varying altitude, speed, thrust, temperature, etc. in accordance with real world recorded profiles. The expected results for this test are produced by running a simulation (created and maintained independently from the application software itself) with the same input data sets.
Minor failures in the formal execution of this test are fairly common. Some are hard failures – repeatable on every single run of the test. Others are soft – only intermittently reaching out to bite the tester. Each and every failure is investigated, naturally – and the vast majority of flight envelope failures are caused by test stand problems. These can include issues like a voltage source being one twentieth of a volt low, or slight timing mismatches caused by the less exact timekeeping of the test stand workstation as compared to the FADEC itself.
Some flight envelope failures are attributed to the model used to provide expected results. In such cases, hours and days of gut-wrenching analytical work go into identifying the miniscule difference between the model and the actual software.
A handful of flight envelope test failures are caused by the test parameters themselves. Tolerances may be set at unrealistically tight levels, for example. Or slight operating mode mismatches between the air speed and engine fan speed may cause a fault to be intermittently annunciated.
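The tolerance problem described above can be illustrated with a small sketch. The signal name, values, and tolerances below are invented for illustration; in a real FADEC test program the tolerances would come from the system requirements.

```python
# A hedged sketch of comparing a test-stand measurement against the
# expected value produced by the independent simulation model.
# Signal names, values, and tolerances are invented for illustration.
def within_tolerance(expected, actual, tol):
    """Return True when the measured value agrees with the model."""
    return abs(expected - actual) <= tol

expected_fan_speed = 5000.0   # rpm, from the independent simulation
actual_fan_speed = 5003.5     # rpm, recorded on the test stand

# An unrealistically tight tolerance flags a failure that a realistic
# tolerance would accept -- the kind of test-parameter failure above.
print(within_tolerance(expected_fan_speed, actual_fan_speed, tol=1.0))
print(within_tolerance(expected_fan_speed, actual_fan_speed, tol=10.0))
```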
In very few cases have I seen the software being tested lie at the root of the failure. (I did witness the bugs being fixed, by the way!)
The point is this – complex and complicated tests can fail due to a variety of reasons, from hardware failure, through test stand problems, to application error. Intermittent failures may even jump into the formal run, just to make life interesting.
But the test engineer understands the complexity of the test being run, and anticipates potential issues that may cause failures. In fact, the test is expected to fail once in a while. If it doesn’t, then it isn’t doing its job – which is to exercise the control software throughout its valid operational envelope. As in all applications, the FADEC’s boundaries of valid operation are dark corners in which bugs (or at least potential bugs) congregate.
It was mentioned during our initial discussion of the V development model that the model is sufficient, from a software development point of view, to express the lineage of test artifacts. This is because testing, again from the development viewpoint, is composed of only the body of the butterfly – formal test execution. We testers, having learned the hard way, know better.
Posted by AVINASH at 2:55 AM 0 comments
Test Design
Thus far, the tester has produced a lot of analytical output, some semi-formalized documentary artifacts, and several tentative approaches to testing the software. At this point, the tester is ready for the next step: test design.
The right wing of the butterfly represents the act of designing and implementing the test cases needed to verify the design artifact as replicated in the implementation. Like test analysis, it is a relatively large piece of work. Unlike test analysis, however, the focus of test design is not to assimilate information created by others, but rather to implement procedures, techniques, and data sets that achieve the test’s objective(s).
The outputs of the test analysis phase are the foundation for test design. Each requirement or design construct has had at least one technique (a measurement, demonstration, or analysis) identified during test analysis that will validate or verify that requirement. The tester must now put on his or her development hat and implement the intended technique.
Software test design, as a discipline, is an exercise in the prevention, detection, and elimination of bugs in software. Preventing bugs is the primary goal of software testing [BEIZ90]. Diligent and competent test design prevents bugs from ever reaching the implementation stage. Test design, with its attendant test analysis foundation, is therefore the premiere weapon in the arsenal of developers and testers for limiting the cost associated with finding and fixing bugs.
Before moving further ahead, it is necessary to comment on the continued analytical work performed during test design. As previously noted, tentative approaches are mapped out in the test analysis phase. During the test design phase of test development, those tentatively selected techniques and approaches must be evaluated more fully, until it is “proven” that the test’s objectives are met by the selected technique. If all tentatively selected approaches fail to satisfy the test’s objectives, then the tester must put his test analysis hat back on and start looking for more alternatives.
Posted by AVINASH at 2:53 AM 0 comments
Various models have been presented over the past 20 years in the field of software engineering for development and testing. Let us discuss a few of the well-known ones.
The following models are addressed:
Waterfall Model.
Spiral Model.
'V' Model.
'W' Model, and
Butterfly Model.
The Waterfall Model
This is one of the earliest models of software development, first formally described by W.W. Royce and later elaborated by B.W. Boehm. The Waterfall model is a step-by-step method of achieving tasks. Using this model, one can move on to the next phase of development activity only after completing the current phase. Also, one can go back only to the immediately previous phase.
In the Waterfall model, each phase of the development activity is followed by verification and validation activities. Once a phase is completed with its testing activities, the team proceeds to the next phase. At any point in time, one can move back only one step, to the immediately previous phase. For example, one cannot move from the Testing phase to the Design phase.
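The one-step transition rule described above can be sketched as a simple check. The phase names are the conventional waterfall phases; the function itself is an illustration, not part of any published model.

```python
# A small sketch of the waterfall transition rule: from any phase you may
# advance to the next phase or step back to the immediately previous one.
PHASES = ["Requirements", "Design", "Implementation", "Testing", "Maintenance"]

def allowed(current, target):
    """True when the move respects the waterfall's one-step rule."""
    i, j = PHASES.index(current), PHASES.index(target)
    return j - i in (1, -1)

print(allowed("Design", "Implementation"))  # forward one phase
print(allowed("Testing", "Design"))         # back two phases: disallowed
```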
Spiral Model
In the Spiral Model, a cyclical and prototyping view of software development is shown. Tests are explicitly mentioned (risk analysis, validation of requirements and of the development), and the test phase is divided into stages. The test activities include module, integration, and acceptance tests. However, in this model the testing still follows the coding; the exception is that the test plan should be constructed after the design of the system. The Spiral model also identifies no activities associated with the removal of defects.
'V' Model
Many of the process models currently used can be more generally connected by the 'V' model where the 'V' describes the graphical arrangement of the individual phases. The 'V' is also a synonym for Verification and Validation.
By ordering the activities in time sequence and with abstraction levels, the connection between development and test activities becomes clear. Activities lying opposite one another complement each other, i.e., they serve as a basis for the test activities. For example, the system test is carried out on the basis of the results of the specification phase.
The 'W' Model
From the testing point of view, all of the models are deficient in various ways:
Test activities first start after implementation; the connection between the various test stages and the basis for testing is not clear.
The tight link between the test, debug, and change tasks during the test phase is not clear.
Why 'W' Model?
In the models presented above, testing usually appears as an unattractive task to be carried out after coding. In order to place testing on an equal footing, a second 'V' dedicated to testing is integrated into the model. The two 'V's put together give the 'W' of the 'W-Model'.
Butterfly Model of Test Development
Butterflies are composed of three pieces – two wings and a body. Each part represents a piece of software testing, as described hereafter.
Test Analysis
The left wing of the butterfly represents test analysis – the investigation, quantization, and/or re-expression of a facet of the software to be tested. Analysis is both the byproduct and foundation of successful test design. In its earliest form, analysis represents the thorough pre-examination of design and test artifacts to ensure the existence of adequate testability, including checking for ambiguities, inconsistencies, and omissions.
Test analysis must be distinguished from software design analysis. Software design analysis is constituted by efforts to define the problem to be solved, break it down into manageable and cohesive chunks, create software that fulfills the needs of each chunk, and finally integrate the various software components into an overall program that solves the original problem. Test analysis, on the other hand, is concerned with validating the outputs of each software development stage or micro-iteration, as well as verifying compliance of those outputs to the (separately validated) products of previous stages.
Test analysis mechanisms vary according to the design artifact being examined. For an aerospace software requirement specification, the test engineer would do all of the following, as a minimum:
Verify that each requirement is tagged in a manner that allows correlation of the tests for that requirement to the requirement itself. (Establish Test Traceability)
Verify traceability of the software requirements to system requirements.
Inspect for contradictory requirements.
Inspect for ambiguous requirements.
Inspect for missing requirements.
Check to make sure that each requirement, as well as the specification as a whole, is understandable.
Identify one or more measurement, demonstration, or analysis method that may be used to verify the requirement’s implementation (during formal testing).
Create a test “sketch” that includes the tentative approach and indicates the test’s objectives.
Out of the items listed above, only the last two are specifically aimed at the act of creating test cases. The other items are almost mechanical in nature, where the test design engineer is simply checking the software engineer’s work. But all of the items are germane to test analysis, where any error can manifest itself as a bug in the implemented application.
Test analysis also serves a valid and valuable purpose within the context of software development. By digesting and restating the contents of a design artifact (whether it be requirements or design), testing analysis offers a second look – from another viewpoint – at the developer’s work. This is particularly true with regard to lower-level design artifacts like detailed design and source code. This kind of feedback has a counterpart in human conversation. To verify one’s understanding of another person’s statements, it is useful to rephrase the statement in question using the phrase “So, what you’re saying is…”. This powerful method of confirming comprehension and eliminating miscommunication is just as important for software development – it helps to weed out misconceptions on the part of both the developer and tester, and in the process identifies potential problems in the software itself.
It should be clear from the above discussion that the tester’s analysis is both formal and informal. Formal analysis becomes the basis for documentary artifacts of the test side of the V. Informal analysis is used for immediate feedback to the designer in order to both verify that the artifact captures the intent of the designer and give the tester a starting point for understanding the software to be tested.
In the bulleted list shown above, the first two analyses are formal in nature (for an aerospace application). The verification of system requirement tags is a necessary step in the creation of a test traceability matrix. The software to system requirements traceability matrix similarly depends on the second analysis.
The three inspection analyses listed are more informal, aimed at ensuring that the specification being examined is of sufficient quality to drive the development of a quality implementation. The difference is in how the analytical outputs are used, not in the level of effort or attention that go into the analysis.
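The traceability analyses described above can be sketched as a small check that correlates requirement tags to the tests designed against them. The requirement tags and test names below are hypothetical.

```python
# A minimal sketch of building a test traceability matrix and finding
# requirements with no covering test. Tags and test names are invented.
requirements = ["SRS-001", "SRS-002", "SRS-003"]

# Each designed test records the requirement tags it verifies.
tests = {
    "TC-01": ["SRS-001"],
    "TC-02": ["SRS-001", "SRS-002"],
}

# Traceability matrix: requirement tag -> tests that cover it.
matrix = {req: [t for t, tags in tests.items() if req in tags]
          for req in requirements}

# Any requirement with no covering test is a traceability gap.
uncovered = [req for req, covering in matrix.items() if not covering]
print(uncovered)  # SRS-003 has no test
```

The software-to-system requirements matrix mentioned above would be built the same way, with system requirement tags in place of test names.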
Posted by AVINASH at 2:53 AM 0 comments
Prescriptive Software Process Models
This page addresses software process models in the "prescriptive" category—that is, models that define a distinct series of activities, actions, and tasks, as well as a workflow that can be used to build computer software. The following topic categories are presented:
Posted by AVINASH at 2:42 AM 0 comments
Iterative and incremental development is a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic interaction in between.
Iterative and incremental development is an essential part of the Rational Unified Process, the Dynamic Systems Development Method, Extreme Programming, and agile software development frameworks generally.
Incremental development is a scheduling and staging strategy, in which the various parts of the system are developed at different times or rates, and integrated as they are completed. It does not imply, require nor preclude iterative development or waterfall development - both of those are rework strategies. The alternative to incremental development is to develop the entire system with a "big bang" integration.
Iterative development is a rework scheduling strategy in which time is set aside to revise and improve parts of the system. It does not presuppose incremental development, but works very well with it. A typical difference is that the output from an increment is not necessarily subject to further refinement, and its testing or user feedback is not used as input for revising the plans or specifications of successive increments. By contrast, the output from an iteration is examined for modification, and especially for revising the targets of successive iterations.
The two terms were merged in practical use in the mid-1990s. The authors of the Unified Process (UP) and the Rational Unified Process (RUP) selected the terms "iterative development" and "iterations" to mean, in general, any combination of incremental and iterative development. Most people who say "iterative" development mean that they do both incremental and iterative development. Some project teams get into trouble by doing only one and not the other without realizing it.
The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what is learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and the use of the system, where possible. The key steps in the process are to start with a simple implementation of a subset of the software requirements and to iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added.
The procedure itself consists of the initialization step, the iteration step, and the project control list. The initialization step creates a base version of the system. The goal for this initial implementation is to create a product to which the user can react. It should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that contains a record of all tasks that need to be performed. It includes such items as new features to be implemented and areas of redesign of the existing solution. The control list is constantly being revised as a result of the analysis phase.
The iteration step involves the redesign and implementation of a task from the project control list, and the analysis of the current version of the system. The goal for the design and implementation of any iteration is to be simple, straightforward, and modular, supporting redesign at that stage or as a task added to the project control list. The level of design detail is not dictated by the iterative approach. In a lightweight iterative project the code may represent the major source of documentation of the system; however, in a mission-critical iterative project a formal Software Design Document may be used. The analysis of an iteration is based upon user feedback and the program analysis facilities available. It involves analysis of the structure, modularity, usability, reliability, efficiency, and achievement of goals. The project control list is modified in light of the analysis results.
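The iteration cycle described above can be sketched as a simple loop driven by a project control list. The task names and the empty feedback step below are illustrative assumptions, not part of any specific method; in practice the analysis phase would push newly discovered tasks back onto the list.

```python
# Sketch of iterative enhancement driven by a project control list.
# Task names and the feedback rule are illustrative assumptions.

control_list = [
    "implement login subset",
    "redesign storage layer",
    "add report export",
]

version = 0

while control_list:
    task = control_list.pop(0)      # take the next task from the control list
    version += 1                    # redesign/implement produces a new version
    # Analysis of the current version: user feedback and program analysis
    # may add new tasks or areas of redesign back onto the control list.
    feedback = []                   # e.g. gathered from users and tools
    control_list.extend(feedback)   # the control list is constantly revised
    print(f"v{version}: completed '{task}', {len(control_list)} tasks left")
```

With no new feedback, the loop drains the initial three tasks and produces three versions; with feedback, the list grows and the project iterates further.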
Iterative development slices the deliverable business value (system functionality) into iterations. In each iteration a slice of functionality is delivered through cross-discipline work, starting from the model/requirements through to the testing/deployment. The unified process groups iterations into phases: inception, elaboration, construction, and transition.
Each of the phases may be divided into one or more iterations, which are usually time-boxed rather than feature-boxed. Architects and analysts work one iteration ahead of developers and testers to keep their work-product backlog full.
In many instances the client only has a general view of what is expected from the software product. In such a scenario where there is an absence of detailed information regarding the input to the system, the processing needs and the output requirements, the prototyping model may be employed.
This model reflects an attempt to increase the flexibility of the development process by allowing the client to interact and experiment with a working representation of the product. The development process only continues once the client is satisfied with the functioning of the prototype. At that stage the developer determines the specifications of the client's real needs.
Software prototyping, a possible activity during software development, is the creation of prototypes, i.e., incomplete versions of the software program being developed.
A prototype typically implements only a small subset of the features of the eventual program, and the implementation may be completely different from that of the eventual product.
The purpose of a prototype is to allow users of the software to evaluate proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions.
Prototyping has several benefits: the software designer and implementer can obtain feedback from the users early in the project, and the client and the contractor can check whether the software matches the specification according to which the program is built. It also gives the software engineer some insight into the accuracy of initial project estimates and whether the proposed deadlines and milestones can be met. The degree of completeness and the techniques used in prototyping have been in development and debate since its proposal in the early 1970s.
This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation, which led to higher software costs and poor estimates of time and cost. The monolithic approach has been dubbed the "Slaying the (software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone. Prototyping can also avoid the great expense and difficulty of changing a finished software product.
The Team Software Process (TSP), along with the Personal Software Process (PSP), helps high-performance engineers plan and track their work and manage the quality of the products they build.
Engineering groups use the TSP to apply integrated team concepts to the development of software-intensive systems. A four-day launch process walks teams and their managers through establishing goals, defining team roles, assessing risks, and producing a team plan.
After the launch, the TSP provides a defined process framework for managing, tracking and reporting the team's progress.
Using TSP, an organization can build self-directed teams that plan and track their work, establish goals, and own their processes and plans. These can be pure software teams or integrated product teams of 3 to 20 engineers.
TSP will help your organization establish a mature and disciplined engineering practice that produces secure, reliable software. Find out how you can use TSP to strengthen your security practices.
TSP is also being used as the basis for a new measurement framework for software acquirers and developers. This effort is the Integrated Software Acquisition Metrics (ISAM) Project.
The Personal Software Process (PSP) shows engineers how to manage the quality of their projects, make commitments they can meet, improve their estimating and planning, and reduce defects in their products.
Because personnel costs constitute 70 percent of the cost of software development, the skills and work habits of engineers largely determine the results of the software development process. Based on practices found in the Capability Maturity Model (CMM), the PSP can be used by engineers as a guide to a disciplined and structured approach to developing software. The PSP is a prerequisite for an organization planning to introduce the TSP.
The PSP can be applied to many parts of the software development process, including small-program development, requirements definition, and document writing.
RAD is a linear sequential software development process model that emphasizes an extremely short development cycle using a component-based construction approach. If the requirements are well understood and well defined, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period.
The RAD model has the following phases: business modeling, data modeling, process modeling, application generation, and testing and turnover.
What are the advantages and disadvantages of RAD?
RAD reduces the development time, and the reusability of components helps to speed up development. All functions are modularized, so it is easy to work with.
For large projects, RAD requires highly skilled engineers on the team. Both the end customer and the developer must be committed to completing the system in a much-abbreviated time frame; if commitment is lacking, RAD will fail. RAD is based on an object-oriented approach, and if it is difficult to modularize the project, RAD may not work well.
The Unified Process is not simply a process, but rather an extensible framework which should be customized for specific organizations or projects. The Rational Unified Process is, similarly, a customizable framework. As a result it is often impossible to say whether a refinement of the process was derived from UP or from RUP, and so the names tend to be used interchangeably.
The name Unified Process as opposed to Rational Unified Process is generally used to describe the generic process, including those elements which are common to most refinements. The Unified Process name is also used to avoid potential issues of copyright infringement since Rational Unified Process and RUP are trademarks of IBM. The first book to describe the process was titled The Unified Software Development Process (ISBN 0-201-57169-2) and published in 1999 by Ivar Jacobson, Grady Booch and James Rumbaugh. Since then various authors unaffiliated with Rational Software have published books and articles using the name Unified Process, whereas authors affiliated with Rational Software have favored the name Rational Unified Process.
Refinements of the Unified Process vary from each other in how they categorize the project disciplines or workflows. The Rational Unified Process defines nine disciplines: Business Modeling, Requirements, Analysis and Design, Implementation, Test, Deployment, Configuration and Change Management, Project Management, and Environment. The Enterprise Unified Process extends RUP through the addition of eight "enterprise" disciplines. Agile refinements of UP such as OpenUP/Basic and the Agile Unified Process simplify RUP by reducing the number of disciplines.
Refinements also vary in the emphasis placed on different project artifacts. Agile refinements streamline RUP by simplifying workflows and reducing the number of expected artifacts.
Refinements also vary in their specification of what happens after the Transition phase. In the Rational Unified Process the Transition phase is typically followed by a new Inception phase. In the Enterprise Unified Process the Transition phase is followed by a Production phase.
The number of Unified Process refinements and variations is countless; organizations utilizing the Unified Process invariably incorporate their own modifications and extensions. Better-known refinements and variations include the Rational Unified Process, the Enterprise Unified Process, OpenUP/Basic, and the Agile Unified Process.
The steps in the spiral model can be generalized as follows: determine objectives, alternatives, and constraints; evaluate the alternatives and identify and resolve risks; develop and verify the next-level product; and plan the next iteration.
The spiral model is used most often in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military has adopted the spiral model for its Future Combat Systems program.
The spiral model promotes quality assurance through prototyping at each stage in systems development.
The waterfall model is a sequential software development process, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design (validation), Construction, Testing, and Maintenance.
It should be readily apparent that the waterfall development model has its origins in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development. Ironically, the use of the waterfall model for software development essentially ignores the 'soft' in 'software'.
The first formal description of the waterfall model is often cited to be an article published in 1970 by Winston W. Royce (1929–1995), although Royce did not use the term "waterfall" in this article. Ironically, Royce was presenting this model as an example of a flawed, non-working model (Royce 1970). This is in fact the way the term has generally been used in writing about software development: as a way to criticize a commonly used software practice. In Royce's original waterfall model, the following phases are followed in order: requirements specification, design, construction (implementation or coding), integration, testing and debugging, installation, and maintenance.
To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes the requirements specification, which is then set in stone. When the requirements are fully completed, one proceeds to design. The software in question is designed and a blueprint is drawn for implementers (coders) to follow; this design should be a plan for implementing the requirements given. When the design is fully completed, an implementation of that design is made by coders. Towards the later stages of this implementation phase, the separate software components produced are combined to introduce new functionality and remove errors.
Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations upon this process.
Time spent early on in software production can lead to greater economy later on in the software lifecycle; that is, it has been shown many times that a bug found in the early stages of the production lifecycle (such as requirements specification or design) is cheaper, in terms of money, effort and time, to fix than the same bug found later on in the process. ([McConnell 1996], p. 72, estimates that "a requirements defect that is left undetected until construction or maintenance will cost 50 to 200 times as much to fix as it would have cost to fix at requirements time.") To take an extreme example, if a program design turns out to be impossible to implement, it is easier to fix the design at the design stage than to realize months later, when program components are being integrated, that all the work done so far has to be scrapped because of a broken design.
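McConnell's 50x to 200x escalation figure can be turned into a quick back-of-the-envelope calculation. The baseline cost below is an arbitrary illustrative assumption, not a figure from the source.

```python
# Back-of-the-envelope cost of fixing one requirements defect, using
# McConnell's 50x-200x escalation range for defects left until
# construction or maintenance. The $100 baseline is an arbitrary
# illustrative assumption.

baseline_fix_cost = 100                      # cost to fix at requirements time
late_fix_low = baseline_fix_cost * 50        # lower bound if found late
late_fix_high = baseline_fix_cost * 200      # upper bound if found late

print(f"Fix at requirements time: ${baseline_fix_cost}")
print(f"Fix during construction/maintenance: ${late_fix_low}-${late_fix_high}")
```

The point of the exercise is only the ratio: whatever the baseline, a defect that escapes the requirements phase costs two orders of magnitude more to repair.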
This is the central idea behind Big Design Up Front (BDUF) and the waterfall model: time spent early on making sure that requirements and design are absolutely correct will save you much time and effort later. Thus, the thinking of those who follow the waterfall process goes, one should make sure that each phase is 100% complete and absolutely correct before proceeding to the next phase of program creation. Program requirements should be set in stone before design is started (otherwise work put into a design based on incorrect requirements is wasted); the program's design should be perfect before people begin work on implementing the design (otherwise they are implementing the wrong design and their work is wasted), etc.
A further argument for the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. In less thoroughly designed and documented methodologies, much knowledge is lost when team members leave, and it may be difficult for a project to recover. Should a fully working design document be present (as is the intent of Big Design Up Front and the waterfall model), new team members or even entirely new teams should be able to familiarize themselves by reading the documents.
As well as the above, some prefer the waterfall model for its simple approach and argue that it is more disciplined. Rather than what the waterfall adherent sees as chaos, the waterfall model provides a structured approach; the model itself progresses linearly through discrete, easily understandable and explainable phases and thus is easy to understand; it also provides easily markable milestones in the development process. It is perhaps for this reason that the waterfall model is used as a beginning example of a development model in many software engineering texts and courses.
It is argued that the waterfall model and Big Design Up Front in general can be suited to software projects which are stable (especially those projects with unchanging requirements, such as with shrink-wrap software) and where it is possible and likely that designers will be able to fully predict problem areas of the system and produce a correct design before implementation is started. The waterfall model also requires that implementers follow the well-made, complete design accurately, ensuring that the integration of the system proceeds smoothly.
The waterfall model is argued by many to be a bad idea in practice, mainly because of their belief that it is impossible, for any non-trivial project, to get one phase of a software product's lifecycle perfected before moving on to the next phases and learning from them. For example, clients may not know exactly what requirements they want before they see a working prototype and can comment upon it; they may change their requirements constantly, and program designers and implementers may have little control over this. If clients change their requirements after a design is finished, that design must be modified to accommodate the new requirements, invalidating a good deal of effort if overly large amounts of time have been invested in Big Design Up Front. Designers may not be aware of future implementation difficulties when writing a design for an unimplemented software product. That is, it may become clear in the implementation phase that a particular area of program functionality is extraordinarily difficult to implement. If this is the case, it is better to revise the design than to persist in using a design that was made based on faulty predictions and that does not account for the newly discovered problem areas.
Dr. Winston W. Royce, in "Managing the Development of Large Software Systems", the first paper that describes the waterfall model, also describes the simplest form as "risky and invites failure".
Steve McConnell, in Code Complete (a book which criticizes the widespread use of the waterfall model), refers to design as a "wicked problem": a problem whose requirements and limitations cannot be entirely known before completion. The implication is that it is impossible to perfect one phase of software development, and thus impossible, under the waterfall model, to move on to the next phase.
David Parnas, in "A Rational Design Process: How and Why to Fake It", writes:[4]
“Many of the [system's] details only become known to us as we progress in the [system's] implementation. Some of the things that we learn invalidate our design and we must backtrack.”
The idea behind the waterfall model may be "measure twice; cut once", and those opposed to the waterfall model argue that this idea tends to fall apart when the problem being measured is constantly changing due to requirement modifications and new realizations about the problem itself.
In response to the perceived problems with the pure waterfall model, many modified waterfall models have been introduced. These models may address some or all of the criticisms of the pure waterfall model. Many different models are covered by Steve McConnell in the "lifecycle planning" chapter of his book Rapid Development: Taming Wild Software Schedules.
While all software development models bear some similarity to the waterfall model, as all of them incorporate at least some phases similar to those used within the waterfall model, this section deals with those closest to the waterfall model. For models which differ more substantially from the waterfall model, or for radically different models, see general information on the software development process.
The sashimi model (so called because it features overlapping phases, like the overlapping fish of Japanese sashimi) was originated by Peter DeGrace. It is sometimes referred to as the "waterfall model with overlapping phases" or "the waterfall model with feedback". Since phases in the sashimi model overlap, information about problem spots can be acted upon during phases that would typically, in the pure waterfall model, precede others. For example, since the design and implementation phases overlap in the sashimi model, implementation problems may be discovered during the design and implementation phase of the development process. This helps alleviate many of the problems associated with the Big Design Up Front philosophy of the waterfall model.
What is requirement?
A requirement describes a condition or capability to which a system must conform, either derived directly from user needs, or stated in a contract, standard, specification, or other formally imposed document. In systems engineering, a requirement can be a description of what a system must do. In other words, a requirement is a statement identifying a capability, physical characteristic, or quality factor that bounds a product or process need for which a solution will be pursued.
What is requirement Engineering?
Requirements engineering is the process of establishing the services that the customer requires from the system and the constraints under which it is to be developed and operated.
What are the requirement engineering processes?
The main requirements engineering processes are elicitation, analysis, specification, validation, and management.
What is requirement Management?
A systematic approach to eliciting, organizing and documenting the software requirements of the system, and establishing and maintaining agreement between the customer and the project team on changes to those requirements. Effective requirements management includes maintaining a clear statement of the requirements, along with appropriate attributes and traceability to other requirements and other project artifacts.
Why Requirement Management is important?
Requirements analysis is a crucial initial step in software development. Managing changing requirements throughout the software development life cycle is the key to developing a successful solution, one that meets users' needs and is developed on time and within budget. A crucial aspect of effectively managing requirements is communicating requirements to all team members throughout the entire life cycle. In truth, requirements management benefits all project stakeholders, end users, project managers, developers, and testers by ensuring that they are continually kept apprised of requirement status and understand the impact of changing requirements, specifically on schedules, functionality, and costs.
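One minimal way to keep requirement status and traceability visible to all stakeholders is a small record per requirement. The field names, status values, and requirement texts below are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

# Minimal requirement record with attributes and traceability links.
# Field names, status values, and requirement texts are illustrative
# assumptions, not a standard schema.

@dataclass
class Requirement:
    req_id: str
    statement: str
    status: str = "proposed"        # e.g. proposed -> approved -> verified
    traces_to: list = field(default_factory=list)  # ids of related artifacts

reqs = {
    "R1": Requirement("R1", "System shall export reports as PDF"),
    "R2": Requirement("R2", "Export completes within 5 seconds",
                      traces_to=["R1"]),
}

# The impact of changing a requirement can be assessed by following
# the trace links back from other requirements.
impacted = [r.req_id for r in reqs.values() if "R1" in r.traces_to]
print(impacted)  # requirements impacted by a change to R1
```

Even this toy structure demonstrates the point of the paragraph above: with explicit attributes and trace links, the effect of a change on schedules, functionality, and cost can be enumerated rather than guessed.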
What are the key requirement management skills?
What are the artifacts used to manage requirements?
What is requirement Management plan?
The requirements management plan describes the requirements artifacts, requirement types, and their respective requirement attributes, specifying the information to be collected and the control mechanisms to be used for measuring, reporting, and controlling changes to the product requirements.
What is Requirement Implementation?
Requirements implementation is the actual work of transforming requirements into software architectural designs, detailed designs, code, and test cases.
What are requirement sources?
Goals: The term goal refers to the overall, high-level objectives of the software. Goals provide the motivation for the software, but are often vaguely formulated.
Domain knowledge: The software engineer needs to acquire, or have available, knowledge about the application domain. This enables them to infer tacit knowledge that the stakeholders do not articulate, assess the trade-offs that will be necessary between conflicting requirements, and, sometimes, to act as a “user” champion.
The operational environment: Requirements will be derived from the environment in which the software will be executed. These may be, for example, timing constraints in real-time software or interoperability constraints in an office environment. These must be actively sought out, because they can greatly affect software feasibility and cost, and restrict design choices.
The organizational environment : Software is often required to support a business process, the selection of which may be conditioned by the structure, culture, and internal politics of the organization. The software engineer needs to be sensitive to these, since, in general, new software should not force unplanned change on the business process.
What are the main types of Requirements?
Requirements are commonly divided into functional requirements and non-functional requirements.
What are the different statuses of requirement?
What are FURPS?
Functionality - It includes feature sets, capabilities, and security.
Usability - It may include such subcategories as human factors (see Concepts: User-Centered Design), aesthetics, consistency in the user interface, online and context-sensitive help, wizards and agents, user documentation, and training materials.
Reliability - Reliability requirements to be considered are frequency and severity of failure, recoverability, predictability, accuracy, and mean time between failures (MTBF).
Performance - A performance requirement imposes conditions on functional requirements. For example, for a given action, it may specify performance parameters for speed, efficiency, availability, accuracy, throughput, response time, recovery time, and resource usage.
Supportability - Supportability requirements may include testability, extensibility, adaptability, maintainability, compatibility, configurability, serviceability, installability, and localizability (internationalization).
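The FURPS categories above can serve as a checklist when sorting requirement statements. The keyword table below is a naive illustrative heuristic, not a real taxonomy or classifier.

```python
# Naive FURPS classification of requirement statements by keyword match.
# The keyword table is an illustrative assumption, not a real taxonomy.

FURPS_KEYWORDS = {
    "Functionality": ["feature", "security", "capability"],
    "Usability": ["help", "interface", "documentation"],
    "Reliability": ["failure", "recover", "MTBF"],
    "Performance": ["response time", "throughput", "speed"],
    "Supportability": ["maintain", "install", "configur"],
}

def classify(statement: str) -> list:
    """Return the FURPS categories whose keywords appear in the statement."""
    text = statement.lower()
    return [cat for cat, words in FURPS_KEYWORDS.items()
            if any(w.lower() in text for w in words)]

print(classify("Mean response time shall be under 2 seconds"))
# prints ['Performance']
```

A real requirements tool would classify by human judgment or richer analysis; the sketch only shows how the five categories partition typical requirement wording.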
What are system functional requirements?
These requirements specify a condition or capability that must be met or possessed by a system or its component(s). System functional requirements include functional and non-functional requirements. System functional requirements are developed to directly or indirectly satisfy user requirements.
What is non-technical requirement?
Non-technical requirements are requirements such as agreements, conditions, and contractual terms that affect and determine the management activities of a project.
What are functional Requirements?
Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform.
They specify actions that a system must be able to perform, without taking physical constraints into consideration. Functional requirements thus specify the input and output behavior of a system.
What are non functional Requirements?
Non-functional requirements specify the qualities that the product must possess. These are things such as security, compatibility with existing systems, performance requirements, etc. In a product manufacturing example, non-functional requirements would be manufacturing requirements, or the conditions, processes, materials, and tools required to get the product from the design board to the shipping dock.
What is user interface requirement?
These are driven from functional and use case requirements, and are traced from one or both, depending on where they were derived. They include items such as screen layout, tab flow, mouse and keyboard use, what controls to use for what functions (e.g. radio button, pull-down list), and other "ease of use" issues.
What is emergent property requirement?
Some requirements represent emergent properties of software—that is, requirements which cannot be addressed by a single component, but which depend for their satisfaction on how all the software components interoperate. Emergent properties are crucially dependent on the system architecture.
What is navigation requirement?
These are driven and traced from the Use Case, as the Use Case lists the flow of the system, and the Navigation Requirements depict how that flow will take place. They are usually presented in a storyboard format, and should show the screen flow of each use case, and every alternate flow. Additionally, they should state what happens to the data or transaction for each step. They include the various ways to get to all screens, and an application screen map should be one of the artifacts derived in this category of requirements.
What is implementation requirement?
An implementation requirement specifies constraints on the coding or construction of a system, such as standards, implementation languages, and the operating environment.
What are stable and volatile requirements?
Requirements changes occur while the requirements are being elicited, analyzed, and validated, and after the system has gone into service.
Stable requirements are concerned with the essence of a system and its application domain. They change more slowly than volatile requirements.
Volatile requirements are specific to the instantiation of the system in a particular environment and for a particular customer.
What are the different types of volatile requirements?
Volatile requirements are commonly classified as mutable requirements, emergent requirements, consequential requirements, and compatibility requirements.
What is measuring requirement?
As a practical matter, it is typically useful to have some concept of the "volume" of the requirements for a particular software product. This number is useful in evaluating the "size" of a change in requirements, in estimating the cost of a development or maintenance task, or simply for use as the denominator in other measurements. Functional Size Measurement (FSM) is a technique for evaluating the size of a body of functional requirements.
What is requirement definition?
What are upgradeability requirements?
Upgradeability is our ability to cost-effectively deploy new versions of the product to customers with minimal downtime or disruption. A key feature supporting this goal is automatic download of patches and upgrade of the end-user's machine. Also, we shall use data file formats that include enough meta-data to allow us to reliably transform existing customer data during an upgrade.
What is program requirement?
These are not requirements imposed on the system or product to be delivered, but on the process to be followed by the contractor. Program requirements should be necessary, concise, attainable, complete, consistent and unambiguous. Program requirements are managed in the same manner as product requirements. Program requirements include: compliance with federal, state or local laws including environmental laws; administrative requirements such as security; customer/contractor relationship requirements such as directives to use government facilities for specific types of work such as test; and specific work directives (such as those included in Statements of Work and Contract Data Requirements Lists). Program requirements may also be imposed on a program by corporate policy or practice.
What is performance requirement?
These are quantitative requirements of system performance, and they are individually verifiable. A performance requirement is a user-oriented quality requirement that specifies a required amount of performance.
What is physical requirement?
A physical requirement specifies a physical characteristic, such as materials, shape, size, or weight, that a system must possess.
What is quantifiable requirement?
The requirements have been grouped into "non-quantifiable requirements" and "quantifiable requirements." Quantifiable requirements are those whose presence or absence can be verified in a binary manner; non-quantifiable requirements are those that cannot.
What is an iteration plan?
An iteration plan is a time-sequenced set of activities and tasks for the iteration, with assigned resources and task dependencies.