320

1. SW quality

1.1. Computerworld [Hil05]: bad software plagues nearly every organization that uses computers, causing lost work hours during computer downtime, lost or corrupted data, missed sales opportunities, high IT support and maintenance costs, and low customer satisfaction.

1.2. InfoWorld [Fos06] reported that the quality problem had not gotten any better.

1.3. Today, software quality remains an issue, but who is to blame? Customers blame developers, arguing that sloppy practices lead to low-quality software. Developers blame customers (and other stakeholders), arguing that irrational delivery dates and a continuing stream of changes force them to deliver software before it has been fully validated.

2. Quality

2.1. The American Heritage Dictionary defines quality as: “a characteristic or attribute of something.”

2.2. For software, two kinds of quality may be encountered:

2.2.1. Quality of design encompasses requirements, specifications, and the design of the system.

2.2.2. Quality of conformance is an issue focused primarily on implementation.

2.2.3. User satisfaction = compliant product + good quality + delivery within budget and schedule

3. Quality—A Pragmatic View

3.1. The transcendental view argues (like Pirsig) that quality is something you immediately recognize but cannot explicitly define.

3.2. The user view sees quality in terms of an end-user’s specific goals. If a product meets those goals, it exhibits quality.

3.3. The manufacturer’s view defines quality in terms of the original specification of the product. If the product conforms to the spec, it exhibits quality.

3.4. The product view suggests that quality can be tied to inherent characteristics (e.g., functions and features) of a product.

3.5. Finally, the value-based view measures quality based on how much a customer is willing to pay for a product. In reality, quality encompasses all of these views and more.

4. Useful Product

4.1. A useful product delivers the content, functions, and features that the end-user desires

4.2. But just as important, it delivers these assets in a reliable, error-free way.

4.3. A useful product always satisfies those requirements that have been explicitly stated by stakeholders.

4.4. In addition, it satisfies a set of implicit requirements (e.g., ease of use) that are expected of all high quality software.

5. Quality Dimensions

5.1. Durability.

5.2. Serviceability.

5.3. Aesthetics. Most of us would agree that an aesthetic entity has a certain elegance, a unique flow, and an obvious “presence” that are hard to quantify but evident nonetheless.

5.4. Perception. In some situations, you have a set of prejudices that will influence your perception of quality.

6. The Software Quality Dilemma

6.1. If you produce a software system that has terrible quality, you lose because no one will want to buy it.

6.2. If, on the other hand, you spend infinite time, extremely large effort, and huge sums of money to build the absolutely perfect piece of software, it's going to take so long to complete and be so expensive to produce that you'll be out of business anyway.

6.3. Either you missed the market window, or you simply exhausted all your resources.

6.4. So people in industry try to get to that magical middle ground where the product is good enough not to be rejected right away, such as during evaluation, but also not the object of so much perfectionism and so much work that it would take too long or cost too much to complete. [Ven03]

7. “Good Enough” Software

7.1. Good enough software delivers the high-quality functions and features that end users desire, but at the same time it delivers other, more obscure or specialized functions and features that contain known bugs.

7.2. Arguments against “good enough.”

7.2.1. It is true that “good enough” may work in some application domains and for a few major software companies. After all, if a company has a large marketing budget and can convince enough people to buy version 1.0, it has succeeded in locking them in.

7.2.2. If you work for a small company be wary of this philosophy. If you deliver a “good enough” (buggy) product, you risk permanent damage to your company’s reputation.

7.2.3. You may never get a chance to deliver version 2.0 because bad buzz may cause your sales to plummet and your company to fold.

7.2.4. If you work in certain application domains (e.g., real-time embedded software, application software integrated with hardware), delivering a “good enough” product can be negligent and can open your company to expensive litigation.

8. Cost of Quality

8.1. Prevention costs include

8.1.1. quality planning

8.1.2. formal technical reviews

8.1.3. test equipment

8.1.4. Training

8.2. Internal failure costs include

8.2.1. rework

8.2.2. repair

8.2.3. failure mode analysis

8.3. External failure costs are

8.3.1. complaint resolution

8.3.2. product return and replacement

8.3.3. help line support

8.3.4. warranty work

9. Cost

9.1. The relative costs to find and repair an error or defect increase dramatically as we go from prevention to detection to internal failure to external failure costs.
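The escalation above can be sketched numerically. The stage multipliers below (1x, 10x, 60x, 100x) are assumed illustrative values, not figures from this text, which states only that costs increase dramatically from stage to stage:

```python
# Illustrative cost-of-quality escalation. The multipliers are assumed
# example values; the text says only that costs "increase dramatically"
# from prevention through external failure.
RELATIVE_COST = {
    "prevention": 1,          # quality planning, reviews, training
    "detection": 10,          # error found during appraisal/testing
    "internal_failure": 60,   # rework and repair before release
    "external_failure": 100,  # complaints, returns, warranty work
}

def repair_cost(stage: str, base_cost: float = 100.0) -> float:
    """Estimated cost to find and fix one error at the given stage."""
    return base_cost * RELATIVE_COST[stage]

for stage, factor in RELATIVE_COST.items():
    print(f"{stage:16s} {factor:4d}x  ${repair_cost(stage):10,.2f}")
```

With a hypothetical $100 base cost, the same error costs $100 to prevent but $10,000 to fix after release, which is the point of the section.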

10. Quality and Risk

10.1. “People bet their jobs, their comforts, their safety, their entertainment, their decisions, and their very lives on computer software. It better be right.”

10.1.1. Example:

10.1.1.1. Throughout the month of November 2000, at a hospital in Panama, 28 patients received massive overdoses of gamma rays during treatment for a variety of cancers. In the months that followed, five of these patients died from radiation poisoning and 15 others developed serious complications. What caused this tragedy? A software package, developed by a U.S. company, was modified by hospital technicians to compute modified doses of radiation for each patient.

11. Quality and Security

11.1. Gary McGraw comments [Wil05]:

11.1.1. “Software security relates entirely and completely to quality. You must think about security, reliability, availability, dependability—at the beginning, in the design, architecture, test, and coding phases, all through the software life cycle [process]. Even people aware of the software security problem have focused on late life-cycle stuff. The earlier you find the software problem, the better. And there are two kinds of software problems. One is bugs, which are implementation problems. The other is software flaws—architectural problems in the design. People pay too much attention to bugs and not enough on flaws.”

12. Achieving Software Quality

12.1. Critical success factors:

12.1.1. Software Engineering Methods

12.1.2. Project Management Techniques

12.1.3. Quality Control

12.1.4. Quality Assurance

13. What Are Reviews?

13.1. a meeting conducted by technical people for technical people

13.2. a technical assessment of a work product created during the software engineering process

13.3. a software quality assurance mechanism

13.4. a training ground

14. What Reviews Are Not

14.1. A project summary or progress assessment

14.2. A meeting intended solely to impart information

14.3. A mechanism for political or personal reprisal!

15. What Do We Look For?

15.1. Errors and defects

15.1.1. Error—a quality problem found before the software is released to end users

15.1.2. Defect—a quality problem found only after the software has been released to end-users

15.2. We make this distinction because errors and defects have very different economic, business, psychological, and human impact

15.3. However, the temporal distinction made between errors and defects in this book is not mainstream thinking

16. Informal Reviews

16.1. Informal reviews include:

16.1.1. a simple desk check of a software engineering work product with a colleague

16.1.2. a casual meeting (involving more than two people) for the purpose of reviewing a work product, or

16.1.3. the review-oriented aspects of pair programming

16.2. pair programming encourages continuous review as a work product (design or code) is created.

16.2.1. The benefit is immediate discovery of errors and better work product quality as a consequence.

17. Formal Technical Reviews

17.1. The objectives of an FTR are:

17.1.1. to uncover errors in function, logic, or implementation for any representation of the software

17.1.2. to verify that the software under review meets its requirements

17.1.3. to ensure that the software has been represented according to predefined standards

17.1.4. to achieve software that is developed in a uniform manner

17.1.5. to make projects more manageable

17.2. The FTR is actually a class of reviews that includes walkthroughs and inspections.

18. The Players

18.1. Producer—the individual who has developed the work product

18.1.1. informs the project leader that the work product is complete and that a review is required

18.2. Review leader—evaluates the product for readiness, generates copies of product materials, and distributes them to two or three reviewers for advance preparation.

18.3. Reviewer(s)—expected to spend between one and two hours reviewing the product, making notes, and otherwise becoming familiar with the work.

18.4. Recorder—reviewer who records (in writing) all important issues raised during the review.

19. Conducting the Review

19.1. Review the product, not the producer.

19.2. Set an agenda and maintain it.

19.3. Limit debate and rebuttal.

19.4. Enunciate problem areas, but don't attempt to solve every problem noted.

19.5. Take written notes.

19.6. Limit the number of participants and insist upon advance preparation.

19.7. Develop a checklist for each product that is likely to be reviewed.

19.8. Allocate resources and schedule time for FTRs.

19.9. Conduct meaningful training for all reviewers.

19.10. Review your early reviews.

20. Elements of SQA

20.1. Standards

20.2. Reviews and Audits

20.3. Testing

20.4. Error/defect collection and analysis

20.5. Change management

20.6. Education

20.7. Vendor management

20.8. Security management

20.9. Safety

20.10. Risk management

21. Role of the SQA Group

21.1. Prepares an SQA plan for a project.

21.1.1. The plan identifies

21.1.1.1. evaluations to be performed

21.1.1.2. audits and reviews to be performed

21.1.1.3. standards that are applicable to the project

21.1.1.4. procedures for error reporting and tracking

21.1.1.5. documents to be produced by the SQA group

21.1.1.6. amount of feedback provided to the software project team

21.2. Participates in the development of the project’s software process description.

21.2.1. The SQA group reviews the process description for compliance with organizational policy, internal software standards, externally imposed standards (e.g., ISO 9001), and other parts of the software project plan.

21.3. Reviews software engineering activities to verify compliance with the defined software process.

21.3.1. identifies, documents, and tracks deviations from the process and verifies that corrections have been made.

21.4. Audits designated software work products to verify compliance with those defined as part of the software process.

21.4.1. reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections have been made

21.4.2. periodically reports the results of its work to the project manager.

21.5. Ensures that deviations in software work and work products are documented and handled according to a documented procedure.

21.6. Records any noncompliance and reports to senior management.

21.6.1. Noncompliance items are tracked until they are resolved.

22. SQA Goals

22.1. Requirements quality. The correctness, completeness, and consistency of the requirements model will have a strong influence on the quality of all work products that follow.

22.2. Design quality. Every element of the design model should be assessed by the software team to ensure that it exhibits high quality and that the design itself conforms to requirements.

22.3. Code quality. Source code and related work products (e.g., other descriptive information) must conform to local coding standards and exhibit characteristics that will facilitate maintainability.

22.4. Quality control effectiveness. A software team should apply limited resources in a way that has the highest likelihood of achieving a high quality result.

23. Statistical SQA

23.1. Information about software errors and defects is collected and categorized.

23.2. An attempt is made to trace each error and defect to its underlying cause (e.g., non-conformance to specifications, design error, violation of standards, poor communication with the customer).

23.3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the vital few).

23.4. Once the vital few causes have been identified, move to correct the problems that have caused the errors and defects.
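The four steps above can be sketched in a few lines. The defect counts below are invented for illustration; the function isolates the "vital few" causes that account for roughly 80 percent of all defects:

```python
from collections import Counter

# Hypothetical defect log: each entry names the underlying cause traced
# for one error or defect (step 2). The data is invented for illustration.
defect_causes = (
    ["spec nonconformance"] * 40 + ["design error"] * 25 +
    ["standards violation"] * 15 + ["poor customer communication"] * 10 +
    ["incomplete testing"] * 6 + ["documentation gap"] * 4
)

def vital_few(causes, threshold=0.8):
    """Return the smallest set of causes that accounts for at least
    `threshold` of all defects -- the Pareto 'vital few'."""
    counts = Counter(causes).most_common()  # sorted, most frequent first
    total = sum(n for _, n in counts)
    selected, covered = [], 0
    for cause, n in counts:
        if covered / total >= threshold:
            break
        selected.append(cause)
        covered += n
    return selected

print(vital_few(defect_causes))
```

Here 3 of the 6 candidate causes account for 80 percent of the 100 logged defects, so corrective effort would concentrate on those three first.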

24. Six-Sigma for Software Engineering

24.1. The term “six sigma” is derived from six standard deviations—3.4 instances (defects) per million occurrences—implying an extremely high quality standard.

24.2. The Six Sigma methodology defines three core steps (define, measure, analyze); when improving an existing process, two additional steps (improve, control) complete the DMAIC method:

24.2.1. Define customer requirements and deliverables and project goals via well-defined methods of customer communication

24.2.2. Measure the existing process and its output to determine current quality performance (collect defect metrics)

24.2.3. Analyze defect metrics and determine the vital few causes.

24.2.4. Improve the process by eliminating the root causes of defects.

24.2.5. Control the process to ensure that future work does not reintroduce the causes of defects.
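The 3.4-defects-per-million figure quoted above can be reproduced from the normal distribution. The sketch below assumes the conventional 1.5-sigma long-term process shift, a Six Sigma convention implied by the 3.4 figure but not stated in this text:

```python
from statistics import NormalDist

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities at a given sigma level,
    assuming the conventional 1.5-sigma long-term process shift."""
    tail = 1.0 - NormalDist().cdf(sigma_level - shift)
    return tail * 1_000_000

print(round(dpmo(6.0), 1))  # -> 3.4 defects per million
print(round(dpmo(3.0)))     # a 3-sigma process yields tens of thousands of DPMO
```

This makes the "extremely high quality standard" concrete: moving from a 3-sigma to a 6-sigma process cuts the defect rate by roughly four orders of magnitude.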