
1. Quality—A Pragmatic View

1.1. The transcendental view argues (like Pirsig) that quality is something you immediately recognize but cannot explicitly define.

1.2. The user view sees quality in terms of an end-user’s specific goals. If a product meets those goals, it exhibits quality.

1.3. The manufacturer’s view defines quality in terms of the original specification of the product. If the product conforms to the spec, it exhibits quality.

1.4. The product view suggests that quality can be tied to inherent characteristics (e.g., functions and features) of a product.

1.5. Finally, the value-based view measures quality based on how much a customer is willing to pay for a product. In reality, quality encompasses all of these views and more.

2. Useful Product

2.1. A useful product delivers the content, functions, and features that the end-user desires.

2.2. But just as important, it delivers these assets in a reliable, error-free way.

2.3. A useful product always satisfies those requirements that have been explicitly stated by stakeholders.

2.4. In addition, it satisfies a set of implicit requirements (e.g., ease of use) that are expected of all high quality software.

3. Quality Dimensions

3.1. Durability. A measure of a product’s useful life, that is, how long it continues to perform before it deteriorates or must be replaced.

3.2. Serviceability. The speed and ease with which a product can be repaired or maintained.

3.3. Aesthetics. Most of us would agree that an aesthetic entity has a certain elegance, a unique flow, and an obvious “presence” that are hard to quantify but evident nonetheless.

3.4. Perception. In some situations, you have a set of prejudices that will influence your perception of quality.

4. “Good Enough” Software

4.1. Good enough software delivers high-quality functions and features that end-users desire, but at the same time it delivers other, more obscure or specialized functions and features that contain known bugs.

4.2. Arguments against “good enough.”

4.2.1. It is true that “good enough” may work in some application domains and for a few major software companies. After all, if a company has a large marketing budget and can convince enough people to buy version 1.0, it has succeeded in locking them in.

4.2.2. If you work for a small company, be wary of this philosophy. If you deliver a “good enough” (buggy) product, you risk permanent damage to your company’s reputation.

4.2.3. You may never get a chance to deliver version 2.0 because bad buzz may cause your sales to plummet and your company to fold.

4.2.4. If you work in certain application domains (e.g., real-time embedded software, application software that is integrated with hardware), delivering software with known bugs can be negligent and open your company to expensive litigation.

5. Cost of Quality

5.1. Prevention costs include

5.1.1. quality planning

5.1.2. formal technical reviews

5.1.3. test equipment

5.1.4. training

5.2. Internal failure costs include

5.2.1. rework

5.2.2. repair

5.2.3. failure mode analysis

5.3. External failure costs are

5.3.1. complaint resolution

5.3.2. product return and replacement

5.3.3. help line support

5.3.4. warranty work

6. Cost

6.1. The relative costs to find and repair an error or defect increase dramatically as we go from prevention to detection to internal failure to external failure costs.
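
A minimal sketch, in Python, of how the escalation in 6.1 plays out numerically. The relative cost factors (1, 3, 10, 100) and the project figures below are hypothetical round numbers chosen only to illustrate the trend; an organization would substitute its own measured data.

    # Hypothetical relative cost factors: handling a problem gets more expensive
    # the later it is handled (prevention -> detection -> internal -> external failure).
    RELATIVE_COST = {
        "prevention": 1,          # quality planning, reviews, training
        "detection": 3,           # appraisal/testing activities (assumed factor)
        "internal_failure": 10,   # rework, repair, failure mode analysis (assumed factor)
        "external_failure": 100,  # complaints, returns, help line, warranty work (assumed factor)
    }

    def cost_of_quality(problem_counts, baseline_cost=100.0):
        """Total cost given how many problems were handled at each stage.

        problem_counts: dict mapping stage name -> number of problems handled there.
        baseline_cost: assumed cost of handling one problem at the prevention stage.
        """
        return sum(RELATIVE_COST[stage] * count * baseline_cost
                   for stage, count in problem_counts.items())

    # The same 40 problems cost far less when most are caught early.
    early = {"prevention": 30, "detection": 8, "internal_failure": 2, "external_failure": 0}
    late = {"prevention": 5, "detection": 5, "internal_failure": 10, "external_failure": 20}
    print(cost_of_quality(early))  # 7400.0
    print(cost_of_quality(late))   # 212000.0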

7. What Do We Look For?

7.1. Errors and defects

7.1.1. Error—a quality problem found before the software is released to end users

7.1.2. Defect—a quality problem found only after the software has been released to end-users

7.2. We make this distinction because errors and defects have very different economic, business, psychological, and human impact.

7.3. However, the temporal distinction made between errors and defects in this book is not mainstream thinking.
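
A minimal Python sketch of the temporal distinction in 7.1, assuming a hypothetical QualityProblem record and a known release date; the class name, fields, and dates are illustrative and not taken from the source.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class QualityProblem:
        description: str
        found_on: date

    def classify(problem: QualityProblem, release_date: date) -> str:
        """Problems found before the software reaches end users are errors;
        problems found afterward are defects (the book's temporal distinction)."""
        return "error" if problem.found_on < release_date else "defect"

    release = date(2024, 6, 1)  # made-up release date
    print(classify(QualityProblem("crash found during an FTR", date(2024, 5, 10)), release))               # error
    print(classify(QualityProblem("data corruption reported from the field", date(2024, 7, 2)), release))  # defect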

8. Formal Technical Reviews

8.1. The objectives of an FTR are:

8.1.1. to uncover errors in function, logic, or implementation for any representation of the software

8.1.2. to verify that the software under review meets its requirements

8.1.3. to ensure that the software has been represented according to predefined standards

8.1.4. to achieve software that is developed in a uniform manner

8.1.5. to make projects more manageable

8.2. The FTR is actually a class of reviews that includes walkthroughs and inspections.

9. The Players

9.1. Producer—the individual who has developed the work product

9.1.1. informs the project leader that the work product is complete and that a review is required

9.2. Review leader—evaluates the product for readiness, generates copies of product materials, and distributes them to two or three reviewers for advance preparation.

9.3. Reviewer(s)—expected to spend between one and two hours reviewing the product, making notes, and otherwise becoming familiar with the work.

9.4. Recorder—reviewer who records (in writing) all important issues raised during the review.

10. Role of the SQA Group

10.1. Prepares an SQA plan for a project.

10.1.1. The plan identifies

10.1.1.1. evaluations to be performed

10.1.1.2. audits and reviews to be performed

10.1.1.3. standards that are applicable to the project

10.1.1.4. procedures for error reporting and tracking

10.1.1.5. documents to be produced by the SQA group

10.1.1.6. amount of feedback provided to the software project team
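
One way to make the plan contents listed above concrete is a simple structured skeleton, sketched here in Python; the keys mirror items 10.1.1.1 through 10.1.1.6, while the example values are hypothetical placeholders, not prescribed by any standard.

    # Hypothetical SQA plan skeleton; every value is an illustrative placeholder.
    sqa_plan = {
        "evaluations": ["requirements model evaluation", "design model evaluation"],
        "audits_and_reviews": ["FTR of the design specification", "pre-release configuration audit"],
        "applicable_standards": ["organizational coding standard", "ISO-9001"],
        "error_reporting_and_tracking": {
            "procedure": "log in the project issue tracker within one working day",
            "severity_levels": ["critical", "major", "minor"],
        },
        "sqa_documents": ["review summary reports", "audit reports"],
        "feedback_to_team": "weekly summary of findings sent to the project team",
    }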

10.2. Participates in the development of the project’s software process description.

10.2.1. The SQA group reviews the process description for compliance with organizational policy, internal software standards, externally imposed standards (e.g., ISO-9001), and other parts of the software project plan.

10.3. Reviews software engineering activities to verify compliance with the defined software process.

10.3.1. identifies, documents, and tracks deviations from the process and verifies that corrections have been made.

10.4. Audits designated software work products to verify compliance with those defined as part of the software process.

10.4.1. reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections have been made

10.4.2. periodically reports the results of its work to the project manager.

10.5. Ensures that deviations in software work and work products are documented and handled according to a documented procedure.

10.6. Records any noncompliance and reports to senior management.

10.6.1. Noncompliance items are tracked until they are resolved.

11. Statistical SQA

11.1. Information about software errors and defects is collected and categorized.

11.2. An attempt is made to trace each error and defect to its underlying cause (e.g., non-conformance to specifications, design error, violation of standards, poor communication with the customer).

11.3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the vital few).

11.4. Once the vital few causes have been identified, move to correct the problems that have caused the errors and defects.
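
A minimal Python sketch of the Pareto analysis in 11.3 and 11.4, assuming the errors and defects have already been categorized by underlying cause as in 11.1 and 11.2; the cause labels and counts are made up for illustration.

    from collections import Counter

    def vital_few(causes, threshold=0.8):
        """Return the smallest set of causes accounting for `threshold`
        (by default 80 percent) of all recorded errors and defects."""
        counts = Counter(causes)
        total = sum(counts.values())
        selected, covered = [], 0
        for cause, count in counts.most_common():
            selected.append(cause)
            covered += count
            if covered / total >= threshold:
                break
        return selected

    # Hypothetical categorized defect log.
    log = (["incomplete specification"] * 25
           + ["miscommunication with the customer"] * 18
           + ["violation of standards"] * 4
           + ["design error"] * 2
           + ["test oversight"] * 1)
    print(vital_few(log))  # ['incomplete specification', 'miscommunication with the customer']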

12. Six-Sigma for Software Engineering

12.1. The term “six sigma” is derived from six standard deviations—3.4 instances (defects) per million occurrences—implying an extremely high quality standard.
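
A short calculation behind the 3.4-per-million figure. Six standard deviations of a normal distribution would by themselves imply a far smaller defect rate; the published 3.4 DPMO value assumes the conventional 1.5-sigma long-term shift, which the Python sketch below (standard library only) makes explicit.

    import math

    def defects_per_million(sigma_level, shift=1.5):
        """Long-term defect rate for a process at a given sigma level, using the
        conventional 1.5-sigma shift behind the published Six Sigma tables."""
        z = sigma_level - shift                   # effective distance to the specification limit
        tail = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal tail probability P(Z > z)
        return tail * 1_000_000

    print(round(defects_per_million(6), 1))  # ~3.4 defects per million opportunities
    print(round(defects_per_million(3)))     # ~66807 for a "three sigma" process
    print(defects_per_million(6, shift=0))   # ~0.001 per million: the unshifted six-sigma tail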

12.2. The Six Sigma methodology defines three core steps (define, measure, analyze), plus two additional steps (improve, control) that are applied when an existing process needs improvement, yielding the DMAIC approach:

12.2.1. Define customer requirements, deliverables, and project goals via well-defined methods of customer communication.

12.2.2. Measure the existing process and its output to determine current quality performance (collect defect metrics).

12.2.3. Analyze defect metrics and determine the vital few causes.

12.2.4. Improve the process by eliminating the root causes of defects.

12.2.5. Control the process to ensure that future work does not reintroduce the causes of defects.

13. SW quality

13.1. ComputerWorld [Hil05]: bad software plagues nearly every organization that uses computers, causing lost work hours during computer downtime, lost or corrupted data, missed sales opportunities, high IT support and maintenance costs, and low customer satisfaction.

13.2. InfoWorld [Fos06]: reported that the quality problem had not gotten any better.

13.3. Today, software quality remains an issue, but who is to blame? Customers blame developers, arguing that sloppy practices lead to low-quality software. Developers blame customers (and other stakeholders), arguing that irrational delivery dates and a continuing stream of changes force them to deliver software before it has been fully validated.

14. Quality

14.1. The American Heritage Dictionary defines quality as: “a characteristic or attribute of something.”

14.2. For software, two kinds of quality may be encountered:

14.2.1. Quality of design encompasses requirements, specifications, and the design of the system.

14.2.2. Quality of conformance is an issue focused primarily on implementation.

14.2.3. User satisfaction = compliant product + good quality + delivery within budget and schedule

15. The Software Quality Dilemma

15.1. If you produce a software system that has terrible quality, you lose because no one will want to buy it.

15.2. If on the other hand you spend infinite time, extremely large effort, and huge sums of money to build the absolutely perfect piece of software, then it's going to take so long to complete and it will be so expensive to produce that you'll be out of business anyway.

15.3. Either you missed the market window, or you simply exhausted all your resources.

15.4. So people in industry try to get to that magical middle ground where the product is good enough not to be rejected right away, such as during evaluation, but also not the object of so much perfectionism and so much work that it would take too long or cost too much to complete. [Ven03]

16. Quality and Risk

16.1. “People bet their jobs, their comforts, their safety, their entertainment, their decisions, and their very lives on computer software. It better be right.”

16.1.1. Example:

16.1.1.1. Throughout the month of November 2000, at a hospital in Panama, 28 patients received massive overdoses of gamma rays during treatment for a variety of cancers. In the months that followed, five of these patients died from radiation poisoning and 15 others developed serious complications. What caused this tragedy? A software package, developed by a U.S. company, was modified by hospital technicians to compute modified doses of radiation for each patient.

17. Quality and Security

17.1. Gary McGraw comments [Wil05]:

17.1.1. “Software security relates entirely and completely to quality. You must think about security, reliability, availability, dependability—at the beginning, in the design, architecture, test, and coding phases, all through the software life cycle [process]. Even people aware of the software security problem have focused on late life-cycle stuff. The earlier you find the software problem, the better. And there are two kinds of software problems. One is bugs, which are implementation problems. The other is software flaws—architectural problems in the design. People pay too much attention to bugs and not enough on flaws.”

18. Achieving Software Quality

18.1. Critical success factors:

18.1.1. Software Engineering Methods

18.1.2. Project Management Techniques

18.1.3. Quality Control

18.1.4. Quality Assurance

19. What Are Reviews?

19.1. a meeting conducted by technical people for technical people

19.2. a technical assessment of a work product created during the software engineering process

19.3. a software quality assurance mechanism

19.4. a training ground

20. What Reviews Are Not

20.1. A project summary or progress assessment

20.2. A meeting intended solely to impart information

20.3. A mechanism for political or personal reprisal!

21. Informal Reviews

21.1. Informal reviews include:

21.1.1. a simple desk check of a software engineering work product with a colleague

21.1.2. a casual meeting (involving more than 2 people) for the purpose of reviewing a work product, or

21.1.3. the review-oriented aspects of pair programming

21.2. pair programming encourages continuous review as a work product (design or code) is created.

21.2.1. The benefit is immediate discovery of errors and better work product quality as a consequence.

22. Conducting the Review

22.1. Review the product, not the producer.

22.2. Set an agenda and maintain it.

22.3. Limit debate and rebuttal.

22.4. Enunciate problem areas, but don't attempt to solve every problem noted.

22.5. Take written notes.

22.6. Limit the number of participants and insist upon advance preparation.

22.7. Develop a checklist for each product that is likely to be reviewed.

22.8. Allocate resources and schedule time for FTRs.

22.9. Conduct meaningful training for all reviewers.

22.10. Review your early reviews.

23. Elements of SQA

23.1. Standards

23.2. Reviews and Audits

23.3. Testing

23.4. Error/defect collection and analysis

23.5. Change management

23.6. Education

23.7. Vendor management

23.8. Security management

23.9. Safety

23.10. Risk management

24. SQA Goals

24.1. Requirements quality. The correctness, completeness, and consistency of the requirements model will have a strong influence on the quality of all work products that follow.

24.2. Design quality. Every element of the design model should be assessed by the software team to ensure that it exhibits high quality and that the design itself conforms to requirements.

24.3. Code quality. Source code and related work products (e.g., other descriptive information) must conform to local coding standards and exhibit characteristics that will facilitate maintainability.

24.4. Quality control effectiveness. A software team should apply limited resources in a way that has the highest likelihood of achieving a high quality result.