1. Review type
1.1. Informal review
1.1.1. Purpose: detect potential defects
1.1.2. not based on a formal (documented) process
1.1.3. led by the author
1.1.4. performed: as a buddy check or by more people
1.1.5. result: may be documented
1.1.6. checklist: optional
1.2. Walkthrough
1.2.1. purpose: find defects
1.2.2. individual preparation: optional
1.2.3. led by author
1.2.4. scribe: mandatory
1.2.5. result: potential defect log and review report
1.2.6. checklist: optional
1.2.7. form: scenarios, dry runs, or simulations
1.3. Technical review
1.3.1. purpose: gaining consensus, detecting potential defects
1.3.2. individual preparation: required
1.3.3. reviewer: peer or expert
1.3.4. led by a facilitator (not the author)
1.3.5. scribe: mandatory (not the author)
1.3.6. checklist: optional
1.3.7. result: potential defect log and review report
1.4. Inspection
1.4.1. purpose: detect potential defects
1.4.2. based on rules and checklists
1.4.3. individual preparation: required
1.4.4. reviewer: peer or expert
1.4.5. scribe: mandatory
1.4.6. result: potential defect log and review report
1.5. Ad hoc
1.5.1. no guidance
1.5.2. dependent on reviewer skill
1.6. Checklist-based
1.7. scenarios and dry runs
1.8. Perspective-based
1.9. role-based
2. Static testing
2.1. 5 activities
2.1.1. Planning
2.1.2. initiate review
2.1.3. individual review
2.1.4. issue communication and analysis
2.1.5. fixing and reporting
2.2. 6 roles
2.2.1. Author
2.2.2. Managers
2.2.2.1. responsible for review planning
2.2.2.2. Decides on the execution of reviews
2.2.2.3. assigns staff, budget, and time
2.2.2.4. monitors ongoing cost-effectiveness
2.2.3. Facilitators/ Moderator
2.2.3.1. ensures the review runs effectively
2.2.3.2. mediates between the various points of view
2.2.3.3. is often the person upon whom the success of the review depends
2.2.4. Review leader
2.2.4.1. takes overall responsibility for the review
2.2.4.2. decides who will be involved and organizes when and where it will take place
2.2.5. Reviewer
2.2.5.1. a person working on the project
2.2.5.2. identifies potential defects
2.2.5.3. may represent different perspectives
2.2.6. Scribe/ Recorder
2.2.6.1. collates potential defects
2.2.6.2. records new potential defects
3. White box
3.1. Coverage = number of coverage items exercised / total number of coverage items
3.2. Statement coverage = number of statements exercised / total number of statements
3.3. Decision coverage = number of decision outcomes exercised / total number of decision outcomes
3.4. Cyclomatic complexity = number of decision points (diamonds in the control flow graph) + 1
3.5. Path coverage = all possible paths through the code
3.6. 100% branch/decision coverage means 100% statement coverage
3.7. 100% path coverage means 100% decision coverage
3.8. 100% LCSAJ coverage means 100% branch coverage
3.9. 100% coverage does not mean 100% tested
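The coverage formulas above reduce to simple arithmetic; a minimal sketch (all function names are illustrative, not from any real coverage tool):

```python
# Illustrative helpers for the white-box formulas above.
# Real coverage tools compute these values automatically.

def coverage_pct(items_exercised: int, items_total: int) -> float:
    """Coverage = coverage items exercised / total coverage items, as a %."""
    return 100.0 * items_exercised / items_total

def cyclomatic_complexity(decision_points: int) -> int:
    """Rule of thumb from the notes: decision points (diamonds) + 1."""
    return decision_points + 1

# Example: 30 of 40 statements exercised, 6 of 8 decision outcomes exercised,
# and a function with 3 if/while decisions.
statement_cov = coverage_pct(30, 40)   # 75.0
decision_cov = coverage_pct(6, 8)      # 75.0
complexity = cyclomatic_complexity(3)  # 4
```

Note how the same generic formula (3.1) underlies both statement coverage (3.2) and decision coverage (3.3); only the coverage item changes.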
4. Tester/ test lead
4.1. test lead
4.1.1. Coordinate test strategy/plan
4.1.2. test planning
4.1.3. initiate the specification, preparation, implementation, and execution of tests
4.1.4. monitor the test results and check the exit criteria
4.1.5. adapt planning based on test results and progress
4.1.6. manage the configuration of testware
4.1.7. introduce suitable metrics
4.1.8. decide what should be automated
4.1.9. select tools
4.1.10. decide about the implementation of the test environment
4.1.11. write the test summary report
4.2. tester
4.2.1. select test frameworks
4.2.2. review and contribute to test plans
4.2.3. set up the test environment
4.2.4. automate tests
4.2.5. review tests developed by others
4.2.6. Measure performance of components and systems
4.2.7. Use test administration/management tools, test monitoring tools, and test execution tools
5. Defect management
5.1. Objectives
5.1.1. provide developers and other parties with information about any adverse events, enabling them to identify and correct potential defects
5.1.2. Provide test managers with a means of tracking the quality of the work products
5.1.3. Provide ideas for development and test process improvement
5.2. Defect report template
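A minimal defect report template, sketched as a Python dict. The field names and workflow are illustrative; real templates (e.g., IEEE 829 incident reports or a defect tracker's schema) define their own mandatory fields.

```python
# Hypothetical defect (incident) report template; all values are made up.
defect_report = {
    "id": "DEF-001",                      # unique identifier
    "title": "Login button unresponsive on second click",
    "severity": "major",                  # impact on the system
    "priority": "high",                   # urgency of the fix
    "status": "new",                      # e.g., new -> assigned -> fixed -> verified -> closed
    "environment": "Chrome 120 / Windows 11 / build 1.4.2",
    "steps_to_reproduce": [
        "Open the login page",
        "Click 'Log in' twice in quick succession",
    ],
    "expected_result": "A single login request is sent",
    "actual_result": "The second click freezes the page",
}
```

Separating severity (impact) from priority (urgency) supports objective 5.1.2: managers can track quality and decide fix order independently.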
6. Standards
6.1. ISO 25010
6.1.1. Strategy: Methodical
6.1.2. Characteristic
6.1.2.1. reliability
6.1.2.2. usability
6.1.2.3. performance efficiency
6.1.2.4. maintainability
6.1.2.5. portability
6.2. IEEE 829
6.2.1. Templates for test plans, incident reports, etc.
6.2.2. test approach: test design specification
7. Maintenance
7.1. Scope depends on
7.1.1. degree of risk of the change
7.1.2. size of the existing system
7.1.3. size of the change
7.2. triggers/indicators
7.2.1. modifications, such as planned enhancements, corrective and emergency changes, and changes of environment (e.g., planned operating system or database upgrades)
7.2.2. Migration, such as from one platform to another
7.2.2.1. retirement - when a system is retired, testing of data migration or archiving may be required
7.2.2.2. restore/retrieve - testing after archiving for long retention periods may also be needed
8. Tools
8.1. Support for management of testing and testware
8.1.1. Test management tools
8.1.2. Requirements management tools
8.1.3. Defect management tools
8.1.4. Configuration management tools
8.1.5. Continuous integration tools
8.1.6. most useful for reporting test metrics
8.2. Support for static testing
8.2.1. tools that support reviews
8.2.2. static analysis tools
8.2.3. Static analysis tools can calculate metrics from the code
8.2.4. An effective and relatively low-cost method of finding defects
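As a toy illustration of 8.2.3, a static analysis can compute a metric from source code without executing it. This sketch uses Python's standard `ast` module to count decision points, giving a rough cyclomatic-complexity figure; real static analysis tools are far more thorough.

```python
import ast

# Toy static analysis: parse source code (never execute it) and count
# decision points to estimate cyclomatic complexity (decisions + 1).
SOURCE = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"
"""

tree = ast.parse(SOURCE)
decisions = sum(isinstance(node, (ast.If, ast.While, ast.For))
                for node in ast.walk(tree))
complexity = decisions + 1  # two ifs -> complexity 3
```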
8.3. Execution and logging
8.3.1. test execution tool
8.3.1.1. executes test objects using automated test scripts
8.3.2. coverage tool
8.3.3. test harnesses
8.3.4. unit test framework tool
8.4. Performance measurement
8.4.1. performance testing
8.4.2. monitoring
8.4.3. dynamic analysis
8.5. Specialized testing need
8.5.1. data quality assessment
8.5.2. data conversion and migration
8.5.3. usability testing
8.5.4. accessibility testing
8.5.5. localization testing
8.5.6. security testing
8.5.7. portability testing
8.6. Test harness
8.6.1. a tool to generate the test environment and run component tests
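A minimal harness sketch: a stub stands in for a dependency that is not yet available, and a driver (here Python's `unittest` runner) executes the component test. The component and service names are hypothetical.

```python
import unittest

def discount_service_stub(customer_id: str) -> float:
    """Stub replacing a real discount service, so tests are predictable."""
    return 0.10  # fixed 10% discount

def checkout_total(price: float, customer_id: str, discount_service) -> float:
    """Component under test: applies whatever discount the service returns."""
    return round(price * (1 - discount_service(customer_id)), 2)

class CheckoutComponentTest(unittest.TestCase):
    def test_discount_applied(self):
        # Driver exercises the component in isolation via the stub.
        self.assertEqual(checkout_total(100.0, "c42", discount_service_stub), 90.0)

if __name__ == "__main__":
    unittest.main()
```

The harness here is the combination of stub plus driver: it provides the environment the component needs so it can be tested before the real service exists.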
8.7. Coverage measurement tool
8.7.1. through intrusive or non-intrusive means, calculates and measures the percentage of a certain type of code structure that has been exercised. These structures can be statements, branches, or modules
8.8. keyword-driven
8.8.1. action words are defined to cover specific interactions in the system, which testers can then use to build tests
8.8.2. the actions of a tester are recorded in a script that is then generalized to run with several sets of test input data
8.9. pilot project
8.9.1. To assess whether the benefits will be achieved at a reasonable cost
8.9.2. gaining in-depth knowledge about the tool
8.9.3. evaluating how the tool fits with existing processes and practices, and determining what would need to change
8.9.4. deciding on standard ways of using, managing...
8.9.5. understanding the metrics that you wish the tool to collect and report, and configuring...
8.10. Success factor
8.10.1. Rolling out the tool to the rest of the organization incrementally
8.10.2. Adapting and improving processes to fit with the use of the tool
8.10.3. Providing training, coaching, and mentoring for tool users
8.10.4. Defining guidelines for the use of the tool (e.g., internal standards for automation)
9. Review activities (O)
9.1. Planning
9.1.1. defining the review criteria
9.1.2. selecting the personnel
9.1.3. allocating roles
9.1.4. defining entry and exit criteria
9.1.5. checking entry criteria
9.2. kick-off
9.2.1. distributing documents
9.2.2. explaining the objectives, process, and documents to participants
9.3. Individual preparation
9.3.1. preparing for the review meeting
9.3.2. noting potential defects, questions, and comments
9.4. Review (examination/evaluation/recording)
9.4.1. discussing or logging issues
9.4.2. noting defects
9.4.3. examination/evaluation/recording
9.4.4. selecting parts of the document to review
9.5. Rework
9.5.1. fixing defects found
9.5.2. recording the updated status of defects
9.6. follow-up
9.6.1. gathering metrics
9.6.2. checking that defects have been addressed
9.6.3. checking on exit criteria
10. Testing activities (N)
10.1. Planning
10.2. Monitoring and control
10.2.1. test progress report
10.2.2. test summary report
10.3. test analysis
10.3.1. analyzing/evaluating the test basis
10.3.2. defining and prioritizing test conditions
10.3.2.1. test charters
10.3.3. capturing bi-directional traceability to the test basis
10.3.3.1. reporting defects in the test basis
10.4. test design
10.4.1. designing and prioritizing test cases
10.4.2. designing the test environment
10.4.2.1. work products
10.4.2.1.1. high-level test cases
10.4.2.1.2. test data, test environment design
10.4.2.1.3. test conditions defined in test analysis
10.4.3. capturing bi-directional traceability of test cases to test conditions
10.5. test implementation
10.5.1. prioritizing test procedures
10.5.2. creating test suites
10.5.2.1. work products
10.5.2.1.1. test procedures
10.5.2.1.2. test suites
10.5.2.1.3. test data
10.5.2.1.4. concrete expected results
10.5.3. building the test environment
10.5.4. preparing test data
10.6. test execution
10.7. test completion
11. Criteria
11.1. Entry
11.1.1. Availability of testable requirements, user stories, and/or models
11.1.2. availability of test items that have met the exit criteria for any prior test levels
11.1.3. availability of the test environment
11.1.4. availability of test tools
11.1.5. availability of test data and other resources
11.2. Exit
11.2.1. planned tests have been executed
11.2.2. a defined level of coverage (requirements, user stories, acceptance criteria, risks, code) has been achieved
11.2.3. the number of unresolved defects is within an agreed limit
11.2.4. the estimated number of remaining defects is sufficiently low
11.2.5. the levels of reliability, performance efficiency, usability, security, and other relevant quality characteristics are sufficient
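The exit criteria above can be checked mechanically once they are quantified. A hypothetical sketch (argument names and thresholds are illustrative, not from any standard):

```python
# Hypothetical exit-criteria check; every threshold here is an example value
# that a real project would agree on in its test plan.

def exit_criteria_met(tests_executed: int, tests_planned: int,
                      coverage_pct: float, unresolved_defects: int,
                      coverage_target: float = 80.0,
                      defect_limit: int = 5) -> bool:
    """True when planned tests ran, coverage target reached,
    and unresolved defects are within the agreed limit."""
    return (tests_executed >= tests_planned
            and coverage_pct >= coverage_target
            and unresolved_defects <= defect_limit)

exit_criteria_met(100, 100, 85.0, 3)   # all criteria satisfied
exit_criteria_met(100, 100, 70.0, 3)   # coverage below target
```

Qualitative criteria such as 11.2.4 and 11.2.5 (estimated remaining defects, quality characteristic levels) still need human judgment and cannot be reduced to a boolean check like this.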
12. McCall
12.1. Product transition
12.1.1. Portability
12.1.1.1. does the software retain its functionality when installed in a new environment?
12.1.2. Reusability
12.1.2.1. can small modules be reused elsewhere?
12.1.3. Interoperability
12.1.3.1. does the software need to interface with existing systems?
12.2. Product revision
12.2.1. Maintainability
12.2.1.1. the effort needed to maintain the software when defects occur; how the modules are structured
12.2.2. Flexibility
12.2.2.1. the resources needed to change the software when customer requirements change
12.2.3. Testability
12.2.3.1. whether the software supports testing, e.g., by creating log files and backups
12.3. Product Operation
12.3.1. Usability
12.3.1.1. the scale of resources required to train new staff to use the system
12.3.2. Integrity
12.3.2.1. refers to system security, i.e., preventing unauthorized access
12.3.3. Efficiency
12.3.3.1. refers to the hardware resources needed to perform the software's functions
12.3.4. Reliability
12.3.4.1. refers to failures in providing service, e.g., failure rate and system downtime
12.3.5. Correctness
12.3.5.1. specifies the accuracy and completeness of the output