Software Security

1. Part 2: Seven Touchpoints for Software Security

1.1. 3. Introduction to Software Security Touchpoints

1.1.1. Flyover: Seven Terrific Touchpoints

1.1.1.1. 1. Code Review (Tools)

1.1.1.2. 2. Architectural Risk Analysis

1.1.1.3. 3. Penetration Testing

1.1.1.4. 4. Risk-Based Security Testing

1.1.1.5. 5. Abuse Cases

1.1.1.6. 6. Security Requirements

1.1.1.7. 7. Security Operations

1.1.1.8. *. External Analysis

1.1.1.9. Security Requirements

1.1.1.10. Why Only Seven?

1.1.2. Black and White: Two Threads Inextricably Intertwined

1.1.3. Moving Left

1.1.4. Touchpoints as Best Practices

1.1.4.1. Coder's Corner

1.1.5. Who Should Do Software Security?

1.1.5.1. Building a Software Security Group

1.1.5.1.1. Don't start with security people

1.1.5.1.2. Software Security in the Academy

1.1.5.1.3. Start with software people

1.1.5.2. Software Security Is a Multidisciplinary Effort

1.1.5.2.1. Creativity in a New Discipline

1.1.6. Touchpoints to Success

1.2. 4. Code Review with a Tool

1.2.1. Catching Implementation Bugs Early (with a Tool)

1.2.1.1. Binary Analysis?!

1.2.2. Aim for Good, Not Perfect

1.2.3. Ancient History

1.2.4. Approaches to Static Analysis

1.2.4.1. A History of Rule Coverage

1.2.4.2. Modern Rules

1.2.5. Tools from Researchland

1.2.5.1. Modern Security Rules Schema

1.2.5.2. A Complete Modern Rule

1.2.6. Commercial Tool Vendors

1.2.6.1. Commercial Source Code Analyzers

1.2.6.2. Key Characteristics of a Tool

1.2.6.2.1. Be designed for security

1.2.6.2.2. Support multiple tiers

1.2.6.2.3. Be extensible

1.2.6.2.4. Be useful for security analysts and developers alike

1.2.6.2.5. Support existing development processes

1.2.6.2.6. Make sense to multiple stakeholders

1.2.6.3. Three Characteristics to Avoid

1.2.6.3.1. Too many false positives

1.2.6.3.2. Spotty integration with IDEs

1.2.6.3.3. Single-minded support for C

1.2.6.4. The Fortify Source Code Analysis Suite

1.2.6.4.1. The Fortify Knowledge Base

1.2.6.4.2. Using Fortify

1.2.7. Touchpoint Process: Code Review

1.2.8. Use a Tool to Find Security Bugs
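
As a concrete companion to this chapter's outline, here is a minimal, hypothetical C sketch of the kind of implementation bug a source code analyzer flags. The function and buffer names are invented for the example, and the specific finding (an unbounded strcpy into a small stack buffer) is only one of the many rule categories such tools cover.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative vulnerable code of the kind a static analysis tool flags:
     * an unbounded copy into a fixed-size stack buffer. */
    static void greet(const char *input)
    {
        char name[16];
        strcpy(name, input);             /* typical finding: unbounded copy */
        printf("hello, %s\n", name);
    }

    int main(void)
    {
        char line[64];
        if (scanf("%63s", line) == 1)    /* a bare "%s" here would also be flagged */
            greet(line);
        return 0;
    }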

1.3. 5. Architectural Risk Analysis

1.3.1. Common Themes among Security Risk Analysis Approaches

1.3.1.1. Basic substeps

1.3.1.1.1. Learn as much as possible about the target of analysis

1.3.1.1.2. Discuss security issues surrounding the software

1.3.1.1.3. Determine probability of compromise

1.3.1.1.4. Perform impact analysis

1.3.1.1.5. Rank risks

1.3.1.1.6. Develop a mitigation strategy

1.3.1.1.7. Report findings

1.3.1.2. Risk Analysis in Practice

1.3.1.3. Risk Analysis Fits in the RMF

1.3.2. Traditional Risk Analysis Terminology

1.3.3. Knowledge Requirement

1.3.4. The Necessity of a Forest-Level View

1.3.5. A Traditional Example of a Risk Calculation
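
For readers using this outline as a quick refresher: the traditional calculation referred to in the item above is typically an annualized loss expectancy, ALE = SLE x ARO. As a worked example, a single loss expectancy of $10,000 and an annualized rate of occurrence of 0.5 yields an ALE of $5,000 per year; the difficulty of applying such point estimates to software design flaws is exactly what the next item, Limitations of Traditional Approaches, takes up.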

1.3.6. Limitations of Traditional Approaches

1.3.7. Modern Risk Analysis

1.3.8. Touchpoint Process: Architectural Risk Analysis

1.3.9. Getting Started with Risk Analysis

1.3.10. Architectural Risk Analysis Is a Necessity

1.4. 6. Software Penetration Testing

1.4.1. Software Penetration Testing

1.4.2. Penetration Testing Today

1.4.3. Software Penetration Testing: A Better Approach

1.4.4. Incorporating Findings Back into Development

1.4.5. Using Penetration Tests to Assess the Application Landscape

1.4.6. Proper Penetration Testing Is Good

1.5. 7. Risk-Based Security Testing

1.5.1. From Outside → In to Inside → Out

1.5.2. What's So Different about Security?

1.5.3. Risk Management and Security Testing

1.5.4. How to Approach Security Testing

1.5.4.1. Who

1.5.4.2. How

1.5.4.3. An Example: Java Card Security Testing

1.5.4.3.1. Automating Security Testing

1.5.4.3.2. Results: Nonfunctional Security Testing Is Essential

1.5.4.4. Coder's Corner

1.5.5. Thinking about (Malicious) Input

1.5.5.1. eXtreme Programming and Security Testing

1.5.6. Getting Over Input

1.5.7. Leapfrogging the Penetration Test

1.6. 8. Abuse Cases

1.6.1. Holding Software Vendors Accountable

1.6.2. Security Is Not a Set of Features

1.6.3. What You Can't Do

1.6.4. Creating Useful Abuse Cases

1.6.4.1. But No One Would Ever Do That!

1.6.5. Touchpoint Process: Abuse Case Development

1.6.5.1. Creating Anti-Requirements

1.6.5.1.1. Coder's Corner

1.6.5.2. Creating an Attack Model

1.6.6. An Abuse Case Example

1.6.6.1. Attack Patterns from Exploiting Software

1.6.6.1.1. Make the client invisible

1.6.6.1.2. Target programs that write to privileged OS resources

1.6.6.1.3. Use a user-supplied configuration file to run commands that elevate privilege

1.6.6.1.4. Make use of configuration file search paths

1.6.6.1.5. Direct access to executable files

1.6.6.1.6. Embedding scripts within scripts

1.6.6.1.7. Leverage executable code in nonexecutable files

1.6.6.1.8. Argument injection

1.6.6.1.9. Command delimiters

1.6.6.1.10. Multiple parsers and double escapes

1.6.6.1.11. User-supplied variable passed to filesystem calls

1.6.6.1.12. Postfix NULL terminator

1.6.6.1.13. Postfix, null terminate, and backslash

1.6.6.1.14. Relative path traversal

1.6.6.1.15. Client-controlled environment variables

1.6.6.1.16. User-supplied global variables (DEBUG=1, PHP Globals,...)

1.6.6.1.17. Session ID, resource ID, and blind trust

1.6.6.1.18. Analog in-band switching signals (aka "Blue Boxing")

1.6.6.1.19. Attack pattern fragment: manipulating terminal devices

1.6.6.1.20. Simple script injection

1.6.6.1.21. Embedding scripts in nonscript elements

1.6.6.1.22. XSS in HTTP headers

1.6.6.1.23. HTTP query strings

1.6.6.1.24. User-controlled filenames

1.6.6.1.25. Passing local filenames to functions that expect a URL

1.6.6.1.26. Meta-characters in e-mail headers

1.6.6.1.27. Filesystem function injection, content based

1.6.6.1.28. Client-side injection, buffer overflow

1.6.6.1.29. Cause web server misclassification

1.6.6.1.30. Alternate encoding of the leading ghost characters

1.6.6.1.31. Using slashes in alternate encoding

1.6.6.1.32. Using escaped slashes in alternate encoding

1.6.6.1.33. Unicode encoding

1.6.6.1.34. UTF-8 encoding

1.6.6.1.35. URL encoding

1.6.6.1.36. Alternative IP addresses

1.6.6.1.37. Slashes and URL encoding combined

1.6.6.1.38. Web logs

1.6.6.1.39. Overflow binary resource files

1.6.6.1.40. Overflow variables and tags

1.6.6.1.41. Overflow symbolic links

1.6.6.1.42. MIME conversion

1.6.6.1.43. HTTP cookies

1.6.6.1.44. Filter failure through buffer overflow

1.6.6.1.45. Buffer overflow with environment variables

1.6.6.1.46. Buffer overflow in an API call

1.6.6.1.47. Buffer overflow in local command-line utilities

1.6.6.1.48. Parameter expansion

1.6.6.1.49. String format overflow in syslog()
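
To make a couple of the patterns listed above concrete (command delimiters and argument injection), here is a minimal, hypothetical C sketch; the use of cat, the buffer size, and the example payload are invented for illustration, not taken from the attack catalog itself.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical sketch of the "command delimiters" / "argument injection"
     * patterns: attacker-controlled input is spliced into a shell command,
     * so an argument like "notes.txt; rm -rf ~" runs a second command. */
    int main(int argc, char *argv[])
    {
        char cmd[512];

        if (argc < 2) {
            fprintf(stderr, "usage: %s <filename>\n", argv[0]);
            return 1;
        }

        /* The filename comes straight from the user and is never validated. */
        snprintf(cmd, sizeof(cmd), "cat %s", argv[1]);
        return system(cmd);   /* shell metacharacters in argv[1] are honored */
    }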

1.6.7. Abuse Cases Are Useful

1.7. 9. Software Security Meets Security Operations

1.7.1. Don't Stand So Close to Me

1.7.2. Kumbaya (for Software Security)

1.7.2.1. Requirements: Abuse Cases

1.7.2.2. Design: Business Risk Analysis

1.7.2.3. Design: Architectural Risk Analysis

1.7.2.4. Test Planning: Security Testing

1.7.2.5. Implementation: Code Review

1.7.2.6. System Testing: Penetration Testing

1.7.2.6.1. Know When Enough Is Too Much

1.7.2.7. Fielded System: Deployment and Operations

1.7.3. Come Together (Right Now)

1.7.3.1. The Infosec Boogey Man

1.7.3.2. Coder's Corner

1.7.4. Future's So Bright, I Gotta Wear Shades

2. Part 3: Software Security Grows Up

2.1. 10. An Enterprise Software Security Program

2.1.1. The Business Climate

2.1.2. Building Blocks of Change

2.1.2.1. Overcoming Common Pitfalls

2.1.2.1.1. Over-reliance on Late-Lifecycle Testing

2.1.2.1.2. Management without Measurement

2.1.2.1.3. Training without Assessment

2.1.2.1.4. Lack of High-Level Commitment

2.1.2.2. Cigital change program maturity path sequence (6 phases)

2.1.2.2.1. Stop the bleeding

2.1.2.2.2. Harvest the low-hanging fruit

2.1.2.2.3. Establish a foundation

2.1.2.2.4. Craft core competencies

2.1.2.2.5. Develop differentiators

2.1.2.2.6. Build out nice-to-haves

2.1.3. Building an Improvement Program

2.1.4. Establishing a Metrics Program

2.1.4.1. A Three-Step Enterprise Rollout

2.1.5. Continuous Improvement

2.1.6. What about COTS (and Existing Software Applications)?

2.1.6.1. An Enterprise Information Architecture

2.1.7. Adopting a Secure Development Lifecycle

2.2. 11. Knowledge for Software Security

2.2.1. Experience, Expertise, and Security

2.2.2. Security Knowledge: A Unified View

2.2.2.1. Software Security Unified Knowledge Architecture

2.2.2.2. A Bird's-Eye View of Software Security Knowledge Catalogs

2.2.2.2.1. Principles

2.2.2.2.2. Guidelines

2.2.2.2.3. Rules

2.2.2.2.4. Attack patterns

2.2.2.2.5. Historical risks

2.2.2.2.6. Vulnerabilities

2.2.2.2.7. Exploits

2.2.3. Security Knowledge and the Touchpoints

2.2.4. The Department of Homeland Security Build Security In Portal

2.2.4.1. Knowledge Catalog: Principle Item: Principle of Least Privilege

2.2.4.1.1. A Principle

2.2.4.1.2. A Rule
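
As a hedged illustration of how the least-privilege principle named above is commonly coded on POSIX systems (this sketch is not drawn from the portal's catalog entry), the code below performs its one privileged operation, then drops elevated rights permanently and verifies that the drop succeeded.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Minimal POSIX sketch of least privilege: keep elevated rights only as
     * long as they are needed, then drop them and check that the drop worked. */
    int main(void)
    {
        /* ... perform the single privileged operation here, e.g. bind port 80 ... */

        /* Drop group privileges before user privileges, and check both calls. */
        if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
            perror("dropping privileges failed");
            return EXIT_FAILURE;
        }

        /* Everything from this point on runs with ordinary user rights. */
        return EXIT_SUCCESS;
    }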

2.2.4.2. Aspects of Software Assurance

2.2.4.2.1. Best Practices

2.2.4.2.2. Knowledge

2.2.4.2.3. Tools

2.2.4.2.4. Business case

2.2.4.2.5. Dynamic navigation

2.2.5. Knowledge Management Is Ongoing

2.2.6. Software Security Now

2.3. 12. A Taxonomy of Coding Errors

2.3.1. On Simplicity: Seven Plus or Minus Two

2.3.1.1. Input Validation and Representation

2.3.1.2. API Abuse

2.3.1.3. Security Features

2.3.1.4. Time and State

2.3.1.5. Error Handling

2.3.1.6. Code Quality

2.3.1.7. Encapsulation

2.3.1.8. Environment

2.3.2. The Phyla

2.3.2.1. Input Validation and Representation

2.3.2.1.1. Buffer Overflow

2.3.2.1.2. Command Injection

2.3.2.1.3. Cross-Site Scripting

2.3.2.1.4. Format String

2.3.2.1.5. HTTP Response Splitting

2.3.2.1.6. Illegal Pointer Value

2.3.2.1.7. Integer Overflow

2.3.2.1.8. Log Forging

2.3.2.1.9. Path Traversal

2.3.2.1.10. Process Control

2.3.2.1.11. Resource Injection

2.3.2.1.12. Setting Manipulation

2.3.2.1.13. SQL Injection

2.3.2.1.14. String Termination Error

2.3.2.1.15. Struts

2.3.2.1.16. Unsafe JNI

2.3.2.1.17. Unsafe Reflection

2.3.2.1.18. XML Validation

2.3.2.2. API Abuse

2.3.2.2.1. Dangerous Function

2.3.2.2.2. Directory Restriction

2.3.2.2.3. Heap Inspection

2.3.2.2.4. J2EE Bad Practices

2.3.2.2.5. Often Misused

2.3.2.2.6. Unchecked Return Value

2.3.2.3. Security Features

2.3.2.3.1. Insecure Randomness

2.3.2.3.2. Least Privilege Violation

2.3.2.3.3. Missing Access Control

2.3.2.3.4. Password Management

2.3.2.3.5. Privacy Violation

2.3.2.4. Time and State

2.3.2.4.1. Deadlock

2.3.2.4.2. Failure to Begin a New Session upon Authentication

2.3.2.4.3. File Access Race Condition: TOCTOU

2.3.2.4.4. Insecure Temporary File

2.3.2.4.5. J2EE Bad Practices

2.3.2.4.6. Signal Handling Race Conditions

2.3.2.5. Error Handling

2.3.2.5.1. Catch NullPointerException

2.3.2.5.2. Empty Catch Block

2.3.2.5.3. Overly Broad Catch Block

2.3.2.5.4. Overly Broad Throws Declaration

2.3.2.5.5. Unchecked Return Value

2.3.2.6. Code Quality

2.3.2.6.1. Double Free

2.3.2.6.2. Inconsistent Implementations

2.3.2.6.3. Memory Leak

2.3.2.6.4. Null Dereference

2.3.2.6.5. Obsolete

2.3.2.6.6. Undefined Behavior

2.3.2.6.7. Uninitialized Variable

2.3.2.6.8. Unreleased Resource

2.3.2.6.9. Use After Free

2.3.2.7. Encapsulation

2.3.2.7.1. Comparing Classes by Name

2.3.2.7.2. Data Leaking Between Users

2.3.2.7.3. Leftover Debug Code

2.3.2.7.4. Mobile Code

2.3.2.7.5. Private Array-Typed Field Returned from a Public Method

2.3.2.7.6. Public Data Assigned to Private Array-Typed Field

2.3.2.7.7. System Information Leak

2.3.2.7.8. Trust Boundary Violation

2.3.2.8. Environment

2.3.2.8.1. ASP.NET Misconfiguration

2.3.2.8.2. Insecure Compiler Optimization

2.3.2.8.3. J2EE Misconfiguration

2.3.2.9. More Phyla Needed
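
As an illustration of one phylum cataloged above, File Access Race Condition: TOCTOU, here is a minimal, hypothetical C sketch; the path and the particular check/use pair are invented for the example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical TOCTOU sketch: the permission check (access) and the use
     * (open) refer to the path separately, so the file can be replaced with
     * a symlink between the two calls. */
    int main(void)
    {
        const char *path = "/tmp/report";        /* illustrative path */

        if (access(path, W_OK) == 0) {           /* time of check */
            int fd = open(path, O_WRONLY);       /* time of use */
            if (fd >= 0) {
                if (write(fd, "ok\n", 3) != 3)
                    perror("write");
                close(fd);
            }
        }
        return 0;
    }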

2.3.3. A Complete Example

2.3.3.1. Often Misused: Authentication

2.3.3.1.1. Abstract

2.3.3.1.2. Explanation

2.3.3.1.3. Recommendations
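
As a hedged companion to the complete example above, here is a minimal C sketch of the kind of misuse the Often Misused: Authentication entry warns about; the specific call shown (getlogin) and the hard-coded "admin" check are illustrative assumptions, not quoted from the entry.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Sketch of misplaced trust: getlogin() reflects the login record of the
     * controlling terminal, may return NULL or a stale name, and is not a
     * safe basis for an access-control decision. */
    int main(void)
    {
        char *user = getlogin();                     /* may be NULL or wrong */

        if (user != NULL && strcmp(user, "admin") == 0) {
            puts("granting administrative access");  /* unsafe trust decision */
        } else {
            puts("access denied");
        }
        return 0;
    }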

2.3.4. Lists, Piles, and Collections

2.3.4.1. Academic Literature

2.3.4.1.1. Vulnerabilities

2.3.4.1.2. Attacks

2.3.4.1.3. Toward a Taxonomy

2.3.4.2. Nineteen Sins Meet Seven Kingdoms

2.3.4.2.1. Input Validation and Representation

2.3.4.2.2. API Abuse

2.3.4.2.3. Security Features

2.3.4.2.4. Time and State

2.3.4.2.5. Error Handling

2.3.4.2.6. Code Quality

2.3.4.2.7. Encapsulation

2.3.4.2.8. Environment

2.3.4.3. Seven Kingdoms and the OWASP Ten

2.3.4.3.1. Input Validation and Representation

2.3.4.3.2. API Abuse

2.3.4.3.3. Security Features

2.3.4.3.4. Time and State

2.3.4.3.5. Error Handling

2.3.4.3.6. Code Quality

2.3.4.3.7. Encapsulation

2.3.4.3.8. Environment

2.3.5. Go Forth (with the Taxonomy) and Prosper

2.3.5.1. Taxonomy goals

2.3.5.1.1. Simple

2.3.5.1.2. Intuitive to a developer

2.3.5.1.3. Practical (rather than theoretical and comprehensive)

2.3.5.1.4. Amenable to automatic identification of errors with static analysis tools

2.3.5.1.5. Adaptable with respect to changes in trends that happen over time

3. Part 1: Software Security Fundamentals

3.1. 1. Defining a Discipline

3.1.1. The Security Problem

3.1.1.1. The Trinity of Trouble: Why the Problem Is Growing

3.1.1.1.1. Connectivity

3.1.1.1.2. Extensibility

3.1.1.1.3. Complexity

3.1.1.2. Basic Science

3.1.2. Security Problems in Software

3.1.2.1. Bugs and Flaws and Defects, Oh My!

3.1.2.1.1. Defect

3.1.2.1.2. Bug

3.1.2.1.3. Flaw

3.1.2.1.4. Risk

3.1.2.2. The Range of Defects

3.1.2.3. The Problem with Application Security

3.1.2.3.1. Application Security Testing Tools: Good or Bad?

3.1.2.4. Software Security and Operations

3.1.2.4.1. Security versus Software

3.1.3. Solving the Problem: The Three Pillars of Software Security

3.1.3.1. Pillar 1: Applied Risk Management

3.1.3.2. Pillar 2: Software Security Touchpoints

3.1.3.2.1. Microsoft's Trustworthy Computing Initiative

3.1.3.3. Pillar 3: Knowledge

3.1.4. The Rise of Security Engineering

3.1.4.1. Software Security Is Everyone's Job

3.2. 2. A Risk Management Framework

3.2.1. Putting Risk Management into Practice

3.2.2. How to Use This Chapter

3.2.3. The Five Stages of Activity

3.2.3.1. Stage 1: Understand the Business Context

3.2.3.2. Stage 2: Identify the Business and Technical Risks

3.2.3.3. Stage 3: Synthesize and Rank the Risks

3.2.3.4. Stage 4: Define the Risk Mitigation Strategy

3.2.3.5. Stage 5: Carry Out Fixes and Validate

3.2.3.6. Measuring and Reporting on Risk

3.2.4. The RMF Is a Multilevel Loop

3.2.5. Applying the RMF: KillerAppCo's iWare 1.0 Server

3.2.5.1. Understanding the Business Context

3.2.5.1.1. Gathering the Artifacts

3.2.5.1.2. Conducting Project Research

3.2.5.2. Identifying the Business and Technical Risks

3.2.5.2.1. Developing Risk Questionnaires

3.2.5.2.2. Interviewing the Target Project Team

3.2.5.2.3. Analyzing the Research and Interview Data

3.2.5.2.4. Uncovering Technical Risks

3.2.5.2.5. Analyzing Software Artifacts

3.2.5.3. Synthesizing and Ranking the Risks

3.2.5.3.1. Reviewing the Risk Data

3.2.5.3.2. Conducting the Business and Technical Peer Review

3.2.5.4. Defining the Risk Mitigation Strategy

3.2.5.4.1. Brainstorming on Risk Mitigation

3.2.5.4.2. Authoring the Risk Analysis Report

3.2.5.4.3. Producing Final Deliverables

3.2.5.5. Carrying Out Fixes and Validating

3.2.6. The Importance of Measurement

3.2.6.1. Measuring Return

3.2.6.2. Measurement and Metrics in the RMF

3.2.7. The Cigital Workbench

3.2.8. Risk Management Is a Framework for Software Security