Access Control Systems and Methodology

This mind map covers the Access Control Systems and Methodology domain of the Common Body of Knowledge (CBK). This domain addresses the concepts, models, and administrative practices used to control how subjects access systems and data, and explains the critical role access control plays in providing information system security.


1. Threats

1.1. Application threats

1.1.1. Buffer overflows

1.1.2. Covert channel

1.1.2.1. Timing channel

1.1.2.2. Storage channel

1.1.3. Data remanence

1.1.4. Dumpster diving

1.1.5. Eavesdropping

1.1.6. Emanations

1.1.7. Hackers

1.1.8. Impersonation

1.1.9. Internal intruders

1.1.10. Loss of processing capability

1.1.11. Malicious code

1.1.12. Masquerading/man-in-the-middle attacks

1.1.13. Mobile code

1.1.14. Object reuse

1.1.15. Password crackers

1.1.16. Physical access

1.1.17. Replay

1.1.18. Shoulder surfing

1.1.19. Sniffers

1.1.20. Social engineering

1.1.21. Spoofing

1.1.22. Spying

1.1.23. Targeted data mining

1.1.24. Trapdoor

1.1.25. Tunneling

1.2. Transmission Threats

1.2.1. Passive attacks

1.2.1.1. involve monitoring or eavesdropping on transmissions.

1.2.2. Active attacks

1.2.2.1. involve some modification of the data transmission or the creation of a false transmission.

1.2.3. Denial-of-Service (DoS)

1.2.3.1. occurs when invalid or excessive data is sent in such a way that it confuses or overwhelms the server software, causing it to crash or stop responding.

1.2.3.2. Examples

1.2.3.2.1. E-mail spamming

1.2.3.2.2. Distributed Denial-of-Service

1.2.3.2.3. Ping of Death

1.2.3.2.4. Smurf

1.2.3.2.5. SYN Flooding

1.2.3.3. backhoe transmission loss

1.2.3.3.1. backhoe cuts into the cabling system carrying transmission links

1.2.3.3.2. smart pipes - provide damage detection information. Thus, if a cable were damaged, the smart pipe would be able to determine the type of damage to the cable, the physical position of the damage, and transmit a damage detection notification.

1.2.4. Distributed Denial-of-Service (DDoS)

1.2.4.1. requires the attacker to control many compromised hosts that flood a targeted server with packets until the server crashes.

1.2.4.2. A zombie is a computer infected with a daemon/system agent without the owner's knowledge and subsequently controlled by an attacker

1.2.4.3. Clients: TFN2K

1.2.4.4. Fixes

1.2.5. Ping of Death

1.2.5.1. Fixes

1.2.6. Smurfing

1.2.6.1. Fixes

1.2.7. SYN Flooding

1.2.7.1. Fixes

1.3. Malicious Code Threats

1.3.1. Virus

1.3.2. Worms

1.3.3. Trojan Horse

1.3.4. Logic Bomb

1.3.5. Fixes

1.3.5.1. Antivirus

1.3.5.2. Awareness

1.4. Password Threats

1.4.1. An unauthorized user attempts to steal the file that contains a list of the passwords.

1.4.2. Users may create weak passwords that are easily guessed.

1.4.3. Social engineering can be used to obtain passwords

1.4.4. Sniffers can be used to intercept a copy of the password as it travels from the client to the authentication mechanism.

1.4.5. Trojan horse code can be installed on a workstation that will present an unauthorized login window to the user.

1.4.6. Hardware or software keyboard intercepts can be used to record all data typed into the keyboard
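
Several of the password threats above (theft of the password file, sniffing, weak passwords) are commonly mitigated by storing only salted, slowly computed hashes rather than the passwords themselves. A minimal sketch using Python's standard library; the record format and iteration count are illustrative assumptions, not a recommendation from this outline:

    import hashlib, hmac, os

    def hash_password(password: str, iterations: int = 200_000) -> str:
        # A random per-user salt defeats precomputed (rainbow table) attacks.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

    def verify_password(password: str, stored: str) -> bool:
        _, iterations, salt_hex, digest_hex = stored.split("$")
        candidate = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
        )
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))

    record = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", record)
    assert not verify_password("guess", record)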

2. Top Level

2.1. Accountability

2.2. Access Controls

2.2.1. Discretionary Access Control

2.2.2. Mandatory Access Control

2.3. Lattices

2.4. Methods of Attack

2.4.1. Malicious Code

2.4.1.1. Virus

2.4.1.2. Worm

2.4.1.3. Trojan

2.4.1.4. Logic Bomb

2.4.1.5. Trap Doors

2.4.2. Denial of Service

2.4.2.1. Resource Exhaustion

2.4.2.1.1. Fork Bomb

2.4.2.1.2. Flooding

2.4.2.1.3. Spamming

2.4.3. Cramming

2.4.3.1. Buffer Overflow

2.4.3.1.1. Stack Smashing

2.4.3.2. Specifically crafted URLs

2.4.4. Brute Force

2.4.5. Remote Maintenance

2.4.6. TOC/TOU

2.4.6.1. Time of Check

2.4.6.2. Time of Use

2.4.6.3. Exploits time-based vulnerabilities (a race-condition sketch follows below)
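
A minimal sketch of the race referred to above, assuming a local file an attacker could swap between the check and the use; the file name is hypothetical:

    import os

    path = "report.txt"                      # hypothetical file an attacker could replace
    open(path, "w").write("quarterly data")  # set up a file for the demonstration

    # Vulnerable: the check and the use are two separate steps, so the file can
    # be swapped (e.g. for a symlink to a sensitive file) between them.
    if os.access(path, os.R_OK):             # time of check
        data = open(path).read()             # time of use

    # Safer: open once and operate only on the descriptor already obtained,
    # so the object that was checked is the object that is used.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    try:
        data = os.read(fd, 4096)
    finally:
        os.close(fd)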

2.4.7. Interrupts

2.4.7.1. Faultline Attacks

2.4.7.2. Exploits hardware vulnerabilities

2.4.8. Code alteration

2.4.8.1. Rootkits

2.4.8.2. When someone has altered your code

2.4.9. Inference

2.4.9.1. Learning something through analysis

2.4.9.2. Traffic analysis

2.4.10. Browsing

2.4.10.1. Sift through large volumes of data for information

2.5. Overview

2.5.1. Controlling who can do what

2.5.2. Access Controls protect CIA

2.5.3. Access Controls reduce Risk

2.6. Threats to Access Control

2.6.1. User distrust of biometrics

2.6.1.1. Order of acceptance (most to least accepted)

2.6.1.1.1. Voice Pattern

2.6.1.1.2. Keystroke Pattern

2.6.1.1.3. Signature

2.6.1.1.4. Hand Geometry

2.6.1.1.5. Hand Print

2.6.1.1.6. Finger Print

2.6.1.1.7. Iris

2.6.1.1.8. Retina Pattern

2.6.2. Misuse of privilege

2.6.3. Poor administration knowledge

2.7. Current Practices

2.7.1. Implement MAC if possible

2.7.2. Use third-party tools for RBAC in NDS and AD

2.7.3. Layered defences

2.7.4. Tokens

2.7.5. Biometrics

3. Systems and Methodologies

3.1. Mandatory (MAC)

3.1.1. All data has classification

3.1.2. All users have clearances

3.1.3. All clearances centrally controlled and cannot be overridden

3.1.3.1. Users cannot change security attributes on request

3.1.4. Subjects can access objects only if they have the required access level (clearance); a minimal check is sketched at the end of this MAC section

3.1.5. Also known as Lattice Based Access Control (LBAC)

3.1.6. Examples of MAC

3.1.6.1. Linux

3.1.6.1.1. RSBAC (used by the Adamantix project)

3.1.6.1.2. SELinux (by the NSA)

3.1.6.1.3. LIDS

3.1.6.2. eTrust CA-ACF2

3.1.6.3. Multics-based Honeywell systems

3.1.6.4. SCOMP

3.1.6.5. Pump

3.1.6.6. Purple Penelope

3.1.7. Strengths

3.1.7.1. Controlled by system and cannot be overridden

3.1.7.2. Not subject to user error

3.1.7.3. Enforces strict controls on multilevel security systems

3.1.7.4. Helps prevent information leakage

3.1.8. Weaknesses

3.1.8.1. Protects only information in Digital Form

3.1.8.2. Assumes the following:

3.1.8.2.1. Trusted users/administrators

3.1.8.2.2. Proper clearances have been applied to subjects

3.1.8.2.3. Users do not share accounts or access

3.1.8.2.4. Proper physical security is in place
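
As referenced at 3.1.4, a minimal sketch of the central MAC decision, assuming a simple strictly ordered set of sensitivity levels (the level names and function are illustrative, not any particular product's interface):

    # Hypothetical, strictly ordered sensitivity levels (low to high).
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def mac_read_allowed(subject_clearance: str, object_classification: str) -> bool:
        # The system, not the user, makes the decision: a subject may read an
        # object only if its clearance dominates the object's classification.
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    assert mac_read_allowed("SECRET", "CONFIDENTIAL")
    assert not mac_read_allowed("CONFIDENTIAL", "TOP SECRET")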

3.2. Discretionary (DAC)

3.2.1. User can manage

3.2.1.1. Owners can change security attributes

3.2.2. Administrators can determine access to objects

3.2.3. Examples of DAC

3.2.3.1. Windows NT4.0

3.2.3.2. Most *NIX versions

3.2.3.3. Win2K can be included when the context is limited to files and folders

3.2.4. Strengths

3.2.4.1. Convenient

3.2.4.2. Flexible

3.2.4.3. Gives users control

3.2.4.3.1. Ownership concept

3.2.4.4. Simple to understand

3.2.4.5. Software Personification

3.2.5. Weaknesses

3.2.5.1. No distinction between users and programs

3.2.5.1.1. Processes are user surrogates

3.2.5.1.2. Processes can change access

3.2.5.1.3. DAC is generally subject to the user's arbitrary discretion

3.2.5.2. Higher possibility of unintended results

3.2.5.2.1. Open to malicious software

3.2.5.2.2. Errors can lead to great exposure

3.2.5.2.3. No protection against even simple attacks

3.3. Non-Discretionary

3.4. Role based (RBAC)

3.4.1. Assigns users to roles or groups based on organizational functions

3.4.2. Groups given authorization to certain data

3.4.3. Centralized Authority

3.4.4. Database Management

3.4.5. Based on Capabilities

3.4.6. Access rights established for each role

3.4.7. Examples of RBAC

3.4.7.1. Database functionality

3.4.7.1.1. Adjusting the schema

3.4.7.1.2. Default Sorting Order

3.4.7.1.3. Ability to Query (Select)

3.4.7.2. Microsoft SQL Server database roles

3.4.7.2.1. Data Reader

3.4.7.2.2. Data Writer

3.4.7.2.3. DENY Data Reader

3.4.7.2.4. DENY Data Writer
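
A minimal sketch of role-based checking in the spirit of the examples above; the role names, permissions, and users are hypothetical:

    # Roles map organizational functions to permissions; users are bound to
    # roles, never directly to permissions.
    ROLE_PERMISSIONS = {
        "data_reader": {"select"},
        "data_writer": {"select", "insert", "update", "delete"},
    }
    USER_ROLES = {"alice": {"data_reader"}, "bob": {"data_writer"}}

    def rbac_allowed(user: str, permission: str) -> bool:
        return any(permission in ROLE_PERMISSIONS[r] for r in USER_ROLES.get(user, ()))

    assert rbac_allowed("bob", "insert")
    assert not rbac_allowed("alice", "insert")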

3.5. Rule-Set-Based (RSBAC)

3.5.1. Actions based on Subjects

3.5.1.1. operating on Objects

3.5.2. Based on the Generalized Framework for Access Control (GFAC) by Abrams and LaPadula

3.6. List Based (Access Control Lists)

3.6.1. Associates a list of users and their privileges with each object

3.6.2. Each object has a list of default privileges for unlisted users

3.7. Token Based

3.7.1. Associates a list of objects and their privileges with each User

3.7.2. Opposite of List Based
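
The two structures above are transposes of each other: an ACL attaches a subject-to-rights list to each object, while a token (capability) list attaches an object-to-rights list to each subject. A minimal sketch of both views, with hypothetical users, objects, and rights:

    # List based: each object carries its own access control list,
    # plus a default entry for unlisted users.
    acl = {
        "payroll.db": {"alice": {"read", "write"}, "*default*": set()},
        "handbook.pdf": {"*default*": {"read"}},
    }

    def acl_allowed(user, obj, right):
        entries = acl[obj]
        return right in entries.get(user, entries["*default*"])

    # Token (capability) based: each user carries a list of objects and rights.
    capabilities = {
        "alice": {"payroll.db": {"read", "write"}},
        "bob": {"handbook.pdf": {"read"}},
    }

    def capability_allowed(user, obj, right):
        return right in capabilities.get(user, {}).get(obj, set())

    assert acl_allowed("bob", "handbook.pdf", "read")   # falls back to the default entry
    assert capability_allowed("alice", "payroll.db", "write")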

3.8. New Implementations

3.8.1. Context Based Access Control (CBAC)

3.8.1.1. XML Data Restrictions

3.8.1.2. Quotas

3.8.1.3. Preceding actions

3.8.2. Privacy Aware RBAC (PARBAC)

4. Terms and Principles

4.1. Data owner

4.1.1. CEO

4.1.2. CFO

4.2. Data custodian

4.2.1. CIO

4.2.2. DBA

4.2.3. Server Admin

4.2.4. Network Admin

4.2.5. System Admin

4.3. Least Privilege

4.3.1. Access control needs good administration

4.3.2. Availability versus security

4.3.2.1. Most Secure = No Access

4.3.3. What are the business needs

4.3.4. Reduce the misuse of Privilege

4.4. Centralized Control

4.5. Decentralized Control

4.6. Separation of Duties

4.6.1. Break jobs into multiple segments

4.6.2. The more critical the job, the more segmentation

4.7. Rotation of Duties

4.7.1. Rotate persons through roles

4.7.2. Prevent over-familiarization with roles

4.7.3. Forced Leaves

4.7.3.1. Helps detect fraud

4.8. Access Control Model Terminology

4.8.1. Subjects (Active)

4.8.1.1. Users

4.8.1.2. Processes

4.8.2. Objects (Passive)

4.8.2.1. Files

4.8.2.2. Directories

4.8.2.3. pipes

4.8.2.4. devices

4.8.2.5. sockets

4.8.2.6. ports

4.8.3. Rules (Filters)

4.8.3.1. UNIX

4.8.3.1.1. Read

4.8.3.1.2. Write

4.8.3.1.3. Execute

4.8.3.2. Windows NT4

4.8.3.2.1. Read

4.8.3.2.2. Write

4.8.3.2.3. Execute

4.8.3.2.4. No Access

4.8.4. Labels (Sensitivity)

4.8.4.1. Users/Subjects = Clearances

4.8.4.2. Data objects = Classifications

4.8.4.3. In addition to rules

4.8.4.3.1. Can be used to group Objects

4.8.4.3.2. Can be used to group Subjects

4.8.5. Interaction

4.8.5.1. Subject assigned Security Attributes

4.8.5.2. Objects assigned security attributes

4.8.5.3. Rules = Attributes

4.8.5.4. Rules evaluated in Security Reference Monitor to allow or disallow interaction

4.8.5.5. Interaction dictated by policy

4.8.5.5.1. What are the business rules?

4.8.5.5.2. How are the rules enforced?

4.8.6. Types of Access Control Systems for File Systems

4.8.6.1. Mandatory

4.8.6.2. Discretionary

4.8.6.3. Role Based

4.8.6.4. Must use a Reference Monitor

4.8.6.4.1. Ensures interactions between Subjects and Objects are authorized by policy
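
A minimal sketch of the mediation described above: subjects and objects carry security attributes, and a single reference-monitor function evaluates the rules before any interaction is allowed. The attribute names and rules are illustrative assumptions:

    # Every access request passes through one mediation point.
    subjects = {"alice": {"clearance": 2, "roles": {"auditor"}}}
    objects  = {"audit.log": {"classification": 2, "acl": {"auditor": {"read"}}}}

    def reference_monitor(subject: str, obj: str, operation: str) -> bool:
        s, o = subjects[subject], objects[obj]
        # Rule 1: the subject's clearance must dominate the object's classification.
        if s["clearance"] < o["classification"]:
            return False
        # Rule 2: some role held by the subject must grant the requested operation.
        return any(operation in o["acl"].get(role, set()) for role in s["roles"])

    assert reference_monitor("alice", "audit.log", "read")
    assert not reference_monitor("alice", "audit.log", "write")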

4.9. Pranksters

4.9.1. Hackers who play tricks on others but do not intend to inflict any long-lasting harm.

5. Techniques

5.1. Access Management

5.1.1. Account Administration

5.1.1.1. Most important step

5.1.1.2. Verifies individual before providing access

5.1.1.3. Good time for orientation/training

5.1.2. Maintenance

5.1.2.1. Review Account data

5.1.2.2. Update periodically

5.1.3. Monitoring

5.1.3.1. Logging

5.1.3.2. Review

5.1.4. Revocation

5.1.4.1. Prompt revocation

5.2. Access Control Modes

5.2.1. Information Flow

5.2.1.1. Manages access by evaluating system as a whole

5.2.1.2. Emphasizes "garbage in, garbage out"

5.2.1.3. Closely related to Lattice

5.2.1.3.1. Assigned classes dictate whether an object being accessed by a subject can flow into another class

5.2.1.4. Defined:

5.2.1.4.1. A type of dependency that relates two versions of the same object, and thus transformation of one state into another, at successive points in time.

5.2.1.5. the tuple

5.2.1.5.1. subject

5.2.1.5.2. object

5.2.1.5.3. operation

5.2.1.6. related to access models

5.2.1.6.1. In the lattice model, one security class is assigned to each entity in the system. A flow relation among the security classes is defined to denote that information in one class (s1) may flow into another class (s2).

5.2.1.6.2. In the mandatory model, the access rule (s, o, t) is specified so that the flow relation between the subject (s) and the object (o) holds. Read and write are the only operations (t) considered.

5.2.1.6.3. In the role-based model, a role is defined as a set of operations on objects. The role represents a function or job in the application. The access rule binds subjects to roles.

5.2.2. State Machine

5.2.2.1. Example: Authentication

5.2.2.1.1. Unauthenticated

5.2.2.1.2. Authentication Pending

5.2.2.1.3. Authenticated

5.2.2.1.4. Authorization Pending

5.2.2.1.5. Authorized

5.2.2.2. Captures the state of a system at a given point of time

5.2.2.3. Monitors changes introduced after the initial state

5.2.2.3.1. By chronology

5.2.2.3.2. By Event
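
A minimal sketch of the authentication state machine above: the system tracks the current state and permits only defined transitions, rejecting everything else. The event names are illustrative:

    # Allowed transitions: (current state, event) -> next state.
    TRANSITIONS = {
        ("unauthenticated", "submit_credentials"): "authentication_pending",
        ("authentication_pending", "credentials_ok"): "authenticated",
        ("authentication_pending", "credentials_bad"): "unauthenticated",
        ("authenticated", "request_access"): "authorization_pending",
        ("authorization_pending", "policy_allows"): "authorized",
        ("authorization_pending", "policy_denies"): "authenticated",
    }

    def step(state: str, event: str) -> str:
        # Any undefined (state, event) pair is rejected, keeping the system
        # in a known-good state.
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"illegal transition: {event!r} from {state!r}")
        return TRANSITIONS[(state, event)]

    state = "unauthenticated"
    for event in ("submit_credentials", "credentials_ok", "request_access", "policy_allows"):
        state = step(state, event)
    assert state == "authorized"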

5.2.3. Covert Channels

5.2.3.1. Information flows from higher to lower classifications

5.2.3.2. Can be introduced deliberately

5.2.3.3. Cannot be completely eliminated

5.2.3.4. Uses normal system resources to signal information

5.2.3.5. Additional reading

5.2.3.5.1. SANS Reading Room

5.2.3.5.2. ucsb.edu

5.2.4. Non-Interference

5.2.4.1. Variations in the input at one level should give no way to predict the output observed at another level

5.2.4.2. Each input processing path should be independent and have no internal relationships

6. Access Control Measures

6.1. Preventive

6.1.1. Try to prevent attacks from occurring

6.1.2. Can be partially effective with Defence in Depth

6.1.3. Not always effective

6.1.4. Works with Deterrent measures

6.1.5. Examples

6.1.5.1. Physical

6.1.5.1.1. Fences

6.1.5.1.2. Guards

6.1.5.1.3. Alternate Power Source

6.1.5.1.4. Fire Extinguisher

6.1.5.1.5. Badges, ID Cards

6.1.5.1.6. Mantraps

6.1.5.1.7. Turnstiles

6.1.5.1.8. Limiting access to physical resources through the use of bollards, locks, alarms, or similar physical barriers

6.1.5.2. Administrative

6.1.5.2.1. Policies and procedures

6.1.5.2.2. Security awareness training

6.1.5.2.3. Separation of duties

6.1.5.2.4. Security reviews and audits

6.1.5.2.5. Rotation of duties

6.1.5.2.6. Procedures for recruiting and terminating employees

6.1.5.2.7. Security clearances

6.1.5.2.8. Background checks

6.1.5.2.9. Alert supervision

6.1.5.2.10. Performance evaluations

6.1.5.2.11. Mandatory vacation time

6.1.5.3. Technical

6.1.5.3.1. Access control software, such as firewalls, proxy servers

6.1.5.3.2. Anti-virus software

6.1.5.3.3. Passwords

6.1.5.3.4. Smart cards/biometrics/badge systems

6.1.5.3.5. Encryption

6.1.5.3.6. Dial-up callback systems

6.1.5.3.7. Audit trails

6.1.5.3.8. Intrusion detection systems (IDSs)

6.1.6. Firewalls

6.1.6.1. Packet Filtering

6.1.6.1.1. Decision based on IP and Port

6.1.6.1.2. Does not know state

6.1.6.1.3. very fast
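
A minimal sketch of the packet-filtering decision just described: the rule base is consulted top-down on source address and destination port only, with no knowledge of connection state. The addresses and rules are hypothetical:

    # Each rule: (action, source prefix, destination port). First match wins.
    RULES = [
        ("allow", "10.0.0.", 443),   # internal clients to HTTPS
        ("allow", "10.0.0.", 53),    # internal clients to DNS
        ("deny",  "",        None),  # default deny
    ]

    def filter_packet(src_ip: str, dst_port: int) -> str:
        for action, prefix, port in RULES:
            if src_ip.startswith(prefix) and (port is None or port == dst_port):
                return action
        return "deny"

    assert filter_packet("10.0.0.7", 443) == "allow"
    assert filter_packet("192.0.2.1", 443) == "deny"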

6.1.6.2. Stateful

6.1.6.2.1. Knows whether an incoming packet was requested (part of an established connection)

6.1.6.2.2. Unknown packets discarded

6.1.6.3. Proxy

6.1.6.3.1. Slow

6.1.6.3.2. Never a direct connection from the outside network to the inside

6.1.7. Network Vulnerability Scanner

6.1.7.1. Nessus

6.1.7.2. GFI LanGuard

6.1.7.3. ISS

6.1.7.4. NAI

6.1.8. Vulnerability Assessment

6.1.8.1. Scanning key servers

6.1.8.2. Looks for common known vulnerabilities

6.1.9. Penetration Tests

6.1.9.1. Simulates an attacker trying to break in

6.1.9.2. Finds weaknesses

6.1.9.3. Only as good as the attacker

6.1.9.4. Does not provide a comprehensive view

6.1.9.5. Usually done after a Vulnerability Assessment

6.1.10. Security Assessment

6.1.10.1. Comprehensive view of Network Security

6.1.10.2. Analyzes entire network from inside

6.1.10.3. Creates a complete list of risks against critical assets

6.2. Detective

6.2.1. Assumes Attack is Successful

6.2.2. Tries to detect AFTER an attack occurs

6.2.3. Time critical when an attack is occurring

6.2.4. Examples

6.2.4.1. Physical

6.2.4.1.1. Motion Detectors

6.2.4.1.2. CCTV

6.2.4.1.3. Smoke Detectors

6.2.4.1.4. Sensors

6.2.4.1.5. Alarms

6.2.4.2. Administrative

6.2.4.2.1. Audits

6.2.4.2.2. Regular performance reviews

6.2.4.2.3. Background Investigations

6.2.4.2.4. Force users to take leaves

6.2.4.2.5. Rotation of duties

6.2.4.3. Technical

6.2.4.3.1. Audits

6.2.4.3.2. Intrusion Detection Systems

6.2.5. Intrusion Detection Systems

6.2.5.1. Pattern Matching

6.2.5.2. Anomaly Detection

6.3. Other

6.3.1. Deterrent

6.3.1.1. Discourages security violations (Preventative)

6.3.1.2. Examples

6.3.1.2.1. Administrative

6.3.1.2.2. Physical

6.3.1.2.3. Technical

6.3.2. Compensating

6.3.2.1. Provide alternatives to other controls

6.3.3. Corrective

6.3.3.1. Reacts to an attack and takes corrective action for data recovery

6.3.4. Recovery

6.3.4.1. Restores the operating state to normal after an attack or system failure

6.4. Areas of Application

6.4.1. Administrative

6.4.2. Physical

6.4.3. Technical

7. Identity, Authentication, and Authorization

7.1. Identity and Authentication are not the same thing

7.1.1. Identity is who you say you are

7.1.2. Authentication is the process of verifying your Identity

7.2. Identity

7.2.1. User Identity enables accountability

7.2.2. Positive Identification

7.2.3. Negative Identification

7.2.4. Weak in terms of enforcement

7.3. Authentication

7.3.1. Validates Identity

7.3.2. Involves stronger measures than identification

7.3.4. Usually requires a key piece of information only the user would know

7.3.5. User Acceptance needed for success

7.3.6. Must meet business requirements

7.3.7. Methods of Authentication

7.3.7.1. Something you

7.3.7.1.1. know

7.3.7.1.2. have

7.3.7.1.3. are (Biometrics)

7.3.7.2. Somewhere you are

7.3.7.2.1. Based on GPS

7.3.7.2.2. Costly

7.3.7.2.3. Works well when combined with other factors

7.3.7.3. Strong Authentication

7.3.7.3.1. Two Factor

7.3.7.3.2. Multi-Factor
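
Two-factor schemes commonly pair a password (something you know) with a time-based one-time code from a token or phone (something you have). A minimal sketch of a TOTP-style code calculation in the manner of RFC 6238; the shared secret is a hypothetical placeholder:

    import hashlib, hmac, struct, time

    def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
        # The token and the server share a secret and the current time;
        # each 30-second window yields a fresh one-time code.
        counter = int((time.time() if at is None else at) // step)
        msg = struct.pack(">Q", counter)
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    shared_secret = b"hypothetical-shared-secret"
    print(totp(shared_secret))   # second factor, presented alongside the password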

7.3.7.4. Centralized Control

7.3.7.4.1. RADIUS

7.3.7.4.2. TACACS+

7.3.7.4.3. Domains and Trusts

7.3.8. Protocols

7.3.8.1. PPP authentication protocols

7.3.8.1.1. Originally designed for use with PPP

7.3.8.1.2. Password Authentication Protocol (PAP)

7.3.8.1.3. Challenge Handshake Authentication Protocol (CHAP)

7.3.8.2. Windows related

7.3.8.2.1. Win2K native is secure

7.3.8.2.2. Win2K in compatibility mode is weakened by LM

7.3.8.2.3. LM support needed for legacy (Windows 9x) clients

7.3.8.2.4. LanManager (LM)

7.3.8.2.5. NTLM and NTLMv2

7.3.8.3. Kerberos

7.3.8.3.1. Much more secure

7.3.8.3.2. Still some concerns

7.3.8.3.3. Now in use in Windows

7.3.8.3.4. Features

7.3.8.3.5. Process

7.3.8.3.6. Strengths

7.4. Authorization

7.4.1. What a subject can do once Authenticated

7.4.2. Most systems do a poor job

7.4.3. Tied closely to the Principle of Least Privilege (POLP)

8. Access Control Models

8.1. Lattice

8.1.1. Deals with Information Flow

8.1.2. Formalizes network security models

8.1.3. Shows how information can or cannot flow

8.1.4. Drawn as a graph with directed arrows

8.1.5. Properties of a Lattice

8.1.5.1. A set of elements

8.1.5.2. A partial Ordering relation

8.1.5.3. The property that any two elements have a unique least upper bound and a unique greatest lower bound
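
A worked sketch of these properties using (level, category set) labels: the join takes the higher level and the union of the categories, the meet takes the lower level and the intersection, so any two labels have a unique least upper bound and greatest lower bound. The labels are illustrative:

    # A label is (sensitivity level, frozenset of categories).
    def join(a, b):        # least upper bound
        return (max(a[0], b[0]), a[1] | b[1])

    def meet(a, b):        # greatest lower bound
        return (min(a[0], b[0]), a[1] & b[1])

    def dominates(a, b):   # the partial ordering relation
        return a[0] >= b[0] and a[1] >= b[1]

    x = (2, frozenset({"crypto"}))
    y = (1, frozenset({"nuclear"}))
    lub, glb = join(x, y), meet(x, y)
    assert dominates(lub, x) and dominates(lub, y)
    assert dominates(x, glb) and dominates(y, glb)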

8.2. Confidentiality: Bell-LaPadula

8.2.1. Deals with confidentiality

8.2.2. Two Key principles

8.2.2.1. No Read Up (Simple Security Property)

8.2.2.2. No Write Down (*-Property); both checks are sketched at the end of this subsection

8.2.2.2.1. Prevents Trojans from declassifying data by writing it down to a lower level

8.2.3. Also: Strong *-Property

8.2.3.1. No read down

8.2.3.2. No write up

8.2.3.3. Can only act on a single level

8.2.4. Tranquility Properties

8.2.4.1. Weak Tranquility Property:

8.2.4.1.1. Security labels of subjects never change in such a way as to violate a defined security policy

8.2.4.2. Strong tranquility property:

8.2.4.2.1. Labels never change during system operation
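
A minimal sketch of the two key properties above, assuming a simple ordered set of sensitivity levels (illustrative only):

    LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

    def blp_can_read(subject_level: str, object_level: str) -> bool:
        # Simple Security Property: no read up.
        return LEVELS[subject_level] >= LEVELS[object_level]

    def blp_can_write(subject_level: str, object_level: str) -> bool:
        # *-Property: no write down (prevents leaking data to lower levels).
        return LEVELS[subject_level] <= LEVELS[object_level]

    assert blp_can_read("secret", "confidential") and not blp_can_read("confidential", "secret")
    assert blp_can_write("confidential", "secret") and not blp_can_write("secret", "confidential")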

8.3. Integrity: Biba

8.3.1. Deals with integrity

8.3.2. Opposite of BLP

8.3.2.1. No read down

8.3.2.2. No write up

8.3.3. Two key principles

8.3.3.1. Simple Integrity Property: no read down

8.3.3.1.1. A subject cannot read data at a lower integrity level than its own

8.3.3.2. * (Star) Integrity Property: no write up

8.3.3.2.1. A subject cannot write data to a higher integrity level than its own (both checks are sketched below)

8.3.4. Developed by Ken Biba in 1975
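
Biba's two checks are the exact dual of the Bell-LaPadula sketch above, comparing integrity levels instead of sensitivity levels (again a minimal, illustrative sketch):

    INTEGRITY = {"untrusted": 0, "user": 1, "system": 2}

    def biba_can_read(subject_level: str, object_level: str) -> bool:
        # Simple Integrity Property: no read down (don't consume low-integrity data).
        return INTEGRITY[subject_level] <= INTEGRITY[object_level]

    def biba_can_write(subject_level: str, object_level: str) -> bool:
        # * Integrity Property: no write up (don't contaminate higher-integrity data).
        return INTEGRITY[subject_level] >= INTEGRITY[object_level]

    assert biba_can_read("user", "system") and not biba_can_read("system", "user")
    assert biba_can_write("system", "user") and not biba_can_write("user", "system")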

8.4. Commercial: Clark-Wilson

8.4.1. Deals with Integrity

8.4.2. Adapted for Commercial use

8.4.3. Two Properties

8.4.3.1. Internal Consistency

8.4.3.1.1. Properties of the internal state of the system

8.4.3.2. External Consistency

8.4.3.2.1. Relation of the internal state of a system to the outside world

8.4.4. Separation of Duties

8.4.5. Rules

8.4.5.1. Integrity Monitoring (certification)

8.4.5.1.1. Notions

8.4.5.2. Integrity Preserving (enforcement)

8.4.5.2.1. How integrity of constrained items is maintained

8.4.5.2.2. Subjects' identities are authenticated

8.4.5.2.3. Triples are carefully maintained

8.4.5.2.4. Transformation procedures executed serially, not in parallel

8.4.6. Triples

8.4.6.1. subject

8.4.6.2. program

8.4.6.3. object
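
A minimal sketch of enforcing the triples: a subject may operate on a constrained data item only through a transformation procedure that a certified triple explicitly binds to both. The names are hypothetical:

    # Certified (subject, transformation procedure, constrained data item) triples.
    TRIPLES = {
        ("clerk_alice", "post_payment", "accounts_ledger"),
        ("auditor_bob", "read_ledger", "accounts_ledger"),
    }

    def cw_allowed(subject: str, procedure: str, data_item: str) -> bool:
        # Users never manipulate data directly; access is granted only when the
        # whole triple has been certified.
        return (subject, procedure, data_item) in TRIPLES

    assert cw_allowed("clerk_alice", "post_payment", "accounts_ledger")
    assert not cw_allowed("clerk_alice", "read_ledger", "accounts_ledger")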

8.5. Others