CCSP

1. Cloud Application Security

1.1. Determining Data Sensitivity and Importance

1.1.1. Independence and the ability to present a true and accurate account of information types along with the requirements for confidentiality, integrity, and availability may be the difference between a successful project and a failure.

1.1.2. “CLOUD-FRIENDLINESS” QUESTIONS: What would the impact be if

1.1.2.1. The information/data became widely public and widely distributed (including crossing geographic boundaries)?

1.1.2.2. An employee of the cloud provider accessed the application?

1.1.2.3. The process or function was manipulated by an outsider?

1.1.2.4. The process or function failed to provide expected results?

1.1.2.5. The information/data were unexpectedly changed?

1.1.2.6. The application was unavailable for a period of time?

1.2. Application Programming Interfaces (APIs)

1.2.1. Representational State Transfer (REST): A software architecture style consisting of guidelines and best practices for creating scalable web services

1.2.1.1. Uses simple HTTP protocol

1.2.1.2. Supports many different data formats like JSON, XML, YAML, etc.

1.2.1.3. Offers good performance and scalability, and supports caching

1.2.1.4. Widely used
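
A minimal sketch of a REST call in Python using the widely used third-party requests library; the endpoint URL and resource are hypothetical, but any JSON-over-HTTP REST API is consumed the same way:

    import requests  # third-party HTTP client commonly used for REST calls

    # Hypothetical endpoint; headers ask for JSON, one of REST's many formats
    resp = requests.get(
        "https://api.example.com/v1/users/42",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()   # surface HTTP errors (4xx/5xx) as exceptions
    user = resp.json()        # parse the JSON response body
    print(user)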

1.2.2. Simple Object Access Protocol (SOAP): A protocol specification for exchanging structured information in the implementation of web services in computer networks

1.2.2.1. Uses SOAP envelope and then HTTP (or FTP/SMTP, etc.) to transfer the data

1.2.2.2. Only supports XML format

1.2.2.3. Slower performance, scalability can be complex, and caching is not possible

1.2.2.4. Used where REST is not possible; provides WS-* features
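
For contrast, a sketch of the equivalent SOAP exchange: the XML-only payload is wrapped in a SOAP envelope and POSTed over HTTP. The service URL and operation name are hypothetical:

    import requests

    # SOAP 1.2 envelope wrapping an XML payload (operation is illustrative)
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
      <soap:Body>
        <GetUser xmlns="http://example.com/users"><Id>42</Id></GetUser>
      </soap:Body>
    </soap:Envelope>"""

    resp = requests.post(
        "https://api.example.com/soap",
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "application/soap+xml; charset=utf-8"},
        timeout=10,
    )
    print(resp.status_code, resp.text[:200])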

1.3. Common Pitfalls of Cloud Security Application Deployment

1.3.1. ON-PREMISES DOES NOT ALWAYS TRANSFER (AND VICE VERSA)

1.3.1.1. Present performance and functionality may not be transferable. Current configurations and applications may be hard to replicate on or through cloud services.

1.3.1.1.1. First, they were not developed with cloud-based services in mind. The continued evolution and expansion of cloud-based service offerings aims to enhance newer technologies and development approaches, and does not always maintain support for older development practices and systems.

1.3.1.1.2. Second, not all applications can be “forklifted” to the cloud. Forklifting an application is the process of migrating an entire application the way it runs in a traditional infrastructure with minimal code changes.

1.3.2. NOT ALL APPS ARE “CLOUD-READY”

1.3.2.1. Business-critical systems were developed, tested, and assessed in on-premises or traditional environments to a level where confidentiality and integrity have been verified and assured. Many high-end applications come with distinct security and regulatory restrictions or rely on legacy coding projects.

1.3.3. LACK OF TRAINING AND AWARENESS

1.3.3.1. New development techniques and approaches require training and a willingness to utilize new services.

1.3.4. DOCUMENTATION AND GUIDELINES (OR LACK THEREOF)

1.3.4.1. Developers have to follow relevant documentation, guidelines, methodologies, processes, and lifecycles in order to reduce opportunities for unnecessary or heightened risk to be introduced. A disconnect may exist between some providers and developers on how to utilize, integrate, or meet vendor requirements for development.

1.3.5. COMPLEXITIES OF INTEGRATION

1.3.5.1. When developers and operational resources do not have open or unrestricted access to supporting components and services, integration can be complicated, particularly where the cloud provider manages infrastructure, applications, and integration platforms.

1.3.5.2. From a troubleshooting perspective, it can prove difficult to track or collect events and transactions across interdependent or underlying components. In an effort to reduce these complexities, where possible (and available), the cloud provider’s API should be used.

1.3.6. OVERARCHING CHALLENGES

1.3.6.1. developers must keep in mind two key risks associated with applications that run in the cloud

1.3.6.1.1. Multi-tenancy

1.3.6.1.2. Third-party administrators

1.3.6.2. developers must understand the security requirements based on the

1.3.6.2.1. Deployment model (public, private, community, hybrid) that the application will run in

1.3.6.2.2. Service model (IaaS, PaaS, or SaaS)

1.3.6.3. developers must be aware that metrics will always be required

1.3.6.3.1. cloud-based applications may have a higher reliance on metrics than internal applications to supply visibility into who is accessing the application and the actions they are performing.

1.3.6.4. developers must be aware of encryption dependencies for

1.3.6.4.1. Encryption of data at rest

1.3.6.4.2. Encryption of data in transit

1.3.6.4.3. Data masking (or data obfuscation)

1.4. Software Development Lifecycle (SDLC) Process for a Cloud Environment

1.4.1. SDLC PROCESS MODELS PHASES

1.4.1.1. 1. Planning and requirements analysis: Business requirements (functional and non-functional), quality-assurance and security requirements, and standards are determined, and risks associated with the project are identified. This phase is the main focus of the project managers and stakeholders.

1.4.1.2. 2. Defining: The defining phase is meant to clearly define and document the product requirements in order to place them in front of the customers and get them approved. This is done through a requirement specification document, which consists of all the product requirements to be designed and developed during the project lifecycle.

1.4.1.3. 3. Designing: System design helps in specifying hardware and system requirements and also helps in defining overall system architecture. The system design specifications serve as input for the next phase of the model. Threat modeling and secure design elements should be undertaken and discussed here.

1.4.1.4. 4. Developing: Upon receiving the system design documents, work is divided into modules/units and actual coding starts. This is typically the longest phase of the software development lifecycle. Activities include code review, unit testing, and static analysis.

1.4.1.5. 5. Testing: After the code is developed, it is tested against the requirements to make sure that the product is actually solving the needs gathered during the requirements phase. During this phase, unit testing, integration testing, system testing, and acceptance testing are all conducted.

1.4.2. SECURE OPERATIONS PHASE

1.4.2.1. Proper software configuration management and versioning are essential to application security. Common tools include:

1.4.2.1.1. Puppet: Puppet is a configuration management system that allows you to define the state of your IT infrastructure and then automatically enforces the correct state.

1.4.2.1.2. Chef: With Chef, you can automate how you build, deploy, and manage your infrastructure. The Chef server stores your recipes as well as other configuration data. The Chef client is installed on each server, virtual machine, container, or networking device you manage (called nodes). The client periodically polls the Chef server for the latest policy and the state of your network. If anything on the node is out of date, the client brings it up to date.
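
Tools like Puppet and Chef converge each node toward a declared desired state. A toy Python illustration of that idempotent converge loop (the path, mode, and content are made up; real tools add manifests/recipes, catalogs, and reporting on top of this idea):

    import os
    import stat

    # Desired state for one configuration item (values are illustrative)
    DESIRED = {"path": "/tmp/app.conf", "mode": 0o600, "content": "debug=false\n"}

    def converge(desired):
        path = desired["path"]
        # Read current content, if the file exists at all
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        # Rewrite content only when it drifts from the desired state (idempotent)
        if current != desired["content"]:
            with open(path, "w") as f:
                f.write(desired["content"])
        # Enforce file permissions the same way
        if stat.S_IMODE(os.stat(path).st_mode) != desired["mode"]:
            os.chmod(path, desired["mode"])

    converge(DESIRED)  # safe to run repeatedly; only drift triggers changes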

1.4.2.2. Activities

1.4.2.2.1. Dynamic analysis

1.4.2.2.2. Vulnerability assessments and penetration testing (as part of a continuous monitoring plan)

1.4.2.2.3. Activity monitoring

1.4.2.2.4. Layer-7 firewalls (e.g., web application firewalls)

1.4.3. DISPOSAL PHASE

1.4.3.1. Challenge: ensure that data is properly disposed

1.4.3.1.1. Crypto-shredding is effectively summed up as the deletion of the key used to encrypt data that’s stored in the cloud.
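
A minimal crypto-shredding sketch using the third-party cryptography package: once every copy of the key is destroyed, the ciphertext left behind in cloud storage is unrecoverable:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()                        # keep under customer control
    ciphertext = Fernet(key).encrypt(b"sensitive record")
    # ...store `ciphertext` in the cloud; only the key can decrypt it...

    # Crypto-shredding: destroy every copy of the key (here `del` stands in
    # for real key destruction in a key management system). The ciphertext
    # is now effectively deleted, wherever its replicas exist.
    del key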

1.5. Assessing Common Vulnerabilities

1.5.1. OWASP Top 10

1.5.1.1. Injection: Includes injection flaws such as SQL, OS, LDAP, and other injections. These occur when untrusted data is sent to an interpreter as part of a command or query. If the interpreter is successfully tricked, it will execute the unintended commands or access data without proper authorization.
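
A minimal illustration of the injection flaw and its standard fix (parameterized queries), using Python's built-in sqlite3 module and a toy table:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

    user_input = "nobody' OR '1'='1"  # classic injection payload

    # Vulnerable: string concatenation lets the payload rewrite the query
    rows = conn.execute(
        "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()
    print(rows)  # [('admin',)] -- the OR clause matched every row

    # Safe: the placeholder binds the input as data, never as SQL
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # [] -- no user is literally named "nobody' OR '1'='1"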

1.5.1.2. Broken authentication and session management: Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens or to exploit other implementation flaws to assume other users’ identities.

1.5.1.3. Cross-site scripting (XSS): XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim’s browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.
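
The standard mitigation is output escaping. A one-liner with Python's built-in html module shows untrusted input being neutralized before it reaches a browser:

    import html

    untrusted = '<script>alert("xss")</script>'   # attacker-controlled input
    print(html.escape(untrusted))
    # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt; -- rendered as text, not executed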

1.5.1.4. Insecure direct object references: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.

1.5.1.5. Security misconfiguration: Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.

1.5.1.6. Sensitive data exposure: Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection, such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.

1.5.1.7. Missing function-level access control: Most web applications verify function-level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access functionality without proper authorization.

1.5.1.8. Cross-site request forgery (CSRF): A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests that the vulnerable application thinks are legitimate requests from the victim.

1.5.1.9. Using components with known vulnerabilities: Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defenses and enable a range of possible attacks and impacts.

1.5.1.10. Unvalidated redirects and forwards: Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites or use forwards to access unauthorized pages.

1.5.2. NIST Framework for Improving Critical Infrastructure Cybersecurity

1.5.2.1. Parts

1.5.2.1.1. Framework Core: Cybersecurity activities and outcomes divided into five functions: Identify, Protect, Detect, Respond, and Recover

1.5.2.1.2. Framework Profile: To help the company align activities with business requirements, risk tolerance, and resources

1.5.2.1.3. Framework Implementation Tiers: To help organizations categorize where they are with their approach

1.5.2.2. Framework provides a common taxonomy and mechanism for organizations to

1.5.2.2.1. Describe their current cybersecurity posture

1.5.2.2.2. Describe their target state for cybersecurity

1.5.2.2.3. Identify and prioritize opportunities for improvement within the context of a continuous and repeatable process

1.5.2.2.4. Assess progress toward the target state

1.5.2.2.5. Communicate among internal and external stakeholders about cybersecurity risk

1.6. Cloud-Specific Risks

1.6.1. Applications that run in a PaaS environment may need security controls baked into them

1.6.1.1. encryption may need to be programmed into applications

1.6.1.2. logging may be difficult depending on what the cloud service provider can offer your organization

1.6.1.3. ensure that one application cannot access other applications on the platform unless it’s allowed access through a control

1.6.2. CSA: The Notorious Nine: Cloud Computing Top Threats in 2013

1.6.2.1. Data breaches: If a multi-tenant cloud service database is not properly designed, a flaw in one client’s application could allow an attacker access not only to that client’s data but to every other client’s data as well.

1.6.2.2. Data loss: Any accidental deletion by the cloud service provider, or worse, a physical catastrophe such as a fire or earthquake, could lead to the permanent loss of customers’ data unless the provider takes adequate measures to back up data. Furthermore, the burden of avoiding data loss does not fall solely on the provider’s shoulders. If a customer encrypts his or her data before uploading it to the cloud but loses the encryption key, the data will be lost as well.

1.6.2.3. Account hijacking: If attackers gain access to your credentials, they can eavesdrop on your activities and transactions, manipulate data, return falsified information, and redirect your clients to illegitimate sites. Your account or service instances may become a new base for the attacker.

1.6.2.4. Insecure APIs: Cloud computing providers expose a set of software interfaces or APIs that customers use to manage and interact with cloud services. Provisioning, management, orchestration, and monitoring are all performed using these interfaces. The security and availability of general cloud services is dependent on the security of these basic APIs. From authentication and access control to encryption and activity monitoring, these interfaces must be designed to protect against both accidental and malicious attempts to circumvent policy.

1.6.2.5. Denial of service: By forcing the victim cloud service to consume inordinate amounts of finite system resources such as processor power, memory, disk space, or network bandwidth, the attacker causes an intolerable system slowdown

1.6.2.6. Malicious insiders: CERT defines an insider threat as “A current or former employee, contractor, or other business partner who has or had authorized access to an organization’s network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization’s information or information systems.”

1.6.2.7. Abuse of cloud services: It might take an attacker years to crack an encryption key using his own limited hardware, but using an array of cloud servers, he might be able to crack it in minutes. Alternately, he might use that array of cloud servers to stage a DDoS attack, serve malware, or distribute pirated software.

1.6.2.8. Insufficient due diligence: Too many enterprises jump into the cloud without understanding the full scope of the undertaking. Without a complete understanding of the CSP environment, applications, or services being pushed to the cloud, and operational responsibilities such as incident response, encryption, and security monitoring, organizations are taking on unknown levels of risk in ways they may not even comprehend but that are a far departure from their current risks.

1.6.2.9. Shared technology issues: Whether it’s the underlying components that make up this infrastructure (CPU caches, GPUs, etc.) that were not designed to offer strong isolation properties for a multi-tenant architecture (IaaS), re-deployable platforms (PaaS), or multi-customer applications (SaaS), the threat of shared vulnerabilities exists in all delivery models. A defense-in-depth strategy is recommended and should include compute, storage, network, application and user security enforcement, and monitoring, whether the service model is IaaS, PaaS, or SaaS. The key is that a single vulnerability or misconfiguration can lead to a compromise across an entire provider’s cloud.

1.7. Threat Modeling

1.7.1. Threat modeling is performed once an application design is created. The goal of threat modeling is to determine any weaknesses in the application and the potential ingress, egress, and actors involved before it is introduced to production.

1.7.2. STRIDE THREAT MODEL

1.7.2.1. Spoofing: Attacker assumes identity of subject

1.7.2.2. Tampering: Data or messages are altered by an attacker

1.7.2.3. Repudiation: Illegitimate denial of an event

1.7.2.4. Information disclosure: Information is obtained without authorization

1.7.2.5. Denial of service: Attacker overloads system to deny legitimate access

1.7.2.6. Elevation of privilege: Attacker gains a privilege level above what is permitted

1.7.3. APPROVED APPLICATION PROGRAMMING INTERFACES (APIS)

1.7.3.1. Benefits of API

1.7.3.1.1. Programmatic control and access

1.7.3.1.2. Automation

1.7.3.1.3. Integration with third-party tools

1.7.3.2. The CSP must ensure that there is a formal approval process in place for all APIs (internal and external)

1.7.4. SOFTWARE SUPPLY CHAIN (API) MANAGEMENT

1.7.4.1. Consuming software developed by a third party, or accessed with or through third-party libraries, to create or enable functionality without a clear understanding of the origins of the software and code in question leads to complex and highly dynamic software interaction taking place between and among one or more services and systems within the organization and between organizations via the cloud.

1.7.4.2. It is important to assess all code and services for proper and secure functioning no matter where they are sourced

1.7.5. SECURING OPEN SOURCE SOFTWARE

1.7.5.1. Software that has been openly tested and reviewed by the community at large is considered by many security professionals to be more secure than software that has not undergone such a process.

1.8. Identity and Access Management (IAM)

1.8.1. Identity and Access Management (IAM) includes people, processes, and systems that are used to manage access to enterprise resources by ensuring that the identity of an entity is verified and then granting the correct level of access based on the protected resource, this assured identity, and other contextual information.

1.8.2. IDENTITY MANAGEMENT

1.8.2.1. Identity management is a broad administrative area that deals with identifying individuals in a system and controlling their access to resources within that system by associating user rights and restrictions with the established identity.

1.8.3. ACCESS MANAGEMENT

1.8.3.1. Authentication identifies the individual and ensures that he is who he claims to be. It establishes identity by asking, “Who are you?” and “How do I know I can trust you?”

1.8.3.2. Authorization evaluates “What do you have access to?” after authentication occurs (see the sketch at the end of this list).

1.8.3.3. Policy management establishes the security and access policies based on business needs and degree of acceptable risk.

1.8.3.4. Federation is an association of organizations that come together to exchange information as appropriate about their users and resources in order to enable collaborations and transactions

1.8.3.4.1. Federated Identity Management

1.8.3.5. Identity repository includes the directory services for the administration of user account attributes.
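
To make the authentication/authorization split above concrete, a toy Python sketch: authentication is assumed to have already produced a verified user, and a decorator enforces authorization per action. Role names, the policy table, and function names are illustrative, not any product's API:

    from functools import wraps

    POLICIES = {"admin": {"read", "write"}, "analyst": {"read"}}  # role -> actions

    def requires(action):
        def decorator(fn):
            @wraps(fn)
            def wrapper(user, *args, **kwargs):
                # `user` comes from a prior authentication step ("Who are you?");
                # here we answer "What do you have access to?"
                if action not in POLICIES.get(user["role"], set()):
                    raise PermissionError(f"{user['name']} may not {action}")
                return fn(user, *args, **kwargs)
            return wrapper
        return decorator

    @requires("write")
    def update_record(user, record_id):
        return f"{user['name']} updated record {record_id}"

    print(update_record({"name": "alice", "role": "admin"}, 7))  # allowed
    # update_record({"name": "bob", "role": "analyst"}, 7) raises PermissionError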

1.9. Multi-Factor Authentication

1.9.1. adds an extra level of protection to verify the legitimacy of a transaction.

1.9.2. What they know (e.g., password)

1.9.3. What they have (e.g., a display token with random numbers displayed; see the TOTP sketch at the end of this section)

1.9.4. What they are (e.g., biometrics)

1.9.5. Step-up authentication is an additional factor or procedure that validates a user’s identity, normally prompted by high-risk transactions or violations according to policy rules. Methods:

1.9.5.1. Challenge questions

1.9.5.2. Out-of-band authentication (a call or SMS text message to the end user)

1.9.5.3. Dynamic knowledge-based authentication (questions unique to the end user)
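
A compact standard-library sketch of the “what they have” factor: the RFC 6238 time-based one-time password (TOTP) algorithm that hardware and phone tokens display. The Base32 secret below is a placeholder:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints a 6-digit code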

1.10. Supplemental Security Devices

1.10.1. used to add additional elements and layers to a defense-in-depth architecture.

1.10.2. WAF

1.10.2.1. A Web Application Firewall (WAF) is a layer-7 firewall that can understand HTTP traffic.

1.10.2.2. A cloud WAF can be extremely effective in the case of a denial-of-service (DoS) attack; several cases exist where a cloud WAF was used to successfully thwart DoS attacks of 350 Gbps and 450 Gbps.

1.10.3. DAM

1.10.3.1. Database Activity Monitoring (DAM) is a layer-7 monitoring device that understands SQL commands.

1.10.3.2. DAM can be agent-based (ADAM) or network-based (NDAM).

1.10.3.3. A DAM can be used to detect and stop malicious commands from executing on an SQL server.

1.10.4. XML Gateways

1.10.4.1. XML gateways transform how services and sensitive data are exposed as APIs to developers, mobile users, and cloud users.

1.10.4.2. XML gateways can be either hardware or software.

1.10.4.3. XML gateways can implement security controls such as DLP, antivirus, and anti-malware services.

1.10.5. Firewalls

1.10.5.1. Firewalls can be distributed or configured across the SaaS, PaaS, and IaaS landscapes; these can be owned and operated by the provider or can be outsourced to a third party for the ongoing management and maintenance.

1.10.5.2. Firewalls in the cloud will often need to be implemented as software components (e.g., host-based firewalls).

1.10.6. API Gateway

1.10.6.1. An API gateway is a device that filters API traffic; it can be installed as a proxy or as a specific part of your application stack before data is processed.

1.10.6.2. An API gateway can implement access control, rate limiting, logging, metrics, and security filtering.
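
A minimal token-bucket sketch of the per-client rate-limiting control a gateway applies; the capacity and refill rate are illustrative:

    import time

    class TokenBucket:
        """Allow bursts up to `capacity`, refilling `rate` tokens per second."""
        def __init__(self, rate: float, capacity: float):
            self.rate, self.capacity = rate, capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1      # spend one token per request
                return True
            return False              # a gateway would return HTTP 429 here

    bucket = TokenBucket(rate=5, capacity=10)   # ~5 requests/second, burst of 10
    print([bucket.allow() for _ in range(12)])  # first 10 True, then throttled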

1.11. Cryptography

1.11.1. In Transit

1.11.1.1. Transport Layer Security (TLS): A protocol that ensures privacy between communicating applications and their users on the Internet (see the connection sketch after this list).

1.11.1.2. Secure Sockets Layer (SSL): The standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browser remains private and integral.

1.11.1.3. VPN (e.g., IPSEC gateway): A network that is constructed by using public wires—usually the Internet—to connect to a private network, such as a company’s internal network.
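
A short standard-library sketch of the TLS client connection mentioned above: the default context verifies the server certificate and hostname before any application data flows:

    import socket
    import ssl

    ctx = ssl.create_default_context()  # certificate + hostname verification on
    with socket.create_connection(("example.com", 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())        # negotiated protocol, e.g. 'TLSv1.3'
            print(tls.cipher())         # negotiated cipher suite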

1.11.2. At rest

1.11.2.1. Whole instance encryption: A method for encrypting all of the data associated with the operation and use of a virtual machine, such as the data stored at rest on the volume, disk I/O, and all snapshots created from the volume, as well as all data in transit moving between the virtual machine and the storage volume.

1.11.2.2. Volume encryption: A method for encrypting a single volume on a drive. Parts of the hard drive will be left unencrypted when using this method. (Full disk encryption should be used to encrypt the entire contents of the drive, if that is what is desired).

1.11.2.3. File/directory encryption: A method for encrypting a single file/directory on a drive.

1.11.3. There are times when the use of encryption may not be the most appropriate or functional choice for a system protection element, due to design, usage, and performance concerns. As a result, additional technologies and approaches become necessary

1.11.3.1. Tokenization generates a token (often a string of characters) that substitutes for sensitive data; the sensitive data itself is stored in a secured location such as a database.

1.11.3.2. Data masking is a technology that keeps the format of a data string but alters the content (see the sketch after this list).

1.11.3.3. A sandbox isolates and utilizes only the intended components, while maintaining appropriate separation from the remaining components (e.g., the ability to store personal information in one sandbox, with corporate information in another sandbox). Within cloud environments, sandboxing is typically used to run untested or untrusted code in a tightly controlled environment.
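
A toy Python sketch of tokenization and format-preserving masking side by side; the in-memory dictionary stands in for the secured token database, and the card number is a standard test value:

    import secrets

    VAULT = {}  # token -> real value; in practice a hardened, access-controlled store

    def tokenize(value: str) -> str:
        """Replace sensitive data with a random token; the real value goes to the vault."""
        token = secrets.token_hex(8)
        VAULT[token] = value
        return token

    def mask_pan(pan: str) -> str:
        """Keep the format (length, last four digits) but alter the content."""
        return "*" * (len(pan) - 4) + pan[-4:]

    pan = "4111111111111111"    # well-known test card number
    print(tokenize(pan))        # e.g. '9f3c2a77d0b14e55' -- safe to store or log
    print(mask_pan(pan))        # ************1111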

1.12. Application Virtualization

1.12.1. creates an encapsulation layer that decouples applications from the underlying operating system.

1.12.2. Examples

1.12.2.1. Wine allows some Microsoft Windows applications to run on a Linux platform.

1.12.2.2. Windows XP mode in Windows 7

1.12.3. Assurance and validation techniques

1.12.3.1. Software assurance: Software assurance encompasses the development and implementation of methods and processes for ensuring that software functions as intended while mitigating the risks of vulnerabilities, malicious code, or defects that could bring harm to the end user.

1.12.3.2. Verification and validation: In order for project and development teams to have confidence and to follow best practice guidelines, verification and validation of coding at each stage of the development process are required. Coupled with relevant segregation of duties and appropriate independent review, verification and validation look to ensure that the initial concept and the delivered product are complete.

1.12.3.2.1. verify that requirements are specified and measurable

1.12.3.2.2. test plans and documentation are comprehensive and consistently applied to all modules and subsystems and integrated with the final product.

1.12.3.2.3. Verification and validation should be performed at each stage of the SDLC and in line with change management components.

1.13. Cloud-Based Functional Data

1.13.1. the data collected, processed, and transferred by the separate functions of the application can have separate legal implications depending on how that data is used, presented, and stored.

1.13.2. Separating the system functions and services that have legal implications from those that don’t is essential to the overall security posture of your cloud-based systems and to the enterprise’s need to meet contractual, legal, and regulatory requirements.

1.14. Cloud-Secure Development Lifecycle

1.14.1. the purpose of a cloud-secure development lifecycle: Understanding that security must be “baked in” from the very onset of an application being created/consumed by an organization leads to a higher level of reasonable assurance that applications are properly secured prior to being used by an organization

1.14.2. ISO/IEC 27034-1

1.14.2.1. “Information Technology – Security Techniques – Application Security”: Defines concepts, frameworks, and processes to help organizations integrate security within their software development lifecycle.

1.14.2.2. ORGANIZATIONAL NORMATIVE FRAMEWORK (ONF)

1.14.2.2.1. Business context: Includes all application security policies, standards, and best practices adopted by the organization

1.14.2.2.2. Regulatory context: Includes all standards, laws, and regulations that affect application security

1.14.2.2.3. Technical context: Includes required and available technologies that are applicable to application security

1.14.2.2.4. Specifications: Documents the organization’s IT functional requirements and the solutions that are appropriate to address these requirements

1.14.2.2.5. Roles, responsibilities, and qualifications: Documents the actors within an organization who are related to IT applications

1.14.2.2.6. Processes: Related to application security

1.14.2.2.7. Application security control library: Contains the approved controls that are required to protect an application based on the identified threats, the context, and the targeted level of trust

1.14.2.3. APPLICATION NORMATIVE FRAMEWORK (ANF)

1.14.2.3.1. The ANF maintains the applicable portions of the ONF that are needed to enable a specific application to achieve a required level of security or the targeted level of trust. The ONF to ANF is a one-to-many relationship, where one ONF will be used as the basis to create multiple ANFs.

1.14.2.4. APPLICATION SECURITY MANAGEMENT PROCESS (ASMP)

1.14.2.4.1. The ASMP manages and maintains each ANF. Its steps are:

1.14.2.4.2. Specifying the application requirements and environment

1.14.2.4.3. Assessing application security risks

1.14.2.4.4. Creating and maintaining the ANF

1.14.2.4.5. Provisioning and operating the application

1.14.2.4.6. Auditing the security of the application

1.15. Application Security Testing

1.15.1. STATIC APPLICATION SECURITY TESTING (SAST)

1.15.1.1. a white-box test, where an analysis of the application source code, byte code, and binaries is performed without executing the application code.

1.15.1.2. Goal: determine coding errors and omissions that are indicative of security vulnerabilities

1.15.1.3. SAST can be used to find cross-site scripting errors, SQL injection, buffer overflows, unhandled error conditions, as well as potential back doors.

1.15.1.4. SAST typically delivers more comprehensive results than those found using Dynamic Application Security Testing (DAST)

1.15.2. DYNAMIC APPLICATION SECURITY TESTING (DAST)

1.15.2.1. a black-box test, where the tool must discover individual execution paths in the application being analyzed.

1.15.2.2. DAST is mainly considered effective when testing exposed HTTP and HTML interfaces of web applications.

1.15.3. RUNTIME APPLICATION SELF PROTECTION (RASP)

1.15.3.1. is generally considered to focus on applications that possess self-protection capabilities built into their runtime environments, which have full insight into application logic, configuration, and data and event flows.

1.15.4. VULNERABILITY ASSESSMENTS AND PENETRATION TESTING

1.15.4.1. both play a significant role in supporting the security of applications and systems, before an application goes into production as well as while it is in production.

1.15.4.2. Vulnerability assessments are often performed as white-box tests, where the assessor knows the application and has complete knowledge of the environment the application runs in.

1.15.4.3. Penetration testing is a process used to collect information related to system vulnerabilities and exposures, with the view to actively exploit the vulnerabilities in the system. Penetration testing is often a black-box test

1.15.4.4. SaaS providers are unlikely to grant clients permission to perform penetration tests. Generally, only a SaaS provider’s own resources will be permitted to perform penetration tests on the SaaS application.

1.15.5. SECURE CODE REVIEWS

1.15.5.1. informal

1.15.5.1.1. one or more individuals examining sections of the code, looking for vulnerabilities.

1.15.5.2. formal

1.15.5.2.1. trained teams of reviewers that are assigned specific roles as part of the review process, as well as the use of a tracking system to report on vulnerabilities found.

1.15.6. OPEN WEB APPLICATION SECURITY PROJECT (OWASP) RECOMMENDATIONS

1.15.6.1. Identity management testing

1.15.6.2. Authentication testing

1.15.6.3. Authorization testing

1.15.6.4. Session management testing

1.15.6.5. Input validation testing

1.15.6.6. Testing for error handling

1.15.6.7. Testing for weak cryptography

1.15.6.8. Business logic testing

1.15.6.9. Client-side testing

2. Operations

2.1. Modern Datacenters and Cloud Service Offerings

2.1.1. providers must take into account the challenges and complexities associated with differing outlooks, drivers, requirements, and services.

2.2. Factors That Impact Datacenter Design

2.2.1. legal and regulatory requirements because the geographic location of the datacenter impacts its jurisdiction

2.2.2. contingency, failover, and redundancy involving other datacenters in different locations are important to understand

2.2.3. the type of services (IaaS, PaaS, and SaaS) the cloud provider will offer

2.2.4. automating service enablement

2.2.5. consolidation of monitoring capabilities

2.2.6. reducing mean time to repair (MTTR)

2.2.7. increasing mean time between failures (MTBF)
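
Taken together, these two metrics give the standard steady-state availability estimate, which makes the design goal explicit:

    Availability = MTBF / (MTBF + MTTR)

For example, an MTBF of 1,000 hours with an MTTR of 2 hours gives 1000 / 1002 ≈ 99.8% availability; raising MTBF or lowering MTTR both improve it.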

2.2.8. LOGICAL DESIGN

2.2.8.1. All logical design decisions should be mapped to specific compliance requirements, such as logging, retention periods, and reporting capabilities for auditing. Ongoing monitoring systems also need to be designed to enhance effectiveness.

2.2.8.2. Multi-Tenancy

2.2.8.2.1. The multi-tenant nature of a cloud deployment requires a logical design that partitions and segregates client/customer data.

2.2.8.2.2. Multi-tenant networks, in a nutshell, are datacenter networks that are logically divided into smaller, isolated networks. They share the physical networking gear but operate on their own network without visibility into the other logical networks.

2.2.8.3. Cloud Management Plane

2.2.8.3.1. The cloud management plane needs to be logically isolated, although physical isolation may offer a more secure solution. It provides:

2.2.8.3.2. Communications access (permitted and not permitted), user access profiles, and permissions, including API access

2.2.8.3.3. Secure communication within and across the management plane

2.2.8.3.4. Secure storage (encryption, partitioning, and key management)

2.2.8.3.5. Backup and disaster recovery along with failover and replication

2.2.8.4. Virtualization Technology

2.2.8.5. Other Logical Design Considerations

2.2.8.5.1. Design for segregation of duties so datacenter staff can access only the data needed to do their job.

2.2.8.5.2. Design for monitoring of network traffic. The management plane should also be monitored for compromise and abuse. Hypervisor and virtualization technology need to be considered when designing the monitoring capability. Some hypervisors may not allow enough visibility for adequate monitoring. The level of monitoring will depend on the type of cloud deployment.

2.2.8.5.3. Automation and the use of APIs are essential for a successful cloud deployment. The logical design should include the secure use of APIs and a method to log API use.

2.2.8.5.4. Logical design decisions should be enforceable and monitored. For example, access control should be implemented with an identity and access management system that can be audited.

2.2.8.5.5. Consider the use of software-defined networking tools to support logical isolation.

2.2.8.6. Logical Design Levels

2.2.8.6.1. Logical design for data separation needs to be incorporated at the following levels

2.2.8.7. Service Model

2.2.8.7.1. In IaaS, many of the hypervisor features can be used to design and implement security

2.2.8.7.2. In PaaS, logical design features of the underlying platform and database can be leveraged to implement security

2.2.8.7.3. In SaaS, the same as above, plus additional measures in the application, can be used to enhance security

2.2.9. PHYSICAL DESIGN

2.2.9.1. Considerations

2.2.9.1.1. Does the physical design protect against environmental threats such as flooding, earthquakes, and storms?

2.2.9.1.2. Does the physical design include provisions for access to resources during disasters to ensure the datacenter and its personnel can continue to operate safely? Examples include

2.2.9.1.3. Are there physical security design features that limit access to authorized personnel? Some examples include

2.2.9.2. Building or Buying

2.2.9.2.1. If the organization builds the datacenter, it will have the most control over its design and security. However, there is a significant investment required to build a robust datacenter.

2.2.9.2.2. Buying a datacenter or leasing space in a datacenter may be a cheaper alternative. With this option, there may be limitations on design inputs. The leasing organization will need to include all security requirements in the RFP and contract.

2.2.9.2.3. When using a shared datacenter, physical separation of servers and equipment will need to be included in the design.

2.2.9.3. Datacenter Design Standards

2.2.9.3.1. BICSI (Building Industry Consulting Service International Inc.): The ANSI/BICSI 002-2014 standard covers cabling design and installation

2.2.9.3.2. IDCA (International Datacenter Authority): The Infinity Paradigm covers datacenter location, facility structure, and infrastructure and applications

2.2.9.3.3. NFPA (The National Fire Protection Association): NFPA 75 and 76 standards specify how hot/cold aisle containment is to be carried out, and NFPA standard 70 requires the implementation of an emergency power off button to protect first responders in the datacenter in case of emergency

2.2.9.3.4. Uptime Institute’s Datacenter Site Infrastructure Tier Standard: Topology

2.2.10. ENVIRONMENTAL DESIGN CONSIDERATIONS

2.2.10.1. Temperature and Humidity Guidelines

2.2.10.1.1. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)

2.2.10.1.2. Temperature control locations

2.2.10.2. HVAC Considerations

2.2.10.2.1. the lower the temperature in the datacenter is, the greater the cooling costs per month will be

2.2.10.3. Air Management for Datacenters

2.2.10.3.1. Air management refers to all the design and configuration details that minimize or eliminate mixing between the cooling air supplied to the equipment and the hot air rejected from the equipment.

2.2.10.3.2. key design issues: configuration of

2.2.10.4. Cable Management

2.2.10.4.1. Under-floor and over-head obstructions, which often interfere with the distribution of cooling air. Such interferences can significantly reduce the air handlers’ airflow and negatively affect the air distribution.

2.2.10.4.2. Cable congestion in raised-floor plenums, which can sharply reduce the total airflow as well as degrade the airflow distribution through the perforated floor tiles.

2.2.10.4.3. Instituting a cable mining program (i.e., a program to remove abandoned or inoperable cables) as part of an ongoing cable management plan will help optimize the air delivery performance of datacenter cooling systems.

2.2.10.5. Aisle Separation and Containment

2.2.10.5.1. Strict hot aisle/cold aisle configurations can significantly increase the air-side cooling capacity of a datacenter’s cooling system

2.2.10.5.2. The rows of racks are placed back-to-back, and holes through the rack (vacant equipment slots) are blocked off on the intake side to create barriers that reduce recirculation. Additionally, cable openings in raised floors and ceilings should be sealed as tightly as possible.

2.2.10.5.3. One recommended design configuration supplies cool air via an under-floor plenum to the racks; the air then passes through the equipment in the rack and enters a separated, semi-sealed area for return to an overhead plenum

2.2.10.6. HVAC Design Considerations

2.2.10.6.1. The local climate will impact the HVAC design requirements.

2.2.10.6.2. Redundant HVAC systems should be part of the overall design.

2.2.10.6.3. The HVAC system should provide air management that separates the cool air from the heat exhaust of the servers.

2.2.10.6.4. Consideration should be given to energy-efficient systems

2.2.10.6.5. Backup power supplies should be provided to run the HVAC system for the amount of time required for the system to stay up.

2.2.10.6.6. The HVAC system should filter contaminants and dust.

2.2.11. MULTI-VENDOR PATHWAY CONNECTIVITY (MVPC)

2.2.11.1. There should be redundant connectivity from multiple providers into the datacenter. This will help prevent a single point of failure for network connectivity.

2.2.11.2. The redundant path should provide the minimum expected connection speed for datacenter operations.

2.2.12. IMPLEMENTING PHYSICAL INFRASTRUCTURE FOR CLOUD ENVIRONMENTS

2.2.12.1. Cloud computing removes the traditional silos within the datacenter and introduces a new level of flexibility and scalability to the IT organization.

2.3. Enterprise Operations

2.3.1. Large enterprises need to isolate HR records, finance, customer credit card details, and so on.

2.3.2. Resources externally exposed for out-sourced projects require separation from internal corporate environments

2.3.3. Healthcare organizations must ensure patient record confidentiality.

2.3.4. Universities need to partition student user services from business operations, student administrative systems, and commercial or sensitive research projects.

2.3.5. Service providers must separate billing, CRM, payment systems, reseller portals, and hosted environments.

2.3.6. Financial organizations need to securely isolate client records and investment, wholesale, and retail banking services.

2.3.7. Government agencies must partition revenue records, judicial data, social services, operational systems, and so on.

2.4. Secure Configuration of Hardware

2.4.1. Private and public cloud providers must enable all customer data, communication, and application environments to be securely separated, protected, and isolated from other tenants. To accomplish these goals, all hardware inside the datacenter will need to be securely configured. This includes:

2.4.1.1. BEST PRACTICES FOR SERVERS

2.4.1.1.1. Secure build: To implement fully, follow the specific recommendations of the operating system vendor to securely deploy their operating system.

2.4.1.1.2. Secure initial configuration: This may mean many different things depending on a number of variables, such as OS vendor, operating environment, business requirements, regulatory requirements, risk assessment, and risk appetite, as well as workload(s) to be hosted on the system

2.4.1.1.3. Secure ongoing configuration maintenance: Achieved through a variety of mechanisms, some vendor-specific, some not.

2.4.1.2. BEST PRACTICES FOR STORAGE CONTROLLERS

2.4.1.2.1. Initiator: The consumer of storage, typically a server with an adapter card in it called a Host Bus Adapter (HBA). The initiator “initiates” a connection over the fabric to one or more ports on your storage system, which are called target ports.

2.4.1.2.2. Target: The ports on your storage system that deliver storage volumes (called target devices or LUNs) to the initiators.

2.4.1.2.3. iSCSI traffic should be segregated from general traffic. Layer-2 VLANs are a particularly good way to implement this segregation.

2.4.1.2.4. Oversubscription is permissible on general-purpose LANs, but you should not use an oversubscribed configuration for iSCSI.

2.4.1.2.5. iSCSI Implementation Considerations

2.4.1.3. NETWORK CONTROLLERS BEST PRACTICES

2.4.1.3.1. Major differences between physical and virtual switches

2.4.1.3.2. With a physical switch, when a dedicated network cable or switch port goes bad, only one server goes down

2.4.1.3.3. With virtualization, one cable could offer connectivity to 10 or more virtual machines (VMs); a failure could cause a loss in connectivity to multiple VMs.

2.4.1.3.4. connecting multiple VMs requires more bandwidth, which must be handled by the virtual switch.

2.4.1.4. VIRTUAL SWITCHES BEST PRACTICES

2.4.1.4.1. Redundancy is achieved by assigning at least two physical NICs to a virtual switch with each NIC connecting to a different physical switch.

2.4.1.4.2. Network Isolation

2.4.1.4.3. The network that is used to move live virtual machines from one host to another carries that traffic in clear text. That means it may be possible to “sniff” the data or perform a man-in-the-middle attack when a live migration occurs.

2.4.1.4.4. When dealing with internal and external networks, always create a separate isolated virtual switch with its own physical network interface cards and never mix internal and external traffic on a virtual switch.

2.4.1.4.5. Lock down access to your virtual switches so that an attacker cannot move VMs from one network to another and so that VMs do not straddle an internal and external network.

2.4.1.4.6. For a better virtual network security strategy, use security applications that are designed specifically for virtual infrastructure and integrate them directly into the virtual networking layer. This includes network intrusion detection and prevention systems, monitoring and reporting systems, and virtual firewalls that are designed to secure virtual switches and isolate VMs. You can integrate physical and virtual network security to provide complete datacenter protection.

2.4.1.4.7. If you use network-based storage such as iSCSI or Network File System, use proper authentication. For iSCSI, bidirectional Challenge-Handshake Authentication Protocol (or CHAP) authentication is best. Be sure to physically isolate storage network traffic because the traffic is often sent as clear text. Anyone with access to the same network could listen and reconstruct files, alter traffic, and possibly corrupt the network.

2.5. Installation and Configuration of Virtualization Management Tools for the Host

2.5.1. The virtualization platform will determine what management tools need to be installed on the host. The latest tools should be installed on each host, and the configuration management plan should include rules on updating these tools.

2.5.2. LEADING PRACTICES

2.5.2.1. Defense in depth: Implement the tool(s) used to manage the host as part of a larger architectural design that mutually reinforces security at every level of the enterprise. The tool(s) should be seen as a tactical element of host management, one that is linked to operational elements such as procedures and strategic elements such as policies.

2.5.2.2. Access control: Secure the tool(s) and tightly control and monitor access to them.

2.5.2.3. Auditing/monitoring: Monitor and track the use of the tool(s) throughout the enterprise to ensure proper usage is taking place.

2.5.2.4. Maintenance: Update and patch the tool(s) as required to ensure compliance with all vendor recommendations and security bulletins.

2.5.3. RUNNING A PHYSICAL INFRASTRUCTURE FOR CLOUD ENVIRONMENTS

2.5.3.1. Considerations when sharing resources include

2.5.3.1.1. Legal: Simply by sharing the environment in the cloud, you may put your data at risk of seizure. Exposing your data in an environment shared with other companies could give the government “reasonable cause” to seize your assets because another company has violated the law.

2.5.3.1.2. Compatibility: Storage services provided by one cloud vendor may be incompatible with another vendor’s services should you decide to move from one to the other.

2.5.3.1.3. Control: If information is encrypted while passing through the cloud, does the customer or cloud vendor control the encryption/decryption keys? Make sure you control the encryption/decryption keys, just as if the data were still resident in the enterprise’s own servers.

2.5.3.1.4. Log data: As more and more mission-critical processes are moved to the cloud, SaaS suppliers will have to provide log data in a real-time, straightforward manner, probably for their administrators as well as their customers’ personnel. Since the SaaS provider’s logs are internal and not necessarily accessible externally or by clients or investigators, monitoring is difficult.

2.5.3.1.5. PCI-DSS access: Since access to logs is required for Payment Card Industry Data Security Standard (PCI-DSS) compliance and may be requested by auditors and regulators, security managers need to make sure to negotiate access to the provider’s logs as part of any service agreement.

2.5.3.1.6. Upgrades and changes: Cloud applications undergo constant feature additions. The speed at which applications change in the cloud will affect both the SDLC and security. A secure SDLC may not be able to provide a security cycle that keeps up with changes that occur so quickly.

2.5.3.1.7. Failover technology: Having proper failover technology is a component of securing the cloud that is often overlooked. The company can survive if a non-mission-critical application goes offline, but this may not be true for mission-critical applications.

2.5.3.1.8. Compliance: SaaS makes the process of compliance more complicated, since it may be difficult for a customer to discern where his data resides on a network controlled by the SaaS provider, or a partner of that provider, which raises all sorts of compliance issues of data privacy, segregation, and security.

2.5.3.1.9. Regulations: Compliance with government regulations is much more challenging in the SaaS environment. The data owner is still fully responsible for compliance.

2.5.3.1.10. Outsourcing: Outsourcing means losing significant control over data, and while this is not a good idea from a security perspective, the business ease and financial savings will continue to increase the usage of these services. You need to work with your company’s legal staff to ensure that appropriate contract terms are in place to protect corporate data and provide for acceptable service level agreements.

2.5.3.1.11. Placement of security: Cloud-based services will result in many mobile IT users accessing business data and services without traversing the corporate network. This will increase the need for enterprises to place security controls between mobile users and cloud-based services. Placing large amounts of sensitive data in a globally accessible cloud leaves organizations open to large, distributed threats. Attackers no longer have to come onto the premises to steal data, and they can find it all in the one “virtual” location.

2.5.3.1.12. Virtualization: Virtualization efficiencies in the cloud require virtual machines from multiple organizations to be co-located on the same physical resources. Although traditional datacenter security still applies in the cloud environment, physical segregation and hardware-based security cannot protect against attacks between virtual machines on the same server. Administrative access is through the Internet rather than the controlled and restricted direct or on-premises connection that is adhered to in the traditional datacenter model. This increases risk and exposure and will require stringent monitoring for changes in system control and access control restriction.

2.5.3.1.13. Virtual machine: The dynamic and fluid nature of virtual machines will make it difficult to maintain the consistency of security and ensure that records can be audited. The ease of cloning and distribution between physical servers could result in the propagation of configuration errors and other vulnerabilities. Proving the security state of a system and identifying the location of an insecure virtual machine will be challenging. The co-location of multiple virtual machines increases the attack surface and risk of virtual machine-to-virtual machine compromise.

2.5.3.1.14. Operating system and application files: Operating system and application files are on a shared physical infrastructure in a virtualized cloud environment and require system, file, and activity monitoring to provide confidence and auditable proof to enterprise customers that their resources have not been compromised or tampered with. In the cloud computing environment, the enterprise subscribes to cloud computing resources, and the responsibility for patching is the subscriber’s rather than the cloud computing vendor’s. The need for patch maintenance vigilance is imperative. Lack of due diligence in this regard could rapidly make the task unmanageable or impossible.

2.5.3.1.15. Data fluidity: Enterprises are often required to prove that their security compliance is in accord with regulations, standards, and auditing practices, regardless of the location of the systems at which the data resides. Data is fluid in cloud computing and may reside in on-premises physical servers, on-premises virtual machines, or off-premises virtual machines running on cloud computing resources, and this will require some rethinking on the part of auditors and practitioners alike.

2.5.4. CONFIGURING ACCESS CONTROL AND SECURE KVM

2.5.4.1. Isolated data channels: Located in each KVM port, these make it impossible for data to be transferred between connected computers through the KVM.

2.5.4.2. Tamper-warning labels on each side of the KVM: These provide clear visual evidence if the enclosure has been compromised

2.5.4.3. Housing intrusion detection: Causes the KVM to become inoperable and the LEDs to flash repeatedly if the housing has been opened.

2.5.4.4. Fixed firmware: Cannot be reprogrammed, preventing attempts to alter the logic of the KVM.

2.5.4.5. Tamper-proof circuit board: It’s soldered to prevent component removal or alteration.

2.5.4.6. Safe buffer design: Does not incorporate a memory buffer, and the keyboard buffer is automatically cleared after data transmission, preventing transfer of keystrokes or other data when switching between computers.

2.5.4.7. Selective USB access: Only recognizes human interface device (HID) USB devices, such as keyboards and mice, to prevent inadvertent and insecure data transfer.

2.5.4.8. Push-button control: Requires physical access to KVM when switching between connected computers.

2.6. Securing the Network Configuration

2.6.1. NETWORK ISOLATION

2.6.1.1. All networks should be monitored and audited to validate separation.

2.6.1.2. All management of the datacenter systems should be done on isolated networks. Strong authentication methods should be used on the management network to validate identity and authorize usage

2.6.1.3. Access to the storage controllers should also be granted over isolated network components that are non-routable to prevent the direct download of stored data and to restrict the likelihood of unauthorized access or accidental discovery.

2.6.1.4. Customer access should be provisioned on isolated networks. This isolation can be implemented through the use of physically separate networks or via VLANs.

2.6.1.5. TLS and IPSec can be used for securing communications in order to prevent eavesdropping.

2.6.1.6. Secure DNS (DNSSEC) should be used to prevent DNS poisoning.

2.6.2. PROTECTING VLANS

2.6.2.1. VLAN Communication

2.6.2.1.1. Broadcast packets sent by one of the workstations will reach all the others in the VLAN.

2.6.2.1.2. Broadcasts sent by one of the workstations in the VLAN will not reach any workstations that are not in the VLAN.

2.6.2.1.3. Broadcasts sent by workstations that are not in the VLAN will never reach workstations that are in the VLAN.

2.6.2.1.4. The workstations can all communicate with each other without needing to go through a gateway.

2.6.2.2. VLAN Advantages

2.6.2.2.1. The ability to isolate network traffic to certain machines or groups of machines via association with the VLAN allows for the opportunity to create secured pathing of data between endpoints

2.6.2.2.2. It is a building block that when combined with other protection mechanisms allows for data confidentiality to be achieved.

2.6.3. USING TRANSPORT LAYER SECURITY (TLS)

2.6.3.1. TLS is made up of two layers:

2.6.3.1.1. TLS record protocol: Provides connection security and ensures that the connection is private and reliable. Used to encapsulate higher-level protocols, among them the TLS handshake protocol.

2.6.3.1.2. TLS handshake protocol: Allows the client and the server to authenticate each other and to negotiate an encryption algorithm and cryptographic keys before data is sent or received.

2.6.4. USING DOMAIN NAME SYSTEM (DNS)

2.6.4.1. Domain Name System Security Extensions (DNSSEC)

2.6.4.1.1. DNSSEC provides origin authority, data integrity, and authenticated denial-of-existence.

2.6.4.1.2. Validation of DNS responses occurs through the use of digital signatures that are included with DNS responses

2.6.4.2. Threats to the DNS Infrastructure

2.6.4.2.1. Footprinting: The process by which DNS zone data, including DNS domain names, computer names, and Internet Protocol (IP) addresses for sensitive network resources, is obtained by an attacker.

2.6.4.2.2. Denial-of-service attack: When an attacker attempts to deny the availability of network services by flooding one or more DNS servers in the network with queries.

2.6.4.2.3. Data modification: An attempt by an attacker to spoof valid IP addresses in IP packets that the attacker has created. This gives these packets the appearance of coming from a valid IP address in the network. With a valid IP address the attacker can gain access to the network and destroy data or conduct other attacks.

2.6.4.2.4. Redirection: When an attacker can redirect queries for DNS names to servers that are under the control of the attacker.

2.6.4.2.5. Spoofing: When a DNS server accepts and uses incorrect information from a host that has no authority to give that information. DNS spoofing is in fact malicious cache poisoning, where forged data is placed in the cache of the name servers.

2.6.4.2.6. Cache poisoning: Attackers exploit vulnerabilities, poor configuration choices in DNS servers, or bugs in the DNS protocol itself to inject fraudulent addressing information into caches. Users consulting the poisoned cache to visit the targeted site find themselves instead at a server controlled by the attacker.

2.6.4.2.7. Typosquatting: The practice of registering a domain name that is confusingly similar to an existing popular brand.

2.6.5. USING INTERNET PROTOCOL SECURITY (IPSEC)

2.6.5.1. Supports

2.6.5.1.1. network-level peer authentication

2.6.5.1.2. data origin authentication

2.6.5.1.3. data integrity

2.6.5.1.4. encryption

2.6.5.1.5. replay protection

2.6.5.2. Challenges

2.6.5.2.1. Configuration management

2.6.5.2.2. Performance

2.7. Identifying and Understanding Server Threats

2.7.1. OS bugs and misconfiguration

2.7.2. Threat actors

2.7.3. General guidelines should be addressed when identifying and understanding threats

2.7.3.1. Use an asset management system that has configuration management capabilities to enable documentation of all system configuration items (CIs) authoritatively.

2.7.3.2. Use system baselines to enforce configuration management throughout the enterprise (a baseline-audit sketch follows this list). In configuration management:

2.7.3.2.1. A “baseline” is an agreed-upon description of the attributes of a product at a point in time, which serves as a basis for defining change.

2.7.3.2.2. A “change” is a movement from this baseline state to the next state.

2.7.3.2.3. Consider automation technologies that help with the creation, application, management, updating, tracking, and compliance checking of system baselines.

2.7.3.2.4. Develop and use a robust change-management system to authorize the changes that need to be made to systems over time.

2.7.3.2.5. Use an exception reporting system to force the capture and documentation of any activities undertaken that are contrary to the expected norm for the lifecycle of a system under management.

2.7.3.2.6. Use vendor-specified configuration guidance and best practices as appropriate for the specific platform(s) under management.
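
A minimal sketch of the baseline-audit idea above, assuming a hypothetical attribute/value baseline and a helper that compares a host's actual configuration against it; each deviation would feed the exception-reporting and change-management systems just described.

```python
# Hypothetical baseline: attribute -> expected value for a class of servers.
BASELINE = {
    "ssh_root_login": "disabled",
    "ntp_server": "time.internal.example",
    "firewall": "enabled",
    "patch_level": "2023-10",
}

def audit_host(hostname: str, actual_config: dict) -> list[str]:
    """Return one exception record per attribute that deviates from baseline."""
    exceptions = []
    for attribute, expected in BASELINE.items():
        actual = actual_config.get(attribute, "<missing>")
        if actual != expected:
            exceptions.append(
                f"{hostname}: {attribute} is {actual!r}, baseline requires {expected!r}"
            )
    return exceptions

# Example: a host that has drifted from the agreed baseline.
for finding in audit_host("web-01", {"ssh_root_login": "enabled",
                                     "firewall": "enabled"}):
    print(finding)
```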

2.8. Using Stand-Alone Hosts

2.8.1. The business seeks to

2.8.1.1. Create isolated, secured, dedicated hosting of individual cloud resources; the use of a stand-alone host would be an appropriate choice.

2.8.1.2. Make the cloud resources available to end users so they appear as if they are independent of any other resources and are “isolated”; either a stand-alone host or a shared host configuration that offers multi-tenant secured hosting capabilities would be appropriate.

2.8.2. Stand-alone host availability considerations

2.8.2.1. Regulatory issues

2.8.2.2. Current security policies in force

2.8.2.3. Any contractual requirements that may be in force for one or more systems, or areas of the business

2.8.2.4. The needs of a certain application or business process that may be using the system in question

2.8.2.5. The classification of the data contained in the system

2.9. Using Clustered Hosts

2.9.1. RESOURCE SHARING

2.9.1.1. Reservations

2.9.1.2. Limits

2.9.1.3. Shares
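
Reservations, limits, and shares are the usual knobs: a reservation guarantees a minimum, a limit caps consumption, and shares set relative priority when the cluster is contended. A minimal sketch of proportional-share allocation under these assumptions (names and figures are illustrative, and the capping step is deliberately naive).

```python
def allocate(capacity_mhz: float, vms: list[dict]) -> dict:
    """Naive proportional-share allocation: honor reservations first, then
    split the remaining capacity by shares, capping each VM at its limit."""
    # Every VM is guaranteed its reservation.
    grants = {vm["name"]: vm["reservation"] for vm in vms}
    remaining = capacity_mhz - sum(grants.values())
    total_shares = sum(vm["shares"] for vm in vms)
    for vm in vms:
        extra = remaining * vm["shares"] / total_shares
        # A limit caps consumption even when spare capacity exists.
        grants[vm["name"]] = min(vm["limit"], grants[vm["name"]] + extra)
    return grants

vms = [
    {"name": "db",  "reservation": 1000, "limit": 4000, "shares": 2000},
    {"name": "web", "reservation": 500,  "limit": 2000, "shares": 1000},
]
# db holds twice web's shares, so it gets twice the contended remainder:
print(allocate(4500, vms))  # {'db': 3000.0, 'web': 1500.0}
```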

2.9.2. DISTRIBUTED RESOURCE SCHEDULING (DRS)/COMPUTE RESOURCE SCHEDULING

2.9.2.1. Provide highly available resources to your workloads

2.9.2.2. Balance workloads for optimal performance

2.9.2.2.1. The initial workload placement across the cluster as a VM is powered on is the beginning point for all load-balancing operations.

2.9.2.2.2. Load balancing is achieved through a movement of the VM between hosts in the cluster in order to achieve/maintain the desired compute resource allocation thresholds specified for the DRS service.

2.9.2.3. Scale and manage computing resources without service disruption

2.10. Accounting for Dynamic Operation

2.10.1. In outsourced and public deployment models, cloud computing also provides elasticity: the ability of customers to quickly request, receive, and later release as many resources as needed.

2.10.2. If an organization is large enough and supports a sufficient diversity of workloads, an on-site private cloud may be able to provide elasticity to clients within the consumer organization.

2.10.3. Smaller on-site private clouds will exhibit maximum capacity limits similar to those of traditional datacenters.

2.11. Using Storage Clusters

2.11.1. CLUSTERED STORAGE ARCHITECTURES

2.11.1.1. A tightly coupled cluster has a physical backplane into which controller nodes connect. While this backplane fixes the maximum size of the cluster, it delivers a high-performance interconnect between servers for load-balanced performance and maximum scalability as the cluster grows.

2.11.1.2. A loosely coupled cluster offers cost-effective building blocks that can start small and grow as applications demand. A loose cluster offers performance, I/O, and storage capacity within the same node. As a result, performance scales with capacity and vice versa.

2.11.2. STORAGE CLUSTER GOALS

2.11.2.1. Meet the required service levels as specified in the SLA

2.11.2.2. Provide for the ability to separate customer data in multi-tenant hosting environments

2.11.2.3. Securely store and protect data through the use of confidentiality, integrity, and availability mechanisms such as encryption, hashing, masking, and multi-pathing

2.12. Using Maintenance Mode

2.12.1. Maintenance mode can apply to both data stores and hosts.

2.12.2. Maintenance mode is tied to the SLA.

2.12.3. Enter maintenance mode, operate within it, and exit it successfully using the vendor-specific guidance and best practices.

2.13. Providing High Availability on the Cloud

2.13.1. MEASURING SYSTEM AVAILABILITY
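
Availability is conventionally computed as uptime divided by total time, often expressed through MTBF (mean time between failures) and MTTR (mean time to repair). A minimal sketch, with assumed MTBF/MTTR figures.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Classic steady-state availability: uptime / (uptime + downtime)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

a = availability(mtbf_hours=2000, mttr_hours=2)
print(f"{a:.5f}")                            # 0.99900 -> 'three nines'
print(f"{(1 - a) * 8760:.1f} h/yr downtime") # roughly 8.8 hours per year
```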

2.13.2. HIGH AVAILABILITY APPROACHES

2.13.2.1. The use of redundant architectural elements to safeguard data in case of failure, such as a drive-mirroring solution.

2.13.2.2. The use of multiple vendors within the cloud architecture to provide the same services. This allows systems that need a specified level of availability to switch, or fail over, to an alternate provider’s system within the time period defined in the SLA that governs the availability window for the system.

2.14. The Physical Infrastructure for Cloud Environments

2.14.1. An infrastructure built for cloud computing provides numerous benefits

2.14.1.1. Flexible and efficient utilization of infrastructure investments

2.14.1.2. Faster deployment of physical and virtual resources

2.14.1.3. Higher application service levels

2.14.1.4. Less administrative overhead

2.14.1.5. Lower infrastructure, energy, and facility costs

2.14.1.6. Increased security

2.14.2. Servers

2.14.3. Virtualization

2.14.4. Storage

2.14.5. Network

2.14.6. Management

2.14.7. Security

2.14.8. Backup and recovery

2.14.9. Infrastructure systems

2.15. Configuring Access Control for Remote Access

2.15.1. Some of the threats with regard to remote access are as follows

2.15.1.1. Lack of physical security controls

2.15.1.2. Unsecured networks

2.15.1.3. Infected endpoints accessing the internal network

2.15.1.4. External access to internal resources

2.15.2. Controlling remote access

2.15.2.1. Tunneling via a VPN—IPSec or SSL

2.15.2.2. Remote Desktop Protocol (RDP) allows for desktop access to remote systems

2.15.2.3. Access via a secure terminal

2.15.2.4. Deployment of a DMZ

2.15.3. Cloud environment access requirements

2.15.3.1. Encrypted transmission of all communications between the remote user and the host

2.15.3.2. Secure login with complex passwords and/or certificate-based login

2.15.3.3. Two-factor authentication providing enhanced security

2.15.3.4. A log and audit of all connections

2.15.3.5. A secure baseline should be established, and all deployments and updates should be made from a change- and version-controlled master image.

2.15.3.6. Sufficient supporting infrastructure and tools should be in place to allow for the patching and maintenance of relevant infrastructure without any impact on the end user/customer.

2.16. Performing Patch Management

2.16.1. THE PATCH MANAGEMENT PROCESS

2.16.1.1. Vulnerability detection and evaluation by the vendor

2.16.1.2. Subscription mechanism to vendor patch notifications

2.16.1.3. Severity assessment of the patch by the receiving enterprise using that software

2.16.1.4. Applicability assessment of the patch on target systems

2.16.1.5. Opening of tracking records in case of patch applicability

2.16.1.6. Customer notification of applicable patches, if required

2.16.1.7. Change management

2.16.1.8. Successful patch application verification

2.16.1.9. Issue and risk management in case of unexpected troubles or conflicting actions

2.16.1.10. Closure of tracking records with all auditable artifacts

2.16.2. EXAMPLES OF AUTOMATION

2.16.2.1. Notification Automation

2.16.2.1.1. Vulnerability severity is assessed

2.16.2.1.2. A security patch or an interim solution is provided

2.16.2.1.3. This information is entered into a system

2.16.2.1.4. Automated e-mail notifications are sent to predefined accounts in a straightforward process (see the sketch after this list)

2.16.2.2. Security patch applicability

2.16.2.3. The creation of tracking records and their assignment to predefined resolver groups, in case of matching.

2.16.2.4. Change record creation, change approval, and change implementation (if agreed-upon maintenance windows have been established and are being managed via SLAs).

2.16.2.5. Verification of the successful implementation of security patches.

2.16.2.6. Creation of documentation to support that patching has been successfully accomplished.
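
A minimal sketch of the notification step above, assuming a hypothetical CVSS-based severity mapping, placeholder addresses, and a local SMTP relay.

```python
import smtplib
from email.message import EmailMessage

def severity(cvss: float) -> str:
    """Hypothetical mapping from a CVSS base score to a severity level."""
    if cvss >= 9.0: return "critical"
    if cvss >= 7.0: return "high"
    if cvss >= 4.0: return "medium"
    return "low"

# Predefined accounts subscribed per severity level (placeholders).
SUBSCRIBERS = {"critical": ["secops@example.com", "oncall@example.com"],
               "high": ["secops@example.com"]}

def notify(patch_id: str, cvss: float, smtp_host: str = "localhost") -> None:
    """Mail an applicability notification to the predefined accounts."""
    level = severity(cvss)
    for recipient in SUBSCRIBERS.get(level, []):
        msg = EmailMessage()
        msg["From"] = "patch-automation@example.com"
        msg["To"] = recipient
        msg["Subject"] = f"[{level.upper()}] patch {patch_id} applicable"
        msg.set_content(f"Patch {patch_id} (CVSS {cvss}) awaits change approval.")
        with smtplib.SMTP(smtp_host) as smtp:
            smtp.send_message(msg)
```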

2.16.3. CHALLENGES OF PATCH MANAGEMENT

2.16.3.1. The lack of service standardization. For enterprises transitioning to the cloud, lack of standardization is the main issue. For example, a patch management solution tailored to one customer often cannot be used or easily adopted by another customer.

2.16.3.2. Patch management is not simply using a patch tool to apply patches to endpoint systems, but rather, a collaboration of multiple management tools and teams, for example, change management and patch advisory tools.

2.16.3.3. In a large enterprise environment, patch tools need to be able to interact with a large number of managed entities in a scalable way and handle the heterogeneity that is unavoidable in such environments.

2.16.3.4. To avoid problems associated with automatically applying patches to endpoints, thorough testing of patches beforehand is absolutely mandatory.

2.16.3.5. Multiple Time Zones

2.16.3.5.1. In a cloud environment, virtual machines that are physically located in the same time zone can be configured to operate in different time zones. When a customer’s VMs span multiple time zones, patches need to be scheduled carefully so the correct behavior is implemented.

2.16.3.5.2. For some patches, the correct behavior is to apply them at the same local time on each virtual machine.

2.16.3.5.3. For other patches, the correct behavior is to apply them at the same absolute time, to avoid a mixed-mode problem in which multiple versions of the software run concurrently, resulting in data corruption (both behaviors are sketched below).
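
A minimal sketch of both scheduling behaviors, using Python's zoneinfo module and a hypothetical fleet of VMs with configured time zones.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical fleet: VM name -> configured (not physical) time zone.
fleet = {"vm-nyc": "America/New_York",
         "vm-fra": "Europe/Berlin",
         "vm-syd": "Australia/Sydney"}

def local_time_schedule(hhmm: str, date: str) -> dict:
    """'Same local time' patching: 02:00 in each VM's own zone maps to a
    different UTC instant per VM."""
    hh, mm = map(int, hhmm.split(":"))
    y, m, d = map(int, date.split("-"))
    return {vm: datetime(y, m, d, hh, mm, tzinfo=ZoneInfo(tz))
                .astimezone(timezone.utc)
            for vm, tz in fleet.items()}

def absolute_time_schedule(utc_instant: datetime) -> dict:
    """'Same absolute time' patching: one UTC instant for every VM,
    avoiding mixed-mode operation of different software versions."""
    return {vm: utc_instant for vm in fleet}

print(local_time_schedule("02:00", "2024-03-01"))
print(absolute_time_schedule(datetime(2024, 3, 1, 2, 0, tzinfo=timezone.utc)))
```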

2.16.3.6. VM Suspension and Snapshot

2.16.3.6.1. There are additional modes of operations available to system administrators and users, such as VM suspension and resume, snapshot, and revert back. The management console that allows use of these operations needs to be tightly integrated with the patch management and compliance processes.

2.17. Performance Monitoring

2.17.1. OUTSOURCING MONITORING

2.17.1.1. Having HR check references

2.17.1.2. Examining the terms of any SLA or contract being used to govern service terms

2.17.1.3. Executing some form of trial of the managed service in question before implementing into production

2.17.2. HARDWARE MONITORING

2.17.2.1. Extend monitoring of the four key subsystems (see the sketch after this list)

2.17.2.1.1. Network: Excessive dropped packets

2.17.2.1.2. Disk: Full disk or slow reads and writes to the disks (IOPS)

2.17.2.1.3. Memory: Excessive memory usage or full utilization of available memory allocation

2.17.2.1.4. CPU: Excessive CPU utilization

2.17.2.2. Additional items that exist in the physical plane of these systems, such as CPU temperature, fan speed, and ambient temperature within the datacenter hosting the physical hosts.
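
A minimal sketch of threshold checks for the four subsystems, assuming the third-party psutil library and illustrative threshold values.

```python
import psutil  # third-party: pip install psutil

# Hypothetical alert thresholds for the four key subsystems.
THRESHOLDS = {"cpu_pct": 90.0, "mem_pct": 90.0,
              "disk_pct": 85.0, "dropped_packets": 100}

def check_host() -> list[str]:
    alerts = []
    cpu = psutil.cpu_percent(interval=1)        # CPU: excessive utilization
    if cpu > THRESHOLDS["cpu_pct"]:
        alerts.append(f"CPU at {cpu:.0f}%")
    mem = psutil.virtual_memory().percent       # Memory: excessive usage
    if mem > THRESHOLDS["mem_pct"]:
        alerts.append(f"memory at {mem:.0f}%")
    disk = psutil.disk_usage("/").percent       # Disk: near-full filesystem
    if disk > THRESHOLDS["disk_pct"]:
        alerts.append(f"root disk at {disk:.0f}%")
    net = psutil.net_io_counters()              # Network: dropped packets
    if net.dropin + net.dropout > THRESHOLDS["dropped_packets"]:
        alerts.append(f"{net.dropin + net.dropout} dropped packets")
    return alerts

print(check_host() or "all subsystems nominal")
```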

2.17.3. REDUNDANT SYSTEM ARCHITECTURE

2.17.3.1. Allows additional hardware items to be incorporated directly into the system as online, real-time components.

2.17.3.2. Allows those components to share the load of the running system or to sit in a hot-standby mode.

2.17.3.3. Allows for a controlled failover, minimizing downtime.

2.17.4. MONITORING FUNCTIONS

2.17.4.1. The use of any vendor-supplied monitoring capabilities to their fullest extent is necessary in order to maximize system reliability and performance.

2.17.4.2. Monitoring hardware may provide early indications of hardware failure and should be treated as a requirement to ensure stability and availability of all systems being managed.

2.17.4.3. Some virtualization platforms offer the capability to disable hardware and migrate live data from the failing hardware if certain thresholds are met.

2.18. Backing Up and Restoring the Host Configuration: Challenges

2.18.1. Control: The ability to decide, with high confidence, who and what is allowed to access consumer data and programs and the ability to perform actions (such as erasing data or disconnecting a network) with high confidence both that the actions have been taken and that no additional actions were taken that would subvert the consumer’s intent

2.18.2. Visibility: The ability to monitor, with high confidence, the status of a consumer’s data and programs and how consumer data and programs are being accessed by others.

2.19. Implementing Network Security Controls: Defense in Depth

2.19.1. FIREWALLS

2.19.1.1. Host-Based Software Firewalls

2.19.1.2. Configuration of Ports Through the Firewall

2.19.2. LAYERED SECURITY

2.19.2.1. Intrusion Detection System

2.19.2.1.1. Network Intrusion Detection Systems (NIDSs)

2.19.2.1.2. Host Intrusion Detection Systems (HIDSs)

2.19.2.2. Intrusion Prevention System

2.19.2.2.1. It can reconfigure other security controls, such as a firewall or router, to block an attack; some IPS devices can even apply patches if the host has particular vulnerabilities.

2.19.2.2.2. Some IPS can remove the malicious contents of an attack to mitigate the packets, perhaps deleting an infected attachment from an e-mail before forwarding the e-mail to the user.

2.19.2.3. Combined IDS and IPS (IDPS)

2.19.3. UTILIZING HONEYPOTS

2.19.4. CONDUCTING VULNERABILITY ASSESSMENTS

2.19.4.1. conduct external vulnerability assessments to validate any internal assessments.

2.19.5. LOG CAPTURE AND LOG MANAGEMENT

2.19.5.1. Log data should be

2.19.5.1.1. Protected and consideration given to the external storage of log data

2.19.5.1.2. Part of the backup and disaster recovery plans of the organization

2.19.5.2. NIST SP 800-92 recommendations

2.19.5.2.1. Develop standard processes for performing log management.

2.19.5.2.2. Define its logging requirements and goals as part of the planning process.

2.19.5.2.3. Develop policies that clearly define mandatory requirements and suggested recommendations for log management activities, including log generation, transmission, storage, analysis, and disposal.

2.19.5.2.4. Ensure that related policies and procedures incorporate and support the log management requirements and recommendations.

2.19.5.3. Organizations should prioritize log management appropriately throughout the organization. After an organization defines its requirements and goals for the log management process, it should prioritize the requirements and goals based on the perceived reduction of risk and the expected time and resources needed to perform log management functions.

2.19.5.4. Organizations should create and maintain a log management infrastructure. A log management infrastructure consists of the hardware, software, networks, and media used to generate, transmit, store, analyze, and dispose of log data. Such an infrastructure typically performs several functions that support the analysis and security of log data.

2.19.5.4.1. Major factors to consider in the design

2.19.5.5. Organizations should establish standard log management operational processes. The major log management operational processes typically include configuring log sources, performing log analysis, initiating responses to identified events, and managing long-term storage. Administrators have other responsibilities as well, such as the following:

2.19.5.5.1. Monitoring the logging status of all log sources

2.19.5.5.2. Monitoring log rotation and archival processes

2.19.5.5.3. Checking for upgrades and patches to logging software and acquiring, testing, and deploying them

2.19.5.5.4. Ensuring that each logging host’s clock is synched to a common time source

2.19.5.5.5. Reconfiguring logging as needed based on policy changes, technology changes, and other factors

2.19.5.5.6. Documenting and reporting anomalies in log settings, configurations, and processes
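
A minimal sketch of forwarding audit records to a secure centralized log server, in line with the SP 800-92 guidance above, using the standard library's syslog handler (the collector address is a placeholder).

```python
import logging
import logging.handlers

# Forward audit records to a central syslog collector; the address is a
# placeholder for the organization's secured log server.
handler = logging.handlers.SysLogHandler(address=("logs.internal.example", 514))
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(handler)

# Each host ships events off-box immediately, so a local compromise cannot
# silently rewrite history; retention and rotation are managed centrally.
audit_log.info("authentication success for user=alice from=203.0.113.7")
```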

2.19.6. USING SECURITY INFORMATION AND EVENT MANAGEMENT (SIEM)

2.19.6.1. A locally hosted SIEM system offers easy access and lower risk of external disclosure

2.19.6.2. An external SIEM system may prevent tampering with log data by an attacker

2.19.6.3. Sample Controls and Effective Mapping to an SIEM Solution https://www.cisecurity.org/critical-controls/download.cfm

2.19.6.3.1. Critical Control 1: Inventory of Authorized and Unauthorized Devices

2.19.6.3.2. Critical Control 2: Inventory of Authorized and Unauthorized Software

2.19.6.3.3. Critical Control 3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers

2.19.6.3.4. Critical Control 10: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches

2.19.6.3.5. Critical Control 12: Controlled Use of Administrative Privileges

2.19.6.3.6. Critical Control 13: Boundary Defense

2.20. Developing a Management Plan

2.20.1. MAINTENANCE

2.20.1.1. schedule system repair and maintenance

2.20.1.2. schedule customer notifications

2.20.1.3. ensure adequate resources are available to meet expected demand and service level agreement requirements

2.20.1.4. ensure that appropriate change-management procedures are implemented and followed

2.20.1.5. ensure all appropriate security protections and safeguards continue to apply to all hosts while in maintenance mode and to all virtual machines while they are being moved and managed on alternate hosts as a result of maintenance mode activities being performed on their primary host.

2.20.2. ORCHESTRATION

2.21. Building a Logical Infrastructure for Cloud Environments

2.21.1. LOGICAL DESIGN

2.21.1.1. Lacks specific details such as technologies and standards while focusing on the needs at a general level

2.21.1.2. Communicates with abstract concepts, such as a network, router, or workstation, without specifying concrete details

2.21.2. PHYSICAL DESIGN

2.21.2.1. Is created from a logical network design

2.21.2.2. Will often expand elements found in a logical design

2.21.3. SECURE CONFIGURATION OF HARDWARE-SPECIFIC REQUIREMENTS

2.21.3.1. Storage Controllers Configuration

2.21.3.1.1. Turn off all unnecessary services, such as web interfaces and management services that will not be needed or used.

2.21.3.1.2. Validate that the controllers can meet the estimated traffic load based on vendor specifications and testing (1 Gbps | 10 Gbps | 16 Gbps | 40 Gbps).

2.21.3.1.3. Deploy a redundant failover configuration such as a NIC team.

2.21.3.1.4. Deploy a multipath solution.

2.21.3.1.5. Change default administrative passwords for configuration and management access to the controller.

2.21.3.2. Networking Models

2.21.3.2.1. Traditional Networking Model

2.21.3.2.2. Converged Networking Model

2.22. Running a Logical Infrastructure for Cloud Environments

2.22.1. BUILDING A SECURE NETWORK CONFIGURATION

2.22.1.1. VLANs: Allow for the logical isolation of hosts on a network. In a cloud environment, VLANs can be utilized to isolate the management network, storage network, and the customer networks. VLANs can also be used to separate customer data.

2.22.1.2. Transport Layer Security (TLS): Allows for the encryption of data in transit between hosts. Implementation of TLS for internal networks will prevent the “sniffing” of traffic by a malicious user. A TLS VPN is one method to allow for remote access to the cloud environment.

2.22.1.3. DNS: DNS servers should be locked down, offering only required services, and should utilize Domain Name System Security Extensions (DNSSEC) when feasible. DNSSEC is a set of DNS extensions that provide authentication, integrity, and “authenticated denial-of-existence” for DNS data. Zone transfers should be disabled. If an attacker compromises DNS, they may be able to hijack or reroute data.

2.22.1.4. IPSec: An IPSec VPN is one method to remotely access the cloud environment. If an IPSec VPN is utilized, IP whitelisting (allowing only approved IP addresses) is considered a best practice for access. Two-factor authentication can also be used to enhance security.

2.22.2. OS HARDENING VIA APPLICATION BASELINE

2.22.2.1. Capturing a Baseline

2.22.2.1.1. A clean installation of the target OS must be performed (physical or virtual).

2.22.2.1.2. All non-essential services should be stopped and set to disabled in order to ensure that they do not run.

2.22.2.1.3. All non-essential software should be removed from the system.

2.22.2.1.4. All required security patches should be downloaded and installed from the appropriate vendor repository.

2.22.2.1.5. All required configuration of the host OS should be accomplished per the requirements of the baseline being created.

2.22.2.1.6. The OS baseline should be audited to ensure that all required items have been configured properly.

2.22.2.1.7. Full documentation should be created, captured, and stored for the baseline being created.

2.22.2.1.8. An image of the OS baseline should be captured and stored for future deployment. This image should be placed under change management control and have appropriate access controls applied.

2.22.2.1.9. The baseline OS image should also be placed under the Configuration Management system and cataloged as a Configuration Item (CI).

2.22.2.1.10. The baseline OS image should be updated on a documented schedule for security patches and any additional required configuration updates as needed.

2.22.2.2. Baseline Configuration by Platform

2.22.2.2.1. Windows

2.22.2.2.2. Linux

2.22.2.2.3. VMware

2.22.3. AVAILABILITY OF A GUEST OS

2.22.3.1. High availability should be used where the goal is to minimize the impact of system downtime

2.22.3.2. Fault tolerance should be used where the goal is to eliminate system downtime as a threat to system availability altogether

2.23. Managing the Logical Infrastructure for Cloud Environments

2.23.1. ACCESS CONTROL FOR REMOTE ACCESS

2.23.1.1. Key benefits of a remote access solution for the cloud can include

2.23.1.1.1. Secure access without exposing the privileged credential to the end user, eliminating the risk of credential exploitation or key logging.

2.23.1.1.2. Accountability of who is accessing the datacenter remotely with a tamper-proof audit trail.

2.23.1.1.3. Session control over who can access, enforcement of workflows such as managerial approval, ticketing integration, session duration limitation, and automatic termination when idle.

2.23.1.1.4. Real-time monitoring to view privileged activities as they are happening or as a recorded playback for forensic analysis. Sessions can be remotely terminated or intervened with when necessary for more efficient and secure IT compliance and cyber security operations.

2.23.1.1.5. Secure isolation between the remote user’s desktop and the target system they are connecting to so that any potential malware does not spread to the target systems.

2.23.2. OS BASELINE COMPLIANCE MONITORING AND REMEDIATION

2.23.3. BACKING UP AND RESTORING THE GUEST OS CONFIGURATION

2.24. Implementation of Network Security Controls

2.24.1. LOG CAPTURE AND ANALYSIS

2.24.1.1. Log data needs to be collected and analyzed for both the hosts and the guests

2.24.1.2. Centralization and offsite storage of log data can prevent tampering provided the appropriate access controls and monitoring systems are put in place.

2.24.2. MANAGEMENT PLAN IMPLEMENTATION THROUGH THE MANAGEMENT PLANE

2.24.3. ENSURING COMPLIANCE WITH REGULATIONS AND CONTROLS

2.24.3.1. Establishing explicit, comprehensive SLAs for security, continuity of operations, and service quality is key for any organization.

2.24.3.2. Compliance responsibilities of the provider and the customer should be clearly delineated in contracts and SLAs.

2.24.3.3. Consider the provider and customers’ geographic locations.

2.24.3.4. Certain agreements focusing on on-premise service provisioning may be in place but not structured appropriately to encompass a full cloud services solution

2.25. Using an IT Service Management (ITSM) Solution

2.25.1. Ensure portfolio management, demand management, and financial management are all working together for efficient service delivery to customers and effective charging for services if appropriate

2.25.2. Involve all the people and systems necessary to create alignment and ultimately success

2.26. Considerations for Shadow IT

2.26.1. Backup: 44% of shadow IT expenditures

2.26.2. File-sharing software: 36% of shadow IT expenditures

2.26.3. Archiving: 33% of shadow IT expenditures

2.27. Operations Management

2.27.1. INFORMATION SECURITY MANAGEMENT

2.27.1.1. Security management

2.27.1.2. Security policy

2.27.1.3. Information security organization

2.27.1.4. Asset management

2.27.1.5. Human resources security

2.27.1.6. Physical and environmental security

2.27.1.7. Communications and operations management

2.27.1.8. Access control

2.27.1.9. Information systems acquisition, development, and maintenance

2.27.1.10. Provider and customer responsibilities

2.27.2. CONFIGURATION MANAGEMENT

2.27.2.1. The development and implementation of new configurations; they should apply to the hardware and software configurations of the cloud environment

2.27.2.2. Quality evaluation of configuration changes and compliance with established security baselines

2.27.2.3. Changing systems, including testing and deployment procedures; they should include adequate oversight of all configuration changes

2.27.2.4. The prevention of any unauthorized changes in system configurations

2.27.3. CHANGE MANAGEMENT

2.27.3.1. Change-Management Objectives

2.27.3.1.1. Respond to a customer’s changing business requirements while maximizing value and reducing incidents, disruption, and re-work.

2.27.3.1.2. Respond to business and IT requests for change that will align services with business needs.

2.27.3.1.3. Ensure that changes are recorded and evaluated.

2.27.3.1.4. Ensure that authorized changes are prioritized, planned, tested, implemented, documented, and reviewed in a controlled manner.

2.27.3.1.5. Ensure that all changes to configuration items are recorded in the configuration management system.

2.27.3.1.6. Optimize overall business risk. It is often correct to minimize business risk, but sometimes it is appropriate to knowingly accept a risk because of the potential benefit.

2.27.3.2. Change-Management Process

2.27.3.2.1. The development and acquisition of new infrastructure and software

2.27.3.2.2. Quality evaluation of new software and compliance with established security baselines

2.27.3.2.3. Changing systems, including testing and deployment procedures; they should include adequate oversight of all changes

2.27.3.2.4. Preventing the unauthorized installation of software and hardware

2.27.4. INCIDENT MANAGEMENT

2.27.4.1. Event vs. Incidents

2.27.4.1.1. An event is defined as a change of state that has significance for the management of an IT service or other configuration item. The term can also be used to mean an alert or notification created by an IT service, configuration item, or monitoring tool. Events often require IT operations staff to take actions and lead to incidents being logged.

2.27.4.1.2. An incident is defined as an unplanned interruption to an IT service or reduction in the quality of an IT service.

2.27.4.2. Purpose of Incident Response

2.27.4.2.1. Restore normal service operation as quickly as possible

2.27.4.2.2. Minimize the adverse impact on business operations

2.27.4.2.3. Ensure service quality and availability are maintained

2.27.4.3. Objectives of Incident Response

2.27.4.3.1. Ensure that standardized methods and procedures are used for efficient and prompt response, analysis, documentation, ongoing management, and reporting of incidents

2.27.4.3.2. Increase visibility and communication of incidents to business and IT support staff

2.27.4.3.3. Enhance business perception of IT by using a professional approach in quickly resolving and communicating incidents when they occur

2.27.4.3.4. Align incident management activities with those of the business

2.27.4.3.5. Maintain user satisfaction

2.27.4.4. Incident Management Plan

2.27.4.4.1. Definitions of an incident by service type or offering

2.27.4.4.2. Customer and provider roles and responsibilities for an incident

2.27.4.4.3. Incident management process from detection to resolution

2.27.4.4.4. Response requirements

2.27.4.4.5. Media coordination

2.27.4.4.6. Legal and regulatory requirements such as data breach notification

2.27.4.5. Incident Classification

2.27.4.5.1. Impact = Effect upon the business

2.27.4.5.2. Urgency = Extent to which the resolution can bear delay

2.27.4.5.3. Priority = Urgency x Impact
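
A minimal sketch of one conventional priority matrix built from this formula, where 1 is the highest rating on each scale (the scale labels are illustrative).

```python
# Conventional ITIL-style scales where 1 is highest; Priority = Urgency x
# Impact, so the lowest product is handled first.
IMPACT = {"enterprise-wide": 1, "department": 2, "single user": 3}
URGENCY = {"work blocked": 1, "degraded": 2, "cosmetic": 3}

def priority(impact: str, urgency: str) -> int:
    return IMPACT[impact] * URGENCY[urgency]

print(priority("enterprise-wide", "work blocked"))  # 1 -> highest priority
print(priority("single user", "cosmetic"))          # 9 -> lowest priority
```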

2.27.5. PROBLEM MANAGEMENT

2.27.5.1. A problem is the unknown cause of one or more incidents, often identified as a result of multiple similar incidents.

2.27.5.2. A known error is an identified root cause of a problem.

2.27.5.3. A workaround is a temporary way of overcoming technical difficulties (i.e., incidents or problems).

2.27.6. RELEASE AND DEPLOYMENT MANAGEMENT

2.27.6.1. Define and agree upon deployment plans

2.27.6.2. Create and test release packages

2.27.6.3. Ensure integrity of release packages

2.27.6.4. Record and track all release packages in the Definitive Media Library (DML)

2.27.6.5. Manage stakeholders

2.27.6.6. Check delivery of utility and warranty (utility + warranty = value in the mind of the customer)

2.27.6.7. Utility is the functionality offered by a product or service to meet a specific need; it’s what the service does.

2.27.6.8. Warranty is the assurance that a product or service will meet agreed-upon requirements (SLA); it’s how the service is delivered.

2.27.6.9. Manage risks

2.27.6.10. Ensure knowledge transfer

2.27.7. SERVICE LEVEL MANAGEMENT

2.27.7.1. Service level agreements (SLAs) are negotiated with the customers.

2.27.7.2. Operational level agreements (OLAs) are SLAs negotiated between internal business units within the enterprise.

2.27.7.3. Underpinning Contracts (UCs) are external contracts negotiated between the organization and vendors or suppliers.

2.27.8. AVAILABILITY MANAGEMENT

2.27.9. CAPACITY MANAGEMENT

2.27.10. BUSINESS CONTINUITY MANAGEMENT

2.27.10.1. The difference between BC and BCM https://www.iso.org/obp/ui/#iso:std:iso:22301:ed-1:v2:en

2.27.10.1.1. Business continuity (BC) is defined as the capability of the organization to continue delivery of products or services at acceptable predefined levels following a disruptive incident. (Source: ISO 22301:2012)

2.27.10.1.2. Business continuity management (BCM) is defined as a holistic management process that identifies potential threats to an organization and the impacts to business operations those threats, if realized, might cause, and that provides a framework for building organizational resilience with the capability of an effective response that safeguards the interests of its key stakeholders, reputation, brand, and value-creating activities. (Source: ISO 22301:2012)

2.27.10.2. Continuity Management Plan

2.27.10.2.1. Required capability and capacity of backup systems

2.27.10.2.2. Trigger events to implement the plan

2.27.10.2.3. Clearly defined roles and responsibilities by name and title

2.27.10.2.4. Clearly defined continuity and recovery procedures

2.27.10.2.5. Notification requirements

2.27.11. CONTINUAL SERVICE IMPROVEMENT (CSI) MANAGEMENT

2.27.12. HOW MANAGEMENT PROCESSES RELATE TO EACH OTHER

2.27.12.1. Release and Deployment Management and Change Management

2.27.12.2. Release and Deployment Management Role and Incident and Problem Management

2.27.12.3. Release and Deployment Management and Configuration Management

2.27.12.4. Release and Deployment Management Is Related to Availability Management

2.27.12.5. Release and Deployment Management and the Help/Service Desk

2.27.12.6. Configuration Management and Availability Management

2.27.12.7. Configuration Management and Change Management

2.27.12.8. Service Level Management and Change Management

2.27.13. INCORPORATING MANAGEMENT PROCESSES

2.28. Managing Risk in Logical and Physical Infrastructures http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-39.pdf

2.28.1. FRAMING RISK

2.28.2. RISK ASSESSMENT

2.28.2.1. Risk

2.28.2.1.1. Threats to organizations (i.e., operations, assets, or individuals) or threats directed through organizations against other organizations

2.28.2.1.2. Vulnerabilities internal and external to organizations

2.28.2.1.3. The harm (i.e., adverse impact) that may occur given the potential for threats exploiting vulnerabilities

2.28.2.1.4. The likelihood that harm will occur

2.28.2.2. Conducting a Risk Assessment

2.28.2.2.1. Qualitative Risk Assessment

2.28.2.2.2. Quantitative assessments (see the ALE sketch after this list)

2.28.2.2.3. Identifying Vulnerabilities

2.28.2.2.4. Identifying Threats

2.28.2.2.5. Selecting Tools and Techniques for Risk Assessment

2.28.2.2.6. Likelihood Determination

2.28.2.2.7. Determination of Impact

2.28.2.2.8. Determination of Risk

2.28.2.2.9. Critical Aspects of Risk Assessment: at a minimum, cover the following
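
For the quantitative approach flagged earlier in this list, the standard formulas are SLE = asset value x exposure factor and ALE = SLE x ARO. A minimal worked sketch with assumed figures.

```python
# Standard quantitative risk formulas:
#   SLE (Single Loss Expectancy)     = Asset Value x Exposure Factor
#   ALE (Annualized Loss Expectancy) = SLE x ARO (Annualized Rate of Occurrence)
asset_value = 500_000      # value of the asset at risk (hypothetical)
exposure_factor = 0.25     # fraction of value lost per incident
aro = 0.2                  # expected incidents per year (one per five years)

sle = asset_value * exposure_factor
ale = sle * aro
print(f"SLE = {sle:,.0f}, ALE = {ale:,.0f}")  # SLE = 125,000, ALE = 25,000
```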

2.28.3. RISK RESPONSE

2.28.3.1. Developing alternative courses of action for responding to risk

2.28.3.2. Evaluating the alternative courses of action

2.28.3.3. Determining appropriate courses of action consistent with organizational risk tolerance

2.28.3.4. Implementing risk responses based on selected courses of action

2.28.3.4.1. Risk can be accepted

2.28.3.4.2. Risk can be avoided

2.28.3.4.3. Risk can be transferred

2.28.3.4.4. Risk can be mitigated

2.28.4. RISK MONITORING

2.28.4.1. Determine the ongoing effectiveness of risk responses (consistent with the organizational risk frame)

2.28.4.2. Identify risk-impacting changes to organizational information systems and the environments in which the systems operate

2.28.4.3. Verify that planned risk responses are implemented and information security requirements derived from and traceable to organizational missions/business functions, federal legislation, directives, regulations, policies, standards, and guidelines are satisfied

2.29. Collection and Preservation of Digital Evidence

2.29.1. CLOUD FORENSICS CHALLENGES

2.29.1.1. Control over data

2.29.1.2. Multi-tenancy

2.29.1.3. Data volatility

2.29.1.3.1. Chain of custody

2.29.1.4. Evidence acquisition

2.29.2. DATA ACCESS WITHIN SERVICE MODELS

2.29.2.1. SaaS

2.29.2.1.1. Access Control

2.29.2.2. PaaS

2.29.2.2.1. Data

2.29.2.2.2. Application

2.29.2.2.3. Access Control

2.29.2.3. IaaS

2.29.2.3.1. OS

2.29.2.3.2. Middleware

2.29.2.3.3. Runtime

2.29.2.3.4. Data

2.29.2.3.5. Application

2.29.2.3.6. Access Control

2.29.2.4. Local

2.29.2.4.1. Networking

2.29.2.4.2. Storage

2.29.2.4.3. Servers

2.29.2.4.4. Virtualization

2.29.2.4.5. OS

2.29.2.4.6. Middleware

2.29.2.4.7. Runtime

2.29.2.4.8. Data

2.29.2.4.9. Application

2.29.2.4.10. Access Control

2.29.3. FORENSICS READINESS

2.29.3.1. Performing regular backups of systems and maintaining previous backups for a specific period of time

2.29.3.2. Enabling auditing on workstations, servers, and network devices

2.29.3.3. Forwarding audit records to secure centralized log servers

2.29.3.4. Configuring mission-critical applications to perform auditing, including recording all authentication attempts

2.29.3.5. Maintaining a database of file hashes for the files of common OS and application deployments, and using file integrity checking software on particularly important assets (see the sketch after this list)

2.29.3.6. Maintaining records (e.g., baselines) of network and system configurations

2.29.3.7. Establishing data-retention policies that support performing historical reviews of system and network activity, complying with requests or requirements to preserve data relating to ongoing litigation and investigations, and destroying data that is no longer needed
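
A minimal sketch of the file-hash database and integrity check called for above (paths and filenames are illustrative).

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_database(root: Path, db_path: Path) -> None:
    """Record a known-good hash for every file under root."""
    db = {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}
    db_path.write_text(json.dumps(db, indent=2))

def verify(db_path: Path) -> list[str]:
    """Return the files whose contents changed since the baseline."""
    db = json.loads(db_path.read_text())
    return [p for p, known in db.items()
            if not Path(p).is_file() or hash_file(Path(p)) != known]

# build_database(Path("/etc"), Path("hashes.json")); later,
# verify(Path("hashes.json")) lists files altered since the baseline.
```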

2.29.4. PROPER METHODOLOGIES FOR FORENSIC COLLECTION OF DATA

2.29.4.1. Collection

2.29.4.1.1. Data Acquisition

2.29.4.1.2. Challenges

2.29.4.1.3. Additional Steps

2.29.4.1.4. Collecting Data from a Host OS

2.29.4.1.5. Collecting Data from a Guest OS

2.29.4.1.6. Collecting Metadata

2.29.4.2. Examination

2.29.4.2.1. Bypassing or mitigating OS or application features that obscure data and code, such as data compression, encryption, and access control mechanisms

2.29.4.2.2. Using text and pattern searches to identify pertinent data, such as finding documents that mention a particular subject or person or identifying e-mail log entries for a particular e-mail address

2.29.4.2.3. Using a tool that can determine the type of contents of each data file, such as text, graphics, music, or a compressed file archive

2.29.4.2.4. Using knowledge of data file types to identify files that merit further study, as well as to exclude files that are of no interest to the examination

2.29.4.2.5. Using any databases containing information about known files to include or exclude files from further consideration

2.29.4.3. Analysis

2.29.4.3.1. Should include identifying people, places, items, and events and determining how these elements are related so that a conclusion can be reached. Often, this effort will include correlating data among multiple sources.

2.29.4.4. Reporting

2.29.4.4.1. Alternative explanations

2.29.4.4.2. Audience consideration

2.29.4.4.3. Actionable information

2.29.5. THE CHAIN OF CUSTODY

2.29.5.1. When an item is gathered as evidence, that item should be recorded in an evidence log with a description, the signature of the individual gathering the item, a signature of a second individual witnessing the item being gathered, and an accurate time and date.

2.29.5.2. Whenever that item is stored, the location in which the item is stored should be recorded, along with the item’s condition. The signatures of the individual placing the item in storage and of the individual responsible for that storage location should also be included, along with an accurate time and date.

2.29.5.3. Whenever an item is removed from storage, it should be recorded, along with the item’s condition and the signatures of the person removing the item and the person responsible for that storage location, along with an accurate time and date.

2.29.5.4. Whenever an item is transported, that item’s point of origin, method of transport, and the item’s destination should be recorded, as well as the item’s condition at origination and destination. Also record the signatures of the people performing the transportation and a responsible party at the origin and destination witnessing its departure and arrival, along with accurate times and dates for each.

2.29.5.5. Whenever any action, process, test, or other handling of an item is to be performed, a description of all such actions to be taken, and the person(s) who will perform such actions, should be recorded. The signatures of the person taking the item to be tested and of the person responsible for the item’s storage should be recorded, along with an accurate time and date.

2.29.5.6. Whenever any action, process, test, or other handling of an item is performed, record a description of all such actions, along with accurate times and dates for each. Also record the person performing such actions, any results or findings of such actions, and the signatures of at least one person of responsibility as witness that the actions were performed as described, along with the resulting findings as described.
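
A minimal sketch of an evidence-log entry capturing the fields required above; the hash chaining is an illustrative extra that makes later alteration of the log detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_custody_event(logbook: list, item_id: str, action: str,
                      actor: str, witness: str, condition: str) -> None:
    """Append one tamper-evident entry; each entry hashes the previous one,
    so any later alteration of the record is detectable."""
    entry = {
        "item": item_id,
        "action": action,       # gathered / stored / removed / transported / tested
        "actor": actor,         # person performing the action ("signature")
        "witness": witness,     # second individual witnessing the action
        "condition": condition,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": logbook[-1]["entry_hash"] if logbook else None,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    logbook.append(entry)

logbook: list = []
log_custody_event(logbook, "HDD-0042", "gathered", "J. Ortiz", "M. Chen",
                  "intact, sealed")
log_custody_event(logbook, "HDD-0042", "stored", "M. Chen", "J. Ortiz",
                  "locker E-7, sealed")
```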

2.29.6. EVIDENCE MANAGEMENT

2.30. Managing Communications with Relevant Parties

2.30.1. THE FIVE WS AND ONE H

2.30.1.1. Who: Who is the target of the communication?

2.30.1.2. What: What is the communication designed to achieve?

2.30.1.3. When: When is the communication best delivered/most likely to reach its intended target(s)?

2.30.1.4. Where: Where is the communication pathway best managed from?

2.30.1.5. Why: Why is the communication being initiated in the first place?

2.30.1.6. How: How is the communication being transmitted and how is it being received?

2.30.2. COMMUNICATING WITH VENDORS/PARTNERS

2.30.2.1. Communication paths

2.30.2.2. Emergency communication paths should be established and tested with all vendors.

2.30.2.3. Categorizing, or ranking, each vendor/supplier on a defined scale is critical.

2.30.3. COMMUNICATING WITH CUSTOMERS

2.30.3.1. SLAs are a form of communication that clarify responsibilities

2.30.3.1.1. What percentage of the time services will be available

2.30.3.1.2. The number of users that can be served simultaneously

2.30.3.1.3. Specific performance benchmarks to which actual performance will be periodically compared

2.30.3.1.4. The schedule for notification in advance of network changes that may affect users

2.30.3.1.5. Help/service desk response time for various classes of problems

2.30.3.1.6. Remote access availability

2.30.3.1.7. Usage statistics that will be provided

2.30.4. COMMUNICATING WITH REGULATORS

2.30.5. COMMUNICATING WITH OTHER STAKEHOLDERS

3. Legal and Compliance

3.1. International Legislation Conflicts

3.1.1. copyright law

3.1.2. intellectual property

3.1.3. violation of patents

3.1.4. breaches of data protection

3.1.5. legislative requirements

3.1.6. privacy-related components

3.2. Legislative Concepts

3.2.1. International Law

3.2.1.1. International conventions, whether general or particular, establishing rules expressly recognized by contesting states

3.2.1.2. International custom, as evidence of a general practice accepted as law

3.2.1.3. The general principles of law recognized by civilized nations

3.2.1.4. Judicial decisions and the teachings of the most highly qualified publicists of the various nations, as subsidiary means for the determination of rules of law

3.2.2. State Law

3.2.3. Copyright/Piracy Laws

3.2.4. Enforceable Governmental Request(s)

3.2.5. Intellectual Property Rights

3.2.6. Privacy Laws

3.2.7. The Doctrine of the Proper Law

3.2.8. Criminal Law

3.2.9. Tort Law

3.2.9.1. It seeks to compensate victims for injuries suffered by the culpable action or inaction of others.

3.2.9.2. It seeks to shift the cost of such injuries to the person or persons who are legally responsible for inflicting them.

3.2.9.3. It seeks to discourage injurious, careless, and risky behavior in the future.

3.2.9.4. It seeks to vindicate legal rights and interests that have been compromised, diminished, or emasculated.

3.2.10. Restatement (Second) Conflict of Laws

3.3. Frameworks and Guidelines Relevant to Cloud Computing

3.3.1. ORGANIZATION FOR ECONOMIC COOPERATION AND DEVELOPMENT (OECD)—PRIVACY & SECURITY GUIDELINES

3.3.1.1. National privacy strategies

3.3.1.2. Privacy management programs

3.3.1.3. Data security breach notification

3.3.2. ASIA PACIFIC ECONOMIC COOPERATION (APEC) PRIVACY FRAMEWORK

3.3.2.1. A framework made up of four parts:

3.3.2.1.1. Part I: Preamble

3.3.2.1.2. Part II: Scope

3.3.2.1.3. Part III: Information Privacy Principles

3.3.2.1.4. Part IV: Implementation

3.3.2.2. The nine principles

3.3.2.2.1. Preventing Harm

3.3.2.2.2. Notice

3.3.2.2.3. Collection Limitation

3.3.2.2.4. Use of Personal Information

3.3.2.2.5. Choice

3.3.2.2.6. Integrity of Personal Information

3.3.2.2.7. Security Safeguards

3.3.2.2.8. Access and Correction

3.3.2.2.9. Accountability

3.3.3. EU DATA PROTECTION DIRECTIVE

3.3.3.1. It does not apply to the processing of data:

3.3.3.1.1. By a natural person in the course of purely personal or household activities

3.3.3.1.2. In the course of an activity that falls outside the scope of community law, such as operations concerning public safety, defense or state security

3.3.3.2. The quality of the data

3.3.3.3. The legitimacy of data processing

3.3.3.3.1. For the performance of a contract to which the data subject is party

3.3.3.3.2. For compliance with a legal obligation to which the controller is subject

3.3.3.3.3. In order to protect the vital interests of the data subject

3.3.3.3.4. For the performance of a task carried out in the public interest

3.3.3.3.5. For the purposes of the legitimate interests pursued by the controller

3.3.3.4. Special categories of processing

3.3.3.5. Information to be given to the data subject

3.3.3.6. The data subject’s right of access to data

3.3.3.6.1. Confirmation as to whether or not data relating to him/her is being processed and communication of the data undergoing processing

3.3.3.6.2. The rectification, erasure, or blocking of data whose processing does not comply with the provisions of this directive, in particular because of the incomplete or inaccurate nature of the data, and the notification of these changes to third parties to whom the data has been disclosed

3.3.3.7. Exemptions and restrictions

3.3.3.8. The right to object to the processing of data

3.3.3.9. The confidentiality and security of processing

3.3.3.10. The notification of processing to a supervisory authority

3.3.3.11. Scope

3.3.4. GENERAL DATA PROTECTION REGULATION

3.3.5. EPRIVACY DIRECTIVE

3.4. Common Legal Requirements

3.4.1. United States Federal Laws

3.4.2. United States State Laws

3.4.3. Standards

3.4.4. International Regulations and Regional Regulations

3.4.5. Contractual Obligations

3.4.6. Restrictions of Cross-border Transfers

3.5. Legal Controls and Cloud Providers

3.6. eDiscovery

3.6.1. EDISCOVERY CHALLENGES

3.6.1.1. Is the cloud under your control?

3.6.1.2. Who is controlling or hosting the relevant data?

3.6.1.3. Does this mean that it is under “the provider’s” control?

3.6.2. CONSIDERATIONS AND RESPONSIBILITIES OF EDISCOVERY

3.6.3. REDUCING RISK

3.6.4. CONDUCTING EDISCOVERY INVESTIGATIONS

3.6.4.1. SaaS-based eDiscovery

3.6.4.2. Hosted eDiscovery (provider)

3.6.4.3. Third-party eDiscovery

3.7. Cloud Forensics and ISO/IEC 27050-1

3.8. Protecting Personal Information in the Cloud

3.8.1. PII is “any information about an individual maintained by an agency, including any information that can be used to distinguish or trace an individual’s identity, such as name, Social Security Number, date and place of birth, mother’s maiden name, or biometric records; and any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”

3.8.2. DIFFERENTIATING BETWEEN CONTRACTUAL AND REGULATED PERSONALLY IDENTIFIABLE INFORMATION (PII)

3.8.2.1. Contractual PII

3.8.2.2. Regulated PII

3.8.2.2.1. Reasons for regulation

3.8.2.2.2. Mandatory Breach Reporting

3.8.2.3. Contractual Components

3.8.2.3.1. Scope of processing

3.8.2.3.2. Use of subcontractors

3.8.2.3.3. Removal/deletion of data

3.8.2.3.4. Appropriate/required data security controls

3.8.2.3.5. Location(s) of data

3.8.2.3.6. Return of data/restitution of data

3.8.2.3.7. Audits/right to audit subcontractors

3.8.3. COUNTRY-SPECIFIC LEGISLATION AND REGULATIONS RELATED TO PII/DATA PRIVACY/DATA PROTECTION

3.8.3.1. European Union

3.8.3.1.1. Directive 95/46 EC

3.8.3.1.2. EU General Data Protection Regulation 2012

3.8.3.1.3. United Kingdom and Ireland

3.8.3.2. Argentina

3.8.3.2.1. Argentina’s legislative basis, over and above the constitutional right of privacy, is the Personal Data Protection Act 2000. This act openly tracks the EU directive, resulting in the EU commission’s approval of Argentina as a country offering an adequate level of data protection. This means personal data can be transferred between Europe and Argentina as freely as if Argentina were part of the EEA.

3.8.3.3. United States

3.8.3.3.1. The Federal Trade Commission (FTC) and other associated U.S. regulators do hold that the applicable U.S. laws and regulations apply to the data after it leaves its jurisdiction, and the U.S. regulated entities remain liable for the following:

3.8.3.3.2. Safe Harbor

3.8.3.3.3. EU View on U.S. Privacy

3.8.3.3.4. The Health Insurance Portability and Accountability Act of 1996 (HIPAA)

3.8.3.3.5. The Gramm-Leach-Bliley Act (GLBA)

3.8.3.3.6. The Stored Communication Act

3.8.3.3.7. The Sarbanes-Oxley Act (SOX)

3.8.3.4. Australia and New Zealand

3.8.3.4.1. Regulations in Australia and New Zealand make it extremely difficult for enterprises to move sensitive information to cloud providers that store data outside of Australian/New Zealand borders. The Office of the Australian Information Commissioner (OAIC) provides oversight and governance on data privacy regulations of sensitive personal information.

3.8.3.4.2. The Australian National Privacy Act of 1988 provides guidance and regulates how organizations collect, store, secure, process, and disclose personal information. It lists the National Privacy Principles (NPP) to ensure that organizations holding personal information handle and process it responsibly.

3.8.3.4.3. Within the privacy principles, the following components are addressed for personal information:

3.8.3.4.4. Since March 2014, the revised Privacy Amendment Act has introduced a set of new principles focusing on the handling of personal information, now called the Australian Privacy Principles (APPs).

3.8.3.5. Russia

3.8.3.5.1. Data Localization Law valid from September 1, 2015

3.8.3.6. Switzerland

3.8.3.6.1. Data Processing by Third Parties

3.8.3.6.2. Transferring Personal Data Abroad

3.8.3.6.3. Data Security

3.9. Auditing in the Cloud

3.9.1. INTERNAL AND EXTERNAL AUDITS

3.9.1.1. Internal audit acts as a third line of defense after the business/IT functions and risk management functions through

3.9.1.1.1. Independent verification of the cloud program’s effectiveness

3.9.1.1.2. Providing assurance to the board and risk management function(s) of the organization with regard to the cloud risk exposure

3.9.1.1.3. Performing a number of cloud audits, such as

3.9.1.2. Another potential source of independent verification on internal controls is audits performed by external auditors. An external auditor’s scope varies greatly from that of an internal audit; the external audit usually focuses on the internal controls over financial reporting.

3.9.2. TYPES OF AUDIT REPORTS

3.9.2.1. Service Organization Controls 1 (SOC 1)

3.9.2.1.1. Users

3.9.2.1.2. Concern

3.9.2.1.3. Detail Required

3.9.2.2. Service Organization Controls 2 (SOC 2)

3.9.2.2.1. Users

3.9.2.2.2. Concern

3.9.2.2.3. Detail Required

3.9.2.2.4. Type 1

3.9.2.2.5. Type 2

3.9.2.3. Service Organization Controls 3 (SOC 3)

3.9.2.3.1. Users

3.9.2.3.2. Concern

3.9.2.3.3. Detail Required

3.9.2.4. Agreed Upon Procedures (AUP)

3.9.2.5. Cloud Security Alliance’s Security, Trust and Assurance Registry (STAR) program

3.9.2.6. EuroCloud Star Audit (ESCA) program

3.9.3. IMPACT OF REQUIREMENT PROGRAMS BY THE USE OF CLOUD SERVICES

3.9.3.1. Due to the nature of the cloud, auditors need to re-think how they audit and obtain evidence to support their audit.

3.9.3.1.1. What is the universal population to sample from?

3.9.3.1.2. What would be the sampling methods in a highly dynamic environment?

3.9.3.1.3. How do you know that the virtualized server you are auditing was the same server over time?

3.9.4. ASSURING CHALLENGES OF THE CLOUD AND VIRTUALIZATION

3.9.4.1. In order to obtain assurance and conduct appropriate auditing on the virtual machines/hypervisor, the CSP must:

3.9.4.1.1. Understand the virtualization management architecture

3.9.4.1.2. Verify systems are up to date and hardened according to best-practice standards

3.9.4.1.3. Verify configuration of hypervisor according to organizational policy

3.9.5. INFORMATION GATHERING

3.9.5.1. Initial scoping of requirements

3.9.5.2. Market analysis

3.9.5.3. Review of services

3.9.5.4. Solutions assessment

3.9.5.5. Feasibility study

3.9.5.6. Supplementary evidence

3.9.5.7. Competitor analysis

3.9.5.8. Risk review/risk assessment

3.9.5.9. Auditing

3.9.5.10. Contract/service level agreement review

3.9.6. AUDIT SCOPE

3.9.6.1. Audit Scope Statements

3.9.6.1.1. General statement of focus and objectives

3.9.6.1.2. Scope of audit (including exclusions)

3.9.6.1.3. Type of audit (certification, attestation, and so on)

3.9.6.1.4. Security assessment requirements

3.9.6.1.5. Assessment criteria (including ratings)

3.9.6.1.6. Acceptance criteria

3.9.6.1.7. Deliverables

3.9.6.1.8. Classification (confidential, highly confidential, secret, top secret, public, and so on)

3.9.6.1.9. Circulation list

3.9.6.1.10. Key individuals associated with the audit

3.9.6.2. Audit Scope Restrictions

3.9.6.2.1. Typically specify operational components, along with asset restrictions, which include acceptable times and time periods (e.g., time of day) and accepted and non-accepted testing methods (e.g., no destructive testing).

3.9.6.2.2. Indemnification of any liability for systems performance degradation, along with any other adverse effects, will be required where technical testing is being performed.

3.9.6.3. Gap Analysis

3.9.6.3.1. Stages that are carried out prior to commencing a gap analysis review:

3.9.6.3.2. The value of such an assessment is:

3.9.7. CLOUD AUDITING GOALS

3.9.7.1. Ability to understand, measure, and communicate the effectiveness of cloud service provider controls and security to organizational stakeholders/executives

3.9.7.2. Proactively identify any control weaknesses or deficiencies, while communicating these both internally and to the cloud service provider

3.9.7.3. Obtain levels of assurance and verification as to the cloud service provider’s ability to meet the SLA and contractual requirements, without relying solely on the provider’s own reports

3.9.8. AUDIT PLANNING

3.9.8.1. Defining Audit Objectives

3.9.8.1.1. Document and define audit objectives

3.9.8.1.2. Define audit outputs and format

3.9.8.1.3. Define frequency and audit focus

3.9.8.1.4. Define the number of auditors and subject matter experts required

3.9.8.1.5. Ensure alignment with audit/risk management processes (internal)

3.9.8.2. Defining Audit Scope

3.9.8.2.1. Ensure the core focus and boundaries to which the audit will operate

3.9.8.2.2. Document list of current services/resources utilized from cloud provider(s)

3.9.8.2.3. Define key components of services (storage, utilization, processing, etc.)

3.9.8.2.4. Define cloud services to be audited (IaaS, PaaS, and SaaS)

3.9.8.2.5. Define geographic locations permitted/required

3.9.8.2.6. Define locations for audits to be undertaken

3.9.8.2.7. Define key stages to audit (information gathering, workshops, gap analysis, verification evidence, etc.)

3.9.8.2.8. Document key points of contact within the cloud service provider as well as internally

3.9.8.2.9. Define escalation and communication points

3.9.8.2.10. Define criteria and metrics to which the cloud service provider will be assessed

3.9.8.2.11. Ensure criteria is consistent with the SLA and contract

3.9.8.2.12. Factor in “busy periods” or organizational periods (financial year-end, launches, new services, etc.)

3.9.8.2.13. Ensure findings captured in previous reports or stated by the cloud service provider are actioned/verified

3.9.8.2.14. Ensure previous non-conformities/high-risk items are re-assessed/verified as part of the audit process

3.9.8.2.15. Ensure any operational or business changes internally have been captured as part of the audit plan (reporting changes, governance, etc.)

3.9.8.2.16. Agree on final reporting dates (conscious of business operations and operational availability)

3.9.8.2.17. Ensure findings are captured and communicated back to relevant business stakeholders/executives

3.9.8.2.18. Confirm report circulation/target audience

3.9.8.2.19. Document risk management/risk treatment processes to be utilized as part of any remediation plans

3.9.8.2.20. Agree on a ticketing/auditable process for remediation actions (ensuring traceability and accountability)

3.9.8.3. Conducting the Audit

3.9.8.3.1. Adequate staff

3.9.8.3.2. Adequate tools

3.9.8.3.3. Schedule

3.9.8.3.4. Supervision of audit

3.9.8.3.5. Reassessment

3.9.8.4. Refining the Audit Process/Lessons Learned

3.9.8.4.1. Ensure that approach and scope are still relevant

3.9.8.4.2. When any provider changes have occurred, these should be factored in

3.9.8.4.3. Ensure reporting details are sufficient to enable clear, concise, and appropriate business decisions to be made

3.9.8.4.4. Determine opportunities for reporting improvement/enhancement

3.9.8.4.5. Ensure that duplication of efforts is minimal (crossover or duplication with other audit/risk efforts)

3.9.8.4.6. Ensure audit criteria and scope are still accurate (factoring in business changes)

3.9.8.4.7. Have a clear understanding of what levels of information/details could be collected using automated methods/mechanisms

3.9.8.4.8. Ensure the right skillsets are available and utilized to provide accurate results and reporting

3.9.8.4.9. Ensure the Plan, Do, Check, Act (PDCA) cycle is also applied to the cloud service provider audit planning/processes

3.10. Standard Privacy Requirements (ISO/IEC 27018)

3.10.1. Consent

3.10.2. Control

3.10.3. Transparency

3.10.4. Communication

3.10.5. Independent and yearly audit

3.11. Generally Accepted Privacy Principles (GAPP)

3.11.1. The entity defines, documents, communicates, and assigns accountability for its privacy policies and procedures.

3.11.2. The entity provides notice about its privacy policies and procedures and identifies the purposes for which personal information is collected, used, retained, and disclosed.

3.11.3. The entity describes the choices available to the individual and obtains implicit or explicit consent with respect to the collection, use, and disclosure of personal information.

3.11.4. The entity collects personal information only for the purposes identified in the notice.

3.11.5. The entity limits the use of personal information to the purposes identified in the notice and for which the individual has provided implicit or explicit consent. The entity retains personal information for only as long as necessary to fulfill the stated purposes or as required by law or regulations and thereafter appropriately disposes of such information.

3.11.6. The entity provides individuals with access to their personal information for review and update.

3.11.7. The entity discloses personal information to third parties only for the purposes identified in the notice and with the implicit or explicit consent of the individual.

3.11.8. The entity protects personal information against unauthorized access (both physical and logical).

3.11.9. The entity maintains accurate, complete, and relevant personal information for the purposes identified in the notice.

3.11.10. The entity monitors compliance with its privacy policies and procedures and has procedures to address privacy-related inquiries, complaints, and disputes.

3.12. Internal Information Security Management System (ISMS)

3.12.1. THE VALUE OF AN ISMS

3.12.1.1. An ISMS ensures that a structured, measured, and ongoing view of security is taken across an organization, allowing security impacts to be assessed and risk-based decisions to be made. Of crucial importance is the “top-down” sponsorship and endorsement of information security across the business, highlighting its overall value and necessity.

3.12.2. INTERNAL INFORMATION SECURITY CONTROLS SYSTEM: ISO 27001:2013 DOMAINS

3.12.2.1. A.5—Security Policy Management

3.12.2.2. A.6—Corporate Security Management

3.12.2.3. A.7—Personnel Security Management

3.12.2.4. A.8—Organizational Asset Management

3.12.2.5. A.9—Information Access Management

3.12.2.6. A.10—Cryptography Policy Management

3.12.2.7. A.11—Physical Security Management

3.12.2.8. A.12—Operational Security Management

3.12.2.9. A.13—Network Security Management

3.12.2.10. A.14—System Security Management

3.12.2.11. A.15—Supplier Relationship Management

3.12.2.12. A.16—Security Incident Management

3.12.2.13. A.17—Security Continuity Management

3.12.2.14. A.18—Security Compliance Management

3.12.3. REPEATABILITY AND STANDARDIZATION

3.12.3.1. The existence and continued use of an internal ISMS will assist in standardizing and measuring security across the organization and beyond its perimeter. Given that cloud computing may be both an internal and an external solution for the organization, it is strongly recommended that the ISMS have sight of, and factor in, reliance and dependencies on third parties for the delivery of business services.

3.13. Implementing Policies

3.13.1. ORGANIZATIONAL POLICIES

3.13.1.1. form the basis of functional policies that can reduce the likelihood of:

3.13.1.1.1. Financial loss

3.13.1.1.2. Irretrievable loss of data

3.13.1.1.3. Reputational damage

3.13.1.1.4. Regulatory and legal consequences

3.13.1.1.5. Misuse/abuse of systems and resources

3.13.2. FUNCTIONAL POLICIES

3.13.2.1. Information security policy

3.13.2.2. Information technology policy

3.13.2.3. Data classification policy

3.13.2.4. Acceptable usage policy

3.13.2.5. Network security policy

3.13.2.6. Internet use policy

3.13.2.7. E-mail use policy

3.13.2.8. Password policy

3.13.2.9. Virus and spam policy

3.13.2.10. Software security policy

3.13.2.11. Data backup policy

3.13.2.12. Disaster recovery policy

3.13.2.13. Remote access policy

3.13.2.14. Segregation of duties policy

3.13.2.15. Third-party access policy

3.13.2.16. Incident response/incident management policy

3.13.2.17. Human resources security policy

3.13.2.18. Employee background checks/screening policy

3.13.2.19. Legal compliance policy/guidelines

3.13.3. BRIDGING THE POLICY GAPS

3.13.3.1. When policy requirements cannot be fulfilled by cloud-based services, there needs to be an agreed-upon set of mitigating controls or techniques. Avoid revising policies to reduce or lower the requirements wherever possible. All changes and variations to policy should be explicitly listed and accepted by all relevant risk and business stakeholders.

3.14. Identifying and Involving the Relevant Stakeholders

3.14.1. STAKEHOLDER IDENTIFICATION CHALLENGES

3.14.1.1. Defining the enterprise architecture (which can be a sizeable task, if not currently in place)

3.14.1.2. Independently/objectively viewing potential options and solutions (where individuals may be conflicted due to roles/functions)

3.14.1.3. Objectively selecting the appropriate service(s) and provider

3.14.1.4. Engaging with the users and IT personnel who will be impacted, particularly if their job is being altered or removed

3.14.1.5. Identifying direct and indirect costs (training, upskilling, reallocating, new tasks, responsibilities, etc.)

3.14.1.6. Extending risk management and enterprise risk management practices

3.14.2. GOVERNANCE CHALLENGES

3.14.2.1. Audit requirements and extended or additional audit activities

3.14.2.2. Verify all regulatory and legal obligations will be satisfied as part of the NDA/contract

3.14.2.3. Establish reporting and communication lines both internal to the organization and for cloud service provider(s)

3.14.2.4. Ensure that where operational procedures and processes are changed (due to use of cloud services), all documentation and evidence is updated accordingly

3.14.2.5. Ensure all business continuity, incident management/response, and disaster recovery plans are updated to reflect changes and interdependencies

3.14.3. COMMUNICATION COORDINATION with business units should include

3.14.3.1. Information technology

3.14.3.2. Information security

3.14.3.3. Vendor management

3.14.3.4. Compliance

3.14.3.5. Audit

3.14.3.6. Risk

3.14.3.7. Legal

3.14.3.8. Finance

3.14.3.9. Operations

3.14.3.10. Data protection/privacy

3.14.3.11. Executive committee/directors

3.15. Impact of Distributed IT Models

3.15.1. COMMUNICATIONS/CLEAR UNDERSTANDING

3.15.1.1. Traditional IT deployment and operations typically allow a clear line of sight to, or understanding of, the personnel, their roles, functions, and core areas of focus, allowing far more access to individuals, either by name or based on their roles. Such communications allow for collaboration, information sharing, and the availability of relevant details and information when necessary. This can be from an operations, engineering, controls, or development perspective.

3.15.1.2. Distributed IT models challenge and essentially redefine roles, functions, and the ability to have face-to-face communications or direct interactions such as emails, phone calls, or instant messages. Distributed IT models bring structured, regimented, and standardized requests. From a security perspective, this can be seen as an enhancement in many cases, alleviating and removing the opportunity for untracked changes or for bypassing change management controls, along with the risks associated with implementing changes or amendments without proper testing and risk management being taken into account.

3.15.2. COORDINATION/MANAGEMENT OF ACTIVITIES

3.15.2.1. Bringing in an independent and focused group of subject matter experts whose focus is on the delivery of such projects and functionality can make for a swift rollout or deployment.

3.15.3. GOVERNANCE OF PROCESSES/ACTIVITIES

3.15.3.1. Effective governance allows for “peace of mind” and a level of confidence to be established in an organization. This is even more true with distributed IT and the use of IT services or solutions across dispersed organizational boundaries from a variety of users.

3.15.3.2. The IT department may now need to pull information from a number of sources and providers, leading to

3.15.3.2.1. Increased number of sources for information

3.15.3.2.2. Varying levels of cooperation

3.15.3.2.3. Varying levels of information/completeness

3.15.3.2.4. Varying response times and willingness to assist

3.15.3.2.5. Multiple reporting formats/structures

3.15.3.2.6. Lack of cohesion in terms of activities and focus

3.15.3.2.7. Requirement for additional resources/interactions with providers

3.15.3.2.8. Minimal evidence available to support claims/verify information

3.15.3.2.9. Disruption or discontent from internal resources (where job function or role may have undergone change)

3.15.4. COORDINATION IS KEY

3.15.4.1. Interacting with and collecting information from multiple sources requires coordination of efforts, including defining how these processes will be managed from the outset.

3.15.5. SECURITY REPORTING

3.15.5.1. Security reporting should provide an independent view of the security posture of the virtualized machines, in a format that illustrates high, medium, or low risks (typical of audit reports) or, alternatively, is based on industry ratings such as Common Vulnerabilities and Exposures (CVE) identifiers or Common Vulnerability Scoring System (CVSS) scores (a severity-banding sketch follows the next item).

3.15.5.2. Common approaches also include reporting against the OWASP Top 10 and SANS Top 20 listings.
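
As a point of reference for CVSS-based reporting, CVSS v3.x publishes a qualitative severity rating scale over the 0.0–10.0 base-score range (None, Low, Medium, High, Critical). A minimal sketch of that banding; the function name is illustrative:

```python
def cvss_v3_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score to its published qualitative severity band."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(7.5))  # -> High
```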

3.16. Implications of the Cloud to Enterprise Risk Management

3.16.1. RISK PROFILE

3.16.1.1. The risk profile is determined by an organization’s willingness to take risks, as well as the threats to which it is itself exposed. It should identify the level of risk to be accepted, how risks are taken, and how risk-based decision making is performed. Additionally, the risk profile should take into account potential costs and disruptions should one or more risks be exploited.

3.16.2. RISK APPETITE

3.16.2.1. When assessing and measuring the relevant risks in cloud service offerings, it is best to take a systematic, measurable, and pragmatic approach.

3.16.2.2. Many “emerging” or rapid-growth companies will be more likely to take significant risks when utilizing cloud computing services to be “first to market.”

3.16.3. DIFFERENCE BETWEEN DATA OWNER/CONTROLLER AND DATA CUSTODIAN/PROCESSOR

3.16.3.1. The data subject is an individual who is the subject of personal data.

3.16.3.2. The data controller is a person who (either alone or jointly with other persons) determines the purposes for which and the manner in which any personal data are processed.

3.16.3.3. The data processor in relation to personal data is any person (other than an employee of the data controller) who processes the data on behalf of the data controller.

3.16.3.4. Data stewards are commonly responsible for data content, context, and associated business rules.

3.16.3.5. Data custodians are responsible for the safe custody, transport, and storage of the data, and implementation of business rules.

3.16.3.6. Data owners hold the legal rights and complete control over a single piece or set of data elements. Data owners also possess the ability to define distribution and associated policies.

3.16.4. SERVICE LEVEL AGREEMENT (SLA)

3.16.4.1. Should cover at minimum:

3.16.4.1.1. Availability (e.g., 99.99% of services and data)

3.16.4.1.2. Performance (e.g., expected response times vs. maximum response times)

3.16.4.1.3. Security/privacy of the data (e.g., encrypting all stored and transmitted data)

3.16.4.1.4. Logging and reporting (e.g., audit trails of all access and the ability to report on key requirements/indicators)

3.16.4.1.5. Disaster recovery expectations (e.g., worst-case recovery commitment, recovery time objectives [RTO], maximum period of tolerable disruption [MPTD])

3.16.4.1.6. Location of the data (e.g., ability to meet requirements/consistent with local legislation)

3.16.4.1.7. Data format/structure (e.g., data retrievable from the provider in a readable and intelligible format)

3.16.4.1.8. Portability of the data (e.g., ability to move data to a different provider or to multiple providers)

3.16.4.1.9. Identification and problem resolution (e.g., helpline, call center, or ticketing system)

3.16.4.1.10. Change-management process (e.g., changes such as updates or new services)

3.16.4.1.11. Dispute-mediation process (e.g., escalation process and consequences)

3.16.4.1.12. Exit strategy with expectations on the provider to ensure a smooth transition

3.16.4.2. SLA Components

3.16.4.2.1. Uptime Guarantees

3.16.4.2.2. SLA Penalties (an illustrative service-credit calculation follows this list)

3.16.4.2.3. SLA Penalty Exclusions
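
To illustrate how uptime guarantees, penalties, and penalty exclusions interact, the sketch below maps a measured uptime percentage to a service credit. The credit bands and percentages are assumptions for the example; real schedules are defined in the provider’s SLA, and excluded events (e.g., scheduled maintenance) would be removed from the measurement first.

```python
# Hypothetical credit schedule: bands ordered from best to worst uptime.
CREDIT_BANDS = [
    (99.99, 0),    # guarantee met: no credit owed
    (99.0, 10),    # below 99.99% but at least 99.0%: 10% credit
    (95.0, 25),    # below 99.0% but at least 95.0%: 25% credit
    (0.0, 100),    # below 95.0%: full credit
]

def service_credit(measured_uptime_pct: float) -> int:
    """Return the service-credit percentage owed for a billing period."""
    for threshold, credit in CREDIT_BANDS:
        if measured_uptime_pct >= threshold:
            return credit
    return 100

print(service_credit(99.95))  # -> 10 (the 99.99% guarantee was missed)
```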

3.16.4.3. Security Recommendations

3.16.4.3.1. Immediate notification of any security or privacy breach as soon as the provider is aware is highly recommended.

3.16.4.3.2. Because the organization remains ultimately responsible for its data and for alerting its customers, partners, or employees of any breach, it is particularly critical to determine what mechanisms the cloud provider has in place to alert you if a security breach occurs, and to establish SLAs specifying the time frame within which the provider must do so.

3.16.4.3.3. The time frames you have to respond within will vary by jurisdiction but may be as little as 48 hours. Be aware that if law enforcement becomes involved in a provider security incident, it may supersede any contractual requirement to notify you or to keep you informed.

3.16.4.4. Key SLA Elements to be assessed before agreeing to SLA

3.16.4.4.1. Assessment of risk environment (e.g., service, vendor, and ecosystem)

3.16.4.4.2. Risk profile (of the SLA and the company providing services)

3.16.4.4.3. Risk appetite (what level of risk is acceptable?)

3.16.4.4.4. Responsibilities (clear definition and understanding of who will do what)

3.16.4.4.5. Regulatory requirements (will these be met under the SLA?)

3.16.4.4.6. Risk mitigation (which mitigation techniques/controls can reduce risks?)

3.16.4.4.7. Different risk frameworks (what frameworks are to be used to assess the ongoing effectiveness, along with how the provider will manage risks?)

3.16.4.5. Ensuring Quality of Service (QoS)

3.16.4.5.1. Availability: This looks to measure the uptime (availability) of the relevant service(s) over a specified period as an overall percentage, for example, 99.99% (a worked sketch follows this list of metrics).

3.16.4.5.2. Outage Duration: This looks to capture and measure the loss of service time for each instance of an outage; for example, 1/1/201X—09:20 start—10:50 restored—1 hour 30 minutes loss of service/outage.

3.16.4.5.3. Mean Time Between Failures: This looks to capture the indicative or expected time between consecutive or recurring service failures, for example, 1.25 hours per day over a 365-day period.

3.16.4.5.4. Capacity Metric: This looks to measure and report on capacity capabilities and the ability to meet requirements.

3.16.4.5.5. Performance Metrics: Used to actively identify areas, factors, and reasons for “bottlenecks” or degradation of performance. Typically, performance is measured and expressed as requests/connections per minute.

3.16.4.5.6. Reliability Percentage Metric: Lists the success rate for responses based on agreed criteria, for example, a 99% success rate for transactions completed to the database.

3.16.4.5.7. Storage Device Capacity Metric: Listing metrics and characteristics related to storage device capacity; typically provided in gigabytes.

3.16.4.5.8. Server Capacity Metric: These look to list the characteristics of server capacity, based on and influenced by the number of CPUs, CPU frequency in GHz, RAM, virtual storage, and other storage volumes.

3.16.4.5.9. Instance Startup Time Metric: Indicates or reports on the length of time required to initialize a new instance, calculated from the time of request (by user or resource), and typically measured in seconds and minutes.

3.16.4.5.10. Response Time Metric: Reports on the time required to perform the requested operation or tasks; typically measured based on the number of requests and response times in milliseconds.

3.16.4.5.11. Completion Time Metric: Provides the time required to complete the initiated/requested task, typically measured by the total number of requests as averaged in seconds.

3.16.4.5.12. Mean-Time to Switchover Metric: Provides the expected time to switch over from a service failure to a replicated failover instance. This is typically measured in minutes and captured from commencement to completion.

3.16.4.5.13. Mean-Time System Recovery Metric: Highlights the expected time for a complete recovery to a resilient system in the event of or following a service failure/outage. This is typically measured in minutes, hours, and days.

3.16.4.5.14. Scalability Component Metrics: Typically used to analyze customer use, behavior, and patterns that can allow for the auto-scaling and auto-shrinking of servers.

3.16.4.5.15. Storage Scalability Metric: Indicates the storage device capacity available where increased workloads and storage requirements arise.

3.16.4.5.16. Server Scalability Metric: Indicates the available server capacity that can be called upon where increased workloads are required.
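
The first three QoS metrics above (availability, outage duration, mean time between failures) can be computed directly from outage records. A minimal sketch, assuming outages are logged as (start, end) timestamp pairs within a reporting period; all values are illustrative:

```python
from datetime import datetime, timedelta

# Illustrative outage log for a 30-day reporting period.
period_start = datetime(2024, 1, 1)
period_end = datetime(2024, 1, 31)
outages = [
    (datetime(2024, 1, 1, 9, 20), datetime(2024, 1, 1, 10, 50)),  # 1h 30m
    (datetime(2024, 1, 15, 2, 0), datetime(2024, 1, 15, 2, 30)),  # 30m
]

period = period_end - period_start
downtime = sum(((end - start) for start, end in outages), timedelta())

# Availability as an overall percentage of the period.
availability_pct = 100 * (period - downtime) / period
print(f"Availability: {availability_pct:.3f}%")

# Outage duration: the longest single loss of service.
longest = max(end - start for start, end in outages)
print(f"Longest outage: {longest}")

# Mean time between failures: total uptime divided by the number of failures.
mtbf = (period - downtime) / len(outages)
print(f"MTBF: {mtbf}")
```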

3.17. Risk Mitigation

3.17.1. RISK-MANAGEMENT METRICS

3.17.2. DIFFERENT RISK FRAMEWORKS

3.17.2.1. ISO 31000:2009

3.17.2.1.1. ISO 31000:2009 sets out 11 key principles as a guiding set of rules to enable senior decision makers and organizations to manage risks.

3.17.2.1.2. A core component of ISO 31000:2009 is management endorsement, support, and commitment, ensuring overall accountability and support.

3.17.2.1.3. It focuses on risk identification, analysis, and evaluation through to risk treatment.

3.17.2.2. European Network and Information Security Agency (ENISA)

3.17.2.3. National Institute of Standards and Technology (NIST)—Cloud Computing Synopsis and Recommendations

3.18. Understanding Outsourcing and Contract Design

3.19. Business Requirements

3.20. Vendor Management

3.20.1. RISK EXPOSURE

3.20.1.1. Is the provider an established technology provider?

3.20.1.2. Is this cloud service a core business of the provider?

3.20.1.3. Where is the provider located?

3.20.1.4. Is the company financially stable?

3.20.1.5. Is the company subject to any takeover bids or significant sales of business units?

3.20.1.6. Is the company outsourcing any aspect of the service to a third party?

3.20.1.7. Are there contingencies where key third-party dependencies are concerned?

3.20.1.8. Does the company conform/is it certified against relevant security and professional standards/frameworks?

3.20.1.9. How will the provider satisfy relevant regulatory, legal, and other compliance requirements?

3.20.1.10. How will the provider ensure the ongoing confidentiality, integrity, and availability of your information assets if placed in the cloud environment (where relevant)?

3.20.1.11. Are adequate business continuity/disaster recovery processes in place?

3.20.1.12. Are reports or statistics available from any recent events or incidents affecting cloud services availability?

3.20.1.13. Is interoperability a key component to facilitate ease of transition or movement between cloud providers?

3.20.1.14. Are there any unforeseeable regulatory-driven compliance requirements?

3.20.2. ACCOUNTABILITY OF COMPLIANCE

3.20.3. COMMON CRITERIA ASSURANCE FRAMEWORK

3.20.4. CSA SECURITY, TRUST, AND ASSURANCE REGISTRY (STAR)

3.20.4.1. Level 1, Self-Assessment

3.20.4.2. Level 2, Attestation

3.20.4.3. Level 3, Ongoing Monitoring Certification

3.21. Cloud Computing Certification: Cloud Certification Schemes List (CCSL) and Cloud Certification Schemes Metaframework (CCSM)

3.21.1. CCSL

3.21.1.1. Certified Cloud Service—TÜV Rheinland

3.21.1.2. Cloud Security Alliance (CSA) Attestation—OCF level 2

3.21.1.3. Cloud Security Alliance (CSA) Certification—OCF level 2

3.21.1.4. Cloud Security Alliance (CSA) Self Assessment—OCF level 1

3.21.1.5. EuroCloud Self Assessment

3.21.1.6. EuroCloud Star Audit Certification

3.21.1.7. ISO/IEC 27001 Certification

3.21.1.8. Payment Card Industry Data Security Standard (PCI-DSS) v3

3.21.1.9. LEET Security Rating Guide

3.21.1.10. AICPA Service Organization Control (SOC) 1

3.21.1.11. AICPA Service Organization Control (SOC) 2

3.21.1.12. AICPA Service Organization Control (SOC) 3

3.21.2. CCSM security objectives

3.21.2.1. 1. Information security policy

3.21.2.2. 2. Risk management

3.21.2.3. 3. Security roles

3.21.2.4. 4. Security in supplier relationships

3.21.2.5. 5. Background checks

3.21.2.6. 6. Security knowledge and training

3.21.2.7. 7. Personnel changes

3.21.2.8. 8. Physical and environmental security

3.21.2.9. 9. Security of supporting utilities

3.21.2.10. 10. Access control to network and information systems

3.21.2.11. 11. Integrity of network and information systems

3.21.2.12. 12. Operating procedures

3.21.2.13. 13. Change management

3.21.2.14. 14. Asset management

3.21.2.15. 15. Security incident detection and response

3.21.2.16. 16. Security incident reporting

3.21.2.17. 17. Business continuity

3.21.2.18. 18. Disaster recovery capabilities

3.21.2.19. 19. Monitoring and logging policies

3.21.2.20. 20. System tests

3.21.2.21. 21. Security assessments

3.21.2.22. 22. Checking compliance

3.21.2.23. 23. Cloud data security

3.21.2.24. 24. Cloud interface security

3.21.2.25. 25. Cloud software security

3.21.2.26. 26. Cloud interoperability and portability

3.21.2.27. 27. Cloud monitoring and log access

3.22. Contract Management

3.22.1. IMPORTANCE OF IDENTIFYING CHALLENGES EARLY

3.22.1.1. Understanding the contractual requirements will form the organization’s baseline and checklist for the right to audit.

3.22.1.2. Understanding the gaps will allow the organization to challenge and request changes to the contract before signing acceptance.

3.22.1.3. The CCSP will know what they are working with and the kind of leverage they will have during the audit.

3.22.2. KEY CONTRACT COMPONENTS

3.22.2.1. Performance measurement—how will this be performed and who is responsible for the reporting?

3.22.2.2. Service Level Agreements (SLAs)

3.22.2.3. Availability and associated downtime

3.22.2.4. Expected performance and minimum levels of performance

3.22.2.5. Incident response

3.22.2.6. Resolution timeframes

3.22.2.7. Maximum and minimum period for tolerable disruption

3.22.2.8. Issue resolution

3.22.2.9. Communication of incidents

3.22.2.10. Investigations

3.22.2.11. Capturing of evidence

3.22.2.12. Forensic/eDiscovery processes

3.22.2.13. Civil/state investigations

3.22.2.14. Tort law/copyright

3.22.2.15. Control and compliance frameworks

3.22.2.16. ISO 27001/2

3.22.2.17. COBIT

3.22.2.18. PCI DSS

3.22.2.19. HIPAA

3.22.2.20. GLBA

3.22.2.21. PII

3.22.2.22. Data protection

3.22.2.23. Safe Harbor

3.22.2.24. U.S. Patriot Act

3.22.2.25. Business Continuity and disaster recovery

3.22.2.26. Priority of restoration

3.22.2.27. Minimum levels of security and availability

3.22.2.28. Communications during outages

3.22.2.29. Personnel checks

3.22.2.30. Background checks

3.22.2.31. Employee/third-party policies

3.22.2.32. Data retention and disposal

3.22.2.33. Retention periods

3.22.2.34. Data destruction

3.22.2.35. Secure deletion

3.22.2.36. Regulatory requirements

3.22.2.37. Data access requests

3.22.2.38. Data protection/freedom of information

3.22.2.39. Key metrics and performance related to quality of service (QoS)

3.22.2.40. Independent assessments/certification of compliance

3.22.2.41. Right to audit (including period or frequencies permitted)

3.22.2.42. Ability to delegate/authorize third parties to carry out audits on your behalf

3.22.2.43. Penalties for nonperformance

3.22.2.44. Delayed or degraded performance penalties

3.22.2.45. Payment of penalties (supplemented by service or financial payment)

3.22.2.46. Backup of media, and relevant assurances related to the format and structure of the data

3.22.2.47. Restrictions and prohibiting the use of your data by the CSP without prior consent, or for stated purposes

3.22.2.48. Authentication controls and levels of security

3.22.2.49. Two-factor authentication

3.22.2.50. Password and account management

3.22.2.51. Joiner, mover, leaver (JML) processes

3.22.2.52. Ability to meet and satisfy existing internal access control policies

3.22.2.53. Restrictions and associated non-disclosure agreements (NDAs) from the cloud service provider related to data and services utilized

3.22.2.54. Any other component and requirements deemed necessary and essential

3.23. Supply Chain Management

3.23.1. SUPPLY CHAIN RISK

3.23.1.1. You should obtain regular updates of a clear and concise listing of all dependencies and reliance on third parties, including the key suppliers involved.

3.23.1.2. Where single points of failure exist, these should be challenged and acted upon in order to reduce outages and disruptions to business processes.

3.23.1.3. Organizations need a way to quickly prioritize hundreds or thousands of contracts to determine which of them, and which of their suppliers’ suppliers, pose a potential risk.

3.23.2. CLOUD SECURITY ALLIANCE (CSA) CLOUD CONTROLS MATRIX (CCM)

3.23.3. THE ISO 28000:2007 SUPPLY CHAIN STANDARD

3.23.3.1. Certification against ISO 28000:2007 covers:

3.23.3.1.1. Security management policy

3.23.3.1.2. Organizational objectives

3.23.3.1.3. Risk-management program(s)/practices

3.23.3.1.4. Documented practices and records

3.23.3.1.5. Supplier relationships

3.23.3.1.6. Roles, responsibilities, and relevant authorities

3.23.3.1.7. Use of Plan, Do, Check, Act (PDCA)

3.23.3.1.8. Organizational procedures and related processes

4. PII is defined as “any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as name, Social Security Number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”

5. Architectural Concepts and Design Requirements

5.1. Roles, characteristics, and technologies

5.1.1. NIST: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

5.1.2. DRIVERS

5.1.2.1. Costs associated with the ownership of their current IT infrastructure solutions

5.1.2.2. The desire to reduce IT complexity

5.1.2.3. Risk reduction: Testing solution before investments

5.1.2.4. Scalability

5.1.2.5. Elasticity

5.1.2.6. Consumption-based pricing

5.1.2.7. Virtualization: Single view of resources

5.1.2.8. Cost: The pay-per-usage model

5.1.2.9. Business agility

5.1.2.10. Mobility: Access from around the globe

5.1.2.11. Collaboration/Innovation: Work simultaneously

5.1.3. SECURITY/RISKS AND BENEFITS

5.1.3.1. Managing reputational risk

5.1.3.1.1. Strategic alignment

5.1.3.1.2. Effective board oversight

5.1.3.1.3. Integration of risk into strategy setting and business planning

5.1.3.1.4. Cultural alignment

5.1.3.1.5. Strong corporate values and a focus on compliance

5.1.3.1.6. Operational focus

5.1.3.1.7. Strong control environment

5.1.3.2. Compliance (Legal, Regulatory)

5.1.3.3. Privacy

5.1.3.4. Distributed/Multi-Tenant Security Environment

5.1.4. DEFINITIONS

5.1.4.1. Anything as a Service (XaaS): The growing diversity of services available over the Internet via cloud computing as opposed to being provided locally, or on-premises.

5.1.4.2. Apache CloudStack: An open source cloud computing and Infrastructure as a Service (IaaS) platform developed to help make creating, deploying, and managing cloud services easier by providing a complete “stack” of features and components for cloud environments.

5.1.4.3. Business Continuity: The capability of the organization to continue delivery of products or services at acceptable predefined levels following a disruptive incident.

5.1.4.4. Business Continuity Management: A holistic management process that identifies potential threats to an organization and the impacts to business operations those threats, if realized, might cause, and that provides a framework for building organizational resilience with the capability of an effective response that safeguards the interests of its key stakeholders, reputation, brand, and value-creating activities.

5.1.4.5. Business Continuity Plan: The creation of a strategy through the recognition of threats and risks facing a company, with an eye to ensure that personnel and assets are protected and able to function in the event of a disaster.

5.1.4.6. Cloud App (Cloud Application): Short for cloud application, cloud app describes a software application that is never installed on a local computer. Instead, it is accessed via the Internet.

5.1.4.7. Cloud Application Management for Platforms (CAMP): CAMP is a specification designed to ease management of applications—including packaging and deployment—across public and private cloud computing platforms.

5.1.4.8. Cloud Backup: Cloud backup, or cloud computer backup, refers to backing up data to a remote, cloud-based server. As a form of cloud storage, cloud backup data is stored in and accessible from multiple distributed and connected resources that comprise a cloud.

5.1.4.9. Cloud Backup Service Provider: A third-party entity that manages and distributes remote, cloud-based data backup services and solutions to customers from a central datacenter.

5.1.4.10. Cloud Backup Solutions: Cloud backup solutions enable enterprises or individuals to store their data and computer files on the Internet using a storage service provider, rather than storing the data locally on a physical disk, such as a hard drive or tape backup.

5.1.4.11. Cloud Computing: A type of computing, comparable to grid computing, that relies on sharing computing resources rather than having local servers or personal devices to handle applications.

5.1.4.12. Cloud Computing Accounting Software: Cloud computing accounting software is accounting software that is hosted on remote servers. It provides accounting capabilities to businesses in a fashion similar to the SaaS (Software as a Service) business model.

5.1.4.13. Cloud Computing Reseller: A company that purchases hosting services from a cloud server hosting or cloud computing provider and then re-sells them to its own customers.

5.1.4.14. Cloud Database: A database accessible to clients from the cloud and delivered to users on demand via the Internet. Also referred to as Database as a Service (DBaaS).

5.1.4.15. Cloud Enablement: The process of making available one or more of the following services and infrastructures to create a public cloud computing environment: cloud provider, client, and application.

5.1.4.16. Cloud OS: A phrase frequently used in place of Platform as a Service (PaaS) to denote an association to cloud computing.

5.1.4.17. Cloud Portability: In cloud computing terminology, this refers to the ability to move applications and their associated data between one cloud provider and another—or between public and private cloud environments.

5.1.4.18. Cloud Migration: The process of transitioning all or part of a company’s data, applications, and services from on-site premises behind the firewall to the cloud, where the information can be provided over the Internet on an on-demand basis.

5.1.4.19. Cloud Provider: A service provider who offers customers storage or software solutions available via a public network, usually the Internet. The cloud provider dictates both the technology and operational procedures involved.

5.1.4.20. Cloud Provisioning: The deployment of a company’s cloud computing strategy, which typically first involves selecting which applications and services will reside in the public cloud and which will remain on-site behind the firewall or in the private cloud.

5.1.4.21. Enterprise Application: Describes applications—or software—that a business uses to assist the organization in solving enterprise problems.

5.1.4.22. Cloud Server Hosting: A type of hosting in which hosting services are made available to customers on demand via the Internet.

5.1.4.23. Cloud Storage: “The storage of data online in the cloud,” whereby a company’s data is stored in and accessible from multiple distributed and connected resources that comprise a cloud.

5.1.4.24. Cloud Testing: Load and performance testing conducted on the applications and services provided via cloud computing—particularly the capability to access these services—in order to ensure optimal performance and scalability under a wide variety of conditions.

5.1.4.25. Desktop as a Service (DaaS): A form of virtual desktop infrastructure (VDI) in which the VDI is outsourced and handled by a third party.

5.1.4.26. Enterprise Cloud Backup: Enterprise-grade cloud backup solutions typically add essential features such as archiving and disaster recovery to cloud backup solutions.

5.1.4.27. Eucalyptus: An open source cloud computing and Infrastructure as a Service (IaaS) platform for enabling private clouds.

5.1.4.28. Event: A change of state that has significance for the management of an IT service or other configuration item.

5.1.4.29. Hybrid Cloud Storage: A combination of public cloud storage and private cloud storage where some critical data resides in the enterprise’s private cloud and other data is stored and accessible from a public cloud storage provider.

5.1.4.30. Incident: An unplanned interruption to an IT service or reduction in the quality of an IT service.

5.1.4.31. Infrastructure as a Service (IaaS): IaaS is defined as computer infrastructure, such as virtualization, being delivered as a service.

5.1.4.32. Managed Service Provider: An IT service provider where the customer dictates both the technology and operational procedures.

5.1.4.33. Mean Time Between Failure (MTBF): The measure of the average time between failures of a specific component, or part of a system.

5.1.4.34. Mean Time To Repair (MTTR): The measure of the average time it should take to repair a failed component, or part of a system (a worked example follows this glossary).

5.1.4.35. Mobile Cloud Storage: A form of cloud storage that applies to storing an individual’s mobile device data in the cloud and providing the individual with access to the data from anywhere.

5.1.4.36. Multi-Tenant: In cloud computing, multi-tenant is the phrase used to describe multiple customers using the same public cloud.

5.1.4.37. Online Backup: In storage technology, online backup means to back up data from your hard drive to a remote server or computer using a network connection.

5.1.4.38. Personal Cloud Storage: A form of cloud storage that applies to storing an individual’s data in the cloud and providing the individual with access to the data from anywhere.

5.1.4.39. Platform as a Service (PaaS): The process of deploying onto the cloud infrastructure consumer-created or acquired applications that are created using programming languages, libraries, services, and tools supported by the provider.

5.1.4.40. Private Cloud Storage: A form of cloud storage where the enterprise data and cloud storage resources both reside within the enterprise’s datacenter and behind the firewall.

5.1.4.41. Problem: The unknown cause of one or more incidents, often identified as a result of multiple similar incidents.

5.1.4.42. Public Cloud Storage: A form of cloud storage where the enterprise and storage service provider are separate and the data is stored outside of the enterprise’s datacenter.

5.1.4.43. Storage Cloud: Refers to the collection of multiple distributed and connected resources responsible for storing and managing data online in the cloud.
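
The MTBF and MTTR entries above are complementary measures: MTBF captures how long a component runs between failures, MTTR how long each repair takes, and together they determine steady-state availability (availability = MTBF / (MTBF + MTTR)). A small worked example with assumed values:

```python
# Illustrative component history: hours of operation between failures,
# and the hours taken to repair each failure (values assumed).
hours_between_failures = [700, 450, 610, 540]
repair_hours = [2.0, 3.5, 1.5, 3.0]

mtbf = sum(hours_between_failures) / len(hours_between_failures)  # 575.0 h
mttr = sum(repair_hours) / len(repair_hours)                      # 2.5 h

# Steady-state availability follows from the two measures.
availability = mtbf / (mtbf + mttr)
print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.1f} h")
print(f"Availability: {availability:.4%}")  # ~99.57%
```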

5.1.5. ROLES

5.1.5.1. Cloud Customer: An individual or entity that utilizes or subscribes to cloud-based services or resources.

5.1.5.2. Cloud Provider: A company that provides cloud-based platform, infrastructure, application, or storage services to other organizations and/or individuals, usually for a fee, otherwise known to clients “as a service.”

5.1.5.3. Cloud Backup Service Provider: A third-party entity that manages and holds operational responsibilities for cloud-based data backup services and solutions to customers from a central datacenter.

5.1.5.4. Cloud Services Broker (CSB): Typically a third-party entity or company that looks to extend or enhance value to multiple customers of cloud-based services through relationships with multiple cloud service providers.

5.1.5.5. Cloud Service Auditor: Third-party organization that verifies attainment of SLAs (service level agreements).

5.1.6. CHARACTERISTICS

5.1.6.1. On-Demand Self-Service: The cloud service enables the provisioning of cloud resources on demand, as and when they are required

5.1.6.2. Broad Network Access: The cloud, by its nature, is an “always on” and “always accessible” offering, giving users widespread access to resources, data, and other assets.

5.1.6.3. Resource Pooling

5.1.6.4. Rapid Elasticity: Allows the user to obtain additional resources, storage, compute power, and so on, as the user’s need or workload requires.

5.1.6.5. Measured Service: Cloud computing offers a unique and important component that traditional IT deployments have struggled to provide: resource usage can be measured, controlled, reported, and alerted upon (a metering sketch follows this list)
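
A toy metered-billing calculation illustrating measured service and pay-per-use pricing: usage is sampled, aggregated, and priced per unit. The rates, units, and usage figures below are assumptions for the example, not any provider’s actual pricing:

```python
# Hypothetical unit rates (all values illustrative).
RATES = {
    "compute_hours": 0.05,      # $ per VM-hour
    "storage_gb_month": 0.02,   # $ per GB-month stored
    "egress_gb": 0.09,          # $ per GB transferred out
}

# Metered usage for one billing period.
usage = {"compute_hours": 720, "storage_gb_month": 500, "egress_gb": 120}

# Price each metered quantity, then total the invoice.
invoice = {item: round(qty * RATES[item], 2) for item, qty in usage.items()}
print(invoice)                                 # {'compute_hours': 36.0, ...}
print(f"Total: ${sum(invoice.values()):.2f}")  # Total: $56.80
```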

5.1.7. ACTIVITIES

5.1.7.1. Cloud Administrator: This individual is typically responsible for the implementation, monitoring, and maintenance of the cloud within the organization

5.1.7.2. Cloud Application Architect: This person is typically responsible for adapting, porting, or deploying an application to a target cloud environment.

5.1.7.3. Cloud Architect: This role will determine when and how a private cloud meets the policies and needs of an organization’s strategic goals and contractual requirements (from a technical perspective).

5.1.7.4. Cloud Data Architect: This individual is similar to the Cloud Architect; the Data Architect’s role is to ensure the various storage types and mechanisms utilized within the cloud environment meet and conform to the relevant SLAs and that the storage components are functioning according to their specified requirements.

5.1.7.5. Cloud Developer: This person focuses on development for the cloud infrastructure itself.

5.1.7.6. Cloud Operator: This individual is responsible for daily operational tasks and duties that focus on cloud maintenance and monitoring activities.

5.1.7.7. Cloud Service Manager: This person is typically responsible for policy design, business agreements, pricing models, and some elements of the SLA

5.1.7.8. Cloud Storage Administrator: This role focuses on relevant user groups and the mapping, segregations, bandwidth, and reliability of storage volumes assigned.

5.1.7.9. Cloud User/Cloud Customer: This individual is a user accessing either paid for or free cloud services and resources within a cloud.

5.1.8. CATEGORIES

5.1.8.1. IaaS, “the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).”

5.1.8.1.1. Requirements

5.1.8.1.2. Benefits

5.1.8.2. PaaS, “the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.”

5.1.8.2.1. Capabilities

5.1.8.2.2. Benefits

5.1.8.3. SaaS, “The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

5.1.8.3.1. Models

5.1.8.3.2. Benefits

5.1.9. DEPLOYMENT MODELS

5.1.9.1. PUBLIC CLOUD

5.1.9.1.1. “the cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.”

5.1.9.1.2. Benefits

5.1.9.2. PRIVATE CLOUD

5.1.9.2.1. “the cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.”

5.1.9.2.2. Benefits

5.1.9.3. HYBRID CLOUD

5.1.9.3.1. “the cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).”

5.1.9.3.2. Benefits

5.1.9.4. COMMUNITY CLOUD

5.1.9.4.1. “the cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.”

5.1.9.4.2. Benefits

5.2. Architecture Design principles

5.2.1. FRAMEWORKS

5.2.1.1. Business Operation Support Services (BOSS)

5.2.1.1.1. Sherwood Applied Business Security Architecture (SABSA)

5.2.1.2. Information Technology Operation and Support (ITOS)

5.2.1.2.1. IT Infrastructure Library (ITIL)

5.2.1.3. Presentation, Application, Information, Infrastructure Services

5.2.1.3.1. The Open Group Architecture Framework (TOGAF)

5.2.1.4. Security and Risk Management

5.2.1.4.1. Jericho/Open Group: The Jericho Forum is now part of the Open Group Security Forum.

5.2.2. KEY PRINCIPLES

5.2.2.1. Define protections that enable trust in the cloud.

5.2.2.2. Develop cross-platform capabilities and patterns for proprietary and open source providers.

5.2.2.3. Facilitate trusted and efficient access, administration, and resiliency to the customer/consumer.

5.2.2.4. Provide direction to secure information that is protected by regulations.

5.2.2.5. The architecture must facilitate proper and efficient identification, authentication, authorization, administration, and auditability.

5.2.2.6. Centralize security policy, maintenance operation, and oversight functions.

5.2.2.7. Access to information must be secure yet still easy to obtain.

5.2.2.8. Delegate or federate access control where appropriate.

5.2.2.9. Must be easy to adopt and consume, supporting the design of security patterns.

5.2.2.10. The architecture must be elastic, flexible, and resilient, supporting multi-tenant, multi-landlord platforms.

5.2.2.11. The architecture must address and support multiple levels of protection, including network, operating system, and application security needs.

5.2.3. KEY REQUIREMENTS

5.2.3.1. Interoperability: Interoperability defines how easy it is to move and reuse application components regardless of the provider, platform, OS, infrastructure, location, storage, and the format of data or APIs.

5.2.3.1.1. Investments do not become prematurely technologically obsolete.

5.2.3.1.2. Organizations are able to easily change cloud service providers to flexibly and cost-effectively support their mission

5.2.3.1.3. Organizations can economically acquire commercial and develop private clouds using standards-based products, processes, and services.

5.2.3.2. Portability: Portability is a key aspect to consider when selecting cloud providers since it can both help prevent vendor lock-in and deliver business benefits by allowing identical cloud deployments to occur in different cloud provider solutions, either for the purposes of disaster recovery or for the global deployment of a distributed single solution.

5.2.3.3. Availability: Systems and resource availability defines the success or failure of a cloud-based service.

5.2.3.4. Security: The ability to measure security, obtain assurance, and integrate contractual obligations for minimum levels of security is key to success.

5.2.3.5. Privacy: The challenge is that no uniform or international privacy directives, laws, regulations, or controls exist, leading to a separate, disparate, and segmented mesh of laws and regulations being applicable depending on the geographic location where the information resides (data at rest) or is transmitted (data in transit).

5.2.3.6. Resiliency: Represents the ability to continue service and business operations in the event of a disruption or event.

5.2.3.7. Performance: In order for optimum performance to be experienced through the use of cloud services, the provisioning, elasticity, and other associated components should always focus on performance.

5.2.3.8. Governance: The term “governance” relating to processes and decisions looks to define actions, assign responsibilities, and verify performance.

5.2.3.9. Service Level Agreements (SLAs): These offer key benefits when compared with traditional environments or “in-house IT,” covering areas such as downtime, upgrades, updates, patching, vulnerability testing, application coding, test and development, support, and release management. Many of these require the provider to take such areas and activities very seriously, as failing to do so will have an impact on its bottom line.

5.2.3.10. Auditability: Auditability allows for users and the organization to access, report, and obtain evidence of actions, controls, and processes that were performed or run by a specified user.

5.2.3.11. Regulatory Compliance: The organization’s requirement to adhere to the laws, regulations, guidelines, and specifications relevant to its business, as dictated by the nature of its operations and the functions it provides to its customers.

5.3. Security concepts

5.3.1. KEY SECURITY COMPONENTS

5.3.1.1. Network Security and Perimeter

5.3.1.1.1. Key elements

5.3.1.1.2. The network perimeter takes on different meanings under different guises and deployment models.

5.3.1.2. Cryptography

5.3.1.2.1. Encryption

5.3.1.2.2. Key Management

5.3.1.3. IAM and Access Control

5.3.1.3.1. Provisioning and de-provisioning

5.3.1.3.2. Centralized directory services

5.3.1.3.3. Privileged user management

5.3.1.3.4. Authentication and access management: In the event that one of the activities mentioned above is not carried out regularly as part of an ongoing managed process, this can weaken the overall security posture.

5.3.1.4. Data and Media Sanitization

5.3.1.4.1. Vendor lock-in

5.3.1.4.2. Cryptographic erasure (see the sketch after this list)

5.3.1.4.3. Data overwriting
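
Cryptographic erasure sanitizes data without touching the underlying media: data at rest is stored only in encrypted form, so destroying the key renders the ciphertext unrecoverable. This is particularly relevant in the cloud, where the customer cannot physically overwrite or destroy the provider’s disks. A minimal sketch using the third-party cryptography package (pip install cryptography):

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # key held by the customer or a KMS
ciphertext = Fernet(key).encrypt(b"tenant data stored in the cloud")

# Normal operation: whoever holds the key can read the data.
assert Fernet(key).decrypt(ciphertext) == b"tenant data stored in the cloud"

# "Erasure": discard the key. The ciphertext may linger on provider media,
# but without the original key it cannot be decrypted.
key = None
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # any other key fails
except InvalidToken:
    print("Data is cryptographically erased: no valid key remains")
```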

5.3.1.5. Virtualization Security

5.3.1.5.1. Hypervisor

5.3.1.5.2. HV Security types

5.3.2. COMMON THREATS

5.3.2.1. Data Breaches

5.3.2.1.1. The nature of cloud deployments and multi-tenancy, virtual machines, shared databases, application design, integration, APIs, cryptography deployments, key management, and multiple locations of data all combine to provide a highly amplified and dispersed attack surface, leading to greater opportunity for data breaches.

5.3.2.1.2. The rise of smart devices, tablets, increased workforce mobility, and BYOD further enlarges this attack surface.

5.3.2.2. Data Loss

5.3.2.2.1. Does the provider/customer have responsibility for data backup?

5.3.2.2.2. In the event that backup media containing the data is obtained, does this include all data or only a portion of the information?

5.3.2.2.3. Where data has become corrupt, or overwritten, can an import or restore be performed?

5.3.2.2.4. Where accidental data deletion has occurred from the customer side, will the provider facilitate the restoration of systems and information in multi-tenancy environments or on shared platforms?

5.3.2.3. Account or Service traffic hijacking

5.3.2.3.1. Means

5.3.2.3.2. Attackers’ goals

5.3.2.4. Insecure Provider interfaces and APIs

5.3.2.5. Denial of Service

5.3.2.6. Malicious insiders

5.3.2.7. Abuse of Cloud Services

5.3.2.8. Insufficient Due Diligence

5.3.2.8.1. Due diligence is the act of investigating and understanding the risks a company faces.

5.3.2.8.2. Due care is the development and implementation of policies and procedures to aid in protecting the company, its assets, and its people from threats.

5.3.2.9. Shared Technology Vulnerabilities: Providers should implement a layered approach to securing the various components, and a defense-in-depth strategy should include compute, storage, network, application, and user security enforcement and monitoring.

5.3.3. OPEN WEB APPLICATION SECURITY PROJECT (OWASP) TOP TEN SECURITY THREATS

5.3.3.1. A1—Injection: Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization. (A mitigation sketch for A1 and A3 follows this list.)

5.3.3.2. A2—Broken Authentication and Session Management: Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities.

5.3.3.3. A3—Cross-Site Scripting (XSS): XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim’s browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.

5.3.3.4. A4—Insecure Direct Object References: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.

5.3.3.5. A5—Security Misconfiguration: Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.

5.3.3.6. A6—Sensitive Data Exposure: Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.

5.3.3.7. A7—Missing Function Level Access Control: Most web applications verify function-level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access functionality without proper authorization.

5.3.3.8. A8—Cross-Site Request Forgery (CSRF): A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim.

5.3.3.9. A9—Using Components with Known Vulnerabilities: Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defenses and enable a range of possible attacks and impacts.

5.3.3.10. A10—Unvalidated Redirects and Forwards: Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.
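
As referenced under A1 above, the canonical mitigations for injection (A1) and XSS (A3) are parameterized queries and output escaping. A minimal sketch using only the Python standard library; the table, data, and hostile inputs are illustrative:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # hostile input typical of A1 (Injection)

# Vulnerable pattern: string concatenation lets the input rewrite the query:
#   conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")
# Mitigation: a parameterized query treats the input strictly as data.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None -- the injection payload matches no user

# A3 (XSS) mitigation: escape untrusted data before writing it into HTML.
comment = '<script>alert("xss")</script>'
print(f"<p>{html.escape(comment)}</p>")
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```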

5.3.4. SECURITY CONSIDERATIONS FOR DIFFERENT CLOUD CATEGORIES

5.3.4.1. IAAS

5.3.4.1.1. Virtual Machine Attacks

5.3.4.1.2. Virtual Network: The virtual network contains the virtual switch software that controls multiplexing traffic between the virtual NICs of the installed VMs and the physical NICs of the host.

5.3.4.1.3. Hypervisor Attacks: Hackers consider the hypervisor a potential target because of the greater control afforded by lower layers in the system.

5.3.4.1.4. VM-Based Rootkits (VMBRs): These rootkits act by inserting a malicious hypervisor on the fly or modifying the installed hypervisor to gain control over the host workload. In some hypervisors such as Xen, the hypervisor is not alone in administering the VMs.

5.3.4.1.5. Virtual Switch Attacks: The virtual switch is vulnerable to a wide range of layer 2 attacks, just as a physical switch is. These attacks target virtual switch configurations, VLANs and trust zones, and ARP tables.

5.3.4.1.6. Denial-of-Service (DoS) Attacks: Denial-of-service attacks in a virtual environment form a critical threat to VMs, along with all other dependent and associated services.

5.3.4.1.7. Co-Location: Multiple VMs residing on a single server and sharing the same resources increase the attack surface and the risk of VM-to-VM or VM-to-hypervisor compromise.

5.3.4.1.8. Multi-Tenancy: Different users within a cloud share the same applications and the physical hardware to run their VMs.

5.3.4.1.9. Workload Complexity: Server aggregation duplicates the amount of workload and network traffic that runs inside the cloud physical servers, which increases the complexity of managing the cloud workload.

5.3.4.1.10. Loss of Control: Users are not aware of the location of their data and services, while cloud providers run VMs without being aware of their contents.

5.3.4.1.11. Network Topology: The cloud architecture is very dynamic, and the existing workload changes over time as VMs are created and removed. In addition, the mobility of VMs, which can migrate from one server to another, means the network topology cannot be predefined.

5.3.4.1.12. Logical Network Segmentation: Within IaaS, isolation at and alongside the hypervisor remains a key and fundamental activity for reducing external sniffing, monitoring, and interception of communications within the relevant segments.

5.3.4.1.13. No Physical Endpoints: Due to server and network virtualization, the number of physical endpoints (e.g., switches, servers, NICs) is reduced. These physical endpoints are traditionally used in defining, managing, and protecting IT assets.

5.3.4.1.14. Single Point of Access: Virtualized servers have a limited number of access points (NICs) available to all VMs.

5.3.4.2. PAAS

5.3.4.2.1. System/Resource Isolation: PaaS tenants should not have shell access to the servers running their instances

5.3.4.2.2. User-Level Permissions: Each instance of a service should have its own notion of user-level entitlements (permissions)

5.3.4.2.3. User Access Management: Key emphasis is placed on agreeing on and implementing the rules and organizational policies for access to data and assets.

5.3.4.2.4. Protection Against Malware/Backdoors/Trojans

5.3.4.3. SAAS

5.3.4.3.1. Data Segregation: As a result of multi-tenancy, multiple users can store their data using the applications provided by SaaS. Within these architectures, the data of various users will reside at the same location or across multiple locations and sites.

5.3.4.3.2. Data Access and Policies: The challenge associated with this is to map existing security policies, processes, and standards to meet and match the policies enforced by the cloud provider.

5.3.4.3.3. Web Application Security: Cloud services rely on robust, hardened, and regularly assessed web applications to deliver services to their users. The fundamental difference between cloud-based services and traditional web applications is their footprint and the attack surface they present.

5.3.5. CLOUD SECURE DATA LIFECYCLE

5.3.5.1. Create: New digital content is generated or existing content is modified.

5.3.5.2. Store: Data is committed to a storage repository, which typically occurs directly after creation.

5.3.5.3. Use: Data is viewed, processed, or otherwise used in some sort of activity (not including modification).

5.3.5.4. Share: Information is made accessible to others—users, partners, customers, and so on.

5.3.5.5. Archive: Data leaves active use and enters long-term storage.

5.3.5.6. Destroy: Data is permanently destroyed using physical or digital means.

5.3.6. INFORMATION/DATA GOVERNANCE TYPES

5.3.6.1. Information Classification: High-level description of valuable information categories (e.g., highly confidential, regulated).

5.3.6.2. Information Management Policies: What activities are allowed for different information types?

5.3.6.3. Location and Jurisdictional Policies: Where can data be geographically located? What are the legal and regulatory implications or ramifications?

5.3.6.4. Authorizations: Who is allowed to access different types of information?

5.3.6.5. Custodianship: Who is responsible for managing the information at the behest of the owner?

5.3.7. BUSINESS CONTINUITY/DISASTER RECOVERY PLANNING

5.3.7.1. Critical Success Factors

5.3.7.1.1. Understanding your responsibilities versus the cloud provider’s responsibilities.

5.3.7.1.2. Customer responsibilities.

5.3.7.1.3. Cloud provider responsibilities.

5.3.7.1.4. Understand any interdependencies/third parties (supply chain risks)

5.3.7.1.5. Order of restoration (priority)—who/what gets priority?

5.3.7.1.6. Appropriate frameworks/certifications held by the facility, services, and processes.

5.3.7.1.7. Right to audit/make regular assessments of continuity capabilities.

5.3.7.1.8. Communications of any issues/limited services.

5.3.7.1.9. Is there a need for backups to be held on-site/off-site or with another cloud provider?

5.3.7.1.10. Clearly state and ensure the SLA addresses which components of business continuity/disaster recovery are covered and to what degree they are covered.

5.3.7.1.11. Penalties/compensation for loss of service.

5.3.7.1.12. Recovery Time Objectives (RTO)/Recovery Point Objectives (RPO)

5.3.7.1.13. Loss of integrity or confidentiality (are these both covered?)

5.3.7.1.14. Points of contact and escalation processes.

5.3.7.1.15. Where failover to ensure continuity is utilized, does this maintain compliance and ensure the same or greater level of security controls?

5.3.7.1.16. When changes are made that could impact the availability of services, ensure these are communicated in a timely manner.

5.3.7.1.17. Data ownership, data custodians, and data processing responsibilities are clearly defined within the SLA.

5.3.7.1.18. Where third parties and key supply chain are required to ensure that availability of services is maintained, that the equivalent or greater levels of security are met, as per the agreed-upon SLA between the customer and provider.

5.3.7.2. Important SLA Components

5.3.7.2.1. Undocumented single points of failure should not exist

5.3.7.2.2. Migration to alternate provider(s) should be possible within agreed-upon timeframes

5.3.7.2.3. Whether all components will be supported by alternate cloud providers in the event of a failover, or whether on-site/on-premises services would be required

5.3.7.2.4. Automated controls should be enabled to allow customers to verify data integrity

5.3.7.2.5. Where data backups are included, incremental backups should allow the user to select the desired settings, including desired coverage, frequency, and ease of use for recovery point restoration options

5.3.7.2.6. Regular assessment of the SLA and any changes that may impact the customer’s ability to utilize cloud computing components for disaster recovery should be captured at regular and set intervals.

5.4. Cost–benefit analysis

5.4.1. Resource pooling: Resource sharing is essential to the attainment of significant cost savings when adopting a cloud computing strategy.

5.4.2. Shift from CapEx to OpEx: The shift from capital expenditure (CapEx) to operational expenditure (OpEx) is seen as a key factor for many organizations

5.4.3. Factor in time and efficiencies: Given that organizations rarely acquire used technology or servers, almost all purchases are of new and recently developed technology.

5.4.4. Include depreciation: Lease cloud services, as opposed to constantly investing in technologies that become outdated in relatively short time periods.

5.4.5. Reduction in maintenance and configuration time: Most (if not all, depending on the cloud service) maintenance, operation, patching, updating, support, engineering, and rebuilding duties are handled by the cloud provider

5.4.6. Shift in focus: Technology and business personnel being able to focus on the key elements of their role, instead of the daily “firefighting” and responding to issues and technology components

5.4.7. Utilities costs: Outside of the technology and operational elements, from a utilities cost perspective, massive savings can be achieved with the reduced requirement for power, cooling, support agreements, datacenter space, racks, cabinets, and so on.

5.4.8. Software and licensing costs: Software and relevant licensing costs present a major cost saving as well, as you only pay for the licensing used versus the bulk or enterprise licensing levels of traditional non-cloud-based infrastructure models.

5.4.9. Pay per usage: As outlined by the CapEx versus OpEx elements, cloud computing gives businesses a new and clear benefit—pay per usage.

5.5. Certification Against Criteria

5.5.1. INTERNATIONAL

5.5.1.1. ISO/IEC 27001: consists of 35 control objectives and 114 controls spread over 14 domains.

5.5.1.1.1. Information Security Policies

5.5.1.1.2. Organization of Information Security

5.5.1.1.3. Human Resources Security

5.5.1.1.4. Asset Management

5.5.1.1.5. Access Control

5.5.1.1.6. Cryptography

5.5.1.1.7. Physical and Environmental Security

5.5.1.1.8. Operations Security

5.5.1.1.9. Communications Security

5.5.1.1.10. System Acquisition, Development, and Maintenance

5.5.1.1.11. Supplier Relationships

5.5.1.1.12. Information Security Incident Management

5.5.1.1.13. Information Security Aspects of Business Continuity Management

5.5.1.1.14. Compliance

5.5.1.2. SOC I/SOC II/SOC III: Statement on Auditing Standards 70 (SAS 70) was replaced by Service Organization Control (SOC) Type I and Type II reports in 2011. SOC reports are performed in accordance with Statement on Standards for Attestation Engagements (SSAE) 16

5.5.1.2.1. SOC I reports focus solely on controls at a service provider that are likely to be relevant to an audit of a subscriber’s financial statements.

5.5.1.2.2. SOC II: reporting was specifically designed for IT-managed service providers and cloud computing.

5.5.1.2.3. SOC III: reporting also uses the Trust Services Principles but provides only the auditor’s report on whether the system achieved the specified principle, without disclosing relevant details and sensitive information.

5.5.2. NATIONAL

5.5.2.1. NIST SP 800-53

5.5.2.1.1. Amendments 4th Rev.

5.5.2.1.2. Key components

5.5.3. INDUSTRY

5.5.3.1. PCI DSS

5.5.3.1.1. Merchant Levels Based on Transactions

5.5.3.1.2. Merchant Requirements

5.5.4. SYSTEM AND SUBSYSTEM

5.5.4.1. Common Criteria

5.5.4.1.1. Common Criteria Components

5.5.4.2. FIPS 140-2

5.5.4.2.1. Specifications

5.5.4.2.2. Goal: accredit and distinguish secure and well-architected cryptographic modules produced by private sector vendors who seek to or are in the process of having their solutions and services certified for use in U.S. Government departments

5.5.4.2.3. Levels

6. Cloud Data Security

6.1. The Cloud Data Lifecycle Phases

6.1.1. 1.Create: The generation or acquisition of new digital content, or the alteration/updating of existing content.

6.1.1.1. The creation phase is the preferred time to classify content according to its sensitivity.

6.1.2. 2.Store: The act of committing the digital data to some sort of storage repository. Typically occurs nearly simultaneously with creation.

6.1.2.1. Controls such as encryption, access policy, monitoring, logging, and backups should be implemented to avoid data threats.

6.1.3. 3.Use: Data is viewed, processed, or otherwise used in some sort of activity, not including modification.

6.1.3.1. Data in use is most vulnerable because it might be transported into unsecure locations such as workstations, and in order to be processed, it must be unencrypted.

6.1.3.2. Controls such as Data Loss Prevention (DLP), Information Rights Management (IRM), and database and file access monitors should be implemented in order to audit data access and prevent unauthorized access.

6.1.4. 4.Share: Information is made accessible to others, such as between users, to customers, and to partners.

6.1.4.1. Technologies such as DLP can be used to detect unauthorized sharing, and IRM technologies can be used to maintain control over the information.

6.1.5. 5.Archive: Data leaving active use and entering long-term storage. Archiving data for a long period of time can be challenging.

6.1.5.1. Storage compatibility might be an issue over time

6.1.5.2. Regulatory requirements must be addressed and different tools and providers might be part of this phase.

6.1.6. 6.Destroy: The data is removed from the cloud provider.

6.1.6.1. Consideration should be made according to regulation, type of cloud being used (IaaS vs. SaaS), and the classification of the data.

6.2. Location and Access of Data

6.2.1. Location

6.2.1.1. Who are the actors that potentially have access to data I need to protect?

6.2.1.2. What is/are the potential location(s) for data I have to protect?

6.2.1.3. What are the controls in each of those locations?

6.2.1.4. At what phases in each lifecycle can data move between locations?

6.2.1.5. How does data move between locations (via what channels)?

6.2.1.6. Where are these actors coming from (what locations, and are they trusted or untrusted)?

6.2.2. Access

6.2.2.1. who can access relevant data

6.2.2.2. how they are able to access it (device and channels)

6.3. Functions, Actors, and Controls of the Data

6.3.1. DATA FUNCTIONS: Each function is performed in a location by an actor

6.3.1.1. Access: View/access the data, including copying, file transfers, and other exchanges of information. Lifecycle mapping: all phases

6.3.1.2. Process: Perform a transaction on the data. Update it, use it in a business processing transaction, and so on. Lifecycle mapping: Create, Use phases

6.3.1.3. Store: Store the data (in a file, database, etc.). Lifecycle mapping: Store, Archive phases

6.3.2. CONTROLS: act as a mechanism to restrict a list of possible actions down to allowed or permitted actions. They can be of a preventative, detective (monitoring), or corrective nature.

6.3.3. Actors: Documenting which functions each actor is allowed to perform, and at which locations, helps in designing appropriate controls.

6.4. Cloud Services, Products, and Solutions

6.4.1. Processing data and running applications (compute servers)

6.4.2. Moving data (networking)

6.4.3. Preserving or storing data (storage)

6.4.3.1. Data Storage Types

6.4.3.1.1. IAAS

6.4.3.1.2. PAAS

6.4.3.1.3. SAAS

6.4.3.2. Data Storage Threats

6.4.3.2.1. Unauthorized usage: In the cloud, data storage can be manipulated into unauthorized usage, such as by account hijacking or uploading illegal content.

6.4.3.2.2. Unauthorized access: Unauthorized access can happen due to hacking, improper permissions in multi-tenant environments, or an internal cloud provider employee.

6.4.3.2.3. Liability due to regulatory non-compliance: Certain controls (e.g., encryption) might be required in order to comply with certain regulations. Not all cloud services enable all relevant data controls.

6.4.3.2.4. Denial of service (DoS) and distributed denial of service (DDoS) attacks on storage: Availability is a strong concern for cloud storage; without data, no instances can launch.

6.4.3.2.5. Corruption/modification and destruction of data: This can be caused by a wide variety of sources: human error, hardware or software failure, events such as fire or flood, or intentional hacks.

6.4.3.2.6. Data leakage/breaches: Consumers should always be aware that cloud data is exposed to data breaches. A breach can be external or can come from a cloud provider employee with storage access. Data tends to be replicated and moved in the cloud, which increases the likelihood of a leak.

6.4.3.2.7. Theft or accidental loss of media: This threat applies mainly to portable storage, but as cloud datacenters grow and storage devices get smaller, these devices face increasing exposure to theft and similar threats as well.

6.4.3.2.8. Malware attack or introduction: Almost all malware ultimately aims to reach the data storage.

6.4.3.2.9. Improper treatment or sanitization after end of use: End of use is challenging in cloud computing because physical destruction of media usually cannot be enforced. However, the dynamic nature of cloud data, which is spread across different storage devices shared by multiple tenants, mitigates the risk that digital remnants can be located.

6.4.4. Relevant Data Security Technologies

6.4.4.1. Data Leakage Prevention (DLP): For auditing and preventing unauthorized data exfiltration

6.4.4.1.1. Components

6.4.4.1.2. Architecture

6.4.4.1.3. Cloud-Based DLP Considerations

6.4.4.1.4. Cloud DLP policy should address

6.4.4.2. Encryption: For preventing unauthorized data viewing

6.4.4.2.1. Challenges

6.4.4.2.2. Architecture

6.4.4.3. Obfuscation, anonymization, tokenization, and masking: Different alternatives for protecting data without encryption

6.4.4.3.1. Data Masking/Data Obfuscation: process of hiding, replacing, or omitting sensitive information from a specific dataset.

6.4.4.3.2. Data Anonymization: Direct identifiers and indirect identifiers form two primary components for identification of individuals, users, or indeed personal information. Anonymization is the process of removing the indirect identifiers in order to prevent data analysis tools or other intelligent mechanisms from collating or pulling data from multiple sources to identify individual or sensitive information.

6.4.4.3.3. Tokenization: is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token. Tokenization is used to safeguard the sensitive data in a secure, protected, or regulated environment.
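
A minimal sketch of tokenization, using an in-memory dictionary as a stand-in for a hardened token vault; only the vault can map a token back to the original value:

    import secrets

    _vault = {}   # stand-in for a hardened, access-controlled token vault

    def tokenize(value: str) -> str:
        token = "tok_" + secrets.token_hex(8)   # no mathematical relation to the value
        _vault[token] = value
        return token

    def detokenize(token: str) -> str:
        return _vault[token]                    # access must be tightly restricted

    t = tokenize("4111111111111111")   # the token can flow through less-trusted systems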

6.4.4.4. Data Dispersion Technique: Data dispersion is similar to a RAID solution, but it is implemented differently. Storage blocks are replicated to multiple physical locations across the cloud

6.4.4.5. Emerging Technologies

6.4.4.5.1. Bit splitting: involves splitting up and storing encrypted information across different cloud storage services.
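
As a simplified illustration of the idea (real products typically combine encryption with secret-sharing schemes such as Shamir's), the XOR sketch below splits data into n shares, all of which are required for reconstruction; each share could be stored with a different cloud storage service:

    import secrets
    from functools import reduce

    def split(data: bytes, n: int = 3) -> list:
        """All n shares are needed to reconstruct; any subset reveals nothing."""
        shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
        last = bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(data, *shares))
        return shares + [last]

    def combine(shares: list) -> bytes:
        return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*shares))

    parts = split(b"sensitive record")           # one share per storage provider
    assert combine(parts) == b"sensitive record"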

6.4.4.5.2. Homomorphic encryption: enables processing of encrypted data without the need to decrypt the data. It allows the cloud customer to upload data to a Cloud Service Provider for processing without the requirement to decipher the data first.
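
A toy illustration of a partially homomorphic property, using textbook RSA (which is multiplicatively homomorphic); the key is deliberately tiny and there is no padding, so this is for intuition only. Fully homomorphic schemes extend this idea to arbitrary computation:

    # Textbook RSA: Enc(m1) * Enc(m2) mod n decrypts to m1 * m2.
    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    enc = lambda m: pow(m, e, n)
    dec = lambda c: pow(c, d, n)

    c = (enc(6) * enc(7)) % n   # the provider multiplies ciphertexts only...
    assert dec(c) == 42         # ...yet the result decrypts to 6 * 7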

6.5. Data Discovery

6.5.1. Trends

6.5.1.1. Big data: On big data projects, data discovery is more important and more challenging. Not only is the volume of data that must be efficiently processed for discovery larger, but the diversity of sources and formats presents challenges that make many traditional methods of data discovery fail. Where big data initiatives also involve rapid profiling of high-velocity data, profiling becomes harder and less feasible using existing toolsets.

6.5.1.2. Real-time analytics: The ongoing shift toward (nearly) real-time analytics has created a new class of use cases for data discovery. These use cases are valuable but require data discovery tools that are faster, more automated, and more adaptive.

6.5.1.3. Agile analytics and agile business intelligence: Data scientists and business intelligence teams are adopting more agile, iterative methods of turning data into business value. They perform data discovery processes more often and in more diverse ways, for example, when profiling new datasets for integration, seeking answers to new questions emerging this week based on last week’s new analysis, or finding alerts about emerging trends that may warrant new analysis work streams.

6.5.2. Analysis Methods

6.5.2.1. Metadata: This is data that describes data, and all relational databases store metadata that describes tables and column attributes.

6.5.2.2. Labels: When data elements are grouped with a tag that describes the data. This can be done at the time the data is created, or tags can be added over time to provide additional information and references to describe the data. In many ways, it is just like metadata but slightly less formal.

6.5.2.3. Content analysis: In this form of analysis, we investigate the data itself by employing pattern matching, hashing, statistical, lexical, or other forms of probability analysis.
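
A minimal sketch of pattern-matching content analysis; the patterns are illustrative only (production tools combine regular expressions with validators such as Luhn checks, dictionaries, hashing, and statistical methods):

    import re

    PATTERNS = {
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def scan(text: str) -> dict:
        """Return every label whose pattern matches, with the matching strings."""
        return {label: pat.findall(text)
                for label, pat in PATTERNS.items() if pat.search(text)}

    print(scan("customer SSN 123-45-6789 on file"))   # {'us_ssn': ['123-45-6789']}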

6.5.3. Issues

6.5.3.1. Poor data quality: Data visualization tools are only as good as the information that is inputted.

6.5.3.2. Dashboards: Users modify data and change fields with no audit trail. This can lead to inconsistent insight and flawed decisions, drive up administration costs, and inevitably create multiple versions of the truth. Security poses a problem with data discovery tools. IT staff typically have little or no control over these types of solutions, which means they cannot protect sensitive information. This can result in unencrypted data being cached locally and viewed by or shared with unauthorized users.

6.5.3.3. Hidden costs: A common data discovery technique is to put all of the data into server RAM to take advantage of the inherent input/output rate improvements over disk.

6.5.4. Challenges in the Cloud

6.5.4.1. Identifying data location: It is hard to find ways to secure data that users are accessing in real time, from multiple locations, across multiple platforms.

6.5.4.2. Accessing the data: Not all data stored in the cloud can be accessed easily. Sometimes customers do not have the necessary administrative rights to access their data on demand, or long-term data can be visible to the customer but not accessible to download in acceptable formats for use offline.

6.5.4.2.1. Limits on the volume of data that will be accessible

6.5.4.2.2. The ability to collect/examine large amounts of data

6.5.4.2.3. Whether any/all related metadata will be preserved

6.5.4.3. Preservation and maintenance: Preservation requirements should be clearly documented for, and supported by, the cloud provider as part of the SLA.

6.6. Data Classification

6.6.1. Categories: should match the data controls to be used

6.6.1.1. Data type (format, structure)

6.6.1.2. Jurisdiction (of origin, domiciled) and other legal constraints

6.6.1.3. Context

6.6.1.4. Ownership

6.6.1.5. Contractual or business constraints

6.6.1.6. Trust levels and source of origin

6.6.1.7. Value, sensitivity, and criticality (to the organization or to third party)

6.6.1.8. Obligation for retention and preservation

6.6.2. Challenges with Cloud Data

6.6.2.1. Data creation: The CSP needs to ensure that proper security controls are in place so that whenever data is created or modified by anyone, they are forced to classify or update the data as part of the creation/modification process.

6.6.2.2. Classification controls: Controls could be administrative (as guidelines for users who are creating the data), preventive, or compensating.

6.6.2.3. Metadata: Classifications can sometimes be made based on the metadata that is attached to the file, such as owner or location. This metadata should be accessible to the classification process in order to make the proper decisions.

6.6.2.4. Classification data transformation: Controls should be placed to make sure that the relevant property or metadata can survive data object format changes and cloud imports and exports.

6.6.2.5. Reclassification consideration: Cloud applications must support a reclassification process based on the data lifecycle.

6.7. Data Privacy Acts

6.7.1. Key Questions

6.7.1.1. What information in the cloud is regulated under data-protection laws?

6.7.1.2. Who is responsible for personal data in the cloud?

6.7.1.3. Whose laws apply in a dispute?

6.7.1.4. Where is personal data processed?

6.7.2. GLOBAL P&DP LAWS

6.7.2.1. US: “Consumer Privacy Bill of Rights,” 2012

6.7.2.2. EU Directive 95/46/EC “on the protection of individuals with regard to the processing of personal data and on the free movement of such data,” since replaced by the General Data Protection Regulation (GDPR)

6.7.2.3. EU enacted a privacy directive (e-privacy directive) 2002/58/EC “concerning the processing of personal data and the protection of privacy in the electronic communications sector.” This directive contains provisions concerning data breaches and the use of cookies.

6.7.2.4. EU General Data Protection Regulation (GDPR): adopted in 2016 and applicable from May 2018, replacing Directive 95/46/EC

6.7.2.5. EU directive for privacy in the Police and Criminal Justice sector

6.7.2.6. APEC (Asia-Pacific Economic Cooperation) Privacy Framework

6.7.3. DIFFERENCES BETWEEN JURISDICTION AND APPLICABLE LAW

6.7.3.1. Applicable law: This determines the legal regime applicable to a certain matter.

6.7.3.2. Jurisdiction: This usually determines the ability of a national court to decide a case or enforce a judgment or order.

6.7.4. ESSENTIAL REQUIREMENTS IN P&DP LAWS

6.7.4.1. Typical Meanings for Common Privacy Terms

6.7.4.1.1. Data subject: An identified or identifiable natural person; an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural, or social identity (such as a telephone number or IP address).

6.7.4.1.2. Personal data: Any information relating to an identified or identifiable natural person. There are many types of personal data, such as sensitive/health data, and biometric data. According to the type of personal data, the P&DP laws usually set out specific privacy and data-protection obligations (e.g., security measures, data subject’s consent for the processing).

6.7.4.1.3. Processing: Operations that are performed upon personal data, whether or not by automatic means, such as collection, recording, organization, storage, adaptation, or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure, or destruction.

6.7.4.1.4. Controller: The natural or legal person, public authority, agency, or any other body that alone or jointly with others determines the purposes and means of the processing of personal data; where the purposes and means of processing are determined by national or community laws or regulations, the controller or the specific criteria for his nomination may be designated by national or community law.

6.7.4.1.5. Processor: A natural or legal person, public authority, agency, or any other body that processes personal data on behalf of the controller.

6.7.4.2. Privacy Roles for Customers and Service Providers

6.7.4.2.1. The customer determines the ultimate purpose of the processing and decides on the outsourcing or the delegation of all or part of the concerned activities to external organizations. Therefore, the customer acts as a controller.

6.7.4.2.2. When the service provider supplies the means and the platform, acting on behalf of the customer, it is considered to be a data processor. There may be situations in which a service provider is considered either a joint controller or a controller in its own right, depending on concrete circumstances.

6.7.4.2.3. In a cloud services environment, it is not always easy to properly identify and assign the roles of controller and processor between the customer and the service provider

6.7.4.3. Responsibility Depending on the Type of Cloud Services

6.7.4.3.1. SaaS: The customer determines/collects the data to be processed with a cloud service (CS), while the service provider essentially makes the decisions of how to carry out the processing and implement specific security controls.

6.7.4.3.2. PaaS: The customer has a greater ability to determine the instruments of processing, although the terms of the services are not usually negotiable.

6.7.4.3.3. IaaS: The customer has a high level of control on data, processing functionalities, tools, and related operational management, thus achieving a very high level of responsibility in determining purposes and means of processing.

6.7.4.3.4. The main rule for identifying a controller is to ask who determines the purpose and scope of processing. In the SaaS and PaaS models, the service provider could also be considered a controller or joint controller with the customer. The proper identification of the controller and processor roles is essential for clarifying the P&DP liabilities of the customer and service provider, as well as the applicable law.

6.7.4.4. Implementation of data discovery together with data-classification techniques represents the foundation of Data Leakage/Loss Prevention (DLP) and of Data Protection (DP), applied to personal data processing in order to operate in compliance with the P&DP laws.

6.7.4.4.1. Implementation of Data Discovery

6.7.4.4.2. Classification of discovered sensitive data, for the purpose of compliance with the applicable Privacy and Data Protection (P&DP) laws, plays an essential role in the operational control of the elements that feed the P&DP obligations.

6.7.4.4.3. Data discovery solutions, together with data-classification techniques, provide an effective enabler of the ability to comply with the controller’s P&DP instructions.

6.7.4.5. Mapping and Definition of Controls

6.7.4.5.1. Key privacy cloud service factors:

6.7.4.5.2. Privacy Level Agreement (PLA)

6.7.4.5.3. Essential P&DP Requirements and PLA

6.7.4.5.4. Application of Defined Controls for Personally Identifiable Information (PII)

6.8. Data Rights Management Objectives

6.8.1. Features

6.8.1.1. Information Rights Management (IRM) adds an extra layer of access controls on top of the data object or document. The Access Control List (ACL) determines who can open the document and what they can do with it and provides granularity that flows down to printing, copying, saving, and similar options.

6.8.1.2. Because IRM contains ACLs and is embedded into the original file, IRM is agnostic to the location of the data, unlike other preventative controls that depend on file location. IRM protection travels with the file and provides continuous protection.

6.8.1.3. IRM is useful for protecting sensitive organization content such as financial documents. However, it is not limited to only documents; IRM can be implemented to protect emails, web pages, database columns, and other data objects.

6.8.1.4. IRM is useful for setting up a baseline for the default Information Protection Policy, that is, all documents created by a certain user, at a certain location, will receive a specific policy.

6.8.2. IRM cloud challenges

6.8.2.1. Strong identity infrastructure is a must when implementing IRM, and the identity infrastructure should expand to customers, partners, and any other organizations with which data is shared.

6.8.2.2. IRM requires that each resource be provisioned with an access policy, and that each user accessing the resource be provisioned with an account and keys. Provisioning must be done securely and efficiently for the implementation to be successful. Automating the provisioning of IRM resource access policies can help achieve that goal; automated policy provisioning can be based on file location, keywords, or origin of the document, as in the sketch below.
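
A minimal sketch of automated IRM policy assignment based on file location and keywords; the rules and policy names are hypothetical:

    RULES = [
        (lambda doc: doc["path"].startswith("/finance/"), "finance-confidential"),
        (lambda doc: "merger" in doc["text"].lower(),     "legal-restricted"),
    ]

    def assign_policy(doc: dict, default: str = "general-internal") -> str:
        """Return the first matching policy, falling back to a default."""
        for matches, policy in RULES:
            if matches(doc):
                return policy
        return default

    print(assign_policy({"path": "/finance/q3.xlsx", "text": "Q3 results"}))
    # -> finance-confidential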

6.8.2.3. Access to resources can be granted on a per-user basis or according to user role using an RBAC model. Provisioning of users and roles should be integrated into IRM policies. Since in IRM most classification is the user’s responsibility, or is based on automated policy, implementing the right RBAC policy is crucial.

6.8.2.4. Identity infrastructure can be implemented by creating a single location where users are created and authenticated or by creating federation and trust between different repositories of user identities in different systems. Carefully consider the most appropriate method based on the security requirements of the data.

6.8.2.5. Most IRM implementations will force end users to install a local IRM agent either for key storage or for authenticating and retrieving the IRM content. This feature may limit certain implementations that involve external users and should be considered part of the architecture planning prior to deployment.

6.8.2.6. When reading IRM-protected files, the reader software should be IRM-aware. Adobe and Microsoft products in their latest versions have good IRM support, but other readers could encounter compatibility issues and should be tested prior to deployment.

6.8.2.7. The challenges of IRM compatibility with different operating systems and different document readers increase when the data needs to be read on mobile devices. The usage of mobile platforms and IRM should also be tested carefully.

6.8.2.8. IRM can integrate into other security controls such as DLP and documents discovery tools, adding extra benefits.

6.8.3. Key capabilities common to IRM solutions

6.8.3.1. Persistent protection: Ensures that documents, messages, and attachments are protected at rest, in transit, and even after they’re distributed to recipients

6.8.3.2. Dynamic policy control: Allows content owners to define and change user permissions (view, forward, copy, or print) and recall or expire content even after distribution

6.8.3.3. Automatic expiration: Provides the ability to automatically revoke access to documents, emails, and attachments at any point, thus allowing information security policies to be enforced wherever content is distributed or stored

6.8.3.4. Continuous audit trail: Provides confirmation that content was delivered and viewed and offers proof of compliance with your organization’s information security policies

6.8.3.5. Support for existing authentication security infrastructure: Reduces administrator involvement and speeds deployment by leveraging user and group information that exists in directories and authentication systems

6.8.3.6. Mapping for repository access control lists (ACLs): Automatically maps the ACL-based permissions into policies that control the content outside the repository

6.8.3.7. Integration with all third-party email filtering engines: Allows organizations to automatically secure outgoing email messages in compliance with corporate information security policies and federal regulatory requirements

6.8.3.8. Additional security and protection capabilities

6.8.3.8.1. Determining who can access a document

6.8.3.8.2. Prohibiting printing of an entire document or selected portions

6.8.3.8.3. Disabling copy/paste and screen capture capabilities

6.8.3.8.4. Watermarking pages if printing privileges are granted

6.8.3.8.5. Expiring or revoking document access at any time

6.8.3.8.6. Tracking all document activity through a complete audit trail

6.8.3.9. Support for email applications: Provides interface and support for email programs such as Microsoft Outlook and IBM Lotus Notes

6.8.3.10. Support for other document types: Other document types, besides Microsoft Office and PDF, can be supported as well

6.9. Data-Protection Policies

6.9.1. Data retention: an organization’s established protocol for keeping information for operational or regulatory compliance needs

6.9.1.1. Defines

6.9.1.1.1. Retention periods

6.9.1.1.2. Data formats

6.9.1.1.3. Data security

6.9.1.1.4. Data-retrieval procedures for the enterprise

6.9.1.2. Components

6.9.1.2.1. Legislation, regulation, and standards requirements: Data-retention considerations are heavily dependent on the data type and the required compliance regimes associated with it.

6.9.1.2.2. Data mapping: The process of mapping all relevant data in order to understand data types (structured and unstructured), data formats, file types, and data locations (network drives, databases, object, or volume storage).

6.9.1.2.3. Data classification: Classifying the data based on locations, compliance requirements, ownership, or business usage, in other words, its “value.” Classification is also used in order to decide on the proper retention procedures for the enterprise.

6.9.1.2.4. Data-retention procedure: For each data category, the data-retention procedures should be followed based on the appropriate data-retention policy that governs the data type. How long the data is to be kept, where (physical location, and jurisdiction), and how (which technology and format) should all be spelled out in the policy and implemented via the procedure. The procedure should also include backup options, retrieval requirements, and restore procedures, as required and necessary for the data types being managed.

6.9.1.2.5. Monitoring and maintenance: Procedures for making sure that the entire process is working, including review of the policy and requirements to make sure that there are no changes.

6.9.2. Data deletion: safe disposal of data once it is no longer needed. Failure to do so may result in data breaches and/or compliance failures.

6.9.2.1. Reasons

6.9.2.1.1. Regulation or legislation: Certain laws and regulations require specific degrees of safe disposal for certain records.

6.9.2.1.2. Business and technical requirements: Business policy may require safe disposal of data. Also, processes such as encryption might require safe disposal of the clear text data after creating the encrypted copy.

6.9.2.2. Disposal Options

6.9.2.2.1. Physical destruction: Physically destroying the media by incineration, shredding, or other means.

6.9.2.2.2. Degaussing: Using strong magnets to scramble data on magnetic media such as hard drives and tapes.

6.9.2.2.3. Overwriting: Writing random data over the actual data. The more times the overwriting process occurs, the more thorough the destruction of the data is considered to be.

6.9.2.2.4. Encryption: Using an encryption method to rewrite the data in an encrypted format, making it unreadable without the encryption key; destroying the key then effectively destroys the data (see the sketch below).
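
A minimal sketch of this “crypto-shredding” approach, again using the Python cryptography library’s Fernet recipe: if only ciphertext is ever persisted, destroying every copy of the key disposes of the data even on media that cannot be physically wiped:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                             # held in a key-management system
    ciphertext = Fernet(key).encrypt(b"record to retire")   # only this is persisted

    # Disposal: securely destroy every copy of the key. The ciphertext left on
    # cloud media (and all of its replicas) is then permanently unreadable.
    del key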

6.9.3. Data archiving: process of identifying and moving inactive data out of current production systems and into specialized long-term archival storage systems.

6.9.3.1. Data-encryption procedures: Long-term data archiving with an encryption could present a challenge for the organization with regard to key management.

6.9.3.2. Data monitoring procedures: Data stored in the cloud tends to be replicated and moved. In order to maintain data governance, it is required that all data access and movements be tracked and logged to make sure that all security controls are being applied properly throughout the data lifecycle.

6.9.3.3. Ability to perform eDiscovery and granular retrieval: Archive data may be subject to retrieval according to certain parameters such as dates, subject, authors, and so on. The archiving platform should provide the ability to do eDiscovery on the data in order to decide which data should be retrieved.

6.9.3.4. Backup and disaster recovery options: All requirements for data backup and restore should be specified and clearly documented.

6.9.3.5. Data format and media type: The format of the data is an important consideration because it may be kept for an extended period of time. Proprietary formats can change, thereby leaving data in a useless state, so choosing the right format is very important. The same consideration must be made for media storage types as well.

6.9.3.6. Data restoration procedures: Restore testing should be performed periodically to make sure that the process is working. Trial restores should be made into an isolated environment to mitigate risks such as restoring an old virus or accidentally overwriting existing data.

6.10. Events

6.10.1. SOURCES

6.10.1.1. SaaS: minimal control of, and access to, event and diagnostic data; it is recommended to specify required data access requirements in the cloud SLA or contract with the cloud service provider.

6.10.1.1.1. Webserver logs

6.10.1.1.2. Application server logs

6.10.1.1.3. Database logs

6.10.1.1.4. Guest operating system logs

6.10.1.1.5. Host access logs

6.10.1.1.6. Virtualization platform logs and SaaS portal logs

6.10.1.1.7. Network captures

6.10.1.1.8. Billing records

6.10.1.2. PaaS: control of, and access to, event and diagnostic data. Because the applications that will be monitored are being built and designed by the organization directly, the level of application data that can be extracted and monitored is up to the developers.

6.10.1.2.1. Input validation failures, for example, protocol violations, unacceptable encodings, and invalid parameter names and values

6.10.1.2.2. Output validation failures, for example, database record set mismatch and invalid data encoding

6.10.1.2.3. Authentication successes and failures

6.10.1.2.4. Authorization (access control) failures

6.10.1.2.5. Session management failures, for example, cookie session identification value modification

6.10.1.2.6. Application errors and system events

6.10.1.2.7. Application and related systems start-ups and shut-downs, and logging initialization (starting, stopping, or pausing)

6.10.1.2.8. Use of higher-risk functionality

6.10.1.2.9. Legal and other opt-ins

6.10.1.3. IaaS: control of, and access to, event and diagnostic data

6.10.1.3.1. Cloud or network provider perimeter network logs

6.10.1.3.2. Logs from DNS servers

6.10.1.3.3. Virtual machine monitor (VMM) logs

6.10.1.3.4. Host operating system and hypervisor logs

6.10.1.3.5. API access logs

6.10.1.3.6. Management portal logs

6.10.1.3.7. Packet captures

6.10.1.3.8. Billing records

6.10.2. EVENT ATTRIBUTE REQUIREMENTS

6.10.2.1. When

6.10.2.1.1. Log date and time (international format).

6.10.2.1.2. Event date and time. The event time stamp may be different from the time of logging.

6.10.2.1.3. Interaction identifier.

6.10.2.2. Where

6.10.2.2.1. Application identifier, for example, name and version

6.10.2.2.2. Application address, for example, cluster/host name or server IPv4 or IPv6 address and port number, workstation identity, and local device identifier

6.10.2.2.3. Service name and protocol

6.10.2.2.4. Geolocation

6.10.2.2.5. Window/form/page, for example, entry point URL and HTTP method for a web application and dialog box name

6.10.2.2.6. Code location, including the script and module name

6.10.2.3. Who (human or machine user)

6.10.2.3.1. Source address, including the user’s device/machine identifier, user’s IP address, cell/RF tower ID, and mobile telephone number

6.10.2.3.2. User identity (if authenticated or otherwise known), including the user database table primary key value, username, and license number

6.10.2.4. What

6.10.2.4.1. Type of event

6.10.2.4.2. Severity of event, for example, syslog levels (0=emergency, 1=alert, ..., 7=debug) or application levels (fatal, error, warning, info, debug, trace)

6.10.2.4.3. Security-relevant event flag (if the logs contain non-security event data too)

6.10.2.4.4. Description

6.10.2.5. Additional considerations

6.10.2.5.1. Secondary time source (GPS) event date and time.

6.10.2.5.2. Action, which is the original intended purpose of the request. Examples are log in, refresh session ID, log out, and update profile.

6.10.2.5.3. Object, for example, the affected component or other object (user account, data resource, or file), URL, session ID, user account, or file.

6.10.2.5.4. Result status. Whether the action aimed at the object was successful (can be Success, Fail, or Defer).

6.10.2.5.5. Reason. Why the status occurred, for example, the user was not authenticated in the database check, incorrect credentials.

6.10.2.5.6. HTTP status code (for web applications only). The status code returned to the user (often 200 or 301).

6.10.2.5.7. Request HTTP headers or HTTP user agent (web applications only).

6.10.2.5.8. User type classification, for example, public, authenticated user, CMS user, search engine, authorized penetration tester, and uptime monitor.

6.10.2.5.9. Analytical confidence in the event detection, for example, low, medium, high, or a numeric value.

6.10.2.5.10. Responses seen by the user and/or taken by the application, for example, status code, custom text messages, session termination, and administrator alerts.

6.10.2.5.11. Extended details, for example, stack trace, system error messages, debug information, HTTP request body, and HTTP response headers and body.

6.10.2.5.12. Internal classifications, for example, responsibility and compliance references.

6.10.2.5.13. External classifications
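
Pulling the when/where/who/what attributes above together, a hypothetical structured (JSON) event record might look like the sketch below; the field names are illustrative, not a standard:

    import json
    from datetime import datetime, timezone

    event = {
        "log_time": datetime.now(timezone.utc).isoformat(),   # when: time of logging
        "event_time": "2015-06-01T12:00:03+00:00",            # when: time of the event
        "where": {"app": "billing-api v2.1", "host": "10.0.4.17:443", "geo": "eu-west"},
        "who": {"source_ip": "203.0.113.9", "user": "jsmith"},
        "what": {"type": "authn_failure", "severity": 4,
                 "security_relevant": True, "description": "incorrect credentials"},
    }
    print(json.dumps(event))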

6.10.3. STORAGE AND ANALYSIS

6.10.3.1. Preservation is defined by ISO 27037:2012 as the “process to maintain and safeguard the integrity and/or original condition of the potential digital evidence.”

6.10.3.2. Evidence preservation helps assure admissibility in a court of law.

6.10.3.3. Storage requires strict access controls to protect the items from accidental or deliberate modification, as well as appropriate environment controls.

6.10.3.4. Event logging mechanism should be tamper-proof in order to avoid the risks of faked event logs.

6.10.4. SECURITY INFORMATION AND EVENT MANAGEMENT (SIEM) = SEM + SIM

6.10.4.1. The segment of security management that deals with real-time monitoring, correlation of events, notifications, and console views is commonly known as security event management (SEM)

6.10.4.2. The segment that provides long-term storage, analysis, and reporting of log data is known as security information management (SIM)

6.10.4.3. Capabilities

6.10.4.3.1. Data aggregation: Log management aggregates data from many sources, including network, security, servers, databases, and applications, providing the ability to consolidate monitored data to help avoid missing crucial events.

6.10.4.3.2. Correlation: Looks for common attributes and links events together into meaningful bundles (see the sketch after this list).

6.10.4.3.3. Alerting: The automated analysis of correlated events and production of alerts, to notify recipients of immediate issues.

6.10.4.3.4. Dashboards: Tools can take event data and turn it into informational charts to assist in seeing patterns or identifying activity that is not forming a standard pattern.

6.10.4.3.5. Compliance: Applications can be employed to automate the gathering of compliance data, producing reports that adapt to existing security, governance, and auditing processes.

6.10.4.3.6. Retention: Employing long-term storage of historical data to facilitate correlation of data over time and to provide the retention necessary for compliance requirements.

6.10.4.3.7. Forensic analysis: The ability to search across logs on different nodes and time periods based on specific criteria.
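
A toy correlation rule in the spirit of the capabilities above: flag an account where at least five authentication failures inside a time window are followed by a success. The thresholds, window, and field names are illustrative:

    from collections import defaultdict, deque

    WINDOW, THRESHOLD = 300, 5          # seconds, failed attempts
    failures = defaultdict(deque)       # user -> timestamps of recent failures

    def correlate(event: dict):
        """Feed events in time order; returns an alert string or None."""
        q = failures[event["user"]]
        while q and event["ts"] - q[0] > WINDOW:
            q.popleft()                 # drop failures outside the window
        if event["type"] == "authn_failure":
            q.append(event["ts"])
        elif event["type"] == "authn_success" and len(q) >= THRESHOLD:
            return f"ALERT: possible brute force against {event['user']}"
        return None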

6.10.4.4. Challenges

6.10.4.4.1. targeted attack detection requires in-depth knowledge of internal systems, the kind found in corporate security teams.

6.10.4.4.2. trouble with recognizing the low-and-slow attacks

6.10.4.4.3. need to have access to the data gathered by the cloud provider’s monitoring infrastructure.

6.10.4.4.4. access to monitoring data would need to be specified as part of the SLA

6.11. Supporting Continuous Operations

6.11.1. Audit logging: Higher levels of assurance are required for protection, retention, and lifecycle management of audit logs. They must adhere to the applicable legal, statutory, or regulatory compliance obligations and provide unique user access accountability to detect potentially suspicious network behaviors and/or file integrity anomalies through to forensic investigative capabilities in the event of a security breach.

6.11.1.1. New event detection: The goal of auditing is to detect information security events. Policies should be created that define what a security event is and how to address it.

6.11.1.2. Adding new rules: Rules are built in order to allow detection of new events. Rules allow for the mapping of expected values to log files in order to detect events. In continuous operation mode, rules have to be updated to address new risks.

6.11.1.3. Reduction of false positives: The quality of the continuous operations audit logging is dependent on the ability to reduce over time the amount of false positives in order to maintain operational efficiency. This requires constant improvement of the rule set in use.

6.11.2. Contract/authority maintenance: Points of contact for applicable regulatory authorities, national and local law enforcement, and other legal jurisdictional authorities should be maintained and regularly updated as per the business need

6.11.3. Secure disposal: Policies and procedures must be established with supporting business processes and technical measures implemented for the secure disposal and complete removal of data from all storage media.

6.11.4. Incident response legal preparation: In the event a follow-up action concerning a person or organization after an information security incident requires legal action, proper forensic procedures, including chain of custody, should be required for preservation and presentation of evidence to support potential legal action subject to the relevant jurisdictions.

6.12. Chain of Custody and Non-Repudiation

6.12.1. Chain of custody is the preservation and protection of evidence from the time it is collected until the time it is presented in court.

6.12.1.1. collection

6.12.1.2. possession

6.12.1.3. condition

6.12.1.4. location

6.12.1.5. transfer

6.12.1.6. access to

6.12.1.7. any analysis performed

7. Cloud Platform and Infrastructure Security

7.1. Cloud environment

7.1.1. First Level Terms

7.1.1.1. Cloud Service Consumer: Person or organization that maintains a business relationship with, and uses service from, the Cloud Service Providers

7.1.1.2. Cloud Service Provider: Person, organization, or entity responsible for making a service available to service consumers

7.1.1.3. Cloud Carrier: The intermediary that provides connectivity and transport of cloud services between the Cloud Service Providers and Cloud Consumers

7.1.1.3.1. physical cabling (copper or fiber), which is a bandwidth-limiting factor

7.1.1.3.2. switches for local interconnects and routers for more complex network connectivity and flexibility.

7.1.1.3.3. VLANs (virtual LANs) separate local traffic into distinct “broadcast domains.”

7.1.2. Physical infrastructure components

7.1.2.1. Design: four-tier classification scheme for datacenters. Tier 1 is a basic center, and tier 4 has the most redundancy.

7.1.2.2. Characteristics

7.1.2.2.1. High volume of expensive hardware, up to hundreds of thousands of servers in a single facility

7.1.2.2.2. High power densities, up to 10kW (kilowatts) per square meter

7.1.2.2.3. Enormous and immediate impact of downtime on all dependent businesses.

7.1.2.2.4. Data center owners can provide multiple levels of service. The basic level is often summarized as “power, pipe, and ping.”

7.1.2.2.5. Electrical power and cooling (“pipe,” that is, air conditioning). “Power” and “pipe” limit the density with which servers can be stacked in the datacenter.

7.1.2.2.6. Power density is expressed in kW per rack

7.1.2.2.7. Network connectivity.

7.1.2.2.8. Data center providers (co-location) could provide floor space, rack space, and cages (lockable floor space) on any level of aggregation.

7.1.3. Virtual infrastructure components

7.1.3.1. Network

7.1.3.1.1. Software-Defined Networking (SDN): provides a clearly defined and separate network control plane to manage network traffic, separated from the forwarding plane.

7.1.3.1.2. Functionality

7.1.3.2. Compute

7.1.3.2.1. Ability to manage and allocate CPU and RAM resources effectively, either on a per-guest OS basis or on a per-host basis within a resource cluster.

7.1.3.2.2. Virtualization: provides a shared resource pool that can be managed to maximize the number of guest operating systems running on each host.

7.1.3.2.3. Scalability: with virtualization, there is the ability to run multiple operating systems (guests) and their associated applications on a single host.

7.1.3.2.4. Hypervisor: a piece of software, firmware, or hardware that gives the impression to the guest operating systems that they are operating directly on the physical hardware of the host.

7.1.3.3. Storage

7.1.3.3.1. object storage: objects (files) are stored with additional metadata (content type, redundancy required, creation date, etc.). These objects are accessible through APIs and potentially through a web user interface.

7.1.4. Management plane: used to create, start, and stop virtual machine instances and provision them with the proper virtual resources such as CPU, memory, permanent storage, and network connectivity.

7.1.4.1. runs on its own set of servers and will have dedicated connectivity to the physical machines under management.

7.1.4.2. the most powerful tool in the entire cloud infrastructure, it will also integrate authentication, access control, and logging and monitoring of resources used.

7.1.4.3. used by the most privileged users: those who install and remove hardware, system software, firmware, and so on.

7.1.4.4. the pathway for individual tenants who will have limited and controlled access to the cloud’s resources.

7.1.4.5. APIs allow automation of control tasks. A graphical user interface (i.e., web page) is typically built on top of those APIs.
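
A minimal sketch of automating a management-plane task over a REST API; the endpoint, payload, and token are entirely hypothetical, since every provider’s API differs, and the credentials themselves are a prime attack vector that must be protected:

    import requests

    API = "https://cloud.example.com/v1"             # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}    # guard these credentials

    # Provision a VM with virtual resources instead of clicking through a console
    resp = requests.post(f"{API}/instances", headers=HEADERS,
                         json={"image": "baseline-2024", "cpu": 2, "ram_gb": 4})
    resp.raise_for_status()
    print(resp.json()["id"])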

7.2. Management of Cloud Computing Risks

7.2.1. Corporate governance: risks around cloud computing should be judged in relation to the corporate goals.

7.2.2. Enterprise risk management is the set of processes and structure to systematically manage all risks to the enterprise. This explicitly covers supply chain risks and third-party risks, the biggest of which is typically the failure of an external provider to deliver the services that are contracted.

7.2.3. Risk Assessments/Analysis

7.2.3.1. Risk Categories

7.2.3.1.1. Policy and Organization Risks

7.2.3.1.2. General Risks

7.2.3.1.3. Virtualization Risks

7.2.3.1.4. Cloud-Specific Risks

7.2.3.1.5. Legal Risks

7.2.3.1.6. Non-Cloud-Specific Risks

7.2.3.2. Cloud Attack Vectors

7.2.3.2.1. Cloud computing uses new technology such as virtualization, federated identity management, and automation through a management interface.

7.2.3.2.2. Cloud computing introduces external service providers.

7.2.3.2.3. Guest breakout

7.2.3.2.4. Identity compromise, either technical or social (e.g., through employees of the provider)

7.2.3.2.5. API compromise, for example by leaking API credentials

7.2.3.2.6. Attacks on the provider’s infrastructure and facilities (e.g., from a third-party administrator that may be hosting with the provider)

7.2.3.2.7. Attacks on the connecting infrastructure (cloud carrier)

7.2.4. Countermeasure Strategies Across the Cloud

7.2.4.1. multiple layers of defense against any risk

7.2.4.1.1. for a control that directly addresses a risk, there should be an additional control to catch the failure of the first control. These controls are referred to as compensating controls.

7.2.4.2. CONTINUOUS UPTIME. This implies that every component is redundant.

7.2.4.2.1. It makes the infrastructure resilient against component failure.

7.2.4.2.2. It allows individual components to be updated without affecting the cloud infrastructure uptime.

7.2.4.3. AUTOMATION OF CONTROLS. Controls should be automated as much as possible, thus ensuring their immediate and comprehensive implementation.

7.2.4.3.1. Integrate software into the build process of virtual machine images so that required controls are embedded in every image.

7.2.4.3.2. automated system for configuration and resilience makes it possible to replace the running instance with a fresh, updated one. This is often referred to as the baseline image.

7.2.4.4. ACCESS CONTROLS. Depending on the service and deployment models, the responsibility and actual execution of the control can lie with the cloud consumer, with the cloud provider, or both.

7.2.4.4.1. Cloud services should deploy a user-centric approach for effective access control, in which every user request is bundled with the user identity. Particular attention is required for enabling adequate access to external auditors, without jeopardizing the infrastructure.

7.2.4.4.2. Building access

7.2.4.4.3. Computer floor access

7.2.4.4.4. Cage or rack access

7.2.4.4.5. Access to physical servers (hosts)

7.2.4.4.6. Hypervisor access (API or management plane)

7.2.4.4.7. Guest operating system access (VMs)

7.2.4.4.8. Developer access

7.2.4.4.9. Customer access

7.2.4.4.10. Database access rights

7.2.4.4.11. Vendor access

7.2.4.4.12. Remote access

7.2.4.4.13. Application/software access to data (SaaS)

7.2.5. Security controls management

7.2.5.1. Physical and Environmental Protections

7.2.5.1.1. KEY REGULATIONS

7.2.5.1.2. CONTROLS

7.2.5.1.3. PROTECTING DATACENTER FACILITIES

7.2.5.2. System and Communication Protections

7.2.5.2.1. AUTOMATION OF CONFIGURATION

7.2.5.2.2. RESPONSIBILITIES OF PROTECTING THE CLOUD SYSTEM

7.2.5.2.3. FOLLOWING THE DATA LIFECYCLE

7.2.5.3. Virtualization Systems Controls

7.2.5.3.1. The virtualization components include compute, storage, and network, all governed by the management plane. These components merit specific attention. As they implement cloud multi-tenancy, they are a prime source of both cloud-specific risks and compensating controls.

7.2.5.3.2. Management plane GUI and API

7.2.5.3.3. Isolation of the management network with respect to other networks. Separate physical network to meet regulatory and compliance requirements

7.2.5.3.4. The virtualization system components implement controls that isolate tenants. This includes not only confidentiality and integrity but also availability. Fair, policy-based resource allocation over tenants is also a function of the virtualization system components. For this, capacity monitoring of all relevant physical and virtual resources should be considered. This includes network, disk, memory, and CPU.

7.2.5.3.5. Trust zones can be used to segregate the physical infrastructure

7.2.5.3.6. The virtualization layer is also a potential residence for other controls (traffic analysis, DLP, virus scanning)

7.2.5.3.7. Procedures for snapshotting live images should be incorporated into incident response procedures to facilitate cloud forensics.
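
One way such a procedure might be scripted is sketched below, assuming AWS EBS volumes and the boto3 SDK; the volume ID and incident identifier are hypothetical, and hypervisor-level memory capture would require platform-specific tooling instead.

```python
# Minimal sketch: snapshot a live volume for forensic preservation
# (illustrative boto3 call; volume and incident IDs are hypothetical).
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")

def forensic_snapshot(volume_id: str, incident_id: str) -> str:
    snap = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"Forensic snapshot for incident {incident_id} "
                    f"at {datetime.now(timezone.utc).isoformat()}",
    )
    # Record the snapshot ID in the incident ticket to preserve chain of custody.
    return snap["SnapshotId"]
```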

7.2.5.3.8. The virtualization infrastructure should also enable the tenants to implement the appropriate security controls

7.2.5.4. Managing Identification, Authentication, and Authorization in the Cloud Infrastructure

7.2.5.4.1. Identity in cloud computing can be federated across multiple collaborating parties. This implies a split between “identity providers,” which issue identities, and “relying parties,” which depend on the identities those providers issue.
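
A minimal sketch of this split, with illustrative, loosely OIDC-like field names; real federations (SAML, OIDC) additionally verify a cryptographic signature on the assertion, which is omitted here.

```python
# Minimal sketch of the identity-provider / relying-party split: the relying
# party accepts an assertion only from issuers it trusts and only while the
# assertion is still fresh (field names are illustrative).
import time
from dataclasses import dataclass

TRUSTED_ISSUERS = {"https://idp.example.com"}  # hypothetical identity provider

@dataclass
class Assertion:
    sub: str    # subject: the federated user identity
    iss: str    # issuer: the identity provider vouching for the subject
    exp: float  # expiry as a Unix timestamp

def relying_party_accepts(a: Assertion) -> bool:
    return a.iss in TRUSTED_ISSUERS and a.exp > time.time()
```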

7.2.5.4.2. MANAGING IDENTIFICATION

7.2.5.4.3. MANAGING AUTHORIZATION

7.2.5.4.4. ACCOUNTING FOR RESOURCES

7.2.5.4.5. MANAGING IDENTITY AND ACCESS MANAGEMENT

7.2.5.4.6. MAKING ACCESS DECISIONS

7.2.5.4.7. THE ENTITLEMENT PROCESS

7.2.5.4.8. THE ACCESS CONTROL DECISION-MAKING PROCESS

7.2.5.5. Risk Audit Mechanisms

7.2.5.5.1. The purpose of a risk audit is to provide reasonable assurance that adequate risk controls exist and are operationally effective.

7.2.5.5.2. Evidence is an essential component of audits

7.2.5.5.3. CLOUD COMPUTING AUDIT

7.3. Disaster Recovery and Business Continuity Management

7.3.1. BCDR Relevant Cloud Infrastructure

7.3.1.1. SCENARIOS

7.3.1.1.1. ON-PREMISE, CLOUD AS BCDR

7.3.1.1.2. CLOUD CONSUMER, PRIMARY PROVIDER BCDR

7.3.1.1.3. CLOUD CONSUMER, ALTERNATIVE PROVIDER BCDR

7.3.1.2. PLANNING FACTORS

7.3.1.2.1. The important assets: data and processing

7.3.1.2.2. The current locations of these assets

7.3.1.2.3. The networks between the assets and the sites of their processing

7.3.1.2.4. Actual and potential location of workforce and business partners in relation to the disaster event

7.3.1.3. CHARACTERISTICS

7.3.1.3.1. Rapid elasticity and on-demand self-service lead to flexible infrastructure that can be quickly deployed to execute an actual disaster recovery without hitting any unexpected ceilings.

7.3.1.3.2. Broad network connectivity, which reduces operational risk.

7.3.1.3.3. Cloud infrastructure providers have resilient infrastructure, and an external BCDR provider has the potential to be very experienced and capable because its technical and human resources are shared across a number of tenants.

7.3.1.3.4. Pay-per-use can make the total BCDR strategy much cheaper than alternative solutions. During normal operation the standby solution is likely to have a low cost, and even a full trial of an actual DR run will have a low run cost.

7.3.2. Business Requirements

7.3.2.1. Glossary

7.3.2.1.1. Recovery Point Objective (RPO): the maximum tolerable data loss, expressed as the time between the last recoverable copy and the failure; it helps determine how much information must be recovered and restored (a worked example follows this glossary)

7.3.2.1.2. Recovery Time Objective (RTO) is a time measure of how fast you need each system to be up and running in the event of a disaster or critical failure.

7.3.2.1.3. Recovery Service Level (RSL). RSL is a percentage measurement (0–100%) of how much computing power is necessary based on the percentage of the production system needed during a disaster.
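
A worked example with hypothetical numbers, showing how a backup schedule and a DR test result are checked against stated RPO, RTO, and RSL targets:

```python
# Worked example (hypothetical numbers): check a backup schedule and a DR
# test outcome against stated RPO, RTO, and RSL targets.
RPO_HOURS = 4.0     # tolerate losing at most 4 hours of data
RTO_HOURS = 8.0     # systems must be back within 8 hours
RSL_PERCENT = 60.0  # 60% of production compute suffices during a disaster

backup_interval_hours = 6.0    # current schedule: one backup every 6 hours
observed_recovery_hours = 5.5  # measured during the last DR test
dr_capacity_percent = 70.0     # compute provisioned at the DR site

meets_rpo = backup_interval_hours <= RPO_HOURS    # False: worst-case loss is 6 h
meets_rto = observed_recovery_hours <= RTO_HOURS  # True: 5.5 h <= 8 h
meets_rsl = dr_capacity_percent >= RSL_PERCENT    # True: 70% >= 60%
print(meets_rpo, meets_rto, meets_rsl)            # -> False True True
```

Here the RTO and RSL targets are met, but a 6-hour backup interval cannot satisfy a 4-hour RPO, so replication must become more frequent.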

7.3.2.2. Questions that need to be answered before an optimal cloud BCDR strategy can be developed

7.3.2.2.1. Is the data sufficiently valuable for additional BCDR strategies?

7.3.2.2.2. What is the required recovery point objective (RPO); that is, what data loss would be tolerable?

7.3.2.2.3. What is the required recovery time objective (RTO); that is, what unavailability of business functionality is tolerable?

7.3.2.2.4. What kinds of “disasters” are included in the analysis?

7.3.2.2.5. Does that include provider failure?

7.3.2.2.6. What is the necessary Recovery Service Level (RSL) for the systems covered by the plan?

7.3.3. Risk Management

7.3.3.1. Risks threatening the assets

7.3.3.1.1. Damage from natural causes and disasters, as well as deliberate attacks, including fire, flood, atmospheric electrical discharge, solar induced geomagnetic storm, wind, earthquake, tsunami, explosion, nuclear accident, volcanic activity, biological hazard, civil unrest, mudslide, tectonic activity, and other forms of natural or man-made disaster

7.3.3.1.2. Wear and tear of equipment

7.3.3.1.3. Loss or unavailability of qualified staff

7.3.3.1.4. Utility service outages (e.g., power failures and network disruptions)

7.3.3.1.5. Failure of a provider to deliver services

7.3.3.2. Risks threatening the BCDR execution

7.3.3.2.1. A BCDR strategy typically involves a redundant architecture or failover tactic. Such architectures intrinsically add complexity to the existing solution; as a result, they introduce new failure modes and require additional skills.

7.3.3.2.2. Most BCDR strategies still share some failure modes with the primary solution. For example, mitigating VM failure with a failover cluster still leaves a residual risk that the zone hosting the cluster fails; likewise, multi-zone architectures remain vulnerable to region failures.

7.3.3.2.3. The DR site is likely to be geographically remote from any primary sites. This may impact performance because of network bandwidth and latency considerations. In addition, there could be regulatory compliance concerns if the DR site is in a different jurisdiction.

7.3.3.3. Concerns About The BCDR Scenarios

7.3.3.3.1. ON-PREMISE, CLOUD AS BCDR: workloads on physical machines may need to be converted to workloads in a virtual environment

7.3.3.3.2. CLOUD CONSUMER, PRIMARY PROVIDER BCDR: consider load-balancing functionality and available bandwidth between the redundant facilities of the cloud provider.

7.3.3.3.3. CLOUD CONSUMER, ALTERNATIVE PROVIDER BCDR

7.3.4. BCDR Strategies

7.3.4.1. LOCATION

7.3.4.1.1. The relevant locations to be considered depend on the geographic scale of the calamity anticipated.

7.3.4.1.2. Power or network failure may be mitigated in a different zone in the same datacenter

7.3.4.1.3. Flooding, fire, and earthquakes will likely require locations that are more remote.

7.3.4.2. DATA REPLICATION (a minimal file-level sketch follows this list)

7.3.4.2.1. block level

7.3.4.2.2. file level

7.3.4.2.3. database level

7.3.4.2.4. in bulk

7.3.4.2.5. on the byte level
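
For instance, file-level replication might be sketched as below, shelling out to rsync; the paths and DR host are hypothetical, and block-, database-, and byte-level replication would instead use storage- or DBMS-specific tooling.

```python
# Minimal file-level replication sketch: mirror a data directory to a remote
# DR location with rsync (paths and host are hypothetical assumptions).
import subprocess

def replicate(src: str = "/srv/data/", dst: str = "dr-site:/srv/data/") -> None:
    # -a preserves permissions/timestamps, -z compresses in transit,
    # --delete keeps the replica an exact mirror of the source.
    subprocess.run(["rsync", "-az", "--delete", src, dst], check=True)
```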

7.3.4.3. FUNCTIONALITY REPLICATION

7.3.4.3.1. re-creating the processing capacity in a different location.

7.3.4.3.2. active-passive mode: the replicated capacity remains on standby and is activated only on failover

7.3.4.3.3. active-active mode: both locations process production load simultaneously

7.3.4.3.4. many applications have extensive connections to other providers, which must also be accounted for at the recovery location

7.3.4.4. PLANNING, PREPARING, AND PROVISIONING

7.3.4.4.1. concerns the tooling, functionality, and processes that lead up to the actual DR failover response

7.3.4.5. FAILOVER CAPABILITY

7.3.4.5.1. requires some form of load balancer to redirect user service requests to the appropriate services.
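
A minimal health-check sketch of this redirection logic; the endpoints are hypothetical, and in practice the decision usually lives in a load balancer or DNS health check rather than in application code.

```python
# Minimal failover sketch: probe the primary endpoint and fall back to the
# DR endpoint when it is unhealthy (URLs are hypothetical assumptions).
import urllib.request

PRIMARY = "https://app.example.com/health"
FAILOVER = "https://dr.example.net/health"

def active_endpoint(timeout: float = 2.0) -> str:
    try:
        with urllib.request.urlopen(PRIMARY, timeout=timeout) as resp:
            if resp.status == 200:
                return PRIMARY
    except OSError:
        pass  # connection refused, DNS failure, timeout, etc.
    return FAILOVER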

7.3.4.6. RETURNING TO NORMAL

7.3.4.6.1. Return to normal is typically a failback to the original provider (or in-house infrastructure, as the case may be). Alternatively, the original provider may no longer be a viable option, in which case the DR provider becomes the “new normal.”

7.3.5. Developing And Implementing The Plan

7.3.5.1. THE SCOPE

7.3.5.1.1. The BCDR plan and its implementation are embedded in an information security strategy

7.3.5.1.2. clearly defined roles

7.3.5.1.3. risk assessment

7.3.5.1.4. classification

7.3.5.1.5. policy

7.3.5.1.6. awareness

7.3.5.1.7. training

7.3.5.2. GATHERING REQUIREMENTS AND CONTEXT

7.3.5.2.1. identification of critical business processes and their dependence on specific data and services

7.3.5.2.2. Service characteristics

7.3.5.2.3. Service descriptions

7.3.5.2.4. SLA

7.3.5.2.5. risks

7.3.5.2.6. threats

7.3.5.2.7. internal policies and procedures

7.3.5.2.8. applicable legal, statutory, or regulatory compliance obligations

7.3.5.3. ANALYSIS OF THE PLAN

7.3.5.3.1. The purpose is to translate BCDR requirements into inputs that will be used in the design phase

7.3.5.4. RISK ASSESSMENT

7.3.5.4.1. Elasticity of the cloud provider—can they provide all the resources if BCDR is invoked?

7.3.5.4.2. Will any new cloud provider address all contractual issues and SLA requirements?

7.3.5.4.3. Available network bandwidth for timely replication of data.

7.3.5.4.4. Available bandwidth between the impacted user base and the BCDR locations.

7.3.5.4.5. Legal/licensing risks: there may be legal or licensing constraints that prohibit the data or functionality from being present in the backup location.

7.3.5.5. PLAN DESIGN

7.3.5.5.1. The objective is to establish and evaluate candidate architecture solutions and to flesh out procedures and workflows

7.3.5.5.2. How will the BCDR solution be invoked?

7.3.5.5.3. What is the manual or automated procedure for invoking the failover services?

7.3.5.5.4. How will the business use of the service be impacted during the failover, if at all?

7.3.5.5.5. How will the DR be tested?

7.3.5.5.6. Finally, what resources will be required to set it up, to turn it on, and to return to normal?

7.3.5.6. OTHER PLAN CONSIDERATIONS

7.3.5.6.1. On the primary platform, BCDR activities are likely to include implementing functionality that replicates data on a regular or continuous schedule, together with functionality that automatically monitors for any contingency and raises a failover event (a minimal monitor sketch follows).
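
A minimal sketch of such a monitor on the primary platform; the callbacks, interval, and failure threshold are illustrative assumptions, not a prescribed design.

```python
# Minimal contingency-monitor sketch: replicate on a schedule and raise a
# failover event after repeated health-check failures (all parameters and
# callbacks are illustrative assumptions).
import time

def monitor(check_health, replicate, raise_failover,
            interval_s: int = 60, max_failures: int = 3) -> None:
    failures = 0
    while True:
        replicate()                   # regular/continuous data replication
        failures = 0 if check_health() else failures + 1
        if failures >= max_failures:  # tolerate transient blips
            raise_failover()          # hand off to the DR platform
            return
        time.sleep(interval_s)
```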

7.3.5.6.2. On the DR platform, the required infrastructure and services will need to be built up and brought into trial production mode.

7.3.5.7. PLANNING, EXERCISING, ASSESSING, AND MAINTAINING THE PLAN

7.3.5.7.1. Testing strategy

7.3.5.7.2. The testing scope and objectives should be clearly defined, documented, and agreed upon before testing begins

7.3.5.7.3. Test plans

7.3.5.8. TEST PLAN REVIEW

7.3.5.8.1. Review process

7.3.5.8.2. The type or combination of testing methods employed by an organization should be determined by its size, its maturity, and the criticality of the systems covered by the plan

7.3.5.8.3. Testing methods include

7.3.5.8.4. Tabletop Exercise/Structured Walk-Through Test

7.3.5.8.5. Walk-Through Drill/Simulation Test

7.3.5.8.6. Functional Drill/Parallel Test

7.3.5.8.7. Full-Interruption/Full-Scale Test

7.3.5.9. TESTING AND ACCEPTANCE TO PRODUCTION

7.3.5.9.1. The business continuity plan, like any other security incident response plan, is subject to testing at planned intervals or upon significant organizational or environmental changes