
At AWS, security is our top priority, and our mission is to work backward from customer outcomes to build secure and scalable cloud services. Most customers with regulatory compliance requirements need a set of tools and processes to make sure that sensitive data isn’t lost or misused, and that unauthorized users can’t obtain unintended access to data. As part of cloud transformation projects, customers look to understand how traditional sensitive data protection solutions, such as Data Loss Prevention (DLP), work in conjunction with AWS services and features to enforce only intended data access.

In this post, we’ll guide you through the best practices that provide defense-in-depth against inadvertent data access. We also recommend using multiple services and features, such as Amazon Macie, AWS Organizations, and AWS Key Management Service (AWS KMS), to build multiple layers of controls as part of your strategy to protect against inadvertent data access. Using multiple layers of controls helps to reduce reliance on any single layer of defense to prevent and detect unintended access to sensitive data.

Any strategy for implementing security controls should be thought of as a journey that follows the lifecycle of a resource. When there is a requirement to use an AWS Service, ‘Directive controls’ should be established that define security requirements around how resources are created, operated, maintained, and decommissioned. ‘Preventative controls’ don’t allow resources to be instantiated that don’t adhere to ‘directive controls’. ‘Detective controls’ provide continuous monitoring while the resource is in use and until the time it’s decommissioned. Finally, ‘responsive controls’ are designed to automate the remediation of configurations not aligned with organizational security policies.

This post will provide a set of strategies covering the following control areas and topics:

  • Prevention
    • How to secure the pipeline used for deployment, as well as controls that can be placed within the pipeline.
    • AWS Identity and Access Management (IAM) features such as Service Control Policies (SCPs), resource policies, VPC endpoint policies, and condition keys that should be used to enforce intended data access patterns.
    • Data protection techniques that help mitigate unintended access to data.
  • Detection
    • Identify key AWS services and features that provide continuous monitoring, such as AWS Config, IAM Access Analyzer, and Amazon GuardDuty.
  • Response
    • How progressing from manual to automated processes helps you correct issues quickly and remediate noncompliant resources.


Prevention

A key part of your overall protection strategy is implementing ‘preventative controls’. The objective is to implement guardrails that don’t allow the creation of an insecure configuration in the first place. This reduces the overall effort needed to build securely by design, because developers get feedback early in the development lifecycle on whether resource configurations align with your organization’s security requirements. Key strategies for building guardrails include securing deployment pipelines, implementing SCPs, implementing a data perimeter, and employing data protection techniques.


Secure pipelines

A fundamental shift that often occurs with cloud transformations is to declare and manage cloud infrastructure using code instead of manual interactions. Infrastructure-as-Code (IaC) is a practice where traditional infrastructure management techniques are supplemented and often replaced by using code-based tools and software development techniques. This allows for automating the provisioning of cloud resources through deployment pipelines.

Securing deployment pipelines is a process by which security controls are applied to the tooling used to deploy infrastructure and application code to your environment. This process focuses on “Security of the Pipeline” to make sure that the pipeline itself is hardened. Examples of pipeline hardening include limiting which teams can submit code, segmenting infrastructure code from workload code, enforcing encryption at rest, and enabling logging for auditing purposes. This also means that human users aren’t allowed to make changes outside of the pipeline for production environments.

In addition to securing how the pipeline operates, the approach to “Security in the Pipeline” means that the pipeline design incorporates mechanisms to evaluate the security of resources being built by the pipeline. The inspection options should be tailored to the purpose of the pipeline. For example, a pipeline that deploys workloads on containers should evaluate the settings of the container manifest to make sure that the container isn’t running as root, or that container images have been scanned and found to be free of critical or high software vulnerabilities.

Pipelines that are focused on delivering IaC should have a tool to inspect for configurations that could lead to unauthorized access to data, such as resource policies that allow public access. AWS offers several tools that provide policy-as-code checks, such as cdk-nag and AWS CloudFormation Guard (cfn-guard). Both of these tools have pre-built rulesets that can inspect IaC for misconfigurations before resources are created in your AWS account. For example, both cfn-guard and cdk-nag offer the ability to validate IaC against rules that align to NIST 800-53 Revision 5 or the Payment Card Industry Data Security Standard (PCI DSS) v3.2.1. You can also create custom rules to evaluate resources against standards for your organization. For more information on the available rules, visit the cdk-nag and CloudFormation Guard GitHub repositories.

When considering how to prevent unauthorized data access, look for rules that focus on the following:

  • Identifying resources that were created with external access, such as an Amazon Redshift cluster that is publicly accessible.
  • Lack of encryption at rest, such as not using a Customer Managed KMS Key.
  • Lack of encryption in transit, such as an Amazon Redshift cluster not requiring Transport Layer Security (TLS).
  • Overly permissive IAM, such as granting the Administrator IAM policy.
  • Overly permissive network configuration, such as a security group that allows inbound access from anywhere.

Building automated IaC checks into the pipelines helps development teams move faster and more securely. As developers submit code to repositories, hooks can be implemented to run IaC checks and provide fast feedback to developers on settings that must be corrected before proceeding further. Thresholds that align with organizational standards or workload sensitivity can be set in pipelines to automatically prevent a pull request with critical issues from moving further down the pipeline toward deployment.
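
To make this concrete, here is a minimal, hypothetical sketch of the kind of resource configuration such checks evaluate. It is a CloudFormation fragment (the key alias is a placeholder, not from this post) for a bucket that rules like those in cfn-guard or cdk-nag would typically pass, because it enables encryption with a customer managed key and blocks public access.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "SensitiveDataBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketEncryption": {
          "ServerSideEncryptionConfiguration": [
            {
              "ServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/sensitive-data-key"
              }
            }
          ]
        },
        "PublicAccessBlockConfiguration": {
          "BlockPublicAcls": true,
          "BlockPublicPolicy": true,
          "IgnorePublicAcls": true,
          "RestrictPublicBuckets": true
        }
      }
    }
  }
}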

SCPs and security invariants

SCPs are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all of the accounts in your organization. SCPs help you make sure that your accounts stay within your organization’s access control guidelines and are typically also thought of as preventative guardrails. Leveraging SCPs is crucial to the strategy of preventing unauthorized data access, because they enable you to establish security invariants, or settings that must always be configured across your organization.

SCPs are an important tool that helps prevent unintended data access by making it easier to manage and enforce permissions at scale, whether you have 100 or 1,000 accounts. When thinking about using SCPs to enforce security invariants, there are several common scenarios, such as denying the ability to leave your organization, to stop Amazon CloudTrail logging, or to disable GuardDuty. Although those are great starting points, you should look to extend these guardrails to prevent unintended access.

A focus area for preventing unintended access should be preventing unintended external interactions. First determine the services and Regions needed by your organization, and implement an SCP that enforces the use of allowed services and Regions. This helps to limit the scope of what must be secured to what’s actually needed by your organization. From there, determine which of these services have valid business requirements for external accessibility. The default approach should be to restrict public accessibility. Here are four examples of how blocking public access can become a security invariant enforced through SCPs (a sample SCP follows the list):

  • Amazon Simple Storage Service (Amazon S3)
    • Enable the Account level Block Public Access setting within the Amazon S3 service.
    • Prevent calling the “s3:PutAccountPublicAccessBlock” API used to turn off this setting.
  • Amazon Elastic MapReduce (Amazon EMR)
    • Review Amazon EMR block public access settings and remove any port exceptions unless there is a valid business justification and compensating security controls that mitigate risks associated with public Amazon EMR clusters.
    • Prevent calling elasticmapreduce:PutBlockPublicAccessConfiguration.
  • Amazon Simple Notification Service (Amazon SNS)
    • Review Amazon SNS use cases to determine valid types of endpoints such as http or https.
    • Identify authorized domains that can be configured, such as domains owned only by your organization.
    • Only allow HTTPS endpoints to specified domains, and only allow email endpoints to authorized domains.
  • AWS Resource Access Manager (AWS RAM)
    • Identify use cases that require sharing AWS resources outside of your account or organization.
    • Prevent creating or updating a resource share that allows for external principals, such as “ram:RequestedAllowsExternalPrincipals”: “true”.
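
For illustration, the following SCP is a minimal sketch of how the Amazon S3, Amazon EMR, and AWS RAM examples above could be enforced as security invariants. It denies the API calls that would turn off block public access and denies resource shares that allow external principals; adapt the actions and conditions to your own requirements.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDisablingBlockPublicAccess",
      "Effect": "Deny",
      "Action": [
        "s3:PutAccountPublicAccessBlock",
        "elasticmapreduce:PutBlockPublicAccessConfiguration"
      ],
      "Resource": "*"
    },
    {
      "Sid": "DenyExternalResourceShares",
      "Effect": "Deny",
      "Action": [
        "ram:CreateResourceShare",
        "ram:UpdateResourceShare"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "ram:RequestedAllowsExternalPrincipals": "true"
        }
      }
    }
  ]
}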

Once you’ve put controls in place to prevent external access based on obvious external use cases, you can refine access management by implementing guardrails focusing on restricting access to trusted identities, trusted resources, and expected networks, also known as a ‘data perimeter’.

Data perimeter

The concept of building a data perimeter is to leverage permissions guardrails, such as SCPs, resource policies, and VPC endpoint policies to only allow access based on your intended organizational constructs. For a more in-depth discussion on data perimeters, review the Data Perimeters whitepaper. Data perimeters are another set of tools that help prevent unintended access to your data.

Some key terms to understand when learning about data perimeters:

  • Trusted identities: Principals (IAM roles or users) within your AWS accounts, or AWS services acting on your behalf.
  • Trusted resources: Resources owned by your AWS accounts or by AWS services acting on your behalf.
  • Expected networks: Your on-premises data centers and VPCs, or networks of AWS services acting on your behalf.

The intent is to design so that only “My” trusted identities can access “My” resources from “My” networks. Once established, the data perimeter works to make sure that unintended principals can’t access your resources, that authorized principals can’t access resources outside of your AWS estate, and that access to your AWS estate isn’t allowed from unintended network locations. The following graphic shows several examples of the data perimeter concept.

[Figure: examples of the data perimeter concept]

The controls to enforce a data perimeter are implemented through the application of global condition keys in various types of IAM policies. The following summarizes the applicable condition keys for each data perimeter control objective.

  • Identity perimeter
    • Only trusted identities can access my resources: resource-based policies using aws:PrincipalOrgID and aws:PrincipalIsAWSService.
    • Only trusted identities are allowed from my network: VPC endpoint policies using aws:PrincipalOrgID.
  • Resource perimeter
    • My identities can access only trusted resources: SCPs using aws:ResourceOrgID.
    • Only trusted resources can be accessed from my network: VPC endpoint policies using aws:ResourceOrgID.
  • Network perimeter
    • My identities can access resources only from expected networks: SCPs using aws:SourceIp, aws:SourceVpc, aws:SourceVpce, and aws:ViaAWSService.
    • My resources can only be accessed from expected networks: resource-based policies using aws:SourceIp, aws:SourceVpc, aws:SourceVpce, aws:ViaAWSService, and aws:PrincipalIsAWSService.
As an example of how to apply a data perimeter control, the global condition key aws:ResourceOrgID can be used in policies to restrict access to resources owned by your organization. If you want to restrict authorized principals to accessing only Amazon S3 buckets owned by your organization, you can build an SCP that compares the organization ID of the S3 bucket in the request context to the organization ID of the principal making the request. The API call is denied if the organization IDs don’t match.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::*/*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceOrgID": "${aws:PrincipalOrgID}"
        }
      }
    }
  ]
}
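
A similar guardrail can be applied at the network layer using VPC endpoint policies. The following is a minimal sketch (the organization ID o-xxxxxxxxxx is a placeholder) that only allows requests made by principals that belong to your organization, supporting the identity perimeter objective described above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRequestsFromMyOrgIdentitiesOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-xxxxxxxxxx"
        }
      }
    }
  ]
}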

Understanding and implementing guardrails to enforce a data perimeter is a great way to prevent unintended access.

Data protection

The first step of protecting data is to identify what data your organization defines as sensitive. This is the process of data classification. Within financial services, common types of sensitive data include Personally Identifiable Information (PII), such as a United States Social Security number, or Payment Card Industry (PCI) data, such as a credit card number.

There are multiple ways to design for data protection. Identify approved locations where sensitive data is allowed to exist, and group AWS accounts that contain similar data classifications into an Organizational Unit (OU) with Organizations. This approach lets you apply guardrails that are more stringent for accounts with sensitive data than accounts that don’t contain sensitive information.

As a form of masking, you can tokenize data to replace sensitive data with random values to lower risks associated with unintended data access. Apply encryption in transit to prevent threats from eavesdropping on communication channels, and encrypt sensitive data at rest. When using robust encryption algorithms, encrypted data is rendered useless without the associated decryption key.

Although each of these are best practices, using AWS Key Management Service (KMS) offers an additional layer of protection because the service incorporates an authentication component when using AWS managed keys and customer managed keys.

AWS KMS supports AWS managed keys and integrates with over 100 services, such as Amazon S3, Amazon EMR, Amazon Redshift, Amazon Relational Database Service (Amazon RDS), and Amazon DynamoDB (see region support), for data encryption using keys managed in AWS KMS. For example, Amazon S3 will automatically encrypt data as it’s written to disk and decrypt it when accessed, provided the setting is enabled. To accomplish this, AWS offers multiple key management options.

AWS managed keys are KMS keys in your account that are created, managed, and used on your behalf by an AWS service integrated with AWS KMS to protect your resources in the service.

The KMS keys that you create are customer managed keys. Customer managed keys are KMS keys in your AWS account that you create, own, and manage. You have full control over these KMS keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the KMS keys, and scheduling the KMS keys for deletion.

When using either an AWS managed or customer managed KMS key, a request to decrypt data requires the requestor to be authenticated and be authorized for the KMS action that is part of the request context. All other requests would be denied by IAM due to a lack of permissions.

In the following example, suppose an Amazon S3 bucket in Account A is accidentally made public and permissions allow anyone to get any object in the bucket. Any s3:GetObject API calls would succeed since the bucket and objects are public. In contrast, any objects that were encrypted using a KMS key couldn’t be accessed because kms:Decrypt API calls wouldn’t be allowed by default for users outside of Account A due to lack of authorization to use the KMS key.
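
As a simplified sketch of that additional authorization layer, consider a customer managed key policy like the following. The account ID and role name are hypothetical placeholders; only the named role in Account A is allowed to use the key for decryption, so a principal outside the account that can read the public object still can’t decrypt it.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowKeyAdministrationByAccountA",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "AllowDecryptOnlyForApprovedRole",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/sensitive-data-reader" },
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "*"
    }
  ]
}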


By adding data protection using a KMS key to encrypt resources within your account, you also benefit from having a layer of authentication required as well. This strategy provides a defense in depth approach for preventing unintended access to data.

Preventative controls summary

  1. Secure pipelines. This means to first focus on securing the infrastructure and tooling that comprise the pipeline, such as making sure that only authorized users have access to code repositories or the ability to run a deployment. Then employ tooling within the pipeline to prevent the creation of resources that don’t comply with your organization’s policies.
  2. Use SCPs to enforce Security Invariants, or settings that must always be configured across your organization.
  3. Build a data perimeter to enforce that access to your AWS accounts and resources is only allowed by identities that you trust, that interactions are only with resources that you trust, and that access is from networks that you trust.
  4. Data protection using KMS Keys provides an authentication component in addition to content encryption.

Detection

The Security perspective of the AWS Cloud Adoption Framework (CAF) identifies nine capabilities which help you achieve the confidentiality, integrity, and availability of your data. Specifically, the Threat Detection capability complements the Prevention techniques described in the previous section.

The detective capability establishes monitoring and intelligence that enables customers to identify and respond to unsanctioned activities, which can be either unintentional or malicious. Detective controls monitor traffic, generate alerts, take audit actions, and identify when preventative controls are disabled. Therefore, they reflect a key security design principle: ‘Enable Traceability’. Ultimately, this is an implementation that integrates log and metric collection with systems capable of automatically investigating and acting on findings.

Service choices for detective controls

The specific implementation of ‘detective controls’ for financial services customers is based on configuration choices that create a common frame of reference. Interaction with the platform is captured by processing logs, events, and monitoring data, which allows for auditing, automated analysis, and alarming. It’s critical to analyze logs, gain the visibility to spot issues before they impact the business, and respond in a timely manner so that customers can identify potential security incidents. The typical categories of services that support detective controls span the following areas (see the following figure).

[Figure: categories of AWS services that support detective controls]

Every implementation is unique and reflects the financial services customer’s requirements, with AWS services configured according to customer and compliance requirements. The following describes services that are frequently used and configured for stable financial services implementations.

AWS Security Hub aggregates, organizes, and prioritizes security alerts, or findings, from multiple AWS services, such as GuardDuty, Amazon Inspector, IAM Access Analyzer, AWS Firewall Manager, and Macie, as well as from AWS Partner solutions. Security Hub has a status field and supports custom actions, which allows for building workflows around identifying and remediating findings. It can be used as a consolidation point and can forward findings into other Security Information and Event Management (SIEM) tools, such as Splunk.

For continuous monitoring and recording, AWS Config provides managed rules. In addition, custom Config rules can be designed, with their results reported to Security Hub as well. There is also the opportunity to implement auto-remediation, which extends AWS Config from a detection mechanism to a near real-time responsive control.

According to an AWS survey of more than 50 customers, only 20% report that they have some automation to collect audit data. The sad truth is that most Governance, Risk, and Compliance teams spend up to 60% of their time just collecting data. Only 14% of Chief Audit Executives say they provide assurance on risk from disruptive innovation (IIA OnRisk study). AWS Audit Manager is one such disruptive innovation, and it’s just the beginning.

AWS Audit Manager helps simplify how you assess risk and monitor your compliance with regulations and industry standards. Audit Manager was released in 2020 as a new service that helps you continuously audit your AWS usage and automates evidence collection to make it easier for you to assess whether your policies, procedures, and activities are operating effectively. Using a prebuilt or customized framework, you can launch an Audit Manager assessment to begin collecting and organizing evidence, such as Security Hub findings, in accordance with the requirements of an industry standard or regulation, such as

  • the ‘Health Information Trust Alliance Common Security Framework’ (HITRUST CSF),
  • the ‘Health Insurance Portability and Accountability Act’ (HIPAA) Security Rule, or
  • the ‘Center for Internet Security’ (CIS) AWS Foundations Benchmark standard.

Audit Manager includes pre-built assessment frameworks from AWS and AWS Partners, such as CIS, PCI DSS, GxP, GDPR, HITRUST, HIPAA, and FedRAMP. In addition, Audit Manager supports custom-defined controls and compliance frameworks.

GuardDuty is a native AWS threat detection service. It reads data sources such as VPC flow logs, CloudTrail management event logs, CloudTrail S3 data event logs, and DNS logs. GuardDuty analyzes these log sources for indications of potentially malicious activity. GuardDuty findings are forwarded to Security Hub, where they can be tracked and managed. The resource configurations required to use the specified resources need to be identified within the AWS accounts as part of the architecture requirements documentation.

Macie is a fully managed data security and data privacy service that uses machine learning (ML) and pattern matching to discover and protect your sensitive data in AWS. Many of our financial services customers leverage Macie to enhance data discovery capabilities for various use cases:

  • Identifying non-tokenized PCI data in S3 buckets,
  • Aligning account data classification,
  • Validating appropriate tags for account data classification,
  • Extending data classification for RDS and DynamoDB,
  • Cross-referencing Macie findings with GuardDuty S3 findings, and
  • Gaining insights into S3 policy findings, such as public buckets, lack of encryption, or cross-sharing with other accounts.

For other use cases involving a limited number of sensitive data types (PII, PAN, merchant ID, and so on), Macie is a good choice wherever Amazon S3 is used or data can be converted to Amazon S3 with reasonable methods and performance.

Furthermore, for Data Lake environments, you can continue to use the native AWS DLP service ‘Macie’ for detecting sensitive data. AWS has worked with AWS Data and Analytics Competency Partners on an architecture called the De-Identified Data Lake (DIDL). A de-identified data lake solves the data privacy problem by de-identifying and protecting sensitive information before it even enters a Data Lake. Since it minimizes the storage and the use of PII, there is less risk of data breaches and misuse of data. Therefore, the compliance costs can be lower, but without losing the ability to understand and use your data for competitive advantage. The approach supports a dual-test process of using a partner discovery tool on the front-end, and then making sure that no PII has made it into the DIDL using Macie.

Backend detective controls

Even more important than detecting sensitive data is preventing sensitive data from going into backend systems such as networks, databases, storage, objects, and logs. AWS services and features provide the building blocks to simplify creating differentiated applications, and thus can help minimize the issue at the point where data first enters your environment. Implementing preventative practices such as data perimeter techniques and VPC endpoints with policies, and combining them with detective controls (GuardDuty, AWS Config), helps build a DLP strategy. AWS Partner technologies can focus on protecting front-end services and work in conjunction with these initial backend prevention techniques.

For considering backend detective controls, it’s helpful to look at common customer use cases with preventative scenarios relevant for the financial industry:

  • Use case: Make sure that my company data isn’t accidentally or intentionally exposed to unauthorized parties.
    • Allow: Trusted principals, trusted resources, expected network locations.
    • Deny: Trusted principals to untrusted resources.
    • Perimeter controls: Identity-based (IAM/SCP) policies, resource-based policies.
    • Detective control services: GuardDuty, IAM Access Analyzer.
  • Use case: Make sure that employees can’t “bring their own credentials” to access non-corporate AWS accounts and intentionally or unintentionally exfiltrate company data.
    • Allow: Trusted principals, trusted resources, expected network locations.
    • Deny: Untrusted principals to trusted resources.
    • Perimeter controls: Resource-based policies, identity-based (IAM/SCP) policies.
    • Detective control services: Data encryption, tokenization, KMS, application-level encryption, Security Hub, GuardDuty, CloudTrail.
  • Use case: Make sure that developers on premises or applications in a VPC can only access company-approved data stores.
    • Allow: Trusted principals, trusted resources, expected network locations.
    • Deny: Trusted principals to untrusted resources.
    • Perimeter controls: VPC endpoint policies, resource-based policies.
    • Detective control services: Amazon Detective, KMS, VPC Reachability Analyzer.
  • Use case: Make sure that corporate users’ and applications’ credentials can only be used from company-managed networks.
    • Allow: Trusted principals, trusted resources, expected network locations.
    • Deny: Unexpected network locations (VPC).
    • Perimeter controls: VPC endpoint policies, identity-based (IAM/SCP) policies.
    • Detective control services: IAM Access Analyzer.

Preventing unauthorized access using robust preventative and detective capabilities requires practicing proper hygiene by applying various techniques, such as encryption, tokenization, data decomposition, and solutions to detect malicious activity. Two key services to consider as part of your overall detection strategy are as follows:

  • GuardDuty – detects unusual access patterns that may indicate unauthorized attempts to access data. If suspicious activity is detected, then you can initiate incident response procedures. For IAM related findings, we recommend that you examine the entity in question and make sure that its permissions follow the best practice of least privilege. If the activity is unexpected, then the credentials may be compromised and you should follow the remediation actions for compromised AWS credentials. You can also import external threat lists into GuardDuty to enhance detection capabilities.
  • IAM Access Analyzer – uses provable security to analyze all access paths and provide comprehensive analysis of external access to your resources. You can validate your policies using IAM Access Analyzer policy checks. You can create or edit a policy using the AWS Command Line Interface (AWS CLI), AWS API, or JSON policy editor in the IAM console. Access Analyzer validates your policy against IAM policy grammar and best practices. You can view policy validation check findings that include security warnings, errors, general warnings, and suggestions for your policy. These findings provide actionable recommendations that help you author policies that are functional and conform to security best practices.

The effective application of detective controls lets you get the information that you need to respond to changes and incidents. A robust detection mechanism with partner provider integration into a security information and event management (SIEM) system enables you to respond quickly to security vulnerabilities and continuously improve the security posture of your data. Let’s review some of the detective controls that you can implement:

  • Detection of unauthorized traffic: Continuous monitoring of VPC flow logs enables you to identify and remediate any security anomalies. GuardDuty is a managed service that provides managed threat intelligence in the cloud. GuardDuty takes feeds from VPC flow logs, CloudTrail, and DNS logs. It uses ML, pattern-based signatures, and external threat intelligence sources to provide you with actionable findings. You can consume these findings as Amazon CloudWatch events to drive automated responses (a sample event pattern follows this list).
  • Configuration drift: Configuration drift is the condition in which a system’s current security configuration drifts away from its desired hardened state. Configuration drift can be caused by modifications to a system subsequent to its initial deployment. To make sure of the integrity of the security posture of your databases, you must identify, report, and remediate configuration drift. You can use AWS Config with DynamoDB to detect configuration drift easily at the table level and with Amazon RDS for database instances, security groups, snapshots, subnet groups, and event subscriptions.
  • Amazon Detective now enables you to interactively examine the details of the VPC network flows of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Detective automatically collects VPC flow logs from your monitored accounts, aggregates them by EC2 instance, and presents visual summaries and analytics about these network flows. Detective doesn’t require VPC Flow Logs to be configured and doesn’t impact existing flow log collection.
  • The VPC Reachability Analyzer is a configuration analysis tool that enables you to perform connectivity testing between a source resource and a destination resource in your VPCs. When the destination is reachable, Reachability Analyzer produces hop-by-hop details of the virtual network path between the source and the destination. When the destination isn’t reachable, Reachability Analyzer identifies the blocking component. For example, paths can be blocked by configuration issues in a security group, network ACL, route table, or load balancer.
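
As a minimal sketch of the automated-response hook mentioned above, the following event pattern (the severity threshold of 7 is an example value) matches high-severity GuardDuty findings delivered through Amazon CloudWatch Events (Amazon EventBridge), so a rule using it can route those findings to a target such as an AWS Lambda function or an Amazon SNS topic.

{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": [">=", 7] }]
  }
}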

Detective controls typically seek to uncover problems in a company’s processes after they occur. They provide visibility into malicious activities, breaches, and attacks, which require fast response action. Prevention is arguably the most powerful control to keep users and malicious actors from deploying non-compliant resources or configurations, but without detective controls, the financial services company won’t even know which controls require expansion and optimization.

Response

Responsive controls are an important part of the security capability journey. When a detective control identifies a configuration that deviates from expected settings, a responsive control can remediate the issue through manual or automated means. This enhances the overall effectiveness of your security posture. Most responsive controls start off being manual, which provides a critical capability to be able to launch production workloads. However, responsive measures should increasingly become more automated as your cloud capabilities mature, and as part of your long-term responsive control strategy.


Manual responses

Manual response actions are typically documented as Incident Response (IR) plans. Although full coverage of IR activities is beyond the scope of this document, portions related to data loss will be covered. For more details, refer to the AWS Security Incident Response Guide.

When the detective controls described in this document detect data exfiltration attempts, successful or not, the formal incident response plans should be triggered. Similarly, when a deviation from your baseline does occur (such as by a misconfiguration), you must respond and investigate. An IR plan will typically include the following steps to:

  • Stop malicious activity, such as data transfer,
  • Collect and preserve evidence, and
  • Perform analysis to determine the root cause that allowed the malicious activity.

For unauthorized data access, stopping continued access will typically involve revoking permissions and/or restricting access from the suspicious entities. This can include modifying Amazon EC2 security group rules to isolate a compromised instance.

Evidence collection should be covered by logging and monitoring services, such as CloudTrail and VPC flow logs, and be combined with other logs, such as those from your Identity Provider, to gather details needed to reconstruct the events that occurred leading to alert generation.

Performing a Root Cause Analysis (RCA) is a critical step in the process of determining the series of events leading to a security event, but more importantly, in identifying how to prevent recurrence. You should ask questions such as: “Is a preventative control missing?”, “Was a control disabled?”, or “Have threats changed in a way that requires a new or modified preventative control?” The RCA should serve as either a validation of preventative controls or an indication that adjustments are needed.

Since building a security incident response program is an iterative process, we encourage you to start small by developing playbooks and runbooks for critical use cases, leverage basic capabilities, and work to create an initial library of incident response mechanisms. Once you have these basic capabilities in place, continue to iterate by holding game days to test your procedures, or by prioritizing steps that will help your team scale faster, such as automation. This initial work should include your legal department, as well as teams that don’t have “security” in their job description (such as Human Resources or Product Owners), so that you can better understand the impact that IR plans have on your business outcomes.

It’s important for the IR team to understand how they would use AWS tools, processes, and logs to respond to an incident. With inexperienced teams, there can be roadblocks that prevent a timely response:

  • Too many alerts or no alerts
  • Alerts that don’t include enough information to enable actionable responses
  • Threat Detection and IR skills shortage
  • Lack of, or wrong expectations about automation
  • Static IR plans

Therefore, for an effective incident response, you must have a well-defined security IR plan in place, which includes IR playbooks and runbooks to help guide actions during a security event. Playbooks define individual processes, and documenting the investigation process in them enables consistent and prompt responses to failure scenarios. Playbooks are the predefined steps to perform to identify an issue. The results from any process step are used to determine the next steps to take until the issue is identified or escalated. ‘Classic’ playbooks rely heavily on individuals as responders, as outlined in the following chart:

[Figure: classic playbook process driven by individual responders]

Automated responses

Automated response actions are highly desirable. They take time to develop and deploy in ways that don’t inadvertently block valid data access or disrupt production. However, a fully tested and validated automated response capability provides a specific responsive method to swiftly react to events. The objective is to enable these for automatic response, remediation, and containment. The chart below illustrates a multistep, automated IR process showing how a DETECT service leads into INVESTIGATE, and then AUTOMATION is used to RESPOND to and RECOVER from the event.

[Figure: automated incident response workflow across detect, investigate, respond, and recover]

AWS provides you with detailed visibility through the ‘Detection’ services outlined in the previous section. Most of these touch points, at least the ones related to incident response, can be passed to AWS Lambda for remediation actions. Therefore, Lambda becomes a very important tool in your IR toolbox.

In the previous figure, AWS Config tracks configuration changes of your AWS resources, and AWS Config rules continuously monitor configuration settings. When a change or deviation is detected, AWS Systems Manager automation documents (runbooks) provide predefined remediations that can be executed to remediate non-compliant resources. The runbook could also trigger a Lambda function as part of the remediation steps.
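
A minimal sketch of this pattern, expressed as a CloudFormation fragment, might look like the following. The managed rule name, the AWS managed automation runbook, and the role ARN are example values you would replace with your own; when the rule flags a bucket that allows public reads, AWS Config automatically runs the runbook to correct the bucket’s public access settings.

{
  "Resources": {
    "PublicReadAutoRemediation": {
      "Type": "AWS::Config::RemediationConfiguration",
      "Properties": {
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-DisableS3BucketPublicReadWrite",
        "Automatic": true,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
          "AutomationAssumeRole": {
            "StaticValue": { "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"] }
          },
          "S3BucketName": {
            "ResourceValue": { "Value": "RESOURCE_ID" }
          }
        }
      }
    }
  }
}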

Playbooks deal with overarching responses to larger issues or events and may include multiple runbooks and personnel within them. They should include all of the steps required to identify the scope or to remediate an incident. They should address your specific details such as escalation and disclosure requirements.


Runbooks enable consistent and prompt responses to well-understood events by documenting procedures. Runbooks are the predefined procedures to achieve a specific outcome. Runbooks should contain the minimum information necessary to successfully perform the procedure. Start with a valid, effective manual process, implement it in code, and trigger automated execution where appropriate. This ensures consistency, speeds responses, and reduces errors caused by manual processes.

In summary, AWS recommends the following IR best practices:

  • Host recurring (at least annual) security IR simulations.
  • Prioritize building additional Runbooks based on the frequency of occurrence or severity of impact.
  • Identify and simulate new tactics, techniques, and procedures (TTPs).
  • Know what you have and what you need — Appropriately preserve logs, snapshots, and other evidence by copying them to a centralized security cloud account. Use tags, metadata, and mechanisms that enforce retention policies.
  • Do things that scale — Scalability is one of the benefits of the cloud. You should strive to match the scalability of your organization’s approach to cloud computing and reduce the time between detection and response.
  • Iteratively automate the mundane – As you see incidents repeat, build mechanisms that programmatically triage and respond to common situations. Use human responses for unique, new, and sensitive incidents.
  • Iterate on incident response processes (runbooks and playbooks).

Conclusion

Increasingly, Financial Services customers look to leverage AWS Services to build differentiated applications that make it easier to process, analyze, and extract insights from growing and diverse data sets. Whether the resulting analysis or data sources themselves contain sensitive information, customers want to achieve the following key objectives:

  • Define sensitive data based on business and compliance requirements;
  • Identify sensitive data storage at scale;
  • Apply data protection policies that help enforce governance, assurance, and compliance requirements; and
  • Monitor data access and the security posture of the data.

Building a protection strategy should be thought of as a capability journey that incorporates protection, detection, and response capabilities as multiple layers of controls that work in concert with each other. The strategies outlined in this post work well in conjunction with other traditional data protection strategies, such as the use of DLP solutions.