- “VPC Supported” means whether the resource can be launched inside a VPC.
- If you use AWS resources that support VPC (◯), you must use a VPC.
- The main decision point is whether to launch serverless resources with “◯ (Selectable)” VPC support inside a VPC. This is discussed below.
For security, AWS Security Hub recommends that Lambda functions run inside a VPC ([Lambda.3] control). Running Lambda functions in a VPC can improve security and network control, as described below:
- Deploying resources in a VPC improves security and network control.
- It also provides scalability and high availability across multiple Availability Zones.
- You can customize VPC deployments to fit your application needs.

Source: Lambda Security Hub Controls - AWS Security Hub
Benefits of running serverless resources like Lambda functions inside a VPC include:
- Control outbound traffic from Lambda functions using Security Groups and Network ACLs
- Use VPC endpoints to restrict access to specific AWS accounts or resources
- Monitor Lambda network traffic with VPC Flow Logs
- Filter Lambda traffic using Network Firewall
These measures help reduce risk if a Lambda function is compromised. However, serverless resources generally have lower risk than EC2 instances, since they don’t allow direct file uploads or logins.
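As a minimal sketch of what this looks like in practice, the boto3 call below attaches an existing function to private subnets; the function name, subnet IDs, and security group ID are placeholders, and the security group is then what governs the function’s outbound traffic.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach an existing function to private subnets so its traffic can be
# controlled with security groups, NACLs, flow logs, and Network Firewall.
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder
    VpcConfig={
        "SubnetIds": ["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
        "SecurityGroupIds": ["sg-0cccccccccccccccc"],
    },
)
```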
Below is a flowchart to help decide whether to launch serverless resources inside a VPC.
VPC Design Best Practices
How should you structure VPCs and subnets in member accounts?
The structure of VPC subnets should be based on routing and security requirements. Choose subnet types according to your organization’s policies and system requirements. Common subnet types include:

- Public subnets: Decide which AWS account will host internet-facing resources. If your policy is to keep these only in dedicated external-facing accounts, don’t create public subnets in member accounts.
- Secure subnets: Use for databases or other sensitive resources when strict security is required.
- Firewall subnets: Use for traffic-inspection components such as Network Firewall when you need to monitor or control traffic.
- Transit subnets: Use when connecting to other VPCs or on-premises networks via Transit Gateway. Check whether your policy is to manage VPC connections centrally.
- Static IP subnets: Use for resources that need fixed IP addresses, for example for management access.
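The subnet types above differ mainly in their routing and the controls attached to them. As a rough boto3 sketch (the VPC ID, internet gateway ID, CIDR, and Availability Zone are placeholders), a subnet becomes “public” simply by having a route table that sends 0.0.0.0/0 to an internet gateway:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a subnet, then make it public by routing 0.0.0.0/0 to an IGW.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.0.0/24",
    AvailabilityZone="ap-northeast-1a",
)["Subnet"]

route_table = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")["RouteTable"]
ec2.create_route(
    RouteTableId=route_table["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",
)
ec2.associate_route_table(
    RouteTableId=route_table["RouteTableId"],
    SubnetId=subnet["SubnetId"],
)
```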
How do you choose the CIDR block for member account VPCs?
The CIDR block of a VPC defines its IP address range. When connecting to on-premises or external systems, choose the CIDR block carefully to avoid conflicts. If you previously managed IP ranges in spreadsheets, consider using AWS IPAM (IP Address Manager) for better management.
If you plan to add or expand VPCs in the future, reserve a larger CIDR block up front. If you do run out of address space, you may have to design the affected networks to remain isolated from each other (e.g., no VPC-to-VPC or VPC-to-on-premises connectivity).
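If you adopt IPAM, the rough boto3 sketch below shows the idea: provision the organization’s top-level range into an IPAM pool, then let new VPCs draw non-overlapping CIDRs from that pool instead of hand-picked ranges. The region, top-level CIDR, and netmask length are assumptions for illustration, and in practice the pool CIDR must finish provisioning before a VPC can allocate from it.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an IPAM and a pool in its private scope, then provision the
# top-level range the organization manages (values are placeholders).
ipam = ec2.create_ipam(OperatingRegions=[{"RegionName": "ap-northeast-1"}])["Ipam"]
pool = ec2.create_ipam_pool(
    IpamScopeId=ipam["PrivateDefaultScopeId"],
    AddressFamily="ipv4",
    Locale="ap-northeast-1",
)["IpamPool"]
ec2.provision_ipam_pool_cidr(IpamPoolId=pool["IpamPoolId"], Cidr="10.0.0.0/8")

# New VPCs then draw a non-overlapping CIDR from the pool
# instead of using a hand-picked range.
ec2.create_vpc(Ipv4IpamPoolId=pool["IpamPoolId"], Ipv4NetmaskLength=24)
```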
How is DNS name resolution handled?
Name resolution for AWS resources in a VPC depends on the source and destination. Consider these scenarios:
- Resources inside the VPC
- On-premises resources
- Internet resources
Two key points:
- Resources in a VPC can have public DNS names
- To integrate with external DNS servers, use Route 53 Resolver endpoints
1. Public DNS names for VPC resources
Resources in a VPC can have public DNS names. As long as they have internet access, name resolution works from on-premises or the internet. If you use Route 53 private hosted zones for custom DNS, those names are not resolvable from the internet. To enable public DNS, set “Enable DNS hostnames” and “Enable DNS support” in your VPC settings.
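As a minimal sketch with a placeholder VPC ID, enabling both attributes with boto3 looks like this (each attribute must be set in a separate call):

```python
import boto3

ec2 = boto3.client("ec2")

# Enable DNS resolution and DNS hostnames for the VPC.
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsHostnames={"Value": True})
```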
2. Integrating with external DNS servers using resolver endpoints
To connect with DNS servers outside your VPC, use Route 53 resolver endpoints. These allow VPC resources to send DNS queries to external servers. There are two types:
- Inbound endpoint: Accepts DNS queries from on-premises DNS servers (for resolving VPC resource names from on-premises)
- Outbound endpoint: Sends DNS queries from VPC resources to on-premises DNS servers (for resolving on-premises resource names from the VPC)
If you use Active Directory, you can also send DNS queries to the AD DNS server using an outbound endpoint.
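A rough boto3 sketch of the outbound direction follows; the security group, subnet IDs, on-premises domain, target DNS server IP, and VPC ID are placeholders. You create the endpoint, add a forwarding rule for the on-premises domain, and associate the rule with the VPC:

```python
import boto3

r53r = boto3.client("route53resolver")

# Outbound endpoint: ENIs in two subnets that forward queries out of the VPC.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-example-1",
    Name="outbound-to-onprem",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0cccccccccccccccc"],
    IpAddresses=[
        {"SubnetId": "subnet-0aaaaaaaaaaaaaaaa"},
        {"SubnetId": "subnet-0bbbbbbbbbbbbbbbb"},
    ],
)["ResolverEndpoint"]

# Forward queries for the on-premises domain to the on-premises DNS server.
rule = r53r.create_resolver_rule(
    CreatorRequestId="forward-example-1",
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "192.0.2.10", "Port": 53}],
    ResolverEndpointId=endpoint["Id"],
)["ResolverRule"]

# The rule takes effect for a VPC once associated with it.
r53r.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId="vpc-0123456789abcdef0")
```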
Resolver endpoints are billed per elastic network interface (ENI): $0.125 per ENI per hour.
Source: Pricing - Amazon Route 53 | AWS
While not expensive, costs can add up as the number of VPCs grows. Consider consolidating resolver endpoints when using multiple accounts.
Summary: Name resolution options
The table below summarizes name resolution options by source and destination:
*1: Use an outbound endpoint
*2: Use an inbound endpoint
There are many options, but for high availability, let each environment resolve names with its own DNS service: Amazon Route 53 Resolver for resources in a VPC, and the on-premises DNS server for on-premises resources. If there is little integration between environments, or if you want to save costs, consider the other options.
Where should you send VPC Flow Logs?
VPC Flow Logs record network traffic information within a VPC. You can send VPC Flow Logs to:
- CloudWatch Logs
- S3 bucket
- Kinesis Data Firehose
Choose the destination based on your log analysis and monitoring needs. Since VPC Flow Logs are often used for long-term auditing, S3 is a common choice. If you have an external log platform, use Kinesis Data Firehose to transfer logs. For real-time monitoring or analysis, CloudWatch Logs is recommended.
Logs can be stored in the account that owns the VPC or in a central log aggregation account (log archive account). If your organization has a dedicated log account, send logs there.
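As a minimal sketch, sending a VPC’s flow logs to an S3 bucket, for example one owned by a log archive account, looks like this with boto3 (the VPC ID and bucket ARN are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all traffic for the VPC and deliver the records to S3
# for long-term retention and auditing.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-central-flow-logs",
)
```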
How should you use VPC endpoints?
VPC endpoints let resources in a VPC connect directly to AWS services without using the public internet. With VPC endpoint policies, you can control which AWS services and resources are accessible, improving security and reducing the risk of data leaks.
Sometimes VPC endpoints are used just to keep traffic off the public internet, but this isn’t always necessary. As the AWS FAQ below explains, traffic from a VPC to a public AWS service endpoint stays on the AWS global network even when it goes through an internet gateway, so a VPC endpoint isn’t strictly required for that purpose.
If two instances communicate using public IP addresses, or if an instance communicates with a public AWS service endpoint, does the traffic go over the internet?
No. When using public IP addresses, all communication between AWS-hosted instances and services uses the AWS private network.
Packets originating from the AWS network and destined for the AWS network remain on the AWS global network, except for traffic to/from the AWS China region.
All data flowing over the AWS global network is automatically encrypted at the physical layer before leaving secure facilities. There are also additional encryption layers, such as for VPC cross-region peering and TLS connections.
Source: FAQ - Amazon VPC | AWS
There are two main reasons to use VPC endpoints:
- To allow resources outside the VPC (e.g., on-premises) to connect to AWS services directly, without using the public internet
- To use VPC endpoint policies to restrict which AWS services and resources can be accessed, improving security
For example, if you access AWS services from on-premises via a Site-to-Site VPN, a VPC endpoint lets you connect directly without using the public internet. For operational needs like accessing the AWS Management Console or S3, VPC endpoints can improve security. However, using many endpoints during development can increase costs.
With VPC endpoint policies, you can restrict access to specific AWS services or resources. For example, the following policy allows access only to resources within your organization, preventing data from being sent to external accounts.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRequestsByOrgsIdentitiesToOrgsResources",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "my-org-id",
          "aws:ResourceOrgID": "my-org-id"
        }
      }
    },
    {
      "Sid": "AllowRequestsByAWSServicePrincipals",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "Bool": {
          "aws:PrincipalIsAWSService": "true"
        }
      }
    }
  ]
}
```
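To apply a policy like this, pass it as the PolicyDocument when creating the endpoint. Below is a minimal boto3 sketch for a gateway endpoint to S3; the VPC ID, route table ID, region in the service name, and policy file path are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Load the endpoint policy shown above from a local file.
with open("endpoint-policy.json") as f:
    policy = f.read()

# Create a gateway endpoint for S3 with the policy attached.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.ap-northeast-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=policy,
)
```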
Setting up VPC endpoints for every AWS service can increase costs and operational overhead. As a minimum, configure VPC endpoints for the database and storage services that carry the highest risk of data leakage.
Who manages VPC resources?
Responsibility for VPC resources should be defined by your organization’s operational rules and security policies. Typical roles include:
- Network Administrator: Designs and configures the VPC, security groups, and network ACLs. Sometimes this is handled by a platform admin who manages AWS accounts and security services.
- Security Administrator: Sets VPC security policies, monitors VPC Flow Logs, and handles security incidents.
- Operations Staff: Operates, monitors, and troubleshoots VPC resources.
- Developer: Develops and deploys applications that use VPC resources.
These roles may vary depending on your organization’s size and structure. Sometimes developers handle everything; in other cases, network admins design and build the VPC while developers focus on applications. Here’s an example of role assignments when network admins design and build the VPC:
Security groups are often designed and built by developers to meet system-specific needs. In some organizations, network admins handle security groups, but this can slow down development. To keep development moving quickly, it’s usually best for developers to design and build security groups themselves.