BETA FEATURE
This feature is currently in open beta and still in development, but we encourage you to try it out!
Set up your AWS VPC flow logs to send them to New Relic One.
Prerequisites
Set up AWS VPC flow logs monitoring in New Relic One
To send your VPC flow logs to New Relic One, follow these steps:
- Create a private ECR registry and upload the ktranslate image
- Create a Lambda function from the ECR image
- Validate your settings
1. Create a private ECR registry and upload the ktranslate image
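The commands in this step reference `$AWS_ACCOUNT_ID` and `$AWS_ACCOUNT_REGION`. As a minimal sketch, you can export both in your shell before you start; the values below are placeholders for your own account ID and region:

```bash
# Placeholder values: substitute your own AWS account ID and region.
export AWS_ACCOUNT_ID=123456789012
export AWS_ACCOUNT_REGION=us-east-1
```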
Authenticate to your registry by running:

```bash
aws ecr get-login-password --region $AWS_ACCOUNT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com
```

Create a repository to hold the `ktranslate` image by running:

```bash
aws ecr create-repository --repository-name ktranslate --image-scanning-configuration scanOnPush=true --region $AWS_ACCOUNT_REGION
```

Pull the `ktranslate` image from Docker Hub by running:

```bash
docker pull kentik/ktranslate:v2
```

Tag the image to push to your docker repository by running:

```bash
docker tag kentik/ktranslate:v2 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2
```

Push the image to your docker repository by running:

```bash
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2
```
After running these steps, you should see an output similar to the following:
```
The push refers to repository [$AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate]
870d899ac0b0: Pushed
0a4768abd477: Pushed
b206b92a2843: Pushed
22abafd3e6c9: Pushed
1335c3725252: Pushed
7188c9350e77: Pushed
2b75f71baacd: Pushed
ba50c5652654: Pushed
80bbd31930ea: Pushed
c3d2a28a326e: Pushed
1a058d5342cc: Pushed
v2: digest: sha256:4cfe36919ae954063203a80f69ca1795280117c44947a09d678b4842bb8e4dd2 size: 2624
```
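If you want to confirm the image landed in your registry, one optional check is to list the repository's images with the AWS CLI:

```bash
# Optional check: list the images stored in the ktranslate repository.
aws ecr describe-images --repository-name ktranslate --region $AWS_ACCOUNT_REGION
```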
2. Create a Lambda function from the ECR image
The Lambda function you create must reside in the same AWS Region as the S3 bucket where you store your VPC flow logs. To create a Lambda function defined as a container image, follow these steps:
- Navigate to the Lambda service in your AWS console and select Create function.
- Select the Container image tile at the top of the screen, and:
  - Name your function.
  - Click Browse Images and choose the `ktranslate` image with the `v2` tag you pushed to ECR.
  - Keep the architecture on x86_64, accept the default permissions, and click Create function.
- On the landing page for your new function, select the Configuration tab, and:
  - In General configuration, change the timeout value to `0 min 20 sec`.
  - In the Permissions section, click the Execution role for your function, which will open a new browser tab for IAM.
  - On the Permissions tab in IAM, click Add permissions, select Attach policies, and add the `AmazonS3ReadOnlyAccess` policy to grant your function access to the S3 bucket your VPC flow logs are in.
  - Back on your function's browser tab, in the Environment variables section, click Edit and add the Lambda environment variables listed in the Environment variables for AWS Lambda functions section below.
- In the Triggers section, click Add trigger, and:
  - Select the S3 type.
  - Select the bucket where you store your VPC flow logs.
  - Choose the All object create events event type.
  - Optionally, if your bucket has a custom folder in the root directory outside the `AWSLogs` directory, you can add it in the Prefix section.
  - Accept the Recursive Invocation warning and click Add.
At this point, your Lambda function is deployed and listening for new events on your S3 bucket.
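If you script your infrastructure, the console steps above map roughly to the following AWS CLI sketch. The function name `ktranslate-vpc` and the role `ktranslate-lambda-role` are hypothetical placeholders, and the role must already exist with the standard Lambda execution trust policy:

```bash
# Create the function from the container image pushed in step 1
# (x86_64, 20-second timeout, matching the console settings above).
aws lambda create-function \
  --function-name ktranslate-vpc \
  --package-type Image \
  --code ImageUri=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2 \
  --role arn:aws:iam::$AWS_ACCOUNT_ID:role/ktranslate-lambda-role \
  --architectures x86_64 \
  --timeout 20 \
  --region $AWS_ACCOUNT_REGION

# Grant the execution role read access to the S3 bucket holding your VPC flow logs.
aws iam attach-role-policy \
  --role-name ktranslate-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```

The environment variables and the S3 trigger are still easiest to add in the console, as described above.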
3. Validate your settings
Tip
It can take several minutes for data to first appear in your account as the export of VPC flow logs to S3 usually runs on a 5-minute cycle.
To confirm your Lambda function is working as expected, do one of the following:
- Go to one.newrelic.com > Explorer, where you will begin to see VPC Network entities. You can click them to investigate the various metrics each one is sending.
- Go to one.newrelic.com > Query your data and run the following NRQL query to get a quick summary of the recent VPCs you have flow logs from:

  ```
  FROM KFlow SELECT count(*) FACET device_name WHERE provider = 'kentik-vpc'
  ```
- In your AWS Console, click the Monitor tab on your function's landing page, where you can track important metrics like invocations, error count, and success rate. You can also investigate the error logs from recent invocations.
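If you'd rather watch the function from a terminal, AWS CLI v2 can tail its CloudWatch logs. This sketch assumes a function named `ktranslate-vpc`:

```bash
# Follow recent invocation logs for the function (requires AWS CLI v2).
aws logs tail /aws/lambda/ktranslate-vpc --follow
```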
Tip
We recommend adding serverless monitoring from New Relic One to your new Lambda function. This way, you'll proactively monitor the function's health and get alerts in case of problems.
Find and use your metrics
All VPC flow logs exported from the `ktranslate` Lambda function use the `KFlow` namespace, via the New Relic Event API. Currently, these are the fields populated by this integration:
Attribute | Type | Description |
---|---|---|
`application` | String | The class of program generating the traffic in this flow record. This is derived from the lowest numeric value from `l4_src_port` and `l4_dst_port`. |
`dest_vpc` | String | The name of the VPC the traffic in this flow record is targeting, if known. |
`device_name` | String | The name of the VPC this flow record was exported from. |
`dst_addr` | String | The target IPv4 address for this flow record. |
`dst_as` | Numeric | The target Autonomous System Number for this flow record. |
`dst_as_name` | String | The target Autonomous System Name for this flow record. |
`dst_endpoint` | String | The target `IP:port` tuple for this flow record, combining `dst_addr` and `l4_dst_port`. |
`dst_geo` | String | The target country for this flow record, if known. |
`flow_direction` | String | The direction of flow for this record, from the point of view of the interface where the traffic was captured. Valid options are `ingress` and `egress`. |
`in_bytes` | Numeric | The number of bytes transferred for ingress flow records. |
`in_pkts` | Numeric | The number of packets transferred for ingress flow records. |
`l4_dst_port` | Numeric | The target port for this flow record. |
`l4_src_port` | Numeric | The source port for this flow record. |
`out_bytes` | Numeric | The number of bytes transferred for egress flow records. |
`out_pkts` | Numeric | The number of packets transferred for egress flow records. |
`protocol` | String | The display name of the protocol used in this flow record, derived from the numeric IANA protocol number. |
`provider` | String | This attribute is used to uniquely identify various sources of data from `ktranslate`. VPC flow logs always have the value `kentik-vpc`. |
`sample_rate` | Numeric | The rate at which `ktranslate` sampled the flow records in this export (see `KENTIK_SAMPLE_RATE` below). |
`source_vpc` | String | The name of the VPC the traffic in this flow record originated from, if known. |
`src_addr` | String | The source IPv4 address for this flow record. |
`src_as` | Numeric | The source Autonomous System Number for this flow record. |
`src_as_name` | String | The source Autonomous System Name for this flow record. |
`src_endpoint` | String | The source `IP:port` tuple for this flow record, combining `src_addr` and `l4_src_port`. |
`src_geo` | String | The source country for this flow record, if known. |
`start_time` | Numeric | The time, in Unix seconds, when the first packet of the flow was received within the aggregation interval. This might be up to 60 seconds after the packet was transmitted or received on the network interface. |
`timestamp` | Numeric | The time, in Unix seconds, when this flow record was received by the New Relic Event API. |
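As an example of combining these fields, a NRQL query along the following lines (a sketch; adjust the time window as needed) shows which address pairs are moving the most traffic:

```
FROM KFlow SELECT sum(in_bytes) + sum(out_bytes) FACET src_addr, dst_addr WHERE provider = 'kentik-vpc' SINCE 1 hour ago
```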
Environment variables for AWS Lambda functions
When you're configuring your AWS Lambda function, you need to set up the following environment variables:
Key | Value | Required |
---|---|---|
`KENTIK_MODE` | | √ |
`NEW_RELIC_API_KEY` | The New Relic license key for your account | √ |
`NR_ACCOUNT_ID` | Your New Relic account ID | √ |
`NR_REGION` | The New Relic datacenter region for your account. The possible values are `US` and `EU`. Defaults to `US`. | |
`KENTIK_SAMPLE_RATE` | The rate of randomized sampling. Defaults to `1000`. | |
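If you set these from the command line instead of the console, the following AWS CLI sketch shows the shape of the call. The function name `ktranslate-vpc` and all variable values are placeholders; be sure to include every required key from the table above, including `KENTIK_MODE`:

```bash
# Sketch: apply the environment variables in one call.
# Placeholder values; include every required key from the table above.
aws lambda update-function-configuration \
  --function-name ktranslate-vpc \
  --environment "Variables={NEW_RELIC_API_KEY=YOUR_LICENSE_KEY,NR_ACCOUNT_ID=1234567,NR_REGION=US}"
```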
Tip
For S3 objects with fewer than 100 flow records, `ktranslate` reverts to a sample rate of 1 and processes every record. For S3 objects with more than 100 flow records, `ktranslate` uses the configured value of `KENTIK_SAMPLE_RATE`, which defaults to 1000, meaning that each record in the object has a 1-in-1000 chance of being sampled. For example, an S3 object containing 50,000 flow records at the default rate would yield roughly 50 sampled records.