Datadog recommends using a Kinesis Data Stream as input when using the Datadog destination with Amazon Data Firehose, because it lets you forward your logs to multiple destinations in case Datadog is not their only consumer. If Datadog is the only destination for your logs, or if you already have a Kinesis Data Stream carrying your logs, you can skip step one.
1. Optionally, use the Create a Data Stream section of the Amazon Kinesis Data Streams developer guide in AWS to create a new Kinesis data stream. Name the stream something descriptive, like DatadogLogStream.
2. Go to Amazon Data Firehose.
3. Click Create Firehose stream.
4. Set the source:
   - Amazon Kinesis Data Streams if your logs are coming from a Kinesis Data Stream
   - Direct PUT if your logs are coming directly from a CloudWatch log group
5. Set the destination as Datadog.
6. Provide a name for the delivery stream.
7. In the Destination settings, choose the Datadog logs HTTP endpoint URL that corresponds to your Datadog site.
8. Paste your API key into the API key field. You can get or create an API key from the Datadog API Keys page. If you prefer to use Secrets Manager authentication, add your Datadog API key in full JSON format in the value field, as follows: {"api_key":"<YOUR_API_KEY>"}.
9. Optionally, configure the Retry duration, the buffer settings, or add Parameters, which are attached as tags to your logs.
   Note: Datadog has an intake limit of 65,536 events per batch and recommends setting the Buffer size to 2 MiB if the logs are single-line messages.
10. In the Backup settings, select an S3 backup bucket to receive any failed events that exceed the retry duration.
    Note: To ensure any logs that fail through the delivery stream are still sent to Datadog, set the Datadog Forwarder Lambda function to forward logs from this S3 bucket.
11. Click Create Firehose stream. For a scripted alternative, see the boto3 sketch after this list.
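If you prefer to script this setup, the following is a minimal boto3 sketch of the Kinesis-sourced case. All names, ARNs, and the endpoint URL are illustrative placeholders, not values from this guide; substitute values for your account, and confirm the Datadog logs HTTP endpoint URL for your site in step 7 above.

```python
# Sketch: create the Kinesis Data Stream and the Firehose stream with boto3.
# Every name, ARN, and the endpoint URL below is a placeholder (assumption).
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
firehose = boto3.client("firehose", region_name="us-east-1")

# Step 1 (optional): a Kinesis Data Stream as the input.
kinesis.create_stream(StreamName="DatadogLogStream", ShardCount=1)

# Steps 2-11: a Firehose stream that delivers to the Datadog HTTP endpoint.
firehose.create_delivery_stream(
    DeliveryStreamName="DatadogLogsDelivery",    # hypothetical name
    DeliveryStreamType="KinesisStreamAsSource",  # use "DirectPut" for CloudWatch direct
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:<ACCOUNT_ID>:stream/DatadogLogStream",
        "RoleARN": "arn:aws:iam::<ACCOUNT_ID>:role/FirehoseSourceRole",
    },
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "<DATADOG_LOGS_HTTP_ENDPOINT_URL>",  # the URL for your Datadog site
            "Name": "Datadog",
            "AccessKey": "<YOUR_API_KEY>",
        },
        "BufferingHints": {"SizeInMBs": 2, "IntervalInSeconds": 60},  # 2 MiB per the note above
        "RetryOptions": {"DurationInSeconds": 60},
        "S3BackupMode": "FailedDataOnly",  # failed events land in the backup bucket
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::<ACCOUNT_ID>:role/FirehoseDeliveryRole",
            "BucketARN": "arn:aws:s3:::<YOUR_BACKUP_BUCKET>",
        },
        # Parameters are attached as tags to your logs.
        "RequestConfiguration": {
            "CommonAttributes": [{"AttributeName": "env", "AttributeValue": "prod"}]
        },
        "RoleARN": "arn:aws:iam::<ACCOUNT_ID>:role/FirehoseDeliveryRole",
    },
)
```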
Create an IAM role and permissions policy to enable CloudWatch Logs to put data into your Kinesis Data Stream or Amazon Data Firehose stream.
- Ensure that logs.amazonaws.com or logs.<region>.amazonaws.com is configured as the service principal in the role's Trust relationships. For example:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "logs",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
- Ensure that the role's attached permissions policy allows the firehose:PutRecord, firehose:PutRecordBatch, kinesis:PutRecord, and kinesis:PutRecords actions. If you're using a Kinesis Data Stream, specify its ARN in the Resource field; otherwise, specify the ARN of your Amazon Data Firehose stream.
For example:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch",
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": "arn:aws:firehose:us-east-1:*****:deliverystream/PUT-DOG-bhrnd"
    }
  ]
}
```
For an example of setting this up with the AWS CLI, see the Subscription filters with Kinesis Data Streams example (steps 3 to 6).
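Alternatively, the role and inline policy shown above can be created with boto3. In this sketch, the role and policy names are hypothetical and the Resource ARN is a placeholder for your stream's ARN:

```python
# Sketch: create the IAM role and inline permissions policy with boto3.
# Role and policy names are hypothetical; the resource ARN is a placeholder.
import json
import boto3

iam = boto3.client("iam")

# Trust policy: lets CloudWatch Logs assume the role (matches the example above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "logs",
        "Effect": "Allow",
        "Principal": {"Service": "logs.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: allows the put actions against your stream.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "firehose:PutRecord",
            "firehose:PutRecordBatch",
            "kinesis:PutRecord",
            "kinesis:PutRecords",
        ],
        # Use your Kinesis Data Stream ARN here instead if logs flow through one.
        "Resource": "arn:aws:firehose:us-east-1:<ACCOUNT_ID>:deliverystream/<YOUR_STREAM>",
    }],
}

iam.create_role(
    RoleName="CWLtoFirehoseRole",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="CWLtoFirehoseRole",
    PolicyName="CWLtoFirehosePolicy",  # hypothetical name
    PolicyDocument=json.dumps(permissions_policy),
)
```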
Console
Follow these steps to create a subscription filter through the AWS console.
Go to your log group in CloudWatch, click the Subscription filters tab, then click Create.
- If you are sending logs through a Kinesis Data Stream, select Create Kinesis subscription filter.
- If you are sending logs directly from your log group to your Amazon Data Firehose delivery stream, select Create Amazon Data Firehose subscription filter.
Select the applicable data stream or Firehose delivery stream, as well as the IAM role you created previously.
Provide a name for the subscription filter, and click Start streaming.
Important note: The destination of the subscription filter must be in the same account as the log group, as described in the Amazon CloudWatch Logs API Reference.
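As with the previous steps, the subscription filter can also be created programmatically. A minimal boto3 sketch, assuming a hypothetical log group and filter name, and the role from the previous section:

```python
# Sketch: create the subscription filter with boto3 instead of the console.
# Log group, filter name, destination ARN, and role ARN are placeholders.
import boto3

logs = boto3.client("logs", region_name="us-east-1")

logs.put_subscription_filter(
    logGroupName="/aws/lambda/my-function",    # hypothetical log group
    filterName="DatadogSubscriptionFilter",    # hypothetical filter name
    filterPattern="",                          # empty pattern forwards all logs
    # Kinesis Data Stream ARN, or the Firehose stream ARN for direct delivery;
    # per the note above, it must be in the same account as the log group.
    destinationArn="arn:aws:kinesis:us-east-1:<ACCOUNT_ID>:stream/DatadogLogStream",
    roleArn="arn:aws:iam::<ACCOUNT_ID>:role/CWLtoFirehoseRole",
)
```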