Monday, September 29, 2025

Ingesting logs into Datadog from AWS using Kinesis

Datadog recommends using a Kinesis Data Stream as input when using the Datadog destination with Amazon Data Firehose. This lets you forward your logs to multiple destinations, in case Datadog is not their only consumer. If Datadog is the only destination for your logs, or if you already have a Kinesis Data Stream carrying them, you can skip step 1.

  1. Optionally, use the Create a Data Stream section of the Amazon Kinesis Data Streams developer guide in AWS to create a new Kinesis data stream. Name the stream something descriptive, like DatadogLogStream.
  2. Go to Amazon Data Firehose.
  3. Click Create Firehose stream.
    1. Set the source:
      • Amazon Kinesis Data Streams if your logs are coming from a Kinesis Data Stream
      • Direct PUT if your logs are coming directly from a CloudWatch log group
    2. Set the destination as Datadog.
    3. Provide a name for the delivery stream.
    4. In the Destination settings, choose the Datadog logs HTTP endpoint URL that corresponds to your Datadog site.
    5. Paste your API key into the API key field. You can get or create an API key from the Datadog API Keys page. If you prefer to use Secrets Manager authentication, add your Datadog API key in full JSON format in the value field, as follows: {"api_key":"<YOUR_API_KEY>"}.
    6. Optionally, configure the Retry duration, the buffer settings, or add Parameters, which are attached as tags to your logs.
      Note: Datadog has an intake limit of 65,536 events per batch and recommends setting the Buffer size to 2 MiB if the logs are single line messages.
    7. In the Backup settings, select an S3 backup bucket to receive any failed events that exceed the retry duration.
      Note: To ensure any logs that fail through the delivery stream are still sent to Datadog, set the Datadog Forwarder Lambda function to forward logs from this S3 bucket.
    8. Click Create Firehose stream.
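For reference, the console steps above can be sketched with the AWS CLI as well. This is only a sketch: the stream name, account ID, ARNs, and the datadog-destination.json file are placeholders (assumptions), and the exact JSON shape of the destination configuration should be checked against the aws firehose create-delivery-stream reference.

```
# Sketch only: Firehose stream reading from a Kinesis Data Stream and
# writing to the Datadog HTTP endpoint. All names/ARNs are placeholders.
aws firehose create-delivery-stream \
  --delivery-stream-name datadog-logs \
  --delivery-stream-type KinesisStreamAsSource \
  --kinesis-stream-source-configuration \
    "KinesisStreamARN=arn:aws:kinesis:us-east-1:123456789012:stream/DatadogLogStream,RoleARN=arn:aws:iam::123456789012:role/firehose-read-role" \
  --http-endpoint-destination-configuration file://datadog-destination.json
```

Here datadog-destination.json would carry the endpoint URL, the API key, the buffering hints, and the S3 backup configuration set in steps 4 to 7.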


    Create an IAM role and policy

    Create an IAM role and permissions policy to enable CloudWatch Logs to put data into your Kinesis stream.

  1. Ensure that logs.amazonaws.com or logs.<region>.amazonaws.com is configured as the service principal in the role’s Trust relationships. For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "logs",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
  2. Ensure that the role’s attached permissions policy allows the firehose:PutRecord, firehose:PutRecordBatch, kinesis:PutRecord, and kinesis:PutRecords actions. If you’re using a Kinesis Data Stream, specify its ARN in the Resource field. If not, specify the ARN of your Amazon Data Firehose stream instead.
    For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch",
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": "arn:aws:firehose:us-east-1:*****:deliverystream/PUT-DOG-bhrnd"
    }
  ]
}

Use the Subscription filters with Kinesis Data Streams example (steps 3 to 6) for an example of setting this up with the AWS CLI.
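The subscription filter itself can also be attached from the CLI; the log group, filter, role, and stream names below are placeholders (assumptions), not values from this setup:

```
# Sketch: attach a CloudWatch Logs subscription filter pointing at the
# Kinesis Data Stream. Names and ARNs are placeholders.
aws logs put-subscription-filter \
  --log-group-name "my-log-group" \
  --filter-name "datadog-filter" \
  --filter-pattern "" \
  --destination-arn "arn:aws:kinesis:us-east-1:123456789012:stream/DatadogLogStream" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
```

An empty --filter-pattern forwards every log event in the group.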


Console

Follow these steps to create a subscription filter through the AWS console.

  1. Go to your log group in CloudWatch and click on the Subscription filters tab, then Create.

    • If you are sending logs through a Kinesis Data Stream, select Create Kinesis subscription filter.
    • If you are sending logs directly from your log group to your Amazon Data Firehose delivery stream, select Create Amazon Data Firehose subscription filter.
  2. Select the data stream or Firehose delivery stream as applicable, as well as the IAM role previously created.

  3. Provide a name for the subscription filter, and click Start streaming.

Important note: The destination of the subscription filter must be in the same account as the log group, as described in the Amazon CloudWatch Logs API Reference.


Tuesday, September 16, 2025

Run GitLab as a container

Quick test run:
docker run -p 8000:80 gitlab/gitlab-ce

Or run it detached, with persistent volumes and the SSH/HTTPS ports mapped:
docker run -d \
  --name gitlab \
  --hostname localhost \
  -p 8000:80 -p 8443:443 -p 8022:22 \
  -v gitlab_config:/etc/gitlab \
  -v gitlab_logs:/var/log/gitlab \
  -v gitlab_data:/var/opt/gitlab \
  --shm-size 256m \
  gitlab/gitlab-ce:latest
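If you prefer Docker Compose, the run command above maps to roughly this docker-compose.yml (a sketch of the same ports, volumes, and shared-memory size):

```yaml
# Sketch: Compose equivalent of the docker run command above.
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    hostname: localhost
    ports:
      - "8000:80"
      - "8443:443"
      - "8022:22"
    volumes:
      - gitlab_config:/etc/gitlab
      - gitlab_logs:/var/log/gitlab
      - gitlab_data:/var/opt/gitlab
    shm_size: "256m"

volumes:
  gitlab_config:
  gitlab_logs:
  gitlab_data:
```

Then `docker compose up -d` starts the same stack.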

Check that the container is running:
docker ps | grep gitlab-ce

Now retrieve the initial root password to log in to GitLab (replace <CONTAINER_ID> with the ID from docker ps):
docker exec -it <CONTAINER_ID> cat /etc/gitlab/initial_root_password

Install a gitlab-runner:

curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash

sudo apt install gitlab-runner 


Register the runner:
sudo gitlab-runner register



You will be prompted for the GitLab instance URL, the registration token, and the executor.

For executor:

docker

For the image:
docker:20.10.16
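The interactive prompts can also be answered in one shot. This is a sketch: the URL and token below are placeholders for your own instance's values.

```
sudo gitlab-runner register \
  --non-interactive \
  --url "http://localhost:8000" \
  --registration-token "<REGISTRATION_TOKEN>" \
  --executor "docker" \
  --docker-image "docker:20.10.16" \
  --description "local-docker-runner"
```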


Sunday, September 7, 2025

A way to provide useful information about your ESP32 chip and its memory

This code snippet not only prints "Hello world!" but also provides useful information about your ESP32 chip and its memory. It then enters a countdown loop before restarting the board.


#include <stdio.h>
#include <inttypes.h>
#include "sdkconfig.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_chip_info.h"
#include "esp_flash.h"
#include "esp_system.h"

void app_main(void)
{
    printf("Hello world!\n");

    /* Print chip information */
    esp_chip_info_t chip_info;
    uint32_t flash_size;
    esp_chip_info(&chip_info);
    // Before:
    // printf("This is %s chip with %d CPU core(s), %s%s%s%s, ",

    // After:
    printf("This is %s chip with %d CPU core(s), %s%s%s, ",
           CONFIG_IDF_TARGET,
           chip_info.cores,
           (chip_info.features & CHIP_FEATURE_BT) ? "/BT" : "",
           (chip_info.features & CHIP_FEATURE_BLE) ? "/BLE" : "",
           (chip_info.features & CHIP_FEATURE_EMB_FLASH) ? "/embedded flash" : "");

    unsigned major_rev = chip_info.revision / 100;
    unsigned minor_rev = chip_info.revision % 100;
    printf("silicon revision v%d.%d, ", major_rev, minor_rev);
    if (esp_flash_get_size(NULL, &flash_size) == ESP_OK) {
        printf("%" PRIu32 "MB %s flash\n", flash_size / (1024 * 1024),
               (chip_info.features & CHIP_FEATURE_EMB_FLASH) ? "embedded" : "external");
    }

    printf("Minimum free heap size: %" PRIu32 " bytes\n", esp_get_minimum_free_heap_size());

    for (int i = 10; i >= 0; i--) {
        printf("Restarting in %d...\n", i);
        vTaskDelay(1000 / portTICK_PERIOD_MS);
    }
    printf("Restarting now.\n");
    fflush(stdout);
    esp_restart();
}


Wednesday, September 3, 2025

LilyGO board first steps

First, add your user to the dialout group so it can access the serial port (replace alejandro with your username, then log out and back in):

sudo usermod -a -G dialout alejandro

Several ways to check where your LilyGO board is connected on Ubuntu

 1. Use lsusb to list USB devices

lsusb

2. Check /dev/tty* devices

ls /dev/ttyUSB* /dev/ttyACM* 2>/dev/null



3. Use dmesg to see connection messages
dmesg | tail -20


4. List all serial devices with details 

ls -la /dev/serial/by-id/