Self-Hosted Vessel Email Alerts with AWS Lambda and SES
The thing you actually want is an email.
Not a dashboard you have to remember to open. Not a webhook you have to write a server for. An email — the kind your bank sends when a charge clears, the kind your airline sends when the gate changes — that quietly arrives in your inbox and tells you the Ever Given just entered Suez, or that one of your chartered tankers' ETA has shifted by six hours.
VesselAPI sends notifications as webhooks and WebSocket messages. That's the right contract for software, the wrong contract for humans. Webhooks are how systems talk. Email is how systems talk to people. The translation layer between them — the thing that turns a webhook stream into port arrival and departure email notifications, the sort your inbox already knows how to thread, sort, and search — is small enough to read in one sitting, and that sitting is this post.
Full source for this post: vessel-api/vessel-email-alerts-aws-sam on GitHub →
The Architecture in One Diagram
Five components:
- API Gateway HTTP API is the public URL VesselAPI POSTs to. The HTTP API flavor (not REST API) is roughly a third of the price and supports everything we need for one route.
- Lambda does the work: verify, deduplicate, render, send.
- DynamoDB stores delivery IDs we've already processed so retries from VesselAPI don't double-send. One row per event, 24-hour TTL.
- SES sends the mail.
- IAM glues the function's permissions together.
The function only runs when a webhook arrives, and at idle the whole thing costs zero.
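In text form, the flow looks like this:
VesselAPI --POST--> API Gateway (HTTP API) --> Lambda --> SES --> your inbox
                                                 |
                                                 +--> DynamoDB (delivery_id claims, 24 h TTL)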
Need a VesselAPI key? See the pricing & plans →
A note before we start: the events VesselAPI emits — port arrival, port departure, ETA changed, geofence enter/exit — are derived from AIS, not ground truth. AIS-reported destinations and ETAs are entered by the bridge and are sometimes stale, abbreviated ("FOR ORDERS"), or misspelled. Geofence events apply hysteresis to avoid edge-flapping, but you'll still see the occasional false positive — a vessel drifting past a polygon edge in a busy anchorage, a duplicate broadcast from two coastal stations, an ETA tweak that flips back two minutes later. Worth knowing if you're tuning thresholds for an operational workflow; for "tell me when this ship enters that port," the defaults are fine.
Step 1: Verify an Email Address in SES
In the SES console, pick your region (we'll use us-east-1 throughout — change to taste), open Verified identities, click Create identity, and create an Email address identity. AWS sends a confirmation email; click the link.
That's it. We're using a verified email identity rather than a domain because it's instant — no DNS to wait on. For production you'll want a domain identity with DKIM, but for alerts to your own inbox an email identity is fine.
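If you prefer the terminal, the same identity can be created with one CLI call (AWS still sends a confirmation email you have to click):
aws sesv2 create-email-identity --email-identity you@example.com --region us-east-1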
Step 2: Scaffold the Project
The project layout:
vessel-email-alerts/
├── template.yaml
└── src/
    ├── handler.py
    ├── render.py
    └── requirements.txt
requirements.txt is empty — boto3 ships with the Lambda Python runtime, and we're not pulling in a templating library. Plain f-strings handle this.
Here's handler.py in full:
import os
import json
import hmac
import hashlib
import base64
import logging
import boto3
from datetime import datetime, timezone, timedelta
from botocore.exceptions import ClientError
from render import render_email
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Module-scope clients are reused across warm invocations -- one TLS
# handshake per container, not one per request.
ses = boto3.client("ses")
ddb = boto3.client("dynamodb")
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()
TO_ADDRESS = os.environ["TO_ADDRESS"]
FROM_ADDRESS = os.environ["FROM_ADDRESS"]
IDEMPOTENCY_TABLE = os.environ["IDEMPOTENCY_TABLE"]
def lambda_handler(event, _context):
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    raw = event.get("body") or ""
    body: bytes = (
        base64.b64decode(raw) if event.get("isBase64Encoded") else raw.encode("utf-8")
    )

    if not _verify_signature(body, headers.get("x-signature-256", "")):
        return _resp(401, "invalid signature")

    delivery_id = headers.get("x-delivery-id")
    if not delivery_id:
        return _resp(400, "missing X-Delivery-ID")

    if not _claim_delivery(delivery_id):
        # Already processed -- ack so VesselAPI stops retrying.
        return _resp(200, "duplicate")

    payload = json.loads(body)
    evt = payload["event"]
    subject, html, text = render_email(evt)

    try:
        ses.send_email(
            Source=FROM_ADDRESS,
            Destination={"ToAddresses": [TO_ADDRESS]},
            Message={
                "Subject": {"Data": subject},
                "Body": {
                    "Html": {"Data": html},
                    "Text": {"Data": text},
                },
            },
        )
    except Exception:
        # Release the claim so the next retry can try again. Without this,
        # any SES failure would silently lose the email.
        _release_delivery(delivery_id)
        raise

    return _resp(200, "sent")


def _verify_signature(body: bytes, sig_header: str) -> bool:
    if not sig_header.startswith("sha256="):
        return False
    expected = sig_header[len("sha256="):]
    mac = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)


def _claim_delivery(delivery_id: str) -> bool:
    ttl = int((datetime.now(timezone.utc) + timedelta(hours=24)).timestamp())
    try:
        ddb.put_item(
            TableName=IDEMPOTENCY_TABLE,
            Item={
                "delivery_id": {"S": delivery_id},
                "expires_at": {"N": str(ttl)},
            },
            ConditionExpression="attribute_not_exists(delivery_id)",
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False
        raise


def _release_delivery(delivery_id: str) -> None:
    try:
        ddb.delete_item(
            TableName=IDEMPOTENCY_TABLE,
            Key={"delivery_id": {"S": delivery_id}},
        )
    except ClientError as e:
        logger.error("failed to release claim (delivery_id=%s): %s", delivery_id, e)


def _resp(status: int, body: str):
    return {"statusCode": status, "body": body}
About a hundred lines, abridged a little for the post — the version in the example repo also has structured logging on every branch and a try/except around the JSON parse that returns 400 for malformed bodies. Two of those lines are load-bearing: the compare_digest call and the _release_delivery call. They're the difference between a pipeline that's fine on the happy path and one that survives the unhappy paths. We'll come back to both.
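That parse guard is only a few lines. A sketch of what it looks like inside lambda_handler, in place of the bare json.loads call above:
    # Malformed JSON will never succeed on retry -- tell the sender so with a 400
    # instead of letting the exception turn into a 500 that VesselAPI keeps retrying.
    try:
        payload = json.loads(body)
    except ValueError:
        return _resp(400, "malformed JSON body")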
And render.py: one renderer per event type plus a small dispatcher. Three of the seven are shown here, enough to give the shape — port arrival, ETA change, and the generic fallback:
from datetime import datetime
def render_email(evt: dict):
    handler = _RENDERERS.get(evt["type"], _render_generic)
    return handler(evt)


def _vessel_label(v: dict) -> str:
    name = v.get("vesselName") or "Unknown vessel"
    imo = v.get("imo")
    return f"{name} (IMO {imo})" if imo else name


def _fmt_time(iso: str) -> str:
    try:
        return datetime.fromisoformat(iso.replace("Z", "+00:00")).strftime("%Y-%m-%d %H:%M UTC")
    except Exception:
        return iso


def _render_port_arrival(evt):
    v = _vessel_label(evt["vessel"])
    port = evt["data"]["portEvent"]["port"]
    when = _fmt_time(evt["timestamp"])
    subject = f"{v} arrived at {port['name']}"
    text = f"{v} arrived at {port['name']}, {port.get('country', '')} on {when}."
    html = f"<p><strong>{v}</strong> arrived at <strong>{port['name']}</strong>.</p><p>Reported {when}.</p>"
    return subject, html, text


def _render_eta_changed(evt):
    v = _vessel_label(evt["vessel"])
    change = evt["data"]["etaChange"]
    prev = _fmt_time(change["previousEta"])
    cur = _fmt_time(change["currentEta"])
    shift = change["shiftMinutes"]
    subject = f"{v} ETA shifted by {shift} min"
    text = f"{v} ETA changed: {prev} -> {cur} (shift: {shift} minutes)."
    html = f"<p><strong>{v}</strong> ETA shifted by {shift} minutes.</p><p>Previous: {prev}<br>Current: {cur}</p>"
    return subject, html, text


def _render_generic(evt):
    v = _vessel_label(evt["vessel"])
    subject = f"{v}: {evt['type']}"
    body = f"{evt['type']} event for {v} at {_fmt_time(evt['timestamp'])}."
    return subject, f"<p>{body}</p>", body


_RENDERERS = {
    "port.arrival": _render_port_arrival,
    "port.departure": _render_port_departure,  # same shape as arrival
    "eta.eta_changed": _render_eta_changed,
    "eta.destination_changed": _render_destination_changed,
    "position.geofence_enter": _render_geofence,
    "position.geofence_exit": _render_geofence,
}
The other four renderers (departure, destination changed, draught changed, geofence) follow the same shape — pull the relevant fields from evt["data"], format them, return a (subject, html, text) triple. The structure that scales is the dispatcher dict; adding a Slack or Teams channel later is the same pattern with a different output format.
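For shape, here's a sketch of the departure renderer, assuming the departure payload carries the same portEvent block the arrival payload does (which is what the dispatcher comment above implies); the wording is the only thing that changes:
def _render_port_departure(evt):
    # Mirrors _render_port_arrival: same portEvent payload, different verb.
    v = _vessel_label(evt["vessel"])
    port = evt["data"]["portEvent"]["port"]
    when = _fmt_time(evt["timestamp"])
    subject = f"{v} departed {port['name']}"
    text = f"{v} departed {port['name']}, {port.get('country', '')} on {when}."
    html = f"<p><strong>{v}</strong> departed <strong>{port['name']}</strong>.</p><p>Reported {when}.</p>"
    return subject, html, text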
The Two Non-Obvious Parts
It would be tempting to skim past two of those lines. They're the lines that decide whether this thing works or quietly turns into a problem in three months.
HMAC Verification, and Why hmac.compare_digest
Every VesselAPI webhook arrives with an X-Signature-256 header: sha256= followed by the hex of HMAC-SHA256(webhook_secret, raw_request_body).
The instinct is to skip this. The Lambda is on HTTPS. The URL is unguessable. Why bother? Because the URL leaks. It ends up in CloudWatch logs, in someone's terminal scrollback, in screenshots, in an AWS console somebody screen-shared on a call. Anyone who knows the URL can POST a fake event to it. Without verification, your inbox is now a free email-sending service for whoever finds it.
The verification is two lines:
mac = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
return hmac.compare_digest(mac, expected)
The reason it's hmac.compare_digest and not == is the load-bearing detail of this whole post. A naive == short-circuits at the first byte mismatch — meaning the comparison is faster when the strings differ early, slower when they differ late. That timing difference is enough for an attacker to recover the signature one byte at a time, given enough requests. compare_digest always takes the same amount of time regardless of where the strings diverge. Use it.
One byte-handling note: keep body as bytes end-to-end. hmac.new wants bytes, json.loads accepts bytes, and skipping the decode()/encode() round-trip saves you from a class of bugs that only show up when the webhook source ever sends non-UTF-8 content.
Idempotency, and What Happens When SES Fails
VesselAPI retries failed webhook deliveries with exponential backoff. That's a feature: a transient Lambda timeout doesn't lose the event. It's also a problem: the same event can arrive at your Lambda more than once, and without protection you'll send the same email twice. Or three times. Or thirteen.
Every webhook arrives with a unique X-Delivery-ID. We use it as the primary key in a small DynamoDB table with a conditional put: if the row already exists, the put fails atomically, we return 200, and no email goes out. The 24-hour TTL means the table never grows.
We return 200 for duplicates rather than an error. If we returned an error, VesselAPI would retry, the put would fail again, we'd return another error, and we'd loop forever. Returning 200 says "I've handled this," and the chain stops.
The trap, which is easy to fall into and tempting to call "good enough": claim the delivery ID and then call SES. If SES throws — throttle, transient 5xx, suppression-list hit — the claim is now sitting in DynamoDB with no email behind it. The next retry from VesselAPI looks up the delivery ID, finds the row, returns 200 duplicate, and the email is silently lost forever. That's the failure mode _release_delivery exists for: on any SES exception, delete the claim row and re-raise so the retry reaches a fresh slate.
The release itself can also fail (DynamoDB throttling, tiny window, very unlucky). When that happens we log loudly and let the original SES error propagate — the retry will be ack'd as a duplicate this once, but the failure is visible in CloudWatch instead of vanishing into the gap between two AWS services. The unit test for the whole sequence lives in test_handler.py in the example repo — it's the case that catches the bug if you ever refactor the order of those two calls.
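A minimal sketch of that ordering test, with the payload and values made up for the example. The env vars have to be set before handler.py is imported, because the module reads them at import time; the module-scope clients are what make the patching straightforward:
# test_handler.py -- a sketch; assumes it sits next to handler.py and render.py.
import os

os.environ.update({
    "AWS_DEFAULT_REGION": "us-east-1",
    "WEBHOOK_SECRET": "test-secret",
    "TO_ADDRESS": "to@example.com",
    "FROM_ADDRESS": "from@example.com",
    "IDEMPOTENCY_TABLE": "test-table",
})

import hashlib
import hmac
import json
from unittest import mock

import handler  # imported after the env vars above, on purpose


def _signed_event(body: bytes) -> dict:
    sig = "sha256=" + hmac.new(b"test-secret", body, hashlib.sha256).hexdigest()
    return {
        "headers": {"x-signature-256": sig, "x-delivery-id": "d-1"},
        "body": body.decode(),
        "isBase64Encoded": False,
    }


def test_ses_failure_releases_claim_and_reraises():
    body = json.dumps({
        "event": {
            "type": "port.arrival",
            "timestamp": "2026-04-01T12:00:00Z",
            "vessel": {"vesselName": "Test Vessel", "imo": 1234567},
            "data": {"portEvent": {"port": {"name": "Rotterdam", "country": "NL"}}},
        }
    }).encode()

    # Swap send_email for a failure, stub the claim/release helpers, and check
    # that the claim is released and the error still propagates.
    with mock.patch.object(handler, "_claim_delivery", return_value=True), \
         mock.patch.object(handler, "_release_delivery") as release, \
         mock.patch.object(handler.ses, "send_email", side_effect=RuntimeError("SES down")):
        raised = False
        try:
            handler.lambda_handler(_signed_event(body), None)
        except RuntimeError:
            raised = True

    assert raised, "the SES error must propagate so VesselAPI retries"
    release.assert_called_once_with("d-1")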
Step 3: Deploy with SAM
Drop template.yaml next to src/:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Self-hosted vessel email alerts (vesselapi -> Lambda -> SES)
Parameters:
  WebhookSecret:
    Type: String
    NoEcho: true
    Description: The secret you'll configure on the vesselapi notification.
  ToAddress:
    Type: String
    Description: Verified SES recipient (the inbox the alerts go to).
  FromAddress:
    Type: String
    Description: Verified SES sender identity.

Resources:
  IdempotencyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: delivery_id
          AttributeType: S
      KeySchema:
        - AttributeName: delivery_id
          KeyType: HASH
      TimeToLiveSpecification:
        AttributeName: expires_at
        Enabled: true

  EmailSenderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: handler.lambda_handler
      CodeUri: ./src
      Timeout: 10
      MemorySize: 256
      Environment:
        Variables:
          WEBHOOK_SECRET: !Ref WebhookSecret
          TO_ADDRESS: !Ref ToAddress
          FROM_ADDRESS: !Ref FromAddress
          IDEMPOTENCY_TABLE: !Ref IdempotencyTable
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref IdempotencyTable
        - Statement:
            - Effect: Allow
              Action: ses:SendEmail
              Resource: !Sub 'arn:aws:ses:${AWS::Region}:${AWS::AccountId}:identity/${FromAddress}'
      Events:
        Webhook:
          Type: HttpApi
          Properties:
            Path: /webhook
            Method: post

Outputs:
  WebhookUrl:
    Description: Paste this into your vesselapi notification's webhook_url.
    Value: !Sub 'https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com/webhook'
Function, API, table, IAM role, log group — about fifty lines of YAML. The equivalent Terraform is noticeably more verbose because it has to express each resource individually instead of leaning on AWS::Serverless::Function's built-in defaults. NoEcho: true keeps the secret out of CloudFormation outputs and the AWS console; the env-var value is still readable to anyone with lambda:GetFunctionConfiguration, which is the usual tradeoff and the reason the production checklist below moves it to Secrets Manager. The ses:SendEmail policy is scoped to the verified FromAddress identity ARN — a compromised function role can't be used to send mail from other identities in the account.
Build and deploy:
sam build
sam deploy --guided \
--parameter-overrides \
WebhookSecret=<some-long-random-string> \
ToAddress=you@example.com \
FromAddress=you@example.com
--guided walks you through stack name, region, and the IAM confirmation. Pick us-east-1, accept the IAM prompt, and let it run. About 90 seconds later:
Outputs
-----------------------------------------
Key WebhookUrl
Value https://abc123xyz.execute-api.us-east-1.amazonaws.com/webhook
Save the URL. For the secret, openssl rand -hex 32 is a sensible default — use the same value for both the SAM parameter and the VesselAPI notification config below.
Step 4: Register the Webhook with VesselAPI
curl -X POST https://api.vesselapi.com/v1/notifications \
-H "Authorization: Bearer $VESSELAPI_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "lambda-email-alerts",
"imos": [9811000],
"event_types": ["port.arrival", "port.departure", "eta.eta_changed"],
"webhook_url": "https://abc123xyz.execute-api.us-east-1.amazonaws.com/webhook",
"webhook_secret": "<the same secret you passed to SAM>"
}'
9811000 is the IMO of the Ever Given itself — handy if you want a vessel that moves often and is easy to recognise in your inbox while you verify the pipeline. Swap in the IMOs you actually want to watch. The event_types filter is optional; omit it to get every event type for the watched vessels.
Step 5: Send a Test Event
VesselAPI exposes a test endpoint that fires a synthetic event through your full delivery path:
curl -X POST https://api.vesselapi.com/v1/notifications/<id>/test \
-H "Authorization: Bearer $VESSELAPI_KEY"
A few seconds later, the email should be in your inbox.
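You can also bypass VesselAPI entirely and POST a hand-signed synthetic event at the webhook URL yourself. The body below just mirrors the fields the handler and renderers read (it isn't a full VesselAPI event), and a successful request sends a real email to your TO_ADDRESS; the X-Delivery-ID changes on every run, so the dedup table won't swallow a second attempt:
BODY='{"event":{"type":"port.arrival","timestamp":"2026-04-01T12:00:00Z","vessel":{"vesselName":"Ever Given","imo":9811000},"data":{"portEvent":{"port":{"name":"Rotterdam","country":"NL"}}}}}'
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac '<the same secret you passed to SAM>' -hex | awk '{print $NF}')
curl -X POST https://abc123xyz.execute-api.us-east-1.amazonaws.com/webhook \
  -H "Content-Type: application/json" \
  -H "X-Signature-256: sha256=$SIG" \
  -H "X-Delivery-ID: manual-test-$(date +%s)" \
  -d "$BODY"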
When It Doesn't Work the First Time
It usually doesn't. Two things go wrong on first deploy, in roughly this order:
- MessageRejected: Email address is not verified — SES is in sandbox and either the FromAddress or the ToAddress isn't a verified identity. Verify both, retry. If you used the same email for both, you only need to verify it once.
- invalid signature in the Lambda logs — the secret in the SAM parameter doesn't match the one in the VesselAPI notification. They have to be byte-for-byte identical, including no trailing newlines.
Check the CloudWatch log group at /aws/lambda/<stack-name>-EmailSenderFunction-<hash>.
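You can also tail that group from the terminal; sam logs resolves the generated function name from the logical ID (the stack name here is whatever you chose during sam deploy --guided):
sam logs -n EmailSenderFunction --stack-name vessel-email-alerts --tail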
What This Costs
For a single user with ten watched vessels and a few events a day, in us-east-1 as of April 2026:
| Service | Volume | Cost |
|---|---|---|
| API Gateway HTTP API | ~100 req/mo | $0.0001 |
| Lambda | well inside free tier | $0 |
| DynamoDB pay-per-request | ~100 writes/mo | $0.0001 |
| SES | 100 emails/mo | $0.01 |
| CloudWatch logs | small | pennies |
Total: a few cents to a dollar a month, dominated by SES if you send a lot. Idle cost is zero. AWS prices drift; check the current rates if you scale this beyond personal use.
Going to Production
The deploy above is the right shape for "alerts to my own inbox." For anything wider, four changes — each incremental, none of which break the pipeline above:
- Move out of SES sandbox. Request production access in the SES console; AWS Support's initial response usually arrives within 24 hours, and full approval can take longer.
- Verify a domain identity with DKIM. Send From: alerts@yourdomain.com and have it display as authenticated rather than as a generic AWS noreply.
- Wire up bounce and complaint handling. Attach an SES Configuration Set and route Bounce and Complaint events to an SNS topic. A single typo'd ToAddress can otherwise quietly land you on the SES suppression list.
- Move the webhook secret to Secrets Manager. A small SAM change; the function reads it on cold start and caches it (a sketch follows this list).
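For that last item, the read-and-cache pattern is a handful of lines. A sketch, assuming the secret is stored as a plain string under a name like vessel-email-alerts/webhook-secret (both the name and the helper are illustrative):
import boto3

_secrets = boto3.client("secretsmanager")
_cached_secret = None  # module scope, so it survives warm invocations like the SES/DDB clients


def get_webhook_secret() -> bytes:
    """Fetch the webhook secret once per container, then serve it from cache."""
    global _cached_secret
    if _cached_secret is None:
        resp = _secrets.get_secret_value(SecretId="vessel-email-alerts/webhook-secret")
        _cached_secret = resp["SecretString"].encode()
    return _cached_secret
The function's role needs secretsmanager:GetSecretValue on that one secret, and the WEBHOOK_SECRET parameter and environment variable drop out of the template.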
A future post will go deep on each of these. For now, the pipeline you have is sufficient.
What Comes Next
This is part 1 of a series on building your own notification consumers on top of VesselAPI webhooks. Part 2 keeps the same Lambda, the same HMAC verify, the same DynamoDB idempotency — and swaps the email renderer for Slack Block Kit, Microsoft Teams Adaptive Cards, or Discord embeds. The dispatcher pattern in render.py is the seam to plug those in: one function per channel, one config flag to pick which one runs.
People often treat maritime email alerts vs API-based vessel monitoring as a choice. They aren't — they're layers. The API integration is the thing your services react to. The email pipeline above it is what a human reads at 2am. Now your inbox knows when your ships move.