A 200 status code doesn't mean your API is working. It means your server responded. The response could contain an empty array, an error message wrapped in a 200, or data from a stale cache. Real API monitoring goes deeper than "is it up?"
This guide walks you through exactly how to monitor an API endpoint, from quick manual checks with curl to fully automated monitoring with alerts. Every example is something you can copy, paste, and run right now.
What to Actually Check on an API Endpoint
Most people start and stop at the HTTP status code. That catches maybe 60% of real API failures. Here's the full checklist:
| Check | What It Catches | Priority |
|---|---|---|
| HTTP status code | Server errors (500), auth failures (401/403), not found (404) | Essential |
| Response time | Slow queries, connection pool exhaustion, overloaded servers | Essential |
| Response body content | Wrong data, empty results, error messages in 200 responses | Important |
| SSL certificate validity | Expired or misconfigured certificates | Important |
| Response headers | Missing CORS headers, wrong content type, rate limit warnings | Nice to have |
| DNS resolution time | DNS misconfiguration, slow DNS providers | Nice to have |
At minimum, check status codes and response time. Add response body validation for any endpoint where the data matters (payment APIs, authentication, data retrieval). For a deeper look at why API monitoring differs from website monitoring, see our API uptime monitoring guide.
Manual API Monitoring With curl
Before setting up automated monitoring, it helps to know how to check an API endpoint manually. curl is the standard tool for this.
Check Status Code
The simplest check. Returns just the HTTP status code:
curl -s -o /dev/null -w "%{http_code}" https://api.example.com/v1/health
This outputs 200 if the endpoint is healthy. Anything else means something is wrong. The -s flag silences progress output, -o /dev/null discards the response body, and -w prints the status code.
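One wrinkle worth handling: curl prints 000 when the request never completed at all (DNS failure, connection refused, or timeout), which is a different problem than a real HTTP error. A minimal sketch of that branching, using a hypothetical `classify` helper:

```shell
#!/bin/sh
# classify - map a curl http_code value to a coarse health label.
# curl reports 000 when no HTTP response was received at all
# (DNS failure, connection refused, timeout).
classify() {
  case "$1" in
    2??) echo "OK" ;;            # any 2xx counts as healthy
    000) echo "NO_RESPONSE" ;;   # request never completed
    *)   echo "HTTP_$1" ;;       # a real, non-2xx HTTP status
  esac
}

# In practice the code would come from:
#   CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 https://api.example.com/v1/health)
classify 200   # prints OK
classify 000   # prints NO_RESPONSE
classify 503   # prints HTTP_503
```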
Check Response Time
Measure exactly where time is being spent:
curl -s -o /dev/null -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS: %{time_appconnect}s\nFirst byte: %{time_starttransfer}s\nTotal: %{time_total}s\n" https://api.example.com/v1/health
Example output:
DNS: 0.023s
Connect: 0.045s
TLS: 0.112s
First byte: 0.187s
Total: 0.195s
If "First byte" (time to first byte, or TTFB) is over 1 second, your API is slow. If "DNS" is high, your DNS provider is the bottleneck. If "TLS" is high relative to "Connect," your SSL handshake is slow.
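To turn those timings into a pass/fail check, compare %{time_total} against a threshold. Shell arithmetic is integer-only, so awk does the decimal comparison; the hard-coded TOTAL below stands in for a real curl measurement:

```shell
#!/bin/sh
# too_slow TOTAL THRESHOLD - succeed (exit 0) when TOTAL exceeds THRESHOLD.
# awk handles the floating-point comparison that plain shell cannot.
too_slow() {
  awk -v t="$1" -v max="$2" 'BEGIN { exit !(t > max) }'
}

# TOTAL would normally come from:
#   TOTAL=$(curl -s -o /dev/null -w "%{time_total}" https://api.example.com/v1/health)
TOTAL="0.195"
if too_slow "$TOTAL" "1.0"; then
  echo "SLOW: ${TOTAL}s"
else
  echo "OK: ${TOTAL}s"
fi
```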
Check Response Body
Verify the response contains what you expect:
# Check if response contains a specific string
curl -s https://api.example.com/v1/health | grep -q '"status":"ok"' && echo "HEALTHY" || echo "UNHEALTHY"
# Pretty-print JSON response for inspection
curl -s https://api.example.com/v1/users/1 | python3 -m json.tool
# Check response and status code together
STATUS=$(curl -s -o /tmp/response.json -w "%{http_code}" https://api.example.com/v1/health)
if [ "$STATUS" = "200" ] && grep -q '"status":"ok"' /tmp/response.json; then
echo "API is healthy"
else
echo "API check failed (status: $STATUS)"
fi
Check SSL Certificate Expiry
echo | openssl s_client -connect api.example.com:443 2>/dev/null | openssl x509 -noout -enddate
Returns something like notAfter=Jun 15 12:00:00 2026 GMT. If that date is within 14 days, you have a problem brewing.
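To turn that notAfter date into an alert, compute how many days remain before expiry. A sketch assuming GNU date (the -d flag; BSD/macOS date needs -j -f instead):

```shell
#!/bin/sh
# days_between FROM TO - whole days between two date strings (GNU date assumed).
days_between() {
  echo $(( ($(date -d "$2" +%s) - $(date -d "$1" +%s)) / 86400 ))
}

# In practice FROM is "now" and TO comes from the openssl output:
#   EXPIRY=$(echo | openssl s_client -connect api.example.com:443 2>/dev/null \
#     | openssl x509 -noout -enddate | cut -d= -f2)
#   DAYS=$(days_between "now" "$EXPIRY")
DAYS=$(days_between "Jun 1 12:00:00 2026 GMT" "Jun 15 12:00:00 2026 GMT")
if [ "$DAYS" -lt 14 ]; then
  echo "WARNING: certificate expires in $DAYS days"
else
  echo "OK: $DAYS days until expiry"
fi
```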
Monitoring Authenticated API Endpoints
Most production APIs require authentication. Your monitoring needs to authenticate too, or you'll just be monitoring your 401 response time.
API Key in Header
The most common pattern. The API key goes in an Authorization or custom header:
# Bearer token (most common)
curl -s -H "Authorization: Bearer sk_live_abc123" https://api.example.com/v1/account
# Custom header
curl -s -H "X-API-Key: abc123" https://api.example.com/v1/status
# Query parameter (less common, less secure)
curl -s "https://api.example.com/v1/status?api_key=abc123"
When setting up monitoring, most tools let you add custom headers. In Notifier, you can add custom HTTP headers when creating a monitor, so you can include your API key or bearer token.
Basic Authentication
curl -s -u username:password https://api.example.com/v1/admin/health
Basic auth sends credentials as a Base64-encoded header. Most monitoring tools support this directly with username/password fields.
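Under the hood, -u just builds a header. You can construct the same header yourself, which helps when a monitoring tool only offers a custom-headers field rather than username/password inputs:

```shell
#!/bin/sh
# Build the header that `curl -u username:password` sends.
# printf '%s' avoids a trailing newline sneaking into the encoded value.
CRED=$(printf '%s' 'username:password' | base64)
echo "Authorization: Basic $CRED"

# Equivalent to: curl -s -u username:password https://api.example.com/v1/admin/health
# curl -s -H "Authorization: Basic $CRED" https://api.example.com/v1/admin/health
```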
OAuth Tokens
OAuth adds complexity because tokens expire. You have two options:
- Long-lived tokens: Some APIs issue tokens that last months or years. Treat these like API keys. Just put them in the Authorization header.
- Short-lived tokens: If tokens expire every hour, you need a monitoring tool that supports token refresh, or you can monitor an unauthenticated health check endpoint instead.
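For short-lived tokens, a common DIY pattern is to fetch a fresh token at the start of each check. A sketch of the refresh step, where the token endpoint URL and field names are assumptions about your provider, and the sample response stands in for the real curl call:

```shell
#!/bin/sh
# Refresh-then-check sketch for an OAuth client-credentials flow.
# In practice the response would come from your provider's token endpoint, e.g.:
#   TOKEN_RESPONSE=$(curl -s -X POST https://auth.example.com/oauth/token \
#     -d "grant_type=client_credentials&client_id=$ID&client_secret=$SECRET")
TOKEN_RESPONSE='{"access_token":"abc123","token_type":"Bearer","expires_in":3600}'

# Pull the token out without extra tooling, using sed.
TOKEN=$(printf '%s' "$TOKEN_RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "Authorization: Bearer $TOKEN"

# Then run the authenticated check:
# curl -s -H "Authorization: Bearer $TOKEN" https://api.example.com/v1/account
```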
Practical tip:
Create a dedicated API key or service account for monitoring. Don't use a personal token that expires when someone changes their password. Give it read-only access to the minimum endpoints you need to check. Label it clearly (e.g., "uptime-monitor-readonly") so nobody accidentally revokes it.
Response Body Validation: Beyond Status Codes
Status codes lie. Here are real scenarios where an API returns 200 but is effectively broken:
- Empty results: {"users": [], "total": 0} when there should be thousands of users. The database connection failed silently and the API returned an empty result set instead of an error.
- Cached error: A CDN cached a 200 response that contains {"error": "service unavailable"} from a previous outage. The status code is 200 because the CDN served it successfully.
- Degraded response: The API returns partial data. Product listings are missing prices because the pricing service is down, but the main API still returns 200.
- Wrong version: A deployment went wrong and the API is serving v1 responses on a v2 endpoint. Status code 200, but the response schema is completely wrong.
Keyword Monitoring
The simplest form of response validation. Check that the response contains (or doesn't contain) a specific string:
# Alert if "status":"ok" is NOT in the response
curl -s https://api.example.com/v1/health | grep -q '"status":"ok"'
# Alert if "error" IS in the response
curl -s https://api.example.com/v1/health | grep -q '"error"' && echo "ERROR FOUND"
Most monitoring tools support this. In Notifier, you can set a keyword that must appear in the response body. If the keyword is missing, you get alerted. This catches the "200 but broken" scenarios that status code monitoring misses entirely.
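Combining both directions of the keyword check (required string present, error string absent) catches more of the scenarios above. A sketch over a sample body; in practice BODY would come from curl:

```shell
#!/bin/sh
# healthy BODY - require "status":"ok" to be present AND "error" to be absent.
healthy() {
  printf '%s' "$1" | grep -q '"status":"ok"' &&
    ! printf '%s' "$1" | grep -q '"error"'
}

# In practice: BODY=$(curl -s https://api.example.com/v1/health)
BODY='{"status":"ok","uptime":12345}'
if healthy "$BODY"; then echo "HEALTHY"; else echo "UNHEALTHY"; fi
```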
Building a Health Check Endpoint
The best approach is to build a dedicated health endpoint that validates your API's dependencies internally:
# Python/Flask example (assumes app, db, and redis_client are configured elsewhere)
from flask import jsonify
import requests

@app.route('/health')
def health():
    checks = {}

    # Check database
    try:
        db.session.execute('SELECT 1')
        checks['database'] = 'ok'
    except Exception:
        checks['database'] = 'error'

    # Check Redis
    try:
        redis_client.ping()
        checks['cache'] = 'ok'
    except Exception:
        checks['cache'] = 'error'

    # Check external API dependency
    try:
        resp = requests.get('https://api.stripe.com/v1', timeout=5)
        checks['stripe'] = 'ok' if resp.status_code == 200 else 'error'
    except Exception:
        checks['stripe'] = 'error'

    status = 'ok' if all(v == 'ok' for v in checks.values()) else 'degraded'
    code = 200 if status == 'ok' else 503
    return jsonify({'status': status, 'checks': checks}), code
Now your monitoring tool just needs to check for "status":"ok" in the response. If any dependency fails, the health endpoint returns 503 and your monitor catches it.
Monitoring REST vs GraphQL Endpoints
REST Endpoints
REST APIs are straightforward to monitor. Each endpoint has its own URL, and you check them individually:
# Monitor individual REST endpoints
curl -s -o /dev/null -w "%{http_code}" https://api.example.com/v1/users
curl -s -o /dev/null -w "%{http_code}" https://api.example.com/v1/products
curl -s -o /dev/null -w "%{http_code}" https://api.example.com/v1/orders
For REST, create a separate monitor for each critical endpoint. A /users endpoint being down is a different problem than /payments being down, and you want separate alerts for each.
GraphQL Endpoints
GraphQL is trickier because everything goes through a single URL (typically /graphql). A simple GET check on that URL only tells you the server is running, not that your queries work.
# Monitor a GraphQL endpoint with a test query
curl -s -X POST https://api.example.com/graphql \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-token" \
-d '{"query": "{ __typename }"}'
The { __typename } introspection query is the lightest possible GraphQL health check. It verifies the GraphQL server is running and can parse queries. For deeper checks, send a small query that touches your actual data:
# Deeper check: verify data layer is working
curl -s -X POST https://api.example.com/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ users(first: 1) { id } }"}' | grep -q '"data"' && echo "OK" || echo "FAIL"
GraphQL gotcha:
GraphQL APIs often return HTTP 200 even when the query fails. Errors are inside the response body under an "errors" key. You must validate the response body, not just the status code. Check that "data" is present and "errors" is absent.
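That validation can be scripted the same way as keyword monitoring: treat the response as healthy only when "data" is present and "errors" is absent. A sketch over sample responses; in practice the body would come from the curl POST above:

```shell
#!/bin/sh
# graphql_ok BODY - healthy only if "data" is present and "errors" is absent.
graphql_ok() {
  printf '%s' "$1" | grep -q '"data"' &&
    ! printf '%s' "$1" | grep -q '"errors"'
}

# In practice: BODY=$(curl -s -X POST https://api.example.com/graphql ...)
GOOD='{"data":{"__typename":"Query"}}'
BAD='{"data":null,"errors":[{"message":"Cannot query field"}]}'
graphql_ok "$GOOD" && echo "OK"
graphql_ok "$BAD"  || echo "FAIL: errors present despite HTTP 200"
```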
Common API Failure Modes and What Catches Them
| Failure | Symptoms | Caught By |
|---|---|---|
| Server crash | Connection refused, timeout | Status code check |
| Database down | 500 error or empty results with 200 | Status code + body validation |
| Slow queries | Response time > 2s, eventual timeout | Response time monitoring |
| Rate limiting | 429 status code | Status code check |
| Auth token expired | 401 status code | Status code check |
| SSL certificate expired | Connection error before HTTP | SSL monitoring |
| CDN cached error | 200 with error in body | Body validation only |
| Partial data | 200 with incomplete response | Body validation only |
| Wrong API version deployed | 200 with unexpected schema | Body validation only |
Notice the pattern: status code checks catch obvious failures, but body validation catches the subtle ones. The subtle ones are usually worse because they go undetected longer.
Setting Up Automated API Monitoring
Manual curl checks are useful for debugging, but you need automated monitoring to catch problems before your users do. Here's how to set it up.
Option 1: Use a Monitoring Service (Recommended)
The fastest path. Create a monitor, set your URL, add any authentication headers, and configure alerts. The service checks your endpoint from multiple locations on a schedule.
With Notifier, the setup takes about 30 seconds:
1. Add your API endpoint URL (e.g., https://api.yourapp.com/v1/health)
2. Set custom HTTP headers for authentication if needed
3. Add a keyword to validate in the response body (e.g., "status":"ok")
4. Choose your check interval (down to 30 seconds on Team and Enterprise plans)
5. Configure alerts: email, SMS, phone call, or Slack
If the endpoint returns a non-200 status code, the keyword is missing from the response, or the request times out, you get an alert within minutes. For critical APIs (payment processing, authentication), SMS or phone call alerts ensure you don't miss a failure at 3 AM.
Option 2: DIY With a Script
If you want full control, you can build your own monitoring with a script and cron:
#!/bin/bash
# api-monitor.sh - Check API health and alert on failure
ENDPOINT="https://api.example.com/v1/health"
EXPECTED_KEYWORD='"status":"ok"'
TIMEOUT=10
SLACK_WEBHOOK="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
# Make the request
RESPONSE=$(curl -s --max-time $TIMEOUT -w "\n%{http_code}" "$ENDPOINT" \
-H "Authorization: Bearer $API_KEY")
# Split response body and status code
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')
# Check status code
if [ "$HTTP_CODE" != "200" ]; then
curl -s -X POST "$SLACK_WEBHOOK" \
-d "{\"text\": \"API ALERT: $ENDPOINT returned $HTTP_CODE\"}"
exit 1
fi
# Check response body
if ! echo "$BODY" | grep -q "$EXPECTED_KEYWORD"; then
curl -s -X POST "$SLACK_WEBHOOK" \
-d "{\"text\": \"API ALERT: $ENDPOINT returned 200 but keyword missing\"}"
exit 1
fi
echo "OK"
Then schedule it with cron:
*/5 * * * * /usr/local/bin/api-monitor.sh
The DIY approach works, but it has obvious limitations: it monitors from a single location, it requires you to maintain the script, and if the server running the monitor goes down, you lose monitoring entirely. A hosted service avoids all of these problems.
What Monitoring Interval Should You Use?
| Endpoint Type | Recommended Interval | Why |
|---|---|---|
| Payment/checkout API | 30 seconds | Every second of downtime is lost revenue |
| Authentication API | 1 minute | Users can't log in, high impact |
| Core data API | 1 minute | Application depends on it |
| Internal/admin API | 5 minutes | Lower traffic, less urgent |
| Third-party API dependency | 5 minutes | You can't fix it, just need to know |
Alert Configuration Tips
- Confirm before alerting: Most tools let you require 2 or 3 consecutive failures before sending an alert. This eliminates noise from transient network blips.
- Escalate by severity: Email for a slow response, SMS for a down endpoint, phone call for a payment API failure.
- Include context in alerts: A good alert tells you which endpoint failed, the status code, and the response time. A bad alert just says "something is down."
- Set up recovery alerts: Knowing when the API comes back up is just as important as knowing when it went down. It helps you calculate downtime duration and write incident reports.
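The confirm-before-alerting tip can be bolted onto the DIY script with a small state file that counts consecutive failures. A sketch, where the state-file path and the 3-failure threshold are arbitrary choices:

```shell
#!/bin/sh
# Count consecutive failures in a state file; alert only on the Nth in a row.
STATE_FILE=$(mktemp)   # in a real cron job, use a fixed path like /var/tmp/api-fails
THRESHOLD=3

record_failure() {
  count=$(cat "$STATE_FILE" 2>/dev/null)
  count=$((${count:-0} + 1))
  echo "$count" > "$STATE_FILE"
  if [ "$count" -ge "$THRESHOLD" ]; then
    echo "ALERT"
  else
    echo "failure $count/$THRESHOLD, waiting"
  fi
}

record_success() {
  echo 0 > "$STATE_FILE"   # any success resets the streak
}

# Simulate three failed checks in a row:
record_failure   # failure 1/3, waiting
record_failure   # failure 2/3, waiting
record_failure   # ALERT
```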
Frequently Asked Questions
How many API endpoints should I monitor?
Monitor every endpoint that, if broken, would impact users or revenue. For most applications, that's 3 to 10 endpoints: health check, authentication, core data retrieval, payment processing, and any third-party API dependencies your app relies on. You don't need to monitor every single route.
Should I monitor my API from the outside or the inside?
Both, ideally. External monitoring (from a service like Notifier) tells you what your users experience: DNS resolution, SSL handshake, network latency, and actual response. Internal monitoring (health check endpoints) tells you which specific component is broken. External monitoring catches problems internal monitoring can't see, like DNS failures or CDN issues.
What response time is too slow for an API?
It depends on the endpoint, but as general guidelines: under 200ms is good, 200ms to 500ms is acceptable, 500ms to 1 second is slow, and over 1 second is a problem. For payment APIs or real-time data, aim for under 300ms. Set your monitoring alert threshold at roughly 2x your normal response time so you catch degradation early without false alarms.
Will monitoring requests count against my API rate limit?
Yes. A monitor checking every minute makes 1,440 requests per day per endpoint. For most APIs, this is negligible. If you're on a tight rate limit, monitor less frequently (every 5 minutes = 288 requests/day) or use a lightweight health check endpoint that doesn't count against your rate limit. If you're monitoring a third-party API, check their rate limit documentation before setting up frequent checks.
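The request-count arithmetic is easy to sanity-check for any interval:

```shell
#!/bin/sh
# checks_per_day INTERVAL_SECONDS - monitoring requests per day per endpoint.
checks_per_day() {
  echo $(( 86400 / $1 ))
}

checks_per_day 60    # every minute   -> 1440
checks_per_day 300   # every 5 min    -> 288
checks_per_day 30    # every 30 sec   -> 2880
```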
How do I monitor a POST endpoint?
Some monitoring tools support custom HTTP methods and request bodies. You can send a POST request with a test payload and validate the response. For endpoints that create or modify data (like a /orders endpoint), don't send real data from your monitor. Instead, create a dedicated test endpoint that accepts POST requests, validates the request body, but doesn't actually create records. Or use a health check endpoint that accepts GET.
What's the difference between API monitoring and APM?
API monitoring checks endpoints from the outside: is it responding, is it fast, is the data correct? APM (Application Performance Monitoring) instruments your code from the inside: which function is slow, where are the database bottlenecks, what's the error rate per endpoint? API monitoring tells you something is wrong. APM tells you why. Most teams start with API monitoring and add APM as they scale.