Overview
TimelinesAI APIs enforce rate limits to ensure fair usage and platform stability for all users. Requests that exceed the rate limit will receive an HTTP 429 Too Many Requests response.
Rate Limit Details
| Parameter | Value |
|---|---|
| Rate limit | 30 requests per second |
| Per | IP address |
| Burst allowance | 10 additional requests |
| Exceeded response | HTTP 429 Too Many Requests |
The rate limit applies per IP address, not per API token. If you have multiple integrations running from the same IP, they share the same rate limit budget.
How It Works
Incoming requests are evaluated against a limit of 30 requests per second per IP address. When you exceed this rate:
- Up to 10 additional requests are queued (burst allowance) and processed once capacity becomes available.
- Any requests beyond the burst allowance are immediately rejected with an HTTP 429 status code.
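The exact server-side implementation isn't published, but the behavior above can be approximated client-side with a token-bucket model. The sketch below is illustrative only (the real limiter queues burst requests rather than rejecting them, and the class name and parameters are assumptions):

```python
import time

class TokenBucket:
    """Rough client-side model: 30 tokens/s refill, 30 + 10 burst capacity."""

    def __init__(self, rate=30, burst=10):
        self.rate = rate
        self.capacity = rate + burst
        self.tokens = float(self.capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request would be accepted
        return False      # request would be rejected (HTTP 429)
```

Using a model like this in your client lets you predict rejections before they happen instead of reacting to 429 responses.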
Handling Rate Limits
When you receive a 429 response, implement an exponential backoff strategy to retry requests:
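For example, a minimal retry helper in Python (the `send_request` callable, retry count, and delays are illustrative; any HTTP client that exposes a status code works):

```python
import time

def with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry send_request with exponential backoff on HTTP 429.

    send_request: a zero-argument callable returning a response object
    with a `status_code` attribute (e.g., from the `requests` library).
    """
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Wait 1s, 2s, 4s, 8s, ... before the next attempt.
        time.sleep(base_delay * (2 ** attempt))
    return response  # give up after max_retries attempts
```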
Best Practices
Spread requests over time
Instead of sending bulk requests all at once, distribute them evenly. For example, if you need to send 100 messages, space them out at ~30 per second rather than sending all 100 simultaneously.
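A simple way to pace requests is to sleep for a fixed interval between sends. A sketch, where `send` stands in for your actual API call:

```python
import time

def send_paced(messages, send, rate=30):
    """Send messages one at a time, at no more than `rate` per second.

    `send` is a placeholder for the actual API call
    (e.g., a function wrapping `requests.post(...)`).
    """
    interval = 1.0 / rate  # ~0.033s between requests at 30 req/s
    for message in messages:
        send(message)
        time.sleep(interval)
```

With `rate=30`, 100 messages take a little over three seconds instead of arriving all at once and tripping the limit.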
Implement exponential backoff
When you receive a 429 response, wait progressively longer between retries, e.g. 1 second, then 2, then 4. This prevents a “thundering herd” effect when rate limits lift.
Use queuing for bulk operations
For batch processing (e.g., sending messages to many contacts), implement a client-side queue that respects the 30 req/s limit. Process items from the queue at a controlled rate.
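One way to sketch such a queue in Python, using the standard-library `queue` module (the function name and `send` callable are illustrative):

```python
import queue
import time

def drain_queue(work_queue, send, rate=30):
    """Process queued items at no more than `rate` requests per second.

    `send` is a placeholder for the actual API call for one item.
    """
    interval = 1.0 / rate
    while True:
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            break  # queue drained
        send(item)
        time.sleep(interval)  # stay under the rate limit
```

In a real integration you would typically run this in a background worker while other code enqueues items.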
Monitor your usage
Track your request rates in your application logs. If you’re consistently hitting rate limits, consider optimizing your integration to make fewer, more targeted API calls.
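A lightweight way to track your effective rate is a sliding-window counter that you log alongside each request. A sketch (the class name and window size are assumptions):

```python
import collections
import time

class RateMonitor:
    """Count requests within a sliding time window (default: 1 second)."""

    def __init__(self, window=1.0):
        self.window = window
        self.timestamps = collections.deque()

    def record(self):
        """Record one request; return the count within the current window."""
        now = time.monotonic()
        self.timestamps.append(now)
        # Drop timestamps that have fallen out of the window.
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps)
```

Logging the value returned by `record()` makes it easy to spot when you approach the 30 req/s limit.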

