API Response Time, Explained in 1000 Words or Less

Jamie Juviler

Application programming interfaces (APIs) make today’s vast network of websites, web applications, and mobile apps possible. In basic terms, an API allows developers to expose their application’s data to the public so other applications can integrate with it.

When creating and monitoring an API integration, it’s always important to look at the efficiency of the integration: How quickly is your API fielding incoming requests and sending responses? How is speed affected when the volume of requests is higher?

For web applications, API performance has a compounding impact on the user experience. When using a web app, a client may make dozens of requests over the span of minutes. If an API is slow to respond, the client application lags as a result, frustrating customers and developers alike.

In this post, we’ll learn about one metric you can monitor to ensure your API is fast and efficient: API response time. We’ll define what response time means, as well as what range of response time your program should aim for. Let’s get started.


What is API response time?

API response time is a key metric for grading the speed of a web application. If an API is slow to respond to client requests, it in turn slows down every third-party application that relies on it, hurting the user experience. On the flip side, a fast API gains a positive reputation and is more likely to be adopted by clients.

API Latency vs. Response Time

When it comes to APIs, the terms “response time” and “latency” are often used interchangeably. However, while the two are related, they do not measure the same thing.

Response time refers to the total amount of time a web API takes to receive, process, and respond to a client’s request. Response time is largely affected by how long that “process” step takes. For example, a request might require the API to retrieve some files from a database. If the server takes a long time to complete that retrieval, this extra time adds to the total response time.
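To make this concrete, here's a minimal sketch of timing a single request from the client side in Python, using the third-party requests library. The endpoint URL is just a placeholder, and note that, measured this way, the total includes network travel time as well as server processing:

```python
# Minimal sketch: measure the total response time of one request from the
# client's point of view. The URL below is a placeholder, not a real endpoint.
import time

import requests

url = "https://api.example.com/items"  # placeholder endpoint

start = time.perf_counter()
response = requests.get(url, timeout=10)
response_time = time.perf_counter() - start  # receive + process + respond, in seconds

print(f"Status code:   {response.status_code}")
print(f"Response time: {response_time:.3f} s")
```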

Latency is different: it measures the time the request and the response take to travel over the network between the client and the server. Latency is affected by factors such as the number and function of proxy servers sitting between the client and the API server, as well as the physical distance separating them. A client will tend to experience more latency when requesting from an API server that’s 5,000 miles away than from one that’s 500 miles away.

Here’s a useful diagram from Scalable Developer that helps illustrate the difference between latency and response time:

[Diagram: the difference between API response time, latency, and processing time]

As you can see from the diagram, response time is approximately the sum of latency and server processing time.
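If your API server reports how long it spent processing each request, you can estimate that split yourself. The sketch below assumes a hypothetical X-Process-Time response header carrying the server's processing time in seconds; that header is an assumption made for illustration, not something every API provides:

```python
# Sketch: estimate latency as response time minus server processing time.
# Assumes the server sets a hypothetical "X-Process-Time" header (in seconds).
import time

import requests

url = "https://api.example.com/items"  # placeholder endpoint

start = time.perf_counter()
response = requests.get(url, timeout=10)
response_time = time.perf_counter() - start

processing_time = float(response.headers.get("X-Process-Time", 0.0))
latency_estimate = response_time - processing_time  # network travel in both directions

print(f"Response time:     {response_time:.3f} s")
print(f"Processing time:   {processing_time:.3f} s (server-reported)")
print(f"Estimated latency: {latency_estimate:.3f} s")
```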

What is a good API response time?

Before discussing API response time standards, it’s important to know that API response time can be measured in several ways. The two most common measures are average response time and peak response time.

Average response time is the average time the API takes to respond, calculated across a sample of requests. This is a good indicator of the API’s overall performance. For instance, if you take a sample of 10,000 requests to an API over a period of time, you can simply average the response times of every request in that sample.

Peak response time is the longest response time found in a sample of API requests. Peak response time is used to identify problems with the API, especially when it is much higher than the average response time. Lowering peak response time also improves average response time.
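As a rough illustration of both measures, the sketch below times a small sample of sequential requests and reports the average and the peak (the endpoint and sample size are placeholders):

```python
# Sketch: collect a sample of response times, then report the average and peak.
import statistics
import time

import requests

url = "https://api.example.com/items"  # placeholder endpoint
sample_size = 100

response_times = []
for _ in range(sample_size):
    start = time.perf_counter()
    requests.get(url, timeout=10)
    response_times.append(time.perf_counter() - start)

print(f"Average response time: {statistics.mean(response_times):.3f} s")
print(f"Peak response time:    {max(response_times):.3f} s")
```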

Generally, APIs that are considered high-performing have an average response time between 0.1 and 1 second. At this speed, end users are unlikely to experience any interruption. At around 1 to 2 seconds, users begin to notice some delay. At around 5 seconds, users will feel a significant delay and may abandon the application or website as a result.

Your API may not hit this benchmark for every single request, especially for particularly resource-intensive requests or requests made during periods of high traffic, both of which place more load on the server and slow its responses.

But, in your API testing, you should aim for your average response time to fall under one second. And be mindful of your peak response time and other outliers, as these will raise your average and make your API seem to perform worse than it actually does most of the time.
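One way to hold yourself to that target is to bake it into an automated test. Here's a rough sketch of how such a check might look with pytest and the requests library; the URL, sample size, and one-second threshold are illustrative assumptions:

```python
# Sketch of a performance check for an API test suite, assuming pytest and
# the requests library. The URL, sample size, and threshold are illustrative.
import statistics
import time

import requests

URL = "https://api.example.com/items"  # placeholder endpoint


def sample_response_times(url: str, n: int = 50) -> list[float]:
    """Time n sequential GET requests and return the durations in seconds."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        durations.append(time.perf_counter() - start)
    return durations


def test_average_response_time_stays_under_one_second():
    times = sample_response_times(URL)
    average = statistics.mean(times)
    assert average < 1.0, f"average response time was {average:.3f} s"
```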

How to Measure API Response Time

There are many API testing tools available for measuring API response time. Tools like Postman or Apache JMeter can run a variety of tests against your API and record performance indicators like average response time.

You may also be able to use a website monitoring service to measure response times. Note that you may get different measurements of response times across different tools, as tools may calculate this metric in slightly different ways.
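For example, Python's requests library can produce two different numbers for the same request: its elapsed attribute stops the clock once the response headers have been parsed, while a wall-clock timer around the whole call also includes downloading the body. Here's a quick sketch (placeholder URL):

```python
# Sketch: two "response times" for the same request. requests' elapsed
# attribute stops once the response headers are parsed; the wall-clock
# timing below also includes downloading the response body.
import time

import requests

url = "https://api.example.com/large-report"  # placeholder endpoint

start = time.perf_counter()
response = requests.get(url, timeout=10)
wall_clock = time.perf_counter() - start

print(f"Time to response headers: {response.elapsed.total_seconds():.3f} s")
print(f"Time including body:      {wall_clock:.3f} s")
```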

API Response Time, Explained

If you understand the basics of APIs, the concept of response time isn’t too difficult to wrap your head around. Still, it’s a key metric you should monitor when running your web application, and one to continually optimize so it stays in an acceptable range.
