# Using catch_response in Locust for Custom Response Validation
The `catch_response=True` parameter in Locust lets you evaluate a response manually and decide whether the request should be marked as successful or failed. This is particularly useful when your test criteria go beyond simple status codes, such as when you need to verify that specific content or conditions are present in the response.

Here is a simple example that demonstrates `catch_response` for validating a response. In this scenario, we make a GET request to fetch a specific "todo" item from a REST API and verify that the item is owned by "Christian Adams."
```python
from locust import HttpUser, constant, task


class MyReqRes(HttpUser):
    wait_time = constant(1)
    host = "http://localhost:8001"

    @task
    def get_todos(self):
        # Send a GET request and catch the response for custom validation
        with self.client.get("/todos/104", name="test todo", catch_response=True) as resp1:
            # Check whether the response JSON contains the expected owner name
            if resp1.status_code == 200 and "Christian Adams" in resp1.json().get("name", ""):
                resp1.success()  # Mark the response as successful
            else:
                resp1.failure("Expected owner 'Christian Adams' not found.")  # Mark it as a failure
```
- **Task definition:** The `@task` decorator marks `get_todos` as a task to be executed by the simulated users. The task sends a GET request to `/todos/104`.
- **Using `catch_response`:** The `with self.client.get(..., catch_response=True)` block lets you catch and evaluate the response manually. The response is checked for:
  - A status code of 200.
  - The presence of "Christian Adams" in the `name` field of the JSON response.
- **Marking success or failure:** If the response meets the criteria, `resp1.success()` is called; otherwise, `resp1.failure()` is invoked with a descriptive message.
## Why Use `catch_response`?

- **Custom validation:** Go beyond basic status code validation.
- **Detailed metrics:** Locust records the success or failure of each request, helping you identify bottlenecks and failed scenarios.
- **Dynamic logic:** Easily handle complex test cases, such as validating multi-step workflows or content checks.
When running this Locust test, you'll see metrics grouped under the name `test todo` in the Locust web interface. The success and failure counts will reflect the validation logic you've implemented.
In some scenarios, responses might need to be reused, such as retrieving a token or reusing a user ID for subsequent requests. However, in this particular example, the task fetches data on demand without the need for caching.
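If you do need to reuse a value from one response in later requests, a common pattern is to fetch it once in `on_start` (which Locust runs once per simulated user) and store it on the user instance. The sketch below illustrates this; the `/login` endpoint, its credentials, and the `token` field are assumptions for the example, not part of the API tested above:

```python
from locust import HttpUser, constant, task


class TokenUser(HttpUser):
    wait_time = constant(1)
    host = "http://localhost:8001"

    def on_start(self):
        # Runs once per simulated user: fetch and cache a token for later requests
        # (the /login endpoint and "token" field are hypothetical)
        with self.client.post("/login", json={"user": "demo", "password": "demo"},
                              catch_response=True) as resp:
            if resp.status_code == 200 and "token" in resp.json():
                self.token = resp.json()["token"]
                resp.success()
            else:
                self.token = None
                resp.failure("Login did not return a token.")

    @task
    def get_todos(self):
        # Reuse the cached token on subsequent requests
        self.client.get("/todos/104", name="test todo",
                        headers={"Authorization": f"Bearer {self.token}"})
```

Because the token is stored per user instance, each simulated user logs in once and then reuses its own token for every task iteration.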
- **Parameterized testing:** Dynamically test multiple `todo` IDs by generating random or sequential IDs in the task.
- **Error handling:** Add handling for scenarios like server timeouts or malformed JSON.
- **Post-test analysis:** Use Locust logs and metrics to pinpoint failures and improve your API performance.
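The first two extensions can be combined by factoring the validation into a plain function that tolerates malformed JSON. The helpers below are our own sketch (not part of Locust's API) and can be unit-tested without running a load test:

```python
import json
import random


def check_todo_response(status_code, body, expected_owner="Christian Adams"):
    """Return (ok, message) for a todo response, tolerating malformed JSON."""
    if status_code != 200:
        return False, f"Unexpected status code: {status_code}"
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return False, "Response body was not valid JSON."
    if expected_owner in data.get("name", ""):
        return True, "OK"
    return False, f"Expected owner '{expected_owner}' not found."


def random_todo_path(low=100, high=110):
    """Pick a random todo ID so each task iteration hits a different item."""
    return f"/todos/{random.randint(low, high)}"
```

Inside the task you would call `ok, msg = check_todo_response(resp1.status_code, resp1.text)` and then invoke `resp1.success()` or `resp1.failure(msg)` accordingly, and pass `random_todo_path()` to `self.client.get` while keeping a fixed `name=` so the metrics stay grouped under one entry.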