Using catch_response in Locust for Custom Response Validation
The catch_response=True parameter in Locust allows you to evaluate the response manually and decide whether the request should be marked as a success or a failure. This is particularly useful when your test criteria go beyond simple status codes, for example when you need to ensure that specific content or conditions are present in the response.
Example Code: Validating a Response Using catch_response
Here's a simple example that demonstrates the usage of catch_response for validating a response. In this scenario, we make a GET request to fetch a specific "todo" item from a REST API and verify that the item is owned by "Christian Adams."
Breakdown of the Code
Task Definition:
- The @task decorator marks get_todos as a task to be executed by the simulated users.
- The task sends a GET request to /todos/104.
Using catch_response:
- The with self.client.get(..., catch_response=True) block allows you to catch and evaluate the response manually.
- The response is checked for:
  - A status code of 200.
  - The presence of "Christian Adams" in the name field of the JSON response.
Marking Success or Failure:
- If the response meets the criteria, resp1.success() is called.
- If the criteria are not met, resp1.failure() is invoked with a descriptive message.
Why Use catch_response?
- Custom Validation: Go beyond basic status code validation.
- Detailed Metrics: Locust records the success and failure of each request, helping you identify bottlenecks and failed scenarios.
- Dynamic Logic: Easily handle complex test cases, such as validating multi-step workflows or content checks.
Sample Output
When running this Locust test, you'll see metrics categorized as "test todo" in the Locust web interface. Success and failure counts will reflect the validation logic you've implemented.
When to Cache Responses?
In some scenarios, responses might need to be reused, such as retrieving a token or reusing a user ID for subsequent requests. However, in this particular example, the task fetches data on demand without the need for caching.
Enhancements and Next Steps
- Parameterized Testing: Dynamically test multiple todo IDs by generating random or sequential IDs in the task.
- Error Handling: Add handling for scenarios like server timeouts or malformed JSON.
- Post-Test Analysis: Use Locust logs and metrics to pinpoint failures and improve your API performance.