Increased Error Rates for Inference Endpoints

Incident Report for Vellum

Resolved

This incident was resolved a few minutes after it was first reported. The root cause was a fault in our queueing service.
Posted Apr 02, 2025 - 18:24 UTC

Investigating

We're seeing elevated error rates on our inference endpoints, resulting in unexpected 500 responses.

We're actively investigating this issue and will provide updates here.
Posted Apr 02, 2025 - 18:18 UTC
This incident affected: Web Application & APIs, Predict Endpoint, and Documents Endpoint.