Wabee Agent Core API (v1)
Wabee Agent Core API enables developers to interact with an AI agent in a custom, secure, and flexible manner through REST API calls.
You can build applications that call the agent API to complete a task using any of the Agents endpoints, which support both text streaming and direct JSON responses. The Memory endpoints are useful for managing the agent's memory programmatically. The Metrics endpoints provide an interface for monitoring the underlying agent in terms of latency, health, token consumption, and more.
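As a quick start, the Python sketch below builds a requests session that sends the x-wabee-access header used by the curl samples in this reference. The BASE_URL placeholder and the WABEE_API_KEY environment variable name are assumptions for illustration only, not part of the API.

```python
import os
import requests

# Replace <your_agent_uri> with your own deployment; the path prefix comes from
# the Production server entries below. WABEE_API_KEY is an assumed variable name
# used only by these examples.
BASE_URL = "https://<your_agent_uri>.wabee.ai"


def make_session() -> requests.Session:
    """Return a requests session that sends the x-wabee-access header on every call."""
    session = requests.Session()
    session.headers["x-wabee-access"] = os.environ["WABEE_API_KEY"]
    return session
```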
Request: GET /core/v1/metrics
Returns quantitative information about agent runs for monitoring. The following metrics are provided:
- status_value_counts (histogram)
- total_requests
- total_errors
- total_tokens
- mean_tokens
- mean_latency
- max_latency
- min_latency
- total_prompt_tokens
- mean_prompt_tokens
- total_completion_tokens
- mean_completion_tokens
- Mock server: https://api.docs.wabee.ai/_mock/openapi/core/v1/metrics
- Production server: https://<your_agent_uri>.wabee.ai/core/v1/metrics
curl -i -X GET \
  https://api.docs.wabee.ai/_mock/openapi/core/v1/metrics \
  -H 'x-wabee-access: YOUR_API_KEY_HERE'

Response sample:

{ "body": {} }
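For illustration, the Python sketch below calls the metrics endpoint using the make_session helper from the quick start above and prints a few of the listed metrics; the exact response envelope is not spelled out in this reference, so fields are read defensively with .get().

```python
# Fetch agent run metrics (sketch; assumes make_session() and BASE_URL from the
# quick start above).
session = make_session()
resp = session.get(f"{BASE_URL}/core/v1/metrics")
resp.raise_for_status()
metrics = resp.json()

# Print a few of the metrics listed above; .get() is used because the exact
# response shape may differ between deployments.
for field in ("total_requests", "total_errors", "total_tokens", "mean_latency"):
    print(field, metrics.get(field))
```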
Request: GET /core/v1/logs
Returns run execution logs as a stringified JSON array. Each log entry contains the following fields:
- name: run name
- id: run unique identifier
- model: model name
- error: run error if exists, else null
- status: run status
- latency: run latency
- end_time: timestamp at which the run ended
- completion_tokens: number of tokens returned by the model when the run is finished
- total_tokens: total number of tokens associated with the run
The results are paginated. Use offset and limit parameters to navigate through pages.
- Mock server: https://api.docs.wabee.ai/_mock/openapi/core/v1/logs
- Production server: https://<your_agent_uri>.wabee.ai/core/v1/logs
curl -i -X GET \
  'https://api.docs.wabee.ai/_mock/openapi/core/v1/logs?run_id=string&limit=10&start_timestamp=0&end_timestamp=0' \
  -H 'x-wabee-access: YOUR_API_KEY_HERE'

Response sample:

{
  "data": "[{\"name\":\"Wabee LLM Advanced\",\"id\":\"70ea273b-c2ba-4c24-9a8b-aae2cda7c95e\",\"model\":\"llm-model-advanced\",\"error\":null,\"status\":\"success\",\"latency\":5.5,\"end_time\":1712076831672,\"completion_tokens\":150,\"total_tokens\":300}]",
  "total": 100,
  "has_more": true
}
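A hedged Python sketch of pagination, reusing make_session and BASE_URL from the quick start: it walks pages with the offset and limit parameters and decodes the stringified data field shown in the sample above. The extra query parameters from the curl sample (run_id, start_timestamp, end_timestamp) can be passed through params in the same way.

```python
import json

# Page through run execution logs (sketch; assumes make_session() and BASE_URL
# from the quick start above).
session = make_session()
offset, limit = 0, 10

while True:
    resp = session.get(f"{BASE_URL}/core/v1/logs",
                       params={"offset": offset, "limit": limit})
    resp.raise_for_status()
    page = resp.json()

    # "data" is a stringified JSON array, so decode it before use.
    for run in json.loads(page["data"]):
        print(run["id"], run["status"], run["latency"])

    if not page.get("has_more"):
        break
    offset += limit
```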
Request: GET /health

Health check endpoint for the agent service.

- Mock server: https://api.docs.wabee.ai/_mock/openapi/health
- Production server: https://<your_agent_uri>.wabee.ai/health
curl -i -X GET \
  https://api.docs.wabee.ai/_mock/openapi/health

Response sample:

{}
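A minimal Python liveness check against the health endpoint, assuming the BASE_URL from the quick start; the curl sample above sends no x-wabee-access header, so none is added here.

```python
import requests

# Liveness check (sketch; BASE_URL as defined in the quick start above).
# No authentication header is sent, matching the curl sample.
resp = requests.get(f"{BASE_URL}/health", timeout=5)
print("healthy" if resp.ok else f"unhealthy: HTTP {resp.status_code}")
```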