Wabee Agent Core API (v1)

The Wabee Agent Core API enables developers to interact with an AI agent in a custom, secure, and flexible manner through REST API calls.

Applications can use the Agent endpoints to have the agent complete a task, with both text streaming and direct JSON responses supported. The Memory endpoints are useful for managing agent memory programmatically, and the Observability endpoints provide an interface for monitoring the underlying agent in terms of latency, health, token consumption, and more.

License: Wabee License
Servers
Mock server: https://api.docs.wabee.ai/_mock/openapi
Production server: https://<your_agent_uri>.wabee.ai

Agent

Endpoints for interacting with the AI agent

Memory

Endpoints for managing agent memory

Observability

Endpoints for monitoring agent performance and logs

GET /core/v1/metrics

Request

Returns quantitative information about agent runs for monitoring. The following metrics are provided:

  • status_value_counts (histogram)
  • total_requests
  • total_errors
  • total_tokens
  • mean_tokens
  • mean_latency
  • max_latency
  • min_latency
  • total_prompt_tokens
  • mean_prompt_tokens
  • total_completion_tokens
  • mean_completion_tokens
Security: APIKeyHeader
curl -i -X GET \
  https://api.docs.wabee.ai/_mock/openapi/core/v1/metrics \
  -H 'x-wabee-access: YOUR_API_KEY_HERE'

Responses

Successful Response

Body: application/json

body (object, required): the metrics response body, with additional properties of any type.

Response example (application/json):
{ "body": {} }

GET /core/v1/logs

Request

Returns run execution logs in stringified format. The logs contain the following fields:

  • name: run name
  • id: run unique identifier
  • model: model name
  • error: the run error if one occurred, otherwise null
  • status: run status
  • latency: run latency
  • end_time: timestamp at which the run terminated
  • completion_tokens: number of tokens returned by the model when the run is finished
  • total_tokens: total number of tokens associated with the run

The results are paginated. Use the offset and limit parameters to navigate through pages (see the pagination sketch after the example response below).

Security: APIKeyHeader
Query

run_id: string or null (Run Id)
limit: integer (Limit), 1 to 20, default 10. Number of items to return per page.
start_timestamp: integer (Start Timestamp), >= 0, default 0. The epoch timestamp to start the search.
end_timestamp: integer (End Timestamp), >= 0, default 0. The epoch timestamp to end the search.
curl -i -X GET \
  'https://api.docs.wabee.ai/_mock/openapi/core/v1/logs?run_id=string&limit=10&start_timestamp=0&end_timestamp=0' \
  -H 'x-wabee-access: YOUR_API_KEY_HERE'

Responses

Successful Response

Body: application/json (any)

Response example (application/json):
{ "data": "[{\"name\":\"Wabee LLM Advanced\",\"id\":\"70ea273b-c2ba-4c24-9a8b-aae2cda7c95e\",\"model\":\"llm-model-advanced\",\"error\":null,\"status\":\"success\",\"latency\":5.5,\"end_time\":1712076831672,\"completion_tokens\":150,\"total_tokens\":300}]", "total": 100, "has_more": true }

GET /health

Request

Returns the health status of the API.

curl -i -X GET \
  https://api.docs.wabee.ai/_mock/openapi/health

Responses

API is healthy

Body: application/json

An object with additional properties of any type.

Response example (application/json):
{}
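
For basic monitoring, the exit code of curl can serve as a liveness signal; a minimal sketch against the mock server:

# Liveness probe sketch: -f makes curl exit non-zero on HTTP error
# responses, so the first message only prints when the API reports healthy.
curl -sf https://api.docs.wabee.ai/_mock/openapi/health > /dev/null \
  && echo "agent healthy" \
  || echo "agent unreachable or unhealthy"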

Sessions

Endpoints for managing agent sessions

Sub-Agents

Endpoints for managing sub-agents
