Generates a one-paragraph AI summary of the note thread for an entity. This is a synchronous LLM call; expect 2 to 6 seconds of latency. The response is not cached server-side, so cache results client-side if you display them frequently.
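Because the call is synchronous and the server does not cache responses, a small client-side TTL cache avoids paying the 2–6 second latency on every render. The sketch below is illustrative only: `TTLCache`, `get_summary`, and the injected `fetch` callable are hypothetical helpers, not part of any Cedar.AI SDK, and `fetch` stands in for whatever HTTP call your client makes to the summary endpoint.

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry (hypothetical helper)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # entity_id -> (summary, timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]
        return None  # missing or expired

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

def get_summary(entity_id, cache, fetch):
    """Return a fresh cached summary if available; otherwise call the
    slow summary endpoint via `fetch` and cache the result."""
    cached = cache.get(entity_id)
    if cached is not None:
        return cached
    summary = fetch(entity_id)  # the 2-6 s synchronous LLM call
    cache.put(entity_id, summary)
    return summary
```

A TTL in the range of a few minutes is a reasonable starting point: long enough to absorb repeated displays of the same entity, short enough that new notes show up in the summary without a manual refresh.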
Billing: This endpoint invokes a Cedar.AI language model and is billed under AI pricing. The other notes endpoints (/v1/notes/list, /v1/notes/create, /v1/notes/search, etc.) are not AI-backed and follow standard API metering.
Documentation Index
Fetch the complete documentation index at: https://docs.cedarai.com/llms.txt
Use this file to discover all available pages before exploring further.
The carrier ID that the request is scoped to.
A successful response.