Data Policy

Inferences are stored in a database hosted on AWS in the EU region.

For custom data retention policies, or to request deletion of your data, please contact hello@athina.ai.


Can I self-host my data?

We're working on it.

If you would like to inquire about a self-hosted deployment, please contact founders@athina.ai.
