
Availability

Edition: Community & Enterprise
Deployment Type: Self-Managed, Hybrid
Tyk AI Studio is a comprehensive platform that enables organizations to manage, govern, and deploy AI applications with enterprise-grade security, control, and observability. Before diving into installation and configuration, let’s understand what AI Studio offers and its core concepts.

Key Components & Philosophy

Tyk AI Studio is designed as a secure, observable, and extensible gateway for interacting with Large Language Models (LLMs) and other AI services. Key architectural pillars include:
  • AI Gateway: The central gateway managing all interactions between your applications and various LLM providers. It enforces policies, logs activity, and handles vendor abstraction. The gateway exists in two forms:
    • Embedded Gateway (in AI Studio): A lightweight “gateway-lite” for testing LLM configurations, powering the Chat interface, and proxying tool/datasource requests. No filters, no middleware, no plugins.
    • Edge Gateway (standalone binary): The full-featured data plane with the complete middleware pipeline — authentication, filters, plugins, analytics, budget enforcement, tool calling (REST + MCP), and datasource querying. Deployed at edge locations in a hub-and-spoke architecture.
  • Model Router (Enterprise): Intelligent request routing across multiple LLM vendors based on model name patterns, with support for load balancing, failover, and model name translation.
  • AI Portal: Empowers developers with a curated catalog of AI tools and services for faster innovation.
  • Chat: Provides a secure and interactive environment for users to engage with LLMs, leveraging integrated tools and data sources.
  • User Management & RBAC: Securely manages users, groups, and permissions. Access to resources like LLMs, Tools, and Data Sources is controlled via group memberships.
  • Policy Enforcement (Filters): Intercepts and modifies LLM requests/responses using custom scripts to enforce specific rules or data transformations.
  • Configuration over Code: Many aspects like LLM parameters, Filters, and Budgets are configured through the UI/API rather than requiring code changes.
  • Security First: Features like Secrets Management, SSO integration, and fine-grained access control are integral to the platform.
  • Observability: Includes systems for Analytics & Monitoring and Notifications to track usage, costs, and system events.
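To make the Model Router idea concrete, here is a minimal conceptual sketch of routing on model-name patterns with failover. The pattern syntax, route table shape, and health-check logic below are illustrative assumptions, not AI Studio's actual implementation:

```python
import fnmatch

# Hypothetical routing table: glob-style model-name patterns mapped to an
# ordered list of upstream targets (first entry preferred, rest are failover).
ROUTES = {
    "gpt-4*":    ["openai-primary", "azure-openai-backup"],
    "claude-3*": ["anthropic"],
    "*":         ["default-vendor"],  # catch-all route
}

def route(model_name: str, unhealthy: set = frozenset()) -> str:
    """Pick the first healthy target whose pattern matches the model name."""
    for pattern, targets in ROUTES.items():
        if fnmatch.fnmatch(model_name, pattern):
            for target in targets:
                if target not in unhealthy:
                    return target
    raise LookupError(f"no route for {model_name!r}")
```

With this table, `route("gpt-4o")` returns `"openai-primary"`; if that target is marked unhealthy, the same request fails over to `"azure-openai-backup"` without the caller changing anything.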

Core Entities

Understanding these entities is crucial:

User

Represents an individual interacting with Tyk AI Studio, managed within the User Management system.

Group

A collection of users; Groups are the primary mechanism for assigning access rights to resources via RBAC.

App

An App serves as the primary interface or bridge between an end-user and a Large Language Model (LLM). It acts as a managed API endpoint that wraps around an LLM provider (like OpenAI, Anthropic, or Mistral) to provide governance, security, and tracking. It encapsulates the LLMs, tools, and data sources needed for specific AI use cases and provides RESTful access via credentials.

API Key

Credentials generated by Users to allow applications or scripts programmatic access to Tyk AI Studio APIs (like the Proxy), inheriting the User’s permissions. See also User Management.
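As a sketch of programmatic access, the snippet below assembles a proxy request authenticated with an API key. The base URL, path, and header name are hypothetical assumptions; check your deployment (and your App's details) for the real endpoint. The point being illustrated is only that the key travels as a bearer credential and inherits the generating User's permissions:

```python
import json
import urllib.request

# Hypothetical base URL -- depends entirely on your AI Studio deployment.
BASE_URL = "https://ai-studio.example.com"

def build_chat_request(api_key: str, app_slug: str, prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) a proxy request authenticated with an API key."""
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url=f"{BASE_URL}/proxy/{app_slug}/chat/completions",  # assumed path
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # the User-scoped API key
            "Content-Type": "application/json",
        },
        method="POST",
    )
```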

LLM Configuration

Represents a specific LLM provider and model setup (e.g., OpenAI GPT-4, Anthropic Claude 3), including parameters and potentially associated pricing and budgets.

Tool

A definition of an external API (via an OpenAPI spec) that LLMs can invoke during chat sessions to perform actions or retrieve external data. See also Tools.

Data Source

A connection to a vector database or other data repository used for Retrieval-Augmented Generation (RAG) within chat sessions. See also Data Sources.

Catalogue

A collection that groups related Tools or Data Sources for easier management and for assignment to Groups for access control.

Secret

Securely stored credentials (API keys, tokens) referenced indirectly (e.g., $SECRET/MY_KEY) in configurations like LLMs, Tools, or Data Sources. See also Secrets Management.
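For illustration, an LLM configuration might reference a stored secret instead of embedding the raw key. The field names below are hypothetical, not AI Studio's actual schema; only the `$SECRET/...` reference style comes from the description above:

```json
{
  "name": "openai-gpt4",
  "vendor": "openai",
  "model": "gpt-4",
  "api_key": "$SECRET/OPENAI_API_KEY"
}
```

Because the configuration stores only the reference, rotating the underlying credential in Secrets Management does not require touching the LLM, Tool, or Data Source configurations that use it.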

Filter

Custom logic (using Tengo scripts) associated with specific execution points (e.g., pre/post LLM request) to intercept and modify requests/responses. See also Filters.

For a detailed view of how these components fit together, including the hub-and-spoke architecture and proxy modes, see the Architecture Overview.
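As an illustration of the Filter idea (not AI Studio's actual filter API), a Tengo pre-request script might redact email addresses before a prompt reaches the LLM. The `request` object and its `prompt` field are hypothetical hook names invented for this sketch:

```tengo
text := import("text")

// Hypothetical hook: assume the platform exposes the outgoing prompt
// as `request.prompt` and expects the filter to write it back.
request.prompt = text.re_replace(
    `[\w.+-]+@[\w-]+\.[\w.]+`,  // naive email pattern
    request.prompt,
    "[REDACTED]")
```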

Getting Started

Now that you understand the core concepts, you’re ready to begin your AI Studio journey:
  1. Choose your installation method: Docker/Packages (recommended) or Kubernetes
  2. Complete first-time setup: Register your admin user and configure your first LLM
  3. Explore the platform: Start with the chat interface and gradually explore advanced features

Ready to start? Head to the Installation Guide to get AI Studio up and running in minutes.