# Components — Showcase (/fr/docs/components-showcase)

This page serves as a visual reference. Each component below is rendered with its minimal props to verify that it is correctly wired into the MDX pipeline and styled to the Lyncea theme. See also [Getting started](/fr/docs/getting-started) for the user journey.

## Banner [#banner]

`Banner` is natively `sticky top-0 z-40` and sets `--fd-banner-height` on `:root`; it is meant for the root layout, not inline in a page. For this showcase we disable `changeLayout` and override the positional classes via `className` (merged by `tailwind-merge`). This banner is persistent (dismissal is remembered by `id`).

## Callout [#callout]

A neutral informational callout. A warning callout. A blocking-error callout. A confirmation callout.

## Card / Cards [#card--cards]

A clickable card pointing to another docs page. A purely informational card, with no associated link. An external link to the official Fumadocs documentation.

## Tabs [#tabs]

```bash
curl -X POST https://api.example.com/v1/logs \
  -H "Authorization: Bearer lyn_live_xxx" \
  --data-binary @log.pb
```

```ts
await fetch('https://api.example.com/v1/logs', {
  method: 'POST',
  headers: { Authorization: 'Bearer lyn_live_xxx' },
  body: payload,
});
```

```python
import httpx

httpx.post(
    'https://api.example.com/v1/logs',
    headers={'Authorization': 'Bearer lyn_live_xxx'},
    content=payload,
)
```

## Accordion [#accordion]

100,000 logs per day on a free account; this can be raised from the tenant settings. 7 days in `dev`, 30 days in `staging`, 90 days in `prod` (default retention). Yes: a GDPR endpoint is available in the admin area (`platform_admin` role required).

## Steps [#steps]

### Create a project [#créer-un-projet]

From the dashboard: **New project**, then pick the environment.

### Generate an API key [#générer-une-clé-dapi]

**API keys** tab, scope `write:logs`.
The value is displayed only once.

### Send the first log [#envoyer-le-premier-log]

`POST /v1/logs` with the `Authorization: Bearer lyn_live_xxx` header.

## Files [#files]

## TypeTable [#typetable]

Manual: each prop is declared by hand.

## AutoTypeTable [#autotypetable]

Generates the table directly from a TypeScript declaration: the compiler extracts the type, the JSDoc (`@default`, `@param`, `@returns`) and the unions. Server Component required (compiled at build time).

## Code block (auto-wired CodeBlock) [#bloc-de-code-codeblock-auto-câblé]

```ts title="apps/api/src/iam/auth.service.ts"
async login(email: string, password: string): Promise<Tokens> {
  const user = await this.users.findByEmail(email);
  if (!user) throw new UnauthorizedError('invalid_credentials');
  const ok = await argon2.verify(user.passwordHash, password);
  if (!ok) throw new UnauthorizedError('invalid_credentials');
  return this.tokens.mint(user);
}
```

### CodeBlock: advanced features [#codeblock--features-avancées]

Line highlights (`// [!code highlight]`), word highlight (`// [!code word:xxx]`), diff (`// [!code ++]` / `// [!code --]`).

```ts title="next.config.ts"
import createMDX from 'fumadocs-mdx/config';

const withMDX = createMDX();

// [!code word:rewrites]
const config = {
  reactStrictMode: true, // [!code highlight]
  output: 'standalone', // [!code highlight]
  async rewrites() {
    return [
      {
        source: '/:lang(fr|en)/docs/:path*.mdx', // [!code ++]
        destination: '/llms.mdx/:lang/docs/:path*', // [!code ++]
        legacyKey: 'old', // [!code --]
      },
    ];
  },
};

export default withMDX(config);
```

## DynamicCodeBlock [#dynamiccodeblock]

A code block compiled client-side with Shiki: useful for dynamically generated content (user snippets, API results).
## InlineTOC [#inlinetoc]

Table of contents for the page.

## ImageZoom [#imagezoom]

Click the image to zoom (full-screen overlay).

## GraphView [#graphview]

An interactive force-directed graph of the documentation pages. One node per page of the current locale; one edge per internal link extracted from the MDX body (`extractLinkReferences: true` in `source.config.ts`). Hover to show the description, click to navigate.

## GithubInfo [#githubinfo]

Displays the public statistics of a GitHub repository (without a token: limited to 60 requests/hour per IP, enough for development).

## Headings (auto-styled) [#headings-auto-styled]

### H3: sub-section [#h3--sous-section]

#### H4: level 4 [#h4--niveau-4]

##### H5: level 5 [#h5--niveau-5]

###### H6: level 6 [#h6--niveau-6]

Raw markdown `h1` through `h6` automatically pass through Fumadocs' `Heading` component (auto anchor, scroll spy for the side TOC).

## Math (KaTeX) [#math-katex]

Inline with `$..$` delimiters: the daily quota resolution follows $Q_\text{day} = \min(Q_\text{tenant},\, \sum_i q_i)$.

Block with a ` ```math ` fence:

```math
\text{TTL}_\text{token} =
\begin{cases}
900\,\text{s} & \text{if type} = \texttt{access} \\
2{,}592{,}000\,\text{s} & \text{if type} = \texttt{refresh}
\end{cases}
```

## Mermaid [#mermaid]

The OTLP → Kafka → ClickHouse ingestion pipeline:

The state machine of a refresh token:

## Twoslash [#twoslash]

Hover `^?` to see the inferred type (Shiki + Twoslash compile at build time).

```ts twoslash
type Brand<T, B extends string> = T & { readonly __brand: B };
type TenantId = Brand<string, 'TenantId'>;

const asTenantId = (raw: string): TenantId => raw as TenantId;
const id = asTenantId('tenant_42');
//    ^?
```

Compiler error detection (`// @errors:` annotation):

```ts twoslash
// @errors: 2322
type Severity = 'info' | 'warn' | 'error';
const s: Severity = 'critical';
```

API completion (`// ^|`):

```ts twoslash
// @noErrors
const log = { severity: 'info', tenantId: 'foo' };
log.s;
//    ^|
```

# Project environments (/fr/docs/environments)

## How environments work [#comment-fonctionnent-les-environnements]

Each project has an **environment allowlist**: the set of `deployment.environment` values it will accept in incoming logs. The three available values are `dev`, `staging` and `prod`. All three are enabled by default; you can restrict the list when the project is created or change it later.

## Declaring the environment in your logs [#déclarer-lenvironnement-dans-vos-logs]

Lyncea reads the `deployment.environment` attribute in the **resource** section of each OTLP log record. You set it in your SDK or OTel Collector; it is never inferred from the API key.

```yaml
processors:
  resource:
    attributes:
      - key: deployment.environment
        value: staging # dev | staging | prod
        action: insert

service:
  pipelines:
    logs:
      processors: [resource]
      exporters: [otlphttp/lyncea]
```

Excerpt of the `Resource` to pass to your OTel setup; the full OTLP export (processor + exporter) is covered by the Pino example in [Getting started](/fr/docs/getting-started).

```js
import { NodeSDK } from '@opentelemetry/sdk-node';
import { Resource } from '@opentelemetry/resources';

const sdk = new NodeSDK({
  resource: new Resource({
    'deployment.environment': 'staging', // dev | staging | prod
  }),
  // …wire logRecordProcessor / OTLPLogExporter here to export to Lyncea.
});
```

```python
from opentelemetry.sdk.resources import Resource
from opentelemetry.semconv.resource import ResourceAttributes

resource = Resource.create({
    ResourceAttributes.DEPLOYMENT_ENVIRONMENT: "staging",  # dev | staging | prod
})
```

Lyncea automatically normalizes common aliases: `development` → `dev`, `production` → `prod`. Logs whose `deployment.environment` value **is not in the project's allowlist are silently dropped**; no error is returned to the emitter. The ingestion endpoint always responds `200 OK` for well-formed payloads.

## API keys and environments [#clés-dapi-et-environnements]

**An API key is scoped to a project, not to a specific environment.** A single key can send logs from every environment the project accepts; you do not need one key per environment.

### What the `live` / `test` prefix means [#ce-que-signifie-le-préfixe-live--test]

Keys have the form `lyn_live_xxx` or `lyn_test_xxx`. This prefix reflects the **Lyncea platform** on which the key was issued:

| Prefix         | Meaning                                          |
| -------------- | ------------------------------------------------ |
| `lyn_live_xxx` | Issued on a production Lyncea deployment         |
| `lyn_test_xxx` | Issued on a sandbox / non-production deployment  |

This prefix exists so that secret-detection tools (GitGuardian, TruffleHog, …) can identify exposed keys. It does **not** determine which `dev` / `staging` / `prod` environments the key is valid for.

## Restricting the allowlist [#restreindre-lallowlist]

You can restrict the allowlist at any time from the console or via the API. For example, a project dedicated to production monitoring can be limited to `prod` only; any log tagged `dev` or `staging` sent to that project will be dropped. The restriction takes effect immediately for new log batches. Logs already indexed are not affected.
API reference: [`POST /api/v1/projects`](/fr/docs/api/projects/ProjectsController_create) · [`PATCH /api/v1/projects/{projectId}`](/fr/docs/api/projects/ProjectsController_update)

# Getting started (/fr/docs/getting-started)

This guide covers the minimal path to get your first log into Lyncea. All administration steps are done from the console; the details of the HTTP contract (payload, error codes, retry, limits) live in the [ingestion reference](/fr/docs/ingestion), and each endpoint in the [API reference](/fr/docs/api).

Public sign-up is open: creating an account also provisions a new tenant that you own (`owner`). If you were invited to join an existing tenant, log in and open your pending invitations to accept them. **Invitation emails are not implemented yet** (coming soon); for now the invitation must be communicated out of band.

### Create an account (or log in) [#créer-un-compte-ou-se-connecter]

From the console: sign up or log in. Signing up creates both the user and an attached tenant, of which the registrant becomes `owner`. Once authenticated, the server issues a 15-minute **EdDSA/Ed25519** JWT together with a 30-day rotating refresh cookie. If you belong to several tenants, select the one to work on; the JWT is scoped to a single active tenant at a time.

API reference: [`POST /api/v1/auth/register`](/fr/docs/api/authentication/AuthController_register), [`POST /api/v1/auth/login`](/fr/docs/api/authentication/AuthController_login), [`POST /api/v1/auth/switch-tenant`](/fr/docs/api/authentication/AuthController_switchTenant).

### Create a project [#créer-un-projet]

From the console: open the project creation form. It pre-fills the `slug` (URL-safe, unique within the tenant) and the allowlist of accepted OTel environments (`dev`, `staging`, `prod` by default).
See [Project environments](/fr/docs/environments) to understand how this allowlist works and how to tag your logs with the right environment.

A **project** represents the scope to which a retention policy, an ingestion quota and a set of API keys apply. The usual granularity is **one project per operational scope** (typically per product or per major environment), with `service.name` distinguishing the services inside it. Multiplying projects only makes sense if you want separate retentions, quotas or keys.

API reference: [`POST /api/v1/projects`](/fr/docs/api/projects/ProjectsController_create).

### Generate an API key [#générer-une-clé-dapi]

From the console: open the project list, select yours, then open the "API keys" tab and click "Create key" with the `write:logs` scope. The plaintext secret is **displayed only once** at creation; Lyncea stores only a hash. Keep it in a secrets manager (Vault, AWS Secrets Manager, Doppler, etc.).

The returned secret has the form `lyn_live_xxx` (production Lyncea deployment) or `lyn_test_xxx` (sandbox). This prefix is aimed at secret-detection tools; it does not restrict which `dev` / `staging` / `prod` environments the key can send logs for. See [Project environments](/fr/docs/environments).

API reference: [`POST /api/v1/projects/{projectId}/api-keys`](/fr/docs/api/api-keys/ApiKeysController_create).

### Send a log via OTLP/HTTP [#envoyer-un-log-via-otlphttp]

Ingestion is purely programmatic: there is no upload form. You configure your **existing application logger** (Pino, Python `logging`, etc.) to export OTLP/HTTP to Lyncea, or you deploy an OTel Collector if your stack has no SDK bridge. Authentication is by API key (`Authorization: Bearer lyn_live_xxx` header), with a protobuf body (the SDK default) or JSON.
```bash
pnpm add pino pino-opentelemetry-transport
```

```js
import pino from 'pino';

export const logger = pino({
  level: 'info',
  transport: {
    target: 'pino-opentelemetry-transport',
    options: {
      logRecordProcessorOptions: {
        recordProcessorType: 'batch',
        exporterOptions: {
          protocol: 'http/json',
          url: 'https://api.lyncea.io/v1/logs',
          headers: { Authorization: `Bearer ${process.env.LYNCEA_API_KEY}` },
        },
      },
      resourceAttributes: {
        'service.name': 'checkout-api',
        'deployment.environment': 'prod',
      },
    },
  },
});

logger.info({ userId: 42 }, 'user logged in');
```

```bash
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```

```python
import logging

from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

provider = LoggerProvider(resource=Resource.create({
    "service.name": "checkout-api",
    "deployment.environment": "prod",
}))
provider.add_log_record_processor(
    BatchLogRecordProcessor(
        OTLPLogExporter(
            endpoint="https://api.lyncea.io/v1/logs",
            headers={"Authorization": "Bearer lyn_live_xxx"},
        )
    )
)

logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
logging.getLogger().setLevel(logging.INFO)

logging.info("user logged in")
```

Use this when your stack has no SDK bridge (Java/Logback, Go, Ruby, .NET…) or when you already run a Collector that ships logs/traces/metrics to other backends.
```yaml
exporters:
  otlphttp/lyncea:
    endpoint: https://api.lyncea.io
    headers:
      Authorization: Bearer ${env:LYNCEA_API_KEY}
    compression: gzip

service:
  pipelines:
    logs:
      exporters: [otlphttp/lyncea]
```

For a quick test from the terminal:

```bash
curl -X POST https://api.lyncea.io/v1/logs \
  -H "Authorization: Bearer lyn_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "resourceLogs": [{
      "resource": { "attributes": [
        { "key": "service.name", "value": { "stringValue": "checkout-api" } },
        { "key": "deployment.environment", "value": { "stringValue": "prod" } }
      ]},
      "scopeLogs": [{
        "scope": { "name": "lyncea-quickstart" },
        "logRecords": [{
          "timeUnixNano": "1717171717000000000",
          "severityNumber": 9,
          "severityText": "INFO",
          "body": { "stringValue": "user logged in" }
        }]
      }]
    }]
  }'
```

A `200 OK` response confirms the batch was taken over: Lyncea applies a **full-success** contract (no per-record rejection reported synchronously).

Key points:

* **The OTel Collector and SDK bridges** point at the root `https://api.lyncea.io` and append `/v1/logs` themselves. An OTLP exporter called directly (for example `OTLPLogExporter`) expects the full URL `https://api.lyncea.io/v1/logs`.
* **The search API is available**; see the "Read your logs" step below. A `GET /api/v1/logs` call also confirms, on the read side, that the integration works.

For the details of the HTTP contract (full OTLP/JSON payload, error codes, OTel severity, retry/backoff, limits), see the [ingestion reference](/fr/docs/ingestion).

API reference: [`POST /v1/logs`](/fr/docs/api/ingestion/IngestionController_ingest).

### Read your logs [#lire-vos-logs]

Once at least one log has landed in ClickHouse, you can query the retention window through the read API. The active tenant comes from your JWT; never pass a `tenantId` as a query parameter, the gateway will reject the request.
```bash
# The last 24 h, filtered by severity and project
curl -G "https://api.lyncea.io/api/v1/logs" \
  -H "Authorization: Bearer ${JWT}" \
  --data-urlencode "severities=ERROR,FATAL" \
  --data-urlencode "projectIds=${PROJECT_ID}" \
  --data-urlencode "limit=100"
```

```jsonc
// 200 OK
{
  "data": [
    { "id": "…", "timestamp": "…", "severityText": "ERROR", "body": "…", … }
  ],
  "nextCursor": "eyJ0aW1lc3RhbXAiOiIyMDI2LTA1LTEwVDIz…",
  "totalScanned": 1234
}
```

Pagination is **keyset**: pass the received `nextCursor` back as-is via `?cursor=…` on the next call. The cursor is opaque; do not decode it.

```bash
# A single log by id (404 if the id does not belong to your tenant)
curl "https://api.lyncea.io/api/v1/logs/${LOG_ID}" \
  -H "Authorization: Bearer ${JWT}"

# Live tail via Server-Sent Events
curl -N "https://api.lyncea.io/api/v1/logs/stream?projectIds=${PROJECT_ID}" \
  -H "Authorization: Bearer ${JWT}"
```

**Plan-aware throttling**: each tenant has a per-minute request budget and a ClickHouse `max_execution_time`. A `query_rate_limited` error (429) suggests slowing down; a `query_timeout` (504) suggests narrowing the search window.

API reference: [`GET /api/v1/logs`](/fr/docs/api/logs/LogsController_list), [`GET /api/v1/logs/{logId}`](/fr/docs/api/logs/LogsController_getOne), [`GET /api/v1/logs/stream`](/fr/docs/api/logs/LogsStreamController_stream), [`GET /api/v1/dashboard/summary`](/fr/docs/api/dashboard/DashboardController_summary).

## What next? [#et-ensuite-]

Full OTLP/JSON payload, `severityNumber` mapping, HTTP error codes, retry policy, limits: everything you need to write a robust client.

How Lyncea routes logs by environment, how to declare `deployment.environment` in your SDK, and what the `live` / `test` key prefix means.
Console: the Members page for tenant roles (`owner`, `admin`, `member`), then the "Members" tab of each project (from the project list) to refine project roles (`admin`, `editor`, `viewer`).

Full OpenAPI reference: auth, projects, keys, ingestion, audit.

# Overview (/fr/docs)

Lyncea is a log monitoring platform built for teams operating several services in production. Ingestion follows OpenTelemetry's **OTLP/HTTP** specification: you point your existing collectors or SDKs at Lyncea without rewriting your instrumentation.

Lyncea is under construction. Available today: OTLP ingestion, multi-tenant management (tenants, projects, members, API keys), RBAC, **log search**, **SSE live tail** and a **trailing-24 h summary dashboard**. Alerting and the tenant-side audit API are not implemented yet; see the *Coming soon* badges below.

## What you can do [#ce-que-vous-pouvez-faire]

Send your logs in OTLP format with a per-project API key, as protobuf or JSON, with or without gzip compression.

Token-exact search, filters by severity/environment/service, keyset pagination and live tail via Server-Sent Events. Plan-aware throttling on request rate and execution time.

One tenant per client organization, projects inside it, two levels of RBAC (tenant and project) and project-scoped API keys.

Every sensitive action (key creation, role change, BO access) is recorded in an audit log. ⚠️ The tenant-side **read** API is not exposed yet; coming soon.

## Key concepts [#concepts-clés]

Lyncea has **two levels of hierarchy**: a tenant represents your client organization (billing, ingestion quota, high-level permissions) and contains one or more projects, which receive the logs and carry the API keys.
There is no intermediate grouping: "tenant" and "organization" mean the same thing.

| Term               | Definition                                                                                            |
| ------------------ | ----------------------------------------------------------------------------------------------------- |
| **Tenant**         | The isolated space of a client organization. All data carries a `tenant_id`.                          |
| **Project**        | An application or service to monitor, inside the tenant. Carries the API keys and receives the logs.  |
| **API key**        | A signed secret, hashed at rest, scoped `write:logs`. Issued per project, used by your services.      |
| **Tenant member**  | A user attached to the tenant with a global role (`owner`, `admin`, `member`).                        |
| **Project member** | An optional override of a tenant member's role on a specific project (`admin`, `editor`, `viewer`).   |

Lyncea **never** exposes a `tenant_id` on the request side: it is read from the JWT on every call. Multi-tenant isolation is guaranteed at the service level, not at the client level.

## Architecture in brief [#architecture-en-bref]

The API decouples ingestion (hot path) from analytical writes (ClickHouse via Kafka). EdDSA/Ed25519 JWT authentication for the console, API keys for ingestion: two independent auth paths. Reads (search, SSE live tail, dashboard) query ClickHouse through the JWT.

## Going further [#pour-aller-plus-loin]

Create a project, generate an API key and send your first log in under five minutes.

Full OpenAPI documentation of the Front Office endpoints, generated from the API source code.

# Ingestion reference (/fr/docs/ingestion)

This page documents the HTTP contract of `POST /v1/logs` in detail. For a step-by-step guide with per-stack examples, see [Getting started](/fr/docs/getting-started).

## Endpoint [#endpoint]

`POST /v1/logs`: authentication by API key (`Authorization: Bearer lyn_live_xxx` header, scope `write:logs`).
The OTel Collector and most SDK transports/appenders point at the **root** `https://api.lyncea.io` and append `/v1/logs` themselves. When you configure an OTel SDK by hand (`OTLPLogExporter` on Node or Python, for example), write the **full path** `https://api.lyncea.io/v1/logs`; these exporters do not append the suffix.

## Payload format [#format-du-payload]

Two accepted content types:

| Content-Type             | Usage                                                                                   |
| ------------------------ | --------------------------------------------------------------------------------------- |
| `application/x-protobuf` | Binary `ExportLogsServiceRequest` format. The default for OTel SDKs and the Collector.  |
| `application/json`       | JSON version of the same schema. Convenient for scripts and bridges without protobuf.   |

`Content-Encoding: gzip` and `deflate` are decoded automatically. The **16 MiB** limit applies after decompression; an extreme compression ratio triggers a `413` rather than unbounded memory consumption.

### Minimal valid example (JSON) [#exemple-minimal-valide-json]

```json
{
  "resourceLogs": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "checkout-api" } },
          { "key": "deployment.environment", "value": { "stringValue": "prod" } }
        ]
      },
      "scopeLogs": [
        {
          "scope": { "name": "lyncea-quickstart" },
          "logRecords": [
            {
              "timeUnixNano": "1717171717000000000",
              "severityNumber": 9,
              "severityText": "INFO",
              "body": { "stringValue": "user logged in" },
              "attributes": [
                { "key": "user.id", "value": { "stringValue": "42" } }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```

Constraints:

* `resourceLogs[]` must not be empty (`400` otherwise).
* `timeUnixNano` is a 19-digit **string** (Unix nanoseconds); do not send it as a number, JavaScript loses precision beyond `2^53`.
* Everything else is optional but recommended to take full advantage of [log search](/fr/docs/getting-started#lire-vos-logs).

## Severity [#sévérité]

OTel uses `severityNumber` (integer 1–24).
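For clients that build the payload by hand, the `timeUnixNano`-as-string constraint above is easy to get right in Python. A minimal sketch (the helper name is illustrative, not a Lyncea SDK function):

```python
# Sketch: build a log record with timeUnixNano serialized as a string,
# as required by the constraint above (a JSON number would lose precision in JS).
import json
import time

def make_log_record(body: str, severity_number: int = 9, severity_text: str = "INFO") -> dict:
    return {
        "timeUnixNano": str(time.time_ns()),  # string of Unix nanoseconds, not a number
        "severityNumber": severity_number,
        "severityText": severity_text,
        "body": {"stringValue": body},
    }

record = make_log_record("user logged in")
assert isinstance(record["timeUnixNano"], str) and record["timeUnixNano"].isdigit()
print(json.dumps(record)[:60])
```

Python integers are arbitrary-precision, so the risk is only on the wire: serializing `time.time_ns()` as a JSON number would break consumers that parse it as a 64-bit float.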
The usual mapping, worth knowing if you write the payload by hand; the official bridges do this mapping for you:

| Level | `severityNumber` | `severityText` |
| ----- | :--------------: | -------------- |
| TRACE | 1                | `TRACE`        |
| DEBUG | 5                | `DEBUG`        |
| INFO  | 9                | `INFO`         |
| WARN  | 13               | `WARN`         |
| ERROR | 17               | `ERROR`        |
| FATAL | 21               | `FATAL`        |

Intermediate values (`DEBUG2 = 6`, `INFO3 = 11`, etc.) are allowed by the spec and pass through unchanged.

## OTel attributes used [#attributs-otel-exploités]

On the ingestion side Lyncea **reads** a single attribute: `deployment.environment` (resource), to apply the project allowlist; see [Project environments](/fr/docs/environments). All other attributes (`service.name`, `service.version`, `trace_id`, custom logRecord attributes…) are **kept as-is** in the payload and indexable on the search side via [`GET /api/v1/logs`](/fr/docs/api/logs/LogsController_list). No attribute is silently dropped.
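If you bridge Python's `logging` levels by hand, the table above translates into a small lookup. This is illustrative glue code, not part of a Lyncea SDK:

```python
# Sketch: map Python logging levels onto the OTel severity table above.
import logging

# Base severityNumber for each named level, as in the table.
_OTEL_BASE = {"TRACE": 1, "DEBUG": 5, "INFO": 9, "WARN": 13, "ERROR": 17, "FATAL": 21}
_PY_TO_OTEL = {
    logging.DEBUG: "DEBUG",
    logging.INFO: "INFO",
    logging.WARNING: "WARN",
    logging.ERROR: "ERROR",
    logging.CRITICAL: "FATAL",
}

def otel_severity(py_level: int) -> tuple[int, str]:
    """Return (severityNumber, severityText) for a standard Python logging level."""
    text = _PY_TO_OTEL[py_level]
    return _OTEL_BASE[text], text

print(otel_severity(logging.INFO))      # (9, 'INFO')
print(otel_severity(logging.CRITICAL))  # (21, 'FATAL')
```

Python has no built-in TRACE level, which is why the lookup starts at `DEBUG`.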
## HTTP responses [#réponses-http]

| Status                    | When                                                                | Body                                      | `Retry-After` |
| ------------------------- | ------------------------------------------------------------------- | ----------------------------------------- | ------------- |
| `200 OK`                  | Batch accepted at the edge                                          | `ExportLogsServiceResponse` (empty in v1) | —             |
| `400 Bad Request`         | Malformed JSON, empty `resourceLogs`, unsupported Content-Type      | `Status` `{ code: 3, message }`           | —             |
| `401 Unauthorized`        | Missing or invalid API key                                          | `Status` `{ code: 16, message }`          | —             |
| `403 Forbidden`           | Missing `write:logs` scope, or suspended tenant                     | `Status` `{ code: 7, message }`           | —             |
| `413 Payload Too Large`   | Raw body > 4 MiB **or** decompressed > 16 MiB                       | `Status` `{ code: 3, message }`           | —             |
| `429 Too Many Requests`   | EPS quota exceeded (per key **or** per tenant; see Plans)           | `Status` `{ code: 8, message }`           | seconds       |
| `503 Service Unavailable` | Lyncea pipeline momentarily unavailable (ingestion-side incident)   | `Status` `{ code: 14, message }`          | `5`           |

The error body is always the gRPC `Status` proto `{ code, message }`, encoded in the same content type as the request (JSON or protobuf).

## Retry & backoff [#retry--backoff]

On the client side, apply the standard OTel convention:

* **Exponential retry** on `429` and `503` only, honoring `Retry-After` when present.
* **No retry** on the other `4xx`; they signal a configuration error (invalid key, missing scope, oversized payload, malformed JSON) that a retry will not fix.
* **An in-memory buffer** on the client side to absorb brief outages.

The official OTel SDKs and the OTel Collector implement this behavior natively (internal queue + retry + circuit breaker); you have nothing to code. If you write your own client, start from an exponential backoff `1s → 2s → 4s → 8s → 16s` with jitter and a reasonable cap.

`200 OK` means "accepted for processing", not "visible in the UI".
The pipeline is asynchronous (Kafka → ClickHouse); under nominal load, allow a few seconds before a log shows up in [`GET /api/v1/logs`](/fr/docs/api/logs/LogsController_list) or in the [live tail](/fr/docs/api/logs/LogsStreamController_stream).

## Limits [#limites]

| Parameter                        | Default value                                  |
| -------------------------------- | ---------------------------------------------- |
| Maximum raw body                 | 4 MiB                                          |
| Maximum body after decompression | 16 MiB                                         |
| Events-per-second (EPS) quota    | plan-dependent; [see Plans](/fr/docs/plans)    |
| Rate-limit granularity           | per API key **and** per aggregated tenant      |

The rate limit is a token bucket whose capacity and refill rate follow the plan's EPS. A burst can consume up to the full capacity before being rate-limited.

## See also [#voir-aussi]

* [Getting started](/fr/docs/getting-started): create a project, generate a key and send your first log.
* [Project environments](/fr/docs/environments): the `deployment.environment` allowlist and key prefixes.
* [Plans](/fr/docs/plans): EPS quotas and retention per tier.

# Plans (/fr/docs/plans)

Your Lyncea plan controls several technical parameters of your tenant and caps what your projects can request:

* log **retention**: how many days logs remain queryable after ingestion;
* **ingestion capacity**: how many events per second your tenant can send before being rate-limited;
* **read capacity**: how many `/api/v1/logs` requests per minute and how many concurrent live-tail streams you can open.

The plan is assigned **at the organization level** (the tenant). All projects of the same organization share its quotas; there is no per-project plan.
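The retry convention from the ingestion reference can be sketched as follows: exponential backoff with jitter, retrying only on `429`/`503`, and honoring `Retry-After`. `send` stands in for the actual HTTP POST; this is an illustrative sketch, not the official OTel implementation.

```python
# Sketch of the documented retry convention: retry only on 429/503, honor
# Retry-After, otherwise back off 1s → 2s → 4s → 8s → 16s with jitter.
import random

RETRYABLE = {429, 503}

def send_with_retry(send, max_attempts: int = 5, base_s: float = 1.0, cap_s: float = 16.0):
    """`send()` returns (status, retry_after_or_None). Returns (final_status, delays)."""
    delays = []
    status = None
    for attempt in range(max_attempts):
        status, retry_after = send()
        if status not in RETRYABLE:
            return status, delays  # 200, or a 4xx that a retry will not fix
        delay = retry_after if retry_after is not None else min(cap_s, base_s * 2**attempt)
        delays.append(delay + random.uniform(0, 0.1 * delay))  # jitter
        # time.sleep(delays[-1])  # a real client would sleep here
    return status, delays

# A backend that rate-limits twice, then accepts:
responses = iter([(429, 2), (503, None), (200, None)])
status, delays = send_with_retry(lambda: next(responses))
print(status)  # 200
```

In production, prefer the OTel SDK/Collector queue described above over hand-rolled retries; this sketch is only for custom clients.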
## Comparison table [#tableau-comparatif]

|                              | Free         | Starter        | Growth            | Enterprise         |
| ---------------------------- | :----------: | :------------: | :---------------: | :----------------: |
| Projects included            | 1            | 5              | unlimited         | custom             |
| Log retention                | 7 days       | 30 days        | 90 days           | 365 days           |
| Ingestion capacity           | 500 events/s | 2,000 events/s | 10,000 events/s   | 50,000 events/s    |
| Read requests / minute       | 60           | 300            | 1,200             | 3,000              |
| Query timeout (CH)           | 5 s          | 15 s           | 30 s              | 60 s               |
| Concurrent live-tail streams | 1            | 3              | 10                | 30                 |
| Support                      | Community    | Email          | Priority email    | Dedicated          |
| Pricing                      | Free         | Self-service   | Self-service      | Custom contract    |

**Current availability.** Only the **Free** plan is open to public sign-up. Starter, Growth and Enterprise are on the roadmap; you will be able to switch from your organization settings once they ship.

## Per-project retention [#rétention-par-projet]

The plan defines a **ceiling**: the maximum retention each project of your organization can configure. Within that ceiling, **each project picks its own value independently**. For example, on a **Growth** plan (ceiling: 90 days) you can have:

* a production project keeping logs for 90 days,
* a staging project at 30 days,
* a development project at just 7 days.

The available values match the plan tiers: **7, 30, 90 or 365 days**. Choosing a value above your plan's ceiling is rejected by the API. The default when creating a project is **30 days**.

On a **downgrade**, any project whose retention exceeds the new plan's ceiling must be reduced manually before the switch can proceed; see [Downgrade](#downgrade) below.
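The retention rule above (tier values only, capped by the plan) boils down to a two-part check. A minimal sketch; the names (`validate_retention`, the plan keys) are illustrative, not the Lyncea API:

```python
# Sketch of the documented retention rule: the requested value must be one of
# the tier values (7/30/90/365) AND at or below the plan's ceiling.
PLAN_CEILING_DAYS = {"free": 7, "starter": 30, "growth": 90, "enterprise": 365}
TIER_VALUES = (7, 30, 90, 365)

def validate_retention(plan: str, requested_days: int) -> bool:
    """True when the requested retention is a valid tier value within the cap."""
    return requested_days in TIER_VALUES and requested_days <= PLAN_CEILING_DAYS[plan]

print(validate_retention("growth", 90))   # at the 90-day Growth ceiling → accepted
print(validate_retention("growth", 365))  # above the ceiling → rejected
print(validate_retention("starter", 14))  # not a tier value → rejected
```

The same check run against the *new* plan's ceiling is what blocks a downgrade when a project's configured retention would exceed it.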
## How to change plans [#comment-changer-de-plan]

### Free ↔ Starter ↔ Growth: switch from the console (coming soon) [#free--starter--growth--bascule-depuis-la-console-à-venir]

Plan management happens in your organization settings and is restricted to the organization **owner** (`owner`). `admin`s handle day-to-day operations (members, projects, API keys, each project's retention) but do not switch plans; that is a contractual decision reserved for the owner. The effect is immediate on newly ingested logs and on ingestion capacity (the new threshold applies within the following minute).

### Switching to or from Enterprise: by contact [#bascule-vers-ou-depuis-enterprise--sur-contact]

The **Enterprise** plan is a custom contract: committed volumes, SLAs, dedicated support, contractual compliance (NDA, MSA). It is not available in self-service.

* **To move to Enterprise** from any plan: contact us to scope the contract. The switch is then applied by our team.
* **To move back from Enterprise to a self-service plan**: the exit follows the same framework; it is a contractual event, not a click.

## What happens when you change plans [#ce-qui-se-passe-quand-vous-changez-de-plan]

### Upgrade [#upgrade]

Your new logs immediately benefit from the longer retention. Ingestion capacity aligns within the following minute. Your existing projects keep working without interruption.

### Downgrade [#downgrade]

Before applying a downgrade, we check that **every project respects the new cap**, in particular each project's configured retention. If a project requests more than the new plan allows, the switch is **blocked** and the list of affected projects is returned to you. You (or our team, for Enterprise) must first **edit the retention of each conflicting project to bring it under the new plan's cap**, then retry the switch.
Ce choix est volontaire : nous ne réduisons jamais la rétention d'un projet sans une décision explicite de votre part.

### Vos logs déjà collectés [#vos-logs-déjà-collectés]

Les logs **déjà ingérés** conservent leur durée de rétention initiale, celle en vigueur au moment de leur collecte. Seuls les logs ingérés *après* la bascule adoptent la nouvelle politique. Concrètement : un downgrade ne supprime pas votre historique payé — il s'éteint progressivement à mesure que les logs atteignent leur date d'expiration d'origine.

## Voir aussi [#voir-aussi]

* [Vue d'ensemble](/fr/docs) — concepts d'organisation, de projet et de RBAC.
* [Démarrage](/fr/docs/getting-started) — créer un compte (plan Free) et envoyer vos premiers logs.

# Référence API (/fr/docs/api)

{/* This file is regenerated by scripts/generate-openapi-pages.mjs. Edit there. */}

Les endpoints sont regroupés par domaine. Chaque section liste ses opérations avec leur méthode HTTP. Toutes les requêtes (sauf l'ingestion `/v1/logs`) attendent un JWT Bearer scopé sur un tenant actif.

# Create API key (/fr/docs/api/api-keys/ApiKeysController_create)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

# List API keys (/fr/docs/api/api-keys/ApiKeysController_list)

# Revoke API key (/fr/docs/api/api-keys/ApiKeysController_revoke)

# Login (/fr/docs/api/authentication/AuthController_login)

# Logout (/fr/docs/api/authentication/AuthController_logout)

{/* This file was generated by Fumadocs.
Do not edit this file directly. Any changes should be made by running the generation command again. */}

# Get current user (/fr/docs/api/authentication/AuthController_me)

# List my tenants (/fr/docs/api/authentication/AuthController_meTenants)

# Refresh session (/fr/docs/api/authentication/AuthController_refresh)

# Register tenant (/fr/docs/api/authentication/AuthController_register)

# Switch tenant (/fr/docs/api/authentication/AuthController_switchTenant)

# Update profile (/fr/docs/api/authentication/AuthController_updateMe)

# Dashboard summary (trailing 24h) (/fr/docs/api/dashboard/DashboardController_summary)

# Accept invitation (/fr/docs/api/invitations/MyInvitationsController_accept)

# Decline invitation (/fr/docs/api/invitations/MyInvitationsController_decline)

{/* This file was generated by Fumadocs.
Do not edit this file directly. Any changes should be made by running the generation command again. */}

# List my invitations (/fr/docs/api/invitations/MyInvitationsController_list)

# Send logs (/fr/docs/api/ingestion/IngestionController_ingest)

# Get a single log by id (/fr/docs/api/logs/LogsController_getOne)

# List logs (paginated + filterable) (/fr/docs/api/logs/LogsController_list)

# Live tail logs (SSE) (/fr/docs/api/logs/LogsStreamController_stream)

# Add member (/fr/docs/api/project-members/ProjectMembersController_add)

# List members (/fr/docs/api/project-members/ProjectMembersController_list)

# Remove member (/fr/docs/api/project-members/ProjectMembersController_remove)

# Update member role (/fr/docs/api/project-members/ProjectMembersController_updateRole)

{/* This file was generated by Fumadocs.
Do not edit this file directly. Any changes should be made by running the generation command again. */}

# Create project (/fr/docs/api/projects/ProjectsController_create)

# Get project (/fr/docs/api/projects/ProjectsController_findOne)

# List projects (/fr/docs/api/projects/ProjectsController_list)

# Get my project role (/fr/docs/api/projects/ProjectsController_myRole)

# Update project (/fr/docs/api/projects/ProjectsController_update)

# Invite member (/fr/docs/api/tenant-members/MembershipsController_invite)

# List members (/fr/docs/api/tenant-members/MembershipsController_list)

# List invitations (/fr/docs/api/tenant-members/MembershipsController_listInvitations)

# Remove member (/fr/docs/api/tenant-members/MembershipsController_remove)

{/* This file was generated by Fumadocs. Do not edit this file directly.
Any changes should be made by running the generation command again. */}

# Revoke invitation (/fr/docs/api/tenant-members/MembershipsController_revokeInvitation)

# Update member role (/fr/docs/api/tenant-members/MembershipsController_updateRole)

# Get current tenant (/fr/docs/api/tenants/TenantMeController_me)

# Components — Showcase (/en/docs/components-showcase)

This page is a visual reference. Every component below is rendered with its minimal props to verify it is correctly wired into the MDX pipeline and styled to match the Lyncea theme. See also [Getting started](/en/docs/getting-started) for the user-facing walkthrough.

## Banner [#banner]

`` is natively `sticky top-0 z-40` and mutates `--fd-banner-height` on `:root` — it's designed to live in the root layout, not inline in a page. For this showcase we turn off `changeLayout` and override the positional classes via `className` (merged by `tailwind-merge`). This banner is dismissible (closed state is persisted by `id`).

## Callout [#callout]

Neutral informational note.

Warning callout.

Blocking error callout.

Confirmation callout.

## Card / Cards [#card--cards]

Clickable card pointing to another doc page.

Informational card — no associated link.

External link to the official Fumadocs documentation.
## Tabs [#tabs]

```bash
curl -X POST https://api.example.com/v1/logs \
  -H "Authorization: Bearer my_live_xxx" \
  --data-binary @log.pb
```

```ts
await fetch('https://api.example.com/v1/logs', {
  method: 'POST',
  headers: { Authorization: 'Bearer my_live_xxx' },
  body: payload,
});
```

```python
import httpx

httpx.post(
    'https://api.example.com/v1/logs',
    headers={'Authorization': 'Bearer my_live_xxx'},
    content=payload,
)
```

## Accordion [#accordion]

100,000 logs per day on the free tier. Adjustable from the tenant settings.

7 days on `dev`, 30 days on `staging`, 90 days on `prod` (default retention).

Yes — GDPR export endpoint available in the admin area (`platform_admin` role required).

## Steps [#steps]

### Create a project

From the dashboard, **New project** → pick an environment.

### Generate an API key

**API keys** tab, scope `write:logs`. The value is shown once — store it.

### Send your first log

`POST /v1/logs` with the `Authorization: Bearer my_live_xxx` header.

## Files [#files]

## TypeTable [#typetable]

Manual — every prop is hand-declared.

## AutoTypeTable [#autotypetable]

Generates the table directly from a TypeScript declaration — the compiler extracts the type, JSDoc tags (`@default`, `@param`, `@returns`) and unions. Server Component only (build-time compilation).
## Code block (CodeBlock auto-wired) [#code-block-codeblock-auto-wired]

```ts title="apps/api/src/iam/auth.service.ts"
async login(email: string, password: string): Promise {
  const user = await this.users.findByEmail(email);
  if (!user) throw new UnauthorizedError('invalid_credentials');
  const ok = await argon2.verify(user.passwordHash, password);
  if (!ok) throw new UnauthorizedError('invalid_credentials');
  return this.tokens.mint(user);
}
```

### CodeBlock — advanced features [#codeblock--advanced-features]

Line highlight (`// [!code highlight]`), word highlight (`// [!code word:xxx]`), diff (`// [!code ++]` / `// [!code --]`).

```ts title="next.config.ts"
import createMDX from 'fumadocs-mdx/config';

const withMDX = createMDX();

// [!code word:rewrites]
const config = {
  reactStrictMode: true, // [!code highlight]
  output: 'standalone', // [!code highlight]
  async rewrites() {
    return [
      {
        source: '/:lang(fr|en)/docs/:path*.mdx', // [!code ++]
        destination: '/llms.mdx/:lang/docs/:path*', // [!code ++]
        legacyKey: 'old', // [!code --]
      },
    ];
  },
};

export default withMDX(config);
```

## DynamicCodeBlock [#dynamiccodeblock]

Client-side Shiki-compiled block — useful for dynamically generated content (user snippet, API response).

## InlineTOC [#inlinetoc]

Page outline

## ImageZoom [#imagezoom]

Click the image to zoom (full-screen overlay).

## GraphView [#graphview]

Interactive force-directed graph of all docs pages. One node per page in the current locale; one edge per internal link extracted from the MDX body (`extractLinkReferences: true` in `source.config.ts`). Hover for description, click to navigate.

## GithubInfo [#githubinfo]

Renders public stats for a GitHub repo (no token: 60 req/h per IP — fine for dev).
## Headings (auto-styled) [#headings-auto-styled]

### H3 — subsection [#h3--subsection]

#### H4 — level 4 [#h4--level-4]

##### H5 — level 5 [#h5--level-5]

###### H6 — level 6 [#h6--level-6]

Plain markdown `h1`–`h6` are auto-routed through Fumadocs' `Heading` component (auto anchor + scroll-spy for the side TOC).

## Math (KaTeX) [#math-katex]

Inline with `$..$` delimiters: the daily quota resolution follows $Q_\text{day} = \min(Q_\text{tenant},\, \sum_i q_i)$.

Block with a ` ```math ` fence:

```math
\text{TTL}_\text{token} =
\begin{cases}
  900\,\text{s} & \text{if type} = \texttt{access} \\
  2{,}592{,}000\,\text{s} & \text{if type} = \texttt{refresh}
\end{cases}
```

## Mermaid [#mermaid]

OTLP → Kafka → ClickHouse ingestion pipeline:

Refresh token state machine:

## Twoslash [#twoslash]

Hover on `^?` to see the inferred type (Shiki + Twoslash compile at build).

```ts twoslash
type Brand<T, B extends string> = T & { readonly __brand: B };
type TenantId = Brand<string, 'TenantId'>;

const asTenantId = (raw: string): TenantId => raw as TenantId;

const id = asTenantId('tenant_42');
//    ^?
```

Compiler error detection (`// @errors:` annotation):

```ts twoslash
// @errors: 2322
type Severity = 'info' | 'warn' | 'error';
const s: Severity = 'critical';
```

API completion (`// ^|`):

```ts twoslash
// @noErrors
const log = { severity: 'info', tenantId: 'foo' };
log.s;
//    ^|
```

# Project environments (/en/docs/environments)

## How environments work [#how-environments-work]

Every project has an **environment allowlist** — the set of `deployment.environment` values it will accept from incoming logs. The three available values are `dev`, `staging`, and `prod`. All three are enabled by default; you can narrow the list when creating the project or update it later.

## Declaring the environment in your logs [#declaring-the-environment-in-your-logs]

Lyncea reads the `deployment.environment` attribute from the **resource** section of every OTLP log record.
You set this in your SDK or OTel Collector — it is never inferred from the API key.

```yaml
processors:
  resource:
    attributes:
      - key: deployment.environment
        value: staging # dev | staging | prod
        action: insert

service:
  pipelines:
    logs:
      processors: [resource]
      exporters: [otlphttp/lyncea]
```

This is the `Resource` snippet to pass to your OTel setup — the full OTLP export (processor + exporter) is covered by the Pino example in [Getting started](/en/docs/getting-started).

```js
import { NodeSDK } from '@opentelemetry/sdk-node';
import { Resource } from '@opentelemetry/resources';

const sdk = new NodeSDK({
  resource: new Resource({
    'deployment.environment': 'staging', // dev | staging | prod
  }),
  // …wire logRecordProcessor / OTLPLogExporter here to export to Lyncea.
});
```

```python
from opentelemetry.sdk.resources import Resource
from opentelemetry.semconv.resource import ResourceAttributes

resource = Resource.create({
    ResourceAttributes.DEPLOYMENT_ENVIRONMENT: "staging",  # dev | staging | prod
})
```

Lyncea normalises common aliases automatically: `development` → `dev`, `production` → `prod`. Logs whose `deployment.environment` value is **not in the project's allowlist are silently discarded**. No error is returned to the sender — the ingestion endpoint always responds `200 OK` for well-formed payloads.

## API keys and environments [#api-keys-and-environments]

**An API key is scoped to a project, not to a specific environment.** A single key can send logs from every environment the project accepts — you do not need one key per environment.

### What the `live` / `test` prefix means [#what-the-live--test-prefix-means]

Keys are prefixed with `lyn_live_` or `lyn_test_`.
This reflects the **Lyncea platform** the key was issued on:

| Prefix | Meaning |
| -------------- | ----------------------------------------------- |
| `lyn_live_xxx` | Issued on a production Lyncea deployment |
| `lyn_test_xxx` | Issued on a sandbox / non-production deployment |

This prefix exists so secret-scanning tools (GitGuardian, TruffleHog, …) can detect leaked keys. It does **not** determine which of your `dev` / `staging` / `prod` environments the key is valid for.

## Restricting the allowlist [#restricting-the-allowlist]

You can narrow the allowlist at any time from the console or via the API. For example, a project dedicated to production monitoring can be restricted to `prod` only — any log tagged `dev` or `staging` sent to that project will be discarded.

Removing an environment takes effect immediately for new log batches. Logs already indexed are not affected.

API reference: [`POST /api/v1/projects`](/en/docs/api/projects/ProjectsController_create) · [`PATCH /api/v1/projects/{projectId}`](/en/docs/api/projects/ProjectsController_update)

# Getting started (/en/docs/getting-started)

This guide walks through the minimum path to get your first log into Lyncea. Every administration step is performed from the console; the HTTP contract details (payload, error codes, retry, limits) live in the [ingestion reference](/en/docs/ingestion), and each endpoint in the [API reference](/en/docs/api).

Public sign-up is open: registering creates a new tenant for which you become the `owner`. If you have been invited to an existing tenant, sign in and open your pending invitations to accept them. **Invitation email delivery is not implemented yet** (coming soon) — invitations must be communicated out-of-band for now.

### Create an account (or sign in) [#create-an-account-or-sign-in]

From the console: sign up or sign in. Registration creates both a user and an attached tenant, with the registrant as `owner`.
Once authenticated, the server issues a 15-minute **EdDSA/Ed25519** JWT alongside a 30-day rotating refresh cookie. If you belong to several tenants, pick the one to work on — the JWT is scoped to a single active tenant at a time.

API reference: [`POST /api/v1/auth/register`](/en/docs/api/authentication/AuthController_register), [`POST /api/v1/auth/login`](/en/docs/api/authentication/AuthController_login), [`POST /api/v1/auth/switch-tenant`](/en/docs/api/authentication/AuthController_switchTenant).

### Create a project [#create-a-project]

From the console: open the project creation form. It pre-fills the `slug` (URL-safe, unique inside the tenant) and the allowlist of accepted OTel environments (`dev`, `staging`, `prod` by default). See [Project environments](/en/docs/environments) to understand how this allowlist works and how to tag your logs with the right environment.

A **project** is the scope to which a retention, an ingestion quota and a set of API keys apply. The common granularity is **one project per operational scope** (typically per product or per major environment), with `service.name` distinguishing services inside. Splitting into more projects is only useful if you want separate retention, quota or key sets.

API reference: [`POST /api/v1/projects`](/en/docs/api/projects/ProjectsController_create).

### Mint an API key [#mint-an-api-key]

From the console: open the project list, pick your project, then the "API keys" tab and click "Create a key" with the `write:logs` scope. The plaintext secret is **shown only once** at creation — Lyncea only stores a hash. Save it in a secrets manager (Vault, AWS Secrets Manager, Doppler, …).

The returned secret looks like `lyn_live_xxx` (production Lyncea deployment) or `lyn_test_xxx` (sandbox). This prefix is for secret-scanning tools — it does not restrict which `dev` / `staging` / `prod` environments the key can send logs from. See [Project environments](/en/docs/environments).
API reference: [`POST /api/v1/projects/{projectId}/api-keys`](/en/docs/api/api-keys/ApiKeysController_create).

### Send a log over OTLP/HTTP [#send-a-log-over-otlphttp]

Log shipping is purely programmatic: no upload form — you wire your **existing application logger** (Pino, Python `logging`, etc.) to export over OTLP/HTTP to `https://api.lyncea.io/v1/logs`, or deploy an OTel Collector if your stack has no SDK bridge. Authenticate with an API key (`Authorization: Bearer lyn_live_xxx` header), body in protobuf (the SDK default) or JSON.

```bash
pnpm add pino pino-opentelemetry-transport
```

```js
import pino from 'pino';

export const logger = pino({
  level: 'info',
  transport: {
    target: 'pino-opentelemetry-transport',
    options: {
      logRecordProcessorOptions: {
        recordProcessorType: 'batch',
        exporterOptions: {
          protocol: 'http/json',
          url: 'https://api.lyncea.io/v1/logs',
          headers: { Authorization: `Bearer ${process.env.LYNCEA_API_KEY}` },
        },
      },
      resourceAttributes: {
        'service.name': 'checkout-api',
        'deployment.environment': 'prod',
      },
    },
  },
});

logger.info({ userId: 42 }, 'user logged in');
```

```bash
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```

```python
import logging

from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

provider = LoggerProvider(resource=Resource.create({
    "service.name": "checkout-api",
    "deployment.environment": "prod",
}))
provider.add_log_record_processor(
    BatchLogRecordProcessor(
        OTLPLogExporter(
            endpoint="https://api.lyncea.io/v1/logs",
            headers={"Authorization": "Bearer lyn_live_xxx"},
        )
    )
)

logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
logging.getLogger().setLevel(logging.INFO)
logging.info("user logged in")
```

Use this when your stack has no SDK bridge (Java/Logback, Go, Ruby, .NET…) or when you already run a Collector
aggregating logs/traces/metrics for other backends.

```yaml
exporters:
  otlphttp/lyncea:
    endpoint: https://api.lyncea.io
    headers:
      Authorization: Bearer ${env:LYNCEA_API_KEY}
    compression: gzip

service:
  pipelines:
    logs:
      exporters: [otlphttp/lyncea]
```

For a quick test from the terminal:

```bash
curl -X POST https://api.lyncea.io/v1/logs \
  -H "Authorization: Bearer lyn_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "resourceLogs": [{
      "resource": { "attributes": [
        { "key": "service.name", "value": { "stringValue": "checkout-api" } },
        { "key": "deployment.environment", "value": { "stringValue": "prod" } }
      ]},
      "scopeLogs": [{
        "scope": { "name": "lyncea-quickstart" },
        "logRecords": [{
          "timeUnixNano": "1717171717000000000",
          "severityNumber": 9,
          "severityText": "INFO",
          "body": { "stringValue": "user logged in" }
        }]
      }]
    }]
  }'
```

A `200 OK` response confirms acceptance — Lyncea enforces a **full-success** contract (no per-record reject is reported synchronously).

Things to know:

* **The OTel Collector and SDK bridges** point at the root `https://api.lyncea.io` and append `/v1/logs` themselves. A direct OTLP exporter (`OTLPLogExporter` for example) expects the full URL `https://api.lyncea.io/v1/logs`.
* **The search API is available** — see the "Read your logs" step below. A `GET /api/v1/logs` call also confirms the integration works from the read side.

For the full contract (OTLP/JSON payload, HTTP error codes, OTel severity mapping, retry/backoff, limits), see the [ingestion reference](/en/docs/ingestion).

API reference: [`POST /v1/logs`](/en/docs/api/ingestion/IngestionController_ingest).

### Read your logs [#read-your-logs]

Once at least one log lands in ClickHouse, you can query the trailing window with the read API. The active tenant comes from your JWT — never pass a `tenantId` as a query parameter; the gateway will refuse the request.
```bash
# List the last 24 h of logs, filter by severity and project
curl -G "https://api.lyncea.io/api/v1/logs" \
  -H "Authorization: Bearer ${JWT}" \
  --data-urlencode "severities=ERROR,FATAL" \
  --data-urlencode "projectIds=${PROJECT_ID}" \
  --data-urlencode "limit=100"
```

```jsonc
// 200 OK
{
  "data": [
    { "id": "…", "timestamp": "…", "severityText": "ERROR", "body": "…", … }
  ],
  "nextCursor": "eyJ0aW1lc3RhbXAiOiIyMDI2LTA1LTEwVDIz…",
  "totalScanned": 1234
}
```

Pagination is **keyset-based**: pass the `nextCursor` value verbatim as `?cursor=…` on the next call. The cursor is opaque — do not decode it.

```bash
# Single log by id (404 if it does not belong to your tenant)
curl "https://api.lyncea.io/api/v1/logs/${LOG_ID}" \
  -H "Authorization: Bearer ${JWT}"

# Live tail via Server-Sent Events
curl -N "https://api.lyncea.io/api/v1/logs/stream?projectIds=${PROJECT_ID}" \
  -H "Authorization: Bearer ${JWT}"
```

**Plan-aware throttling**: each tenant has a per-minute query budget and a ClickHouse `max_execution_time` cap. A `query_rate_limited` (429) suggests slowing down; a `query_timeout` (504) suggests narrowing the filter window.

API reference: [`GET /api/v1/logs`](/en/docs/api/logs/LogsController_list), [`GET /api/v1/logs/{logId}`](/en/docs/api/logs/LogsController_getOne), [`GET /api/v1/logs/stream`](/en/docs/api/logs/LogsStreamController_stream), [`GET /api/v1/dashboard/summary`](/en/docs/api/dashboard/DashboardController_summary).

## What next? [#what-next]

Full OTLP/JSON payload, `severityNumber` mapping, HTTP error codes, retry policy, limits — everything you need to write a robust client.

How Lyncea routes logs by environment, how to declare `deployment.environment` in your SDK, and what the `live` / `test` key prefix means.

Console: Members page for tenant roles (`owner`, `admin`, `member`), then the "Members" tab of each project (from the project list) to fine-tune project roles (`admin`, `editor`, `viewer`).
Full OpenAPI reference: auth, projects, keys, ingestion, audit.

# Overview (/en/docs)

Lyncea is a log observability platform built for teams running multiple services in production. Ingestion follows the **OTLP/HTTP** OpenTelemetry specification — point your existing collectors or SDKs at Lyncea, no need to rewrite your instrumentation.

Lyncea is under active development. Available today: OTLP ingestion, multi-tenant management (tenants, projects, members, API keys), RBAC, **log search**, **live tail over SSE** and a **trailing-24 h dashboard summary**. Alerting and the tenant-side audit API are not implemented yet — see the *Coming soon* badges below.

## What you can do [#what-you-can-do]

Send logs in OTLP format using a per-project API key, either as protobuf or JSON, with optional gzip compression.

Token-exact search, filter by severity/environment/service, paginate by keyset cursor and live tail via Server-Sent Events. Plan-aware throttling caps query rate and execution time.

One tenant per customer organisation, projects inside, two RBAC layers (tenant and project) and project-scoped API keys.

Every sensitive action (key creation, role change, BO access) is recorded in an audit log. ⚠️ The **read** API for tenants is not exposed yet — coming soon.

## Key concepts [#key-concepts]

Lyncea has **two hierarchy levels**: a tenant represents your customer organisation (billing, ingestion quota, top-level permissions) and contains one or more projects, which receive the logs and own the API keys. There is no intermediate grouping — "tenant" and "organisation" mean the same thing.

| Term | Definition |
| ------------------ | ------------------------------------------------------------------------------------------------- |
| **Tenant** | Isolated space for a customer organisation. Every row carries a `tenant_id`. |
| **Project** | Application or service being monitored, inside a tenant. Owns API keys and receives logs. |
| **API key** | Signed secret hashed in the database, scoped `write:logs`. Minted per project, used by your services. |
| **Tenant member** | User attached to a tenant with a global role (`owner`, `admin`, `member`). |
| **Project member** | Optional override of a tenant member's role on a specific project (`admin`, `editor`, `viewer`). |

Lyncea **never** accepts a `tenant_id` from the request body or query: it is read from the JWT on every call. Multi-tenant isolation is enforced server side, not client side.

## Architecture at a glance [#architecture-at-a-glance]

The API decouples ingestion (hot path) from analytical writes (ClickHouse via Kafka). EdDSA/Ed25519 JWT for the console, API keys for ingestion — two independent auth paths. Reads (search, SSE live tail, dashboard) hit ClickHouse, gated by the JWT.

## Going further [#going-further]

Create a project, mint an API key and send your first log in under five minutes.

Full OpenAPI documentation of the Front Office endpoints, generated from the API source code.

# Ingestion reference (/en/docs/ingestion)

This page documents the HTTP contract of `POST /v1/logs` in detail. For a step-by-step quickstart with per-stack examples, see [Getting started](/en/docs/getting-started).

## Endpoint [#endpoint]

`POST /v1/logs` — authenticate with an API key (`Authorization: Bearer lyn_live_xxx` header, `write:logs` scope).

The OTel Collector and most SDK log transports/appenders point at the **root** `https://api.lyncea.io` and append `/v1/logs` themselves. When you wire an OTLP exporter directly (`OTLPLogExporter` in Node or Python for example), write the **full path** `https://api.lyncea.io/v1/logs` — those exporters do not append the suffix.

## Payload format [#payload-format]

Two content-types accepted:

| Content-Type | Use |
| ------------------------ | --------------------------------------------------------------------------------- |
| `application/x-protobuf` | Binary `ExportLogsServiceRequest`. Default for OTel SDKs and the Collector. |
| `application/json` | JSON shape of the same proto. Handy for scripts and for bridges without protobuf. |

`Content-Encoding: gzip` and `deflate` are decoded transparently. The **16 MiB** cap is enforced after decompression — an extreme ratio triggers a `413` rather than uncontrolled memory use.

### Minimum valid example (JSON) [#minimum-valid-example-json]

```json
{
  "resourceLogs": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "checkout-api" } },
          { "key": "deployment.environment", "value": { "stringValue": "prod" } }
        ]
      },
      "scopeLogs": [
        {
          "scope": { "name": "lyncea-quickstart" },
          "logRecords": [
            {
              "timeUnixNano": "1717171717000000000",
              "severityNumber": 9,
              "severityText": "INFO",
              "body": { "stringValue": "user logged in" },
              "attributes": [
                { "key": "user.id", "value": { "stringValue": "42" } }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```

Constraints:

* `resourceLogs[]` must not be empty (otherwise `400`).
* `timeUnixNano` is a **string** of 19 digits (Unix nanoseconds) — do not send it as a JS number, you'd lose precision past `2^53`.
* Everything else is optional but recommended to make full use of [log search](/en/docs/getting-started#read-your-logs).

## Severity [#severity]

OTel uses `severityNumber` (integer 1–24). Common mapping to know if you hand-craft the payload — official bridges do this mapping for you:

| Level | `severityNumber` | `severityText` |
| ----- | :--------------: | -------------- |
| TRACE | 1 | `TRACE` |
| DEBUG | 5 | `DEBUG` |
| INFO | 9 | `INFO` |
| WARN | 13 | `WARN` |
| ERROR | 17 | `ERROR` |
| FATAL | 21 | `FATAL` |

Intermediate values (`DEBUG2 = 6`, `INFO3 = 11`, …) are allowed by the spec and pass through untouched.

## OTel attributes consumed [#otel-attributes-consumed]

Lyncea **reads** a single attribute at ingest time: `deployment.environment` (resource), used to enforce the project allowlist — see [Project environments](/en/docs/environments).
Every other attribute (`service.name`, `service.version`, `trace_id`, custom log-record attributes…) is **preserved as-is** in the payload and indexable from the search API via [`GET /api/v1/logs`](/en/docs/api/logs/LogsController_list). No attribute is silently dropped.

## HTTP responses [#http-responses]

| Status | When | Body | `Retry-After` |
| ------------------------- | ----------------------------------------------------------------- | ----------------------------------------- | ------------- |
| `200 OK` | Batch accepted at the edge | `ExportLogsServiceResponse` (empty in v1) | — |
| `400 Bad Request` | Malformed JSON, empty `resourceLogs`, unsupported Content-Type | `Status` `{ code: 3, message }` | — |
| `401 Unauthorized` | Missing or invalid API key | `Status` `{ code: 16, message }` | — |
| `403 Forbidden` | Missing `write:logs` scope, or tenant suspended | `Status` `{ code: 7, message }` | — |
| `413 Payload Too Large` | Raw body > 4 MiB **or** decompressed > 16 MiB | `Status` `{ code: 3, message }` | — |
| `429 Too Many Requests` | EPS quota exceeded (per-key **or** per-tenant — see Plans) | `Status` `{ code: 8, message }` | seconds |
| `503 Service Unavailable` | Lyncea pipeline temporarily unavailable (ingestion-side incident) | `Status` `{ code: 14, message }` | `5` |

Error bodies are always the gRPC `Status` proto `{ code, message }`, encoded in the same content-type as the request (JSON or protobuf).

## Retry & backoff [#retry--backoff]

Apply the standard OTel client convention:

* **Exponential retry** on `429` and `503` only, honouring `Retry-After` when present.
* **No retry** on other `4xx` — they signal a config error (bad key, missing scope, payload too large, malformed JSON) that a retry won't fix.
* **In-memory buffer** on the client side to absorb short outages.

Official OTel SDKs and the OTel Collector implement this behaviour natively (internal queue + retry + circuit breaker) — you don't have to write any of it.
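For the rare case where no official SDK fits, the convention reduces to a few lines. A minimal sketch — function names and the delay schedule are illustrative, not the SDKs' actual internals:

```typescript
// Sketch: OTel-style retry policy — retry only 429/503, honour
// Retry-After when the server sends one, else exponential backoff
// with jitter and a cap.
function isRetryable(status: number): boolean {
  return status === 429 || status === 503;
}

function backoffMs(attempt: number, retryAfter?: string): number {
  const headerSecs = retryAfter ? Number(retryAfter) : NaN;
  if (Number.isFinite(headerSecs)) return headerSecs * 1000; // server-provided delay wins
  const capped = Math.min(1000 * 2 ** attempt, 16_000);      // 1s, 2s, 4s, 8s, 16s cap
  return capped / 2 + Math.random() * (capped / 2);          // jitter in [cap/2, cap)
}
```

A `400` or `413` never reaches `backoffMs`: `isRetryable` rejects it, and the batch is dropped (or dead-lettered) instead of retried.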
If you're rolling your own client, start from `1s → 2s → 4s → 8s → 16s` exponential backoff with jitter and a sensible cap.

`200 OK` means "accepted for processing", not "visible in the UI". The pipeline is asynchronous (Kafka → ClickHouse) — expect a few seconds in nominal mode before a log shows up in [`GET /api/v1/logs`](/en/docs/api/logs/LogsController_list) or the [live tail](/en/docs/api/logs/LogsStreamController_stream).

## Limits [#limits]

| Setting | Default |
| ----------------------------- | -------------------------------------------- |
| Max raw body | 4 MiB |
| Max body after decompression | 16 MiB |
| Events-per-second quota (EPS) | plan-dependent — [see Plans](/en/docs/plans) |
| Rate-limit granularity | per-API-key **and** aggregate per-tenant |

The rate limit is a token bucket whose capacity and refill rate follow the plan's EPS. A burst can consume up to full capacity before throttling kicks in.

## See also [#see-also]

* [Getting started](/en/docs/getting-started) — create a project, mint a key, send your first log.
* [Project environments](/en/docs/environments) — `deployment.environment` allowlist and key prefix semantics.
* [Plans](/en/docs/plans) — EPS quotas and retention by tier.

# Plans (/en/docs/plans)

Your Lyncea plan controls several technical parameters of your tenant and bounds what your projects can ask for:

* **log retention** — how many days your logs remain queryable after ingestion;
* **ingestion capacity** — how many events per second your tenant can send before being rate-limited;
* **read capacity** — how many `/api/v1/logs` queries per minute and how many concurrent live tail streams you can open.

The plan is set **at the organization level** (the tenant). Every project inside an organization shares its quotas — there is no per-project plan.
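The parameters above can be pictured as one record per plan. A sketch with illustrative type and field names — the numbers are the Free tier's published values from the comparison table:

```typescript
// Sketch: the tenant-level knobs a plan bounds (names illustrative).
type Plan = {
  retentionCeilingDays: number;  // longest retention any project may configure
  ingestEps: number;             // events/second before the tenant is rate-limited
  readQueriesPerMin: number;     // /api/v1/logs query budget
  concurrentLiveTails: number;   // simultaneous SSE live tail streams
};

const FREE: Plan = {
  retentionCeilingDays: 7,
  ingestEps: 500,
  readQueriesPerMin: 60,
  concurrentLiveTails: 1,
};
```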
## Comparison table [#comparison-table]

| | Free | Starter | Growth | Enterprise |
| -------------------------- | :----------: | :------------: | :-------------: | :-------------: |
| Projects included | 1 | 5 | unlimited | custom |
| Log retention | 7 days | 30 days | 90 days | 365 days |
| Ingestion capacity | 500 events/s | 2,000 events/s | 10,000 events/s | 50,000 events/s |
| Read queries / minute | 60 | 300 | 1,200 | 3,000 |
| Query timeout (ClickHouse) | 5 s | 15 s | 30 s | 60 s |
| Concurrent live tails | 1 | 3 | 10 | 30 |
| Support | Community | Email | Priority email | Dedicated |
| Pricing | Free | Self-service | Self-service | Custom contract |

**Current availability.** Only the **Free** plan is open to public registration. Starter, Growth and Enterprise are on the roadmap — you'll be able to switch from your organization settings as soon as they ship.

## Per-project retention [#per-project-retention]

The plan defines a **ceiling** — the longest retention any project in your organization can configure. Within that ceiling, **each project chooses its own value independently**. For example, on a **Growth** plan (ceiling: 90 days) you can have:

* a production project retaining logs for 90 days,
* a staging project retaining logs for 30 days,
* a development project retaining logs for only 7 days.

The allowed values match the plan tiers: **7, 30, 90, or 365 days**. Requesting a value above your plan's ceiling is rejected by the API. The default when creating a project is **30 days**.

On a **downgrade**, every project whose retention exceeds the new plan's ceiling must be lowered manually before the switch can proceed — see [Downgrade](#downgrade) below.

## How to change plan [#how-to-change-plan]

### Free ↔ Starter ↔ Growth — from the console (coming soon) [#free--starter--growth--from-the-console-coming-soon]

Plan management lives in your organization settings, available to the **owner** of the organization only.
`admin`s handle day-to-day operations (members, projects, API keys, per-project retention) but don't switch the plan — that's a contractual decision we reserve for the owner.

The effect is immediate on newly ingested logs and on ingestion capacity (the new ceiling applies within the minute).

### Moving to or from Enterprise — by contact [#moving-to-or-from-enterprise--by-contact]

The **Enterprise** plan is a custom contract: committed volumes, SLA, dedicated support, contractual compliance (NDA, MSA). It is not available self-service.

* **To move to Enterprise** from any plan: get in touch so we can scope the contract. The switch is then applied by our team.
* **To move back from Enterprise to a self-service plan**: the same process — it's a contractual event, not a button click.

## What happens when you change plan [#what-happens-when-you-change-plan]

### Upgrade [#upgrade]

Your new logs immediately benefit from the longer retention. Ingestion capacity aligns within the minute. Existing projects keep running without interruption.

### Downgrade [#downgrade]

Before applying a downgrade, we verify that **every project respects the new cap** — notably the per-project retention setting. If any project asks for more than the new plan allows, the switch is **blocked** and the list of offending projects is returned.

You (or our team, for Enterprise) must first **edit each offending project's retention so it falls under the new plan's cap**, then retry the switch. This is intentional: we never silently reduce a project's retention without an explicit decision on your side.

### Your already-collected logs [#your-already-collected-logs]

Logs **already ingested** keep the retention duration they had at collection time. Only logs ingested *after* the switch adopt the new policy. In practice: a downgrade does not delete your paid history — it fades out progressively as each log reaches its original expiry.
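The downgrade guard described earlier amounts to a pre-check over project retention settings. A minimal sketch, with illustrative names — not the API's actual implementation:

```typescript
// Sketch: a downgrade is blocked while any project's retention
// exceeds the target plan's ceiling; offending projects are listed
// so they can be lowered manually before retrying the switch.
type Project = { name: string; retentionDays: number };

function downgradeBlockers(projects: Project[], newCeilingDays: number): Project[] {
  return projects.filter((p) => p.retentionDays > newCeilingDays);
}
```

For example, moving from Growth (ceiling 90 days) to Starter (ceiling 30 days) flags every project retaining more than 30 days; an empty result means the switch can proceed.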
## See also [#see-also]

* [Overview](/en/docs) — organization, project and RBAC concepts.
* [Getting started](/en/docs/getting-started) — create an account (Free plan) and send your first logs.

# API reference (/en/docs/api)

{/* This file is regenerated by scripts/generate-openapi-pages.mjs. Edit there. */}

Endpoints are grouped by domain. Each section lists its operations with the HTTP method. All requests (except `/v1/logs` ingestion) expect a tenant-scoped Bearer JWT.

# Create API key (/en/docs/api/api-keys/ApiKeysController_create)

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

# List API keys (/en/docs/api/api-keys/ApiKeysController_list)

# Revoke API key (/en/docs/api/api-keys/ApiKeysController_revoke)

# Login (/en/docs/api/authentication/AuthController_login)

# Logout (/en/docs/api/authentication/AuthController_logout)

# Get current user (/en/docs/api/authentication/AuthController_me)

# List my tenants (/en/docs/api/authentication/AuthController_meTenants)

# Refresh session (/en/docs/api/authentication/AuthController_refresh)

# Register tenant (/en/docs/api/authentication/AuthController_register)

# Switch tenant (/en/docs/api/authentication/AuthController_switchTenant)

# Update profile (/en/docs/api/authentication/AuthController_updateMe)

# Dashboard summary (trailing 24h) (/en/docs/api/dashboard/DashboardController_summary)

# Accept invitation (/en/docs/api/invitations/MyInvitationsController_accept)

# Decline invitation (/en/docs/api/invitations/MyInvitationsController_decline)

# List my invitations (/en/docs/api/invitations/MyInvitationsController_list)

# Send logs (/en/docs/api/ingestion/IngestionController_ingest)

# Get a single log by id (/en/docs/api/logs/LogsController_getOne)

# List logs (paginated + filterable) (/en/docs/api/logs/LogsController_list)

# Live tail logs (SSE) (/en/docs/api/logs/LogsStreamController_stream)

# Add member (/en/docs/api/project-members/ProjectMembersController_add)

# List members (/en/docs/api/project-members/ProjectMembersController_list)

# Remove member (/en/docs/api/project-members/ProjectMembersController_remove)

# Update member role (/en/docs/api/project-members/ProjectMembersController_updateRole)

# Create project (/en/docs/api/projects/ProjectsController_create)

# Get project (/en/docs/api/projects/ProjectsController_findOne)

# List projects (/en/docs/api/projects/ProjectsController_list)

# Get my project role (/en/docs/api/projects/ProjectsController_myRole)

# Update project (/en/docs/api/projects/ProjectsController_update)

# Invite member (/en/docs/api/tenant-members/MembershipsController_invite)

# List members (/en/docs/api/tenant-members/MembershipsController_list)

# List invitations (/en/docs/api/tenant-members/MembershipsController_listInvitations)

# Remove member (/en/docs/api/tenant-members/MembershipsController_remove)

# Revoke invitation (/en/docs/api/tenant-members/MembershipsController_revokeInvitation)

# Update member role (/en/docs/api/tenant-members/MembershipsController_updateRole)

# Get current tenant (/en/docs/api/tenants/TenantMeController_me)