@hotmeshio/hotmesh
Permanent-Memory Workflows & AI Agents
import { ILogger } from '../../../logger';
import { KeyType } from '../../../../modules/key';
import { StreamService } from '../../index';
import { KeyStoreParams, StringAnyType } from '../../../../types';
import { PostgresClientType } from '../../../../types/postgres';
import {
  PublishMessageConfig,
  StreamConfig,
  StreamMessage,
  StreamStats,
} from '../../../../types/stream';
import { ProviderClient, ProviderTransaction } from '../../../../types/provider';

/**
 * PostgreSQL Stream Service
 *
 * High-performance stream message provider using PostgreSQL with LISTEN/NOTIFY.
 *
 * ## Module Organization
 *
 * This service is organized into focused modules following KISS principles:
 * - `postgres.ts` (this file) - Main orchestrator and service interface
 * - `kvtables.ts` - Schema deployment and table management
 * - `messages.ts` - Message CRUD operations (publish, fetch, ack, delete)
 * - `stats.ts` - Statistics and query operations
 * - `scout.ts` - Scout role coordination for polling visible messages
 * - `notifications.ts` - LISTEN/NOTIFY notification system with static state management
 * - `lifecycle.ts` - Stream and consumer group lifecycle operations
 *
 * ## Lifecycle
 *
 * ### Initialization (`init`)
 * 1. Deploy PostgreSQL schema (tables, indexes, triggers, functions)
 * 2. Create ScoutManager for coordinating visibility timeout polling
 * 3. Create NotificationManager for LISTEN/NOTIFY event handling
 * 4. Set up notification handler (once per client, shared across instances)
 * 5. Start fallback poller (backup for missed notifications)
 * 6. Start router scout poller (for visibility timeout processing)
 *
 * ### Shutdown (`cleanup`)
 * 1. Stop router scout polling loop
 * 2. Release scout role if held
 * 3. Stop notification consumers for this instance
 * 4. UNLISTEN from channels when the last instance disconnects
 * 5. Clean up fallback poller when the last instance disconnects
 * 6. Remove notification handlers when the last instance disconnects
 *
 * ## Notification System (LISTEN/NOTIFY)
 *
 * ### Real-time Message Delivery
 * - A PostgreSQL trigger on INSERT sends NOTIFY when messages are immediately visible
 * - Messages with a visibility timeout are NOT notified on INSERT
 * - Multiple service instances share the same client and notification handlers
 * - Static state ensures only ONE LISTEN per channel across all instances
 *
 * ### Components
 * - **Notification Handler**: Listens for PostgreSQL NOTIFY events
 * - **Fallback Poller**: Polls every 30s (default) for missed messages
 * - **Router Scout**: Active role-holder polls visible messages frequently (~100ms)
 * - **Visibility Function**: `notify_visible_messages()` checks for expired timeouts
 *
 * ## Scout Role (Visibility Timeout Processing)
 *
 * When messages are published with visibility timeouts (delays), they need to be
 * processed when they become visible. The scout role ensures this happens efficiently:
 *
 * 1. **Role Acquisition**: One instance per app acquires the "router" scout role
 * 2. **Fast Polling**: The scout polls `notify_visible_messages()` every ~100ms
 * 3. **Notification**: The function triggers NOTIFY for streams with visible messages
 * 4. **Role Rotation**: The role expires after an interval; another instance can claim it
 * 5. **Fallback**: Non-scouts sleep longer and periodically try to acquire the role
 *
 * ## Message Flow
 *
 * ### Publishing
 * 1. Messages are inserted into a partitioned table
 * 2. If immediately visible → the INSERT trigger sends NOTIFY
 * 3. If a visibility timeout is set → no NOTIFY (the scout handles it when visible)
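 *
 * A minimal publishing sketch against the `publishMessages` signature declared
 * below. The payload shape is illustrative, and delayed delivery is configured
 * through `PublishMessageConfig`, whose fields are not shown in this file:
 *
 * ```typescript
 * // Immediately visible publish: the INSERT trigger fires NOTIFY,
 * // so listening consumers receive the message in real time
 * const ids = await service.publishMessages('orders', [
 *   JSON.stringify({ topic: 'order.created', orderId: 123 }),
 * ]);
 * // With no transaction in play, the resolved value is the message ids
 * ```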
 *
 * ### Consuming (Event-Driven)
 * 1. Consumer calls `consumeMessages` with a notification callback
 * 2. Service executes LISTEN on channel `stream_{name}_{group}`
 * 3. On NOTIFY → fetch messages → invoke the callback
 * 4. An initial fetch is done immediately (to catch any queued messages)
 *
 * ### Consuming (Polling)
 * 1. Consumer calls `consumeMessages` without a callback
 * 2. Service directly queries and reserves messages
 * 3. Messages are returned synchronously
 *
 * ## Reliability Guarantees
 *
 * - **Notification Fallback**: A poller catches missed notifications every 30s
 * - **Visibility Scout**: Ensures delayed messages are processed when visible
 * - **Graceful Degradation**: Falls back to polling if LISTEN fails
 * - **Shared State**: Multiple instances coordinate via static maps
 * - **Race Condition Safe**: SKIP LOCKED prevents message duplication
 *
 * @example
 * ```typescript
 * // Initialize service
 * const service = new PostgresStreamService(client, storeClient, config);
 * await service.init('namespace', 'appId', logger);
 *
 * // Event-driven consumption (recommended)
 * await service.consumeMessages('stream', 'group', 'consumer', {
 *   notificationCallback: (messages) => {
 *     // Process messages in real-time
 *   },
 * });
 *
 * // Polling consumption
 * const messages = await service.consumeMessages('stream', 'group', 'consumer', {
 *   batchSize: 10,
 * });
 *
 * // Cleanup on shutdown
 * await service.cleanup();
 * ```
 */
declare class PostgresStreamService extends StreamService<
  PostgresClientType & ProviderClient,
  any
> {
  namespace: string;
  appId: string;
  logger: ILogger;
  private scoutManager;
  private notificationManager;
  constructor(
    streamClient: PostgresClientType & ProviderClient,
    storeClient: ProviderClient,
    config?: StreamConfig,
  );
  init(namespace: string, appId: string, logger: ILogger): Promise<void>;
  private isNotificationsEnabled;
  private checkForMissedMessages;
  private fetchAndDeliverMessages;
  private getConsumerKey;
  mintKey(type: KeyType, params: KeyStoreParams): string;
  transact(): ProviderTransaction;
  getTableName(): string;
  safeName(appId: string): string;
  createStream(streamName: string): Promise<boolean>;
  deleteStream(streamName: string): Promise<boolean>;
  createConsumerGroup(streamName: string, groupName: string): Promise<boolean>;
  deleteConsumerGroup(streamName: string, groupName: string): Promise<boolean>;
  /**
   * `publishMessages` can be roped into a transaction by the `store`
   * service. If so, it adds the SQL and params to the transaction.
   *
   * [Process Overview]: The engine keeps a reference to the `store` and
   * `stream` providers; it asks the `store` to create a transaction and
   * then starts adding store commands to the transaction. The engine then
   * calls the router to publish a message using the `stream` provider
   * (which the router keeps a reference to) and provides the transaction
   * object. The `stream` provider then calls this method to generate the
   * SQL and params for the transaction (the SQL is, of course, not
   * executed until the engine calls the `exec` method on the transaction
   * object provided by `store`).
   *
   * NOTE: this strategy keeps `stream` and `store` operations separate but
   * allows calls to the stream to be roped into a single SQL transaction.
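   *
   * @example
   * A sketch of the two return shapes: the ids when standalone, or the
   * `ProviderTransaction` when roped in. How the transaction is threaded
   * through `PublishMessageConfig` is not shown in this declaration, so
   * the `transaction` option name below is an assumption:
   * ```typescript
   * const payload = JSON.stringify({ topic: 'order.created', id: 123 });
   *
   * // Standalone publish: resolves to the new message ids
   * const ids = await service.publishMessages('stream', [payload]);
   *
   * // Transactional publish: the transaction is created up front, this
   * // method contributes its SQL, and nothing executes until exec()
   * const tx = service.transact();
   * const pending = await service.publishMessages('stream', [payload], {
   *   transaction: tx, // assumed PublishMessageConfig field name
   * });
   * await tx.exec(); // stream and store SQL execute atomically here
   * ```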
   */
  publishMessages(
    streamName: string,
    messages: string[],
    options?: PublishMessageConfig,
  ): Promise<string[] | ProviderTransaction>;
  _publishMessages(
    streamName: string,
    messages: string[],
    options?: PublishMessageConfig,
  ): {
    sql: string;
    params: any[];
  };
  consumeMessages(
    streamName: string,
    groupName: string,
    consumerName: string,
    options?: {
      batchSize?: number;
      blockTimeout?: number;
      autoAck?: boolean;
      reservationTimeout?: number;
      enableBackoff?: boolean;
      initialBackoff?: number;
      maxBackoff?: number;
      maxRetries?: number;
      enableNotifications?: boolean;
      notificationCallback?: (messages: StreamMessage[]) => void;
    },
  ): Promise<StreamMessage[]>;
  private shouldUseNotifications;
  private setupNotificationConsumer;
  stopNotificationConsumer(streamName: string, groupName: string): Promise<void>;
  private fetchMessages;
  ackAndDelete(
    streamName: string,
    groupName: string,
    messageIds: string[],
  ): Promise<number>;
  acknowledgeMessages(
    streamName: string,
    groupName: string,
    messageIds: string[],
    options?: StringAnyType,
  ): Promise<number>;
  deleteMessages(
    streamName: string,
    groupName: string,
    messageIds: string[],
    options?: StringAnyType,
  ): Promise<number>;
  retryMessages(
    streamName: string,
    groupName: string,
    options?: {
      consumerName?: string;
      minIdleTime?: number;
      messageIds?: string[];
      delay?: number;
      maxRetries?: number;
      limit?: number;
    },
  ): Promise<StreamMessage[]>;
  getStreamStats(streamName: string): Promise<StreamStats>;
  getStreamDepth(streamName: string): Promise<number>;
  getStreamDepths(
    streamNames: { stream: string }[],
  ): Promise<{ stream: string; depth: number }[]>;
  trimStream(
    streamName: string,
    options: {
      maxLen?: number;
      maxAge?: number;
      exactLimit?: boolean;
    },
  ): Promise<number>;
  getProviderSpecificFeatures(): {
    supportsBatching: boolean;
    supportsDeadLetterQueue: boolean;
    supportsOrdering: boolean;
    supportsTrimming: boolean;
    supportsRetry: boolean;
    supportsNotifications: boolean;
    maxMessageSize: number;
    maxBatchSize: number;
  };
  cleanup(): Promise<void>;
}
export { PostgresStreamService };
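/**
 * Maintenance sketch using the `retryMessages` and `trimStream` declarations
 * above. The numeric values are illustrative, and the time units for
 * `minIdleTime` and `maxAge` are assumed to be milliseconds (the declaration
 * does not specify them):
 *
 * ```typescript
 * // Re-deliver messages that were reserved but have sat idle for over a
 * // minute, capping redelivery attempts and batch size
 * const retried = await service.retryMessages('stream', 'group', {
 *   minIdleTime: 60_000, // assumed ms
 *   maxRetries: 3,
 *   limit: 100,
 * });
 *
 * // Bound table growth: keep at most 10k messages and drop entries older
 * // than 24 hours; resolves to the number of messages removed
 * const trimmed = await service.trimStream('stream', {
 *   maxLen: 10_000,
 *   maxAge: 24 * 60 * 60 * 1000, // assumed ms
 * });
 * ```
 */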