# Loopman n8n Community Node
[Loopman](https://loopman.ai) is a human-in-the-loop automation platform that allows you to seamlessly integrate human decision-making into your automated workflows.
This community node for n8n enables you to send messages to humans and wait for their responses within your n8n workflows.
## 🚀 Quick Start
### Installation
```bash
npm install n8n-nodes-loopman
```
### Configuration
1. Get your API key from [loopman.ai](https://loopman.ai)
2. In n8n, add a **Loopman** credential with your API key
3. Add the **Loopman** node to your workflow
### Usage
The node supports two modes for sending messages to Loopman:
#### Message Mode
- **Message**: The text to send to humans
- **Organization ID**: Optional organization override
- **Metadata**: Optional JSON metadata
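The **Metadata** field accepts an arbitrary JSON object that is passed along with the message. A minimal sketch of a value you might supply (the keys below are illustrative, not a required schema):
```javascript
// Example value for the Metadata field.
// Key names are illustrative only; Loopman does not require a specific schema here.
{
  "source": "n8n",
  "workflowName": "invoice-approval",
  "priority": "high"
}
```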
#### Auto (MCP) Mode
- **Automatic Processing**: Uses input data as message payload
- **MCP Integration**: Requires MCP Client node for advanced AI-powered processing
- **Enhanced Features**: Automatic data analysis, summarization, and insights extraction
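In Auto mode the node takes whatever JSON the previous node produced and uses it as the message payload. As a purely illustrative example (the fields inside `json` are invented for this sketch), an upstream node might hand over an item like the following, which would then be forwarded for human review:
```javascript
// Illustrative n8n input item that Auto (MCP) mode would use as the message payload.
// The fields inside "json" are invented for this example; any JSON produced upstream works.
[
  {
    "json": {
      "ticketId": "T-1042",
      "subject": "Refund request above the automatic approval threshold",
      "suggestedAction": "escalate_to_human"
    }
  }
]
```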
For detailed MCP setup instructions, see [MCP Client Setup Guide](docs/MCP_CLIENT_SETUP.md).
## 🤖 LLM Configuration (Recommended)
When using the Loopman node with AI models (OpenAI, Claude, etc.), we recommend specific configurations to ensure optimal performance and consistent output formatting.
### Temperature Settings
**Recommended: `0.1`** for consistent, deterministic behavior
- **`0.0`** - Maximum determinism (may be too rigid)
- **`0.1`** - Very deterministic, excellent for structured tasks ✅ **RECOMMENDED**
- **`0.3`** - Good balance for most use cases
- **`0.7`** - Default value (may cause formatting issues)
### Complete LLM Configuration
```javascript
// OpenAI Chat Model Configuration
{
"model": "gpt-4o-mini",
"temperature": 0.1, // Very deterministic
"topP": 0.9, // Good balance
"maxTokens": 2000, // Prevent truncated responses
"timeout": 30000, // Appropriate timeout
"frequencyPenalty": 0.0, // No frequency penalty
"presencePenalty": 0.0 // No presence penalty
}
```
### Why These Settings?
- **`temperature: 0.1`** - Ensures consistent JSON formatting and reduces parsing errors
- **`maxTokens: 2000`** - Prevents truncated responses that can cause workflow failures
- **`topP: 0.9`** - Maintains quality while staying deterministic
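As a quick generic illustration (plain JavaScript, not Loopman or n8n code) of the truncation problem: a response cut off by a low token limit is no longer valid JSON, so any downstream parsing step fails and the workflow errors out.
```javascript
// Generic illustration: a model response truncated by a low maxTokens limit is not valid JSON.
const truncated = '{"decision": "approve", "reason": "Invoice matches the PO and is under the appr';

try {
  JSON.parse(truncated);
} catch (err) {
  // Throws a SyntaxError because the string (and the object) were never closed.
  console.error("Model output could not be parsed:", err.message);
}
```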
### Common Issues Fixed
- ✅ **Prevents duplicate "Action Input" errors** in ReAct parsing
- ✅ **Ensures consistent JSON formatting** for MCP tools
- ✅ **Reduces workflow failures** from truncated responses
- ✅ **Improves reliability** of human-in-the-loop decisions
## 📦 Installation Methods
### Local n8n Instance
**Option 1: Via npm (Recommended)**
```bash
# In your n8n instance directory
npm install n8n-nodes-loopman
# Or install a specific version
npm install n8n-nodes-loopman@0.1.17
```
**Option 2: Development with npm link**
```bash
# In your loopman-n8n-connector project
npm link
# In your n8n instance directory
npm link n8n-nodes-loopman
```
### Docker Installation
Add to your `docker-compose.yml`:
```yaml
version: "3.8"
services:
n8n:
image: n8nio/n8n
ports:
- "5678:5678"
environment:
- N8N_BASIC_AUTH_ACTIVE=true
- N8N_BASIC_AUTH_USER=admin
- N8N_BASIC_AUTH_PASSWORD=password
volumes:
- n8n_data:/home/node/.n8n
command: >
sh -c "
npm install n8n-nodes-loopman &&
n8n start
"
volumes:
n8n_data:
```
### n8n Cloud
- ❌ **Not yet available** - Currently under review by the n8n team
- ⏳ **Coming soon** - Will be available in the n8n marketplace once approved
## 🔧 After Installation
1. **Restart n8n**:
```bash
# Local installation
n8n restart
# Docker
docker-compose restart n8n
```
2. **Verify installation**:
- Open n8n in your browser
- Go to **Nodes** → **Community Nodes**
- Search for **"Loopman"**
- The node should appear in the list
3. **Use the connector**:
- Create a new workflow
- Add a node
- Search for **"Loopman"**
- Configure with your credentials
For detailed installation instructions, see [Installation Guide](docs/INSTALLATION.md).
## 🛠️ Development
### Quick Start
```bash
# Setup (first time only)
npm run setup
# Start development environment (isolated n8n instance)
npm run start:dev
# Stop all n8n processes
npm run stop
# Clean development data (if needed)
npm run clean:dev
```
This launches:
- TypeScript compiler in watch mode
- **Isolated n8n instance** (no conflicts with global n8n)
- Node.js debugger on port 9230 with TypeScript source maps
- Fresh database (no migration issues)
### Debug TypeScript Directly
1. Start development: `npm run start:dev`
2. In VS Code: **Run and Debug** → **Debug n8n Connector (TypeScript)**
3. Set breakpoints directly in your `.ts` files
4. Debug with full TypeScript support!
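If you need to recreate that launch configuration yourself, a minimal sketch of a VS Code attach configuration matching the setup above (debugger on port 9230 with source maps) could look like this; the configuration name and output paths are assumptions, so adjust them to your checkout:
```javascript
// .vscode/launch.json (sketch) - attaches to the n8n process started by `npm run start:dev`.
// The name and outFiles path are assumptions; the debug port (9230) comes from the dev setup above.
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug n8n Connector (TypeScript)",
      "type": "node",
      "request": "attach",
      "port": 9230,
      "sourceMaps": true,
      "outFiles": ["${workspaceFolder}/dist/**/*.js"]
    }
  ]
}
```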
### Manual Development
```bash
# Install and build
npm install
npm run build
npm link
# Watch TypeScript changes
npm run dev
# In another terminal, start n8n
N8N_CUSTOM_EXTENSIONS=$(pwd)/dist n8n start
```
## ✨ Features
- ✅ **Human-in-the-loop automation** - Send messages and wait for responses
- ✅ **Two operation modes** - Simple message mode and advanced MCP mode
- ✅ **TypeScript support** - Full type safety and IntelliSense
- ✅ **Debugging support** - Debug directly in TypeScript with source maps
- ✅ **Docker ready** - Easy deployment with Docker Compose
- ✅ **npm package** - Install via npm for easy distribution
## 🔗 Links
- **Website**: [loopman.ai](https://loopman.ai)
- **Documentation**: [docs.loopman.ai](https://docs.loopman.ai)
- **npm Package**: [n8n-nodes-loopman](https://www.npmjs.com/package/n8n-nodes-loopman)
- **GitHub Repository**: [Loopman-AI/n8n-loopman-connector](https://github.com/Loopman-AI/n8n-loopman-connector)
## 💬 Support
- **Email**: support@loopman.ai
- **Issues**: [GitHub Issues](https://github.com/Loopman-AI/n8n-loopman-connector/issues)
- **Documentation**: [docs.loopman.ai](https://docs.loopman.ai)
## 📄 License
MIT License - see [LICENSE](LICENSE.md) file for details.
---
**Made with ❤️ by the Loopman Team**
## Agent Prompt Template (MCP Mode)
When using the Loopman node in Auto (MCP) mode with an n8n AI Agent, a prompt along the following lines can serve as the agent's instructions:
```
EXECUTION PROTOCOL - FOLLOW THESE STEPS STRICTLY IN THIS EXACT ORDER:
1️⃣ INITIALIZATION:
- FIRST, call the MCP tool exactly as follows (single line JSON):
{"action":"getHumanGuidelines","params":{"workflow_id":"{{ $workflow.id }}"}}
- Do not proceed until these guidelines have been successfully loaded.
────────────────────────────────────────────────────────────
2️⃣ AGENT ACTION:
[Core of your Agent Prompt: email classification, reasoning, JSON output, etc.]
Apply the retrieved guidelines and decision rules to ensure your proposed output fully complies with policy and workflow expectations.
────────────────────────────────────────────────────────────
3️⃣ SUBMISSION 🚨 MANDATORY FINAL STEP 🚨:
⚠️ YOU MUST CALL submitForHumanReview TO COMPLETE YOUR TASK ⚠️
- After completing your reasoning and generating your final result, you MUST call the MCP tool `submitForHumanReview`
- This is NOT optional - it is a REQUIRED step for EVERY workflow
- You CANNOT provide a final answer without calling submitForHumanReview first
- Do NOT skip this step under any circumstances
- The tool description will show all required and optional fields
- Structure your submission as if writing an email to your supervisor for decision approval
- The reviewer may not have full context, so provide clear explanations and supporting evidence
⚠️ CRITICAL RULES:
- The order must always be: `getHumanGuidelines` → reasoning → `submitForHumanReview` (NO EXCEPTIONS)
- Use workflow Id = {{ $workflow.id }} and execution Id = {{ $execution.id }} (automatically added by the system)
- Only valid JSON (single line) will be executed; any text, markdown, or explanation outside the JSON will cause an error
- Optional fields must be omitted, not left empty
- Always put the entire JSON on one line. Never use line breaks in Action Input
- If you provide a final answer without calling submitForHumanReview first, you have FAILED the task
```
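For illustration only, a single-line Action Input for the final step might look like the sketch below. The actual required and optional fields are defined by the `submitForHumanReview` tool description at runtime, so treat these field names as assumptions:
```javascript
// Hypothetical single-line call; field names are assumptions, the tool description is authoritative.
{"action":"submitForHumanReview","params":{"workflow_id":"{{ $workflow.id }}","execution_id":"{{ $execution.id }}","summary":"Classified email as a refund request; recommend escalation","proposed_output":{"category":"refund","confidence":0.92}}}
```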