# Development Guidelines for Gaunt Sloth Assistant
Gaunt Sloth Assistant is a Node.js command-line utility that helps developers access LLMs from the terminal.
## Technologies Used
- Node.js 22 (LTS)
- Vitest 3 for tests
- TypeScript 5
- LangChain and LangGraph 0.3
Please refer to package.json for exact versions.
## Core Development Principles
Prefer vendor and system abstractions and wrappers over direct access in most cases; the sections below describe the main ones.
### Imports
The project uses the import alias `#src/*.js`, which points to `src/` during development and resolves to the generated `dist/` after build.
Avoid relative imports; use them only when no other choice is available
(currently the only exception is the entry point cli.js).
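For example (the module name here is hypothetical):
```typescript
// Good: alias import; resolves to src/ in development and to dist/ after build
import { doSomething } from '#src/utils/someUtil.js';

// Avoid: relative import
// import { doSomething } from '../utils/someUtil.js';
```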
### Architecture and Flow
- Ensure proper separation of LangChain components (LLMs, chains, agents, tools)
- Check for clear data flow between components
- Ensure proper state management in LangGraph workflows
- Validate error handling and fallback mechanisms
### Security
- Make sure API keys are handled via environment variables
- Make sure no personal data is present in code
- **Make sure that API keys are NOT accidentally included in diffs.**
- Check for proper input sanitization
- Verify output validation and sanitization
### Output
Use [consoleUtils.ts](src/consoleUtils.ts) for all user-facing output.
Do not use console.log directly.
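For example (these function names match the mocks in the test example below; check consoleUtils.ts for the actual exports):
```typescript
import { displayInfo, displayError } from '#src/consoleUtils.js';

// Good: user-facing output goes through consoleUtils
displayInfo('Reading configuration...');
displayError('Failed to read configuration');

// Avoid:
// console.log('Reading configuration...');
```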
### System
Use [systemUtils.ts](src/systemUtils.ts) to access system variables and functions such as
process.env, process.stdout, etc.
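A minimal sketch (the helper names below are illustrative assumptions; check systemUtils.ts for the actual exports):
```typescript
import { getEnv, getStdout } from '#src/systemUtils.js'; // illustrative names

// Good: system access goes through the wrapper
const apiKey = getEnv('MY_API_KEY'); // illustrative variable name
getStdout().write('done\n');

// Avoid:
// process.env.MY_API_KEY;
// process.stdout.write('done\n');
```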
### LLM
Use [llmUtils.ts](src/llmUtils.ts) to access the LLM.
### Middleware
Starting with v1.0.0, Gaunt Sloth uses the LangChain middleware pattern instead of hooks.
Middleware provides hooks to intercept and control agent execution at critical points:
- `beforeModel`: Called before model invocation
- `afterModel`: Called after model response
- `beforeAgent`: Called before agent initialization
- `afterAgent`: Called after agent completion
- `wrapModelCall`: Wrap model calls with full control
- `wrapToolCall`: Wrap tool calls with full control
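For JS configs, a custom middleware can be supplied as a plain object implementing some of these hooks. A minimal sketch (the hook signatures and state type are assumptions; see src/middleware/types.ts for the actual contract):
```typescript
import { displayDebug } from '#src/consoleUtils.js';

// Sketch of a custom middleware logging around model invocations
export const loggingMiddleware = {
  beforeModel: async (state: unknown) => {
    displayDebug('Model invocation starting');
    return state;
  },
  afterModel: async (state: unknown) => {
    displayDebug('Model invocation finished');
    return state;
  },
};
```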
**Predefined Middleware:**
- `anthropic-prompt-caching`: Reduces API costs by caching prompts (Anthropic only)
- `summarization`: Condenses conversation history when approaching token limits
**Configuration:**
- Middleware is configured in the `middleware` array in config
- JSON configs support predefined middleware (string or config object)
- JS configs support both predefined and custom middleware objects
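For example, a JS config might wire middleware like this (only the `middleware` array itself is documented above; the summarization option name is illustrative):
```javascript
export default {
  middleware: [
    // Predefined middleware referenced by name
    'anthropic-prompt-caching',
    // Predefined middleware with a config object (option name is illustrative)
    { name: 'summarization', maxTokensBeforeSummary: 4000 },
  ],
};
```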
**Implementation:**
- Middleware registry is in [src/middleware/registry.ts](src/middleware/registry.ts)
- Middleware types are in [src/middleware/types.ts](src/middleware/types.ts)
- Provider-specific middleware can be auto-injected via `postProcessJsonConfig()` in preset files
## Testing (Important)
Tests are located in `spec/`. Integration tests are located in `integration-tests/`.
- In spec files, never import the mocked modules directly; mock them and let the file under test import them.
- Always import the tested file dynamically within the test.
- Mocks are hoisted, so it is best to simply place them at the top of the file to avoid confusion.
- Make sure beforeEach is always present and always calls vi.resetAllMocks() first.
- Create mock functions with vi.fn() without implementations, then register them with vi.mock outside
of the describe.
- Apply mock implementations and return values to mocks within individual tests.
- When mock implementations are common to all test cases, apply them in beforeEach.
- Make sure the test actually tests the function, rather than simply testing the mock.
Example test:
```typescript
import {beforeEach, describe, expect, it, vi} from 'vitest';
const consoleUtilsMock = {
display: vi.fn(),
displayError: vi.fn(),
displayInfo: vi.fn(),
displayWarning: vi.fn(),
displaySuccess: vi.fn(),
displayDebug: vi.fn(),
};
vi.mock('#src/utils/consoleUtils.js', () => consoleUtilsMock);
const fsUtilsMock = {
existsSync: vi.fn(),
readFileSync: vi.fn(),
writeFileSync: vi.fn(),
};
vi.mock('node:fs', () => fsUtilsMock);
describe('specialUtil', () => {
beforeEach(() => {
vi.resetAllMocks(); // Always reset all mocks in beforeEach
// Set up default mock values
fsUtilsMock.existsSync.mockImplementation(() => true);
});
it('specialFunction should eventually write test contents to a file', async () => {
fsUtilsMock.readFileSync.mockImplementation((path: string) => {
if (path.includes('inputFile.txt')) return 'TEST CONTENT';
return '';
});
const {specialFunction} = await import('#src/specialUtil.js'); // Always import tested file within the test
// Function under test
specialFunction();
expect(fsUtilsMock.writeFileSync).toHaveBeenCalledWith('outputFile.txt', 'TEST CONTENT\nEXTRA CONTENT');
expect(consoleUtilsMock.displayDebug).not.toHaveBeenCalled();
expect(consoleUtilsMock.displayWarning).not.toHaveBeenCalled();
expect(consoleUtilsMock.display).not.toHaveBeenCalled();
expect(consoleUtilsMock.displayError).not.toHaveBeenCalled();
expect(consoleUtilsMock.displayInfo).not.toHaveBeenCalled();
expect(consoleUtilsMock.displaySuccess).toHaveBeenCalledWith('Successfully transferred to outputFile.txt');
});
});
```
When mocking class constructors, follow this pattern:
```javascript
// With export default
const gthFileSystemToolkitGetToolsMock = vi.fn();
vi.mock('#src/tools/GthFileSystemToolkit.js', () => {
const GthFileSystemToolkit = vi.fn();
GthFileSystemToolkit.prototype.getTools = gthFileSystemToolkitGetToolsMock;
return {
default: GthFileSystemToolkit,
};
});
// With named exports
const otherToolkitMock = vi.fn();
vi.mock('#src/tools/OtherToolkit.js', () => {
const OtherToolkit = vi.fn();
OtherToolkit.prototype.getTools = otherToolkitMock;
return {
OtherToolkit,
};
});
```
### Configs in tests
When you create incomplete configs, please type them as Partial,
for example `Partial<RawGthConfig>`,
and cast them to `RawGthConfig` when passing them to a function
that expects the full interface. An even better option is to provide all properties.
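A sketch (the import path, config field, and function are illustrative):
```typescript
import type { RawGthConfig } from '#src/config.js'; // illustrative import path

declare function initConfig(config: RawGthConfig): void; // stand-in for the real function

// Only the fields the test needs, typed as Partial
const config: Partial<RawGthConfig> = {
  llm: { type: 'vertexai' }, // illustrative field
};

// Cast at the call site that expects the full interface
initConfig(config as RawGthConfig);
```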
## Development Workflow
Please follow this workflow:
- Analyze requirements.
- Develop changes.
- Make sure all tests pass (`npm run test`) and fix failures if possible.
- Request relevant documentation if some of the test failures are unclear.
- Once all tests are green, check lint with `npm run lint`.
- If any lint failures are present, try fixing them with `npm run lint-n-fix`.
- If autofix didn't help, try fixing them yourself.
- Prefer testing all user outputs, including testing the absence of unexpected outputs.
---