---
title: AI Token Crypto Analysis Framework
dimension: knowledge
category: ai-token-crypto-analysis-framework.md
tags: ai, artificial-intelligence
related_dimensions: connections, things
scope: global
created: 2025-11-03
updated: 2025-11-03
version: 1.0.0
ai_context: |
  This document is part of the knowledge dimension in the ai-token-crypto-analysis-framework.md category.
  Location: one/knowledge/ai-token-crypto-analysis-framework.md
  Purpose: Documents the AI token crypto analysis framework.
  Related dimensions: connections, things
  For AI agents: read this to understand the AI token crypto analysis framework.
---
# AI Token Crypto Analysis Framework
**A Practical Guide for Evaluating AI-Related Cryptocurrencies**
## What This Guide Is For
This document helps you evaluate AI-related cryptocurrencies and decide whether they're worth your investment. Unlike traditional cryptocurrencies that rely primarily on speculation, AI tokens should provide **real utility**—actual AI services like compute, model access, or data—making them more complex to analyze but potentially more valuable when done right.
**Version**: 1.0.0
**Last Updated**: 2025-11-03
**Audience**: Investors, researchers, analysts, crypto enthusiasts
## Table of Contents
1. [Understanding AI Tokens](#understanding-ai-tokens)
2. [Why AI Tokens Are Different](#why-ai-tokens-are-different)
3. [The AI Token Landscape](#the-ai-token-landscape)
4. [The Essential Question Framework](#the-essential-question-framework)
5. [Step-by-Step Analysis Process](#step-by-step-analysis-process)
6. [Red Flags & Warning Signs](#red-flags--warning-signs)
7. [Green Flags & Positive Indicators](#green-flags--positive-indicators)
8. [Real-World Examples](#real-world-examples)
9. [Making the Investment Decision](#making-the-investment-decision)
10. [Common Mistakes to Avoid](#common-mistakes-to-avoid)
## Understanding AI Tokens
### What Are AI Tokens?
AI tokens are cryptocurrencies that power decentralized AI infrastructure. Instead of centralized companies like OpenAI or Google controlling AI services, AI tokens enable:
- **Decentralized compute networks**: Rent GPU power for AI training and inference
- **Agent-to-agent economies**: AI agents transacting autonomously
- **Model marketplaces**: Buy, sell, and license AI models
- **Data exchanges**: Trade training data and datasets
- **AI infrastructure**: Storage, oracles, and other AI-enabling services
### Why Do They Exist?
**The Problem**: Centralized AI is expensive, censored, and controlled by a few big tech companies.
**The Solution**: AI tokens create open, permissionless markets for AI services where:
- Anyone can provide compute, data, or models
- Prices are set by market competition, not monopolies
- No single company can censor or control access
- Revenue flows directly to providers, not middlemen
### Key Concept: Utility-Driven Value
Unlike Bitcoin (digital gold) or most altcoins (speculation), AI tokens should derive value from **actual usage**:
```
Traditional Token Value = Hype + Speculation + Network Effects
AI Token Value = Actual AI Usage + Performance + Competitive Advantage
```
If an AI token isn't being used for real AI work, it's just another speculative token with fancy marketing.
## Why AI Tokens Are Different
### Traditional Crypto vs AI Tokens
| Aspect | Traditional Crypto | AI Tokens |
|--------|-------------------|-----------|
| **Value Driver** | Market sentiment, adoption | Real AI utility + market sentiment |
| **Key Metrics** | Price, volume, holders | Compute utilization, inference volume, agent activity |
| **Competition** | Other cryptos | Centralized AI (OpenAI, Google, Anthropic) |
| **Success Measure** | Price appreciation | Competitive pricing + growing usage |
| **Verification** | On-chain transactions | Proof of AI work (zkML, TEE, sampling) |
| **Token Necessity** | Usually essential | May be optional (could use stablecoins) |
### The Critical Difference: Verifiable Utility
Anyone can claim their token powers "decentralized AI." You need to verify:
1. **Is real AI compute happening?** Not just blockchain transactions, but actual model training or inference
2. **Are people paying for it?** Revenue from actual usage, not just token trading
3. **Is it competitive?** Cheaper, faster, or more accessible than centralized alternatives
4. **Is the token necessary?** Would the system work without it?
## The AI Token Landscape
AI tokens are more diverse than traditional crypto. They can represent:
- **Infrastructure**: Decentralized compute, storage, networks
- **Individual Agents**: Specific AI agents with their own tokens
- **Creators & Influencers**: AI personalities, content creators, influencers
- **Data Economies**: Personal data, synthetic data, validation services
- **Intelligence Markets**: Access to specialized or general intelligence
### The Nine Types of AI Tokens
**Infrastructure (Decentralized Services)**:
1. **Compute Tokens**: GPU/TPU networks for training and inference
2. **Agent Economy Tokens**: Platforms for agent-to-agent coordination
3. **Model Access Tokens**: AI model marketplaces and licensing
4. **Data Marketplace Tokens**: Traditional data trading platforms
5. **AI Infrastructure Tokens**: Storage, oracles, and enabling services
**Individual Entities (Specific AI Actors)**:
6. **Individual AI Agent Tokens**: Specific AI traders, researchers, or specialists
7. **AI Influencer/Creator Tokens**: AI personalities and content creators
**Data Economies (Beyond Marketplaces)**:
8. **Personal & Synthetic Data Tokens**: Data ownership, generation, validation, provenance
**Frontier Intelligence**:
9. **Ultra-Intelligence & AGI Tokens**: Advanced reasoning and superintelligence systems
### 1. Compute Tokens (GPU/TPU Networks)
**What they do**: Rent out GPU power for AI training and inference
**Examples**: Akash Network, Render Network, io.net
**Key Questions**:
- How many GPUs are in the network?
- What's the network utilization rate? (should be >40% for a healthy network)
- How does pricing compare to AWS, GCP, or RunPod?
- Can you verify compute is actually happening?
**Good Indicators**:
- High utilization (>60%)
- Pricing 30-50% cheaper than centralized alternatives
- Verifiable proof of compute
- Diverse node operators (not concentrated)
- Growing unique users
**Red Flags**:
- Low utilization (<20%) = network isn't being used
- More expensive than AWS
- No proof of compute mechanism
- Single entity controls most nodes
- Only speculative trading, no real compute usage
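To make the utilization checks concrete, here is a minimal sketch in TypeScript (the thresholds are the ones above; the interface and field names are illustrative, not any real network's API):
```typescript
// Sketch: classify a compute network's health from its public stats.
// Thresholds mirror the indicators above; the interface is hypothetical.
interface ComputeStats {
  rentedGpuHours: number;    // GPU-hours actually rented this period
  availableGpuHours: number; // total GPU-hours offered this period
}

function utilizationRate(stats: ComputeStats): number {
  return stats.rentedGpuHours / stats.availableGpuHours;
}

function classifyUtilization(rate: number): string {
  if (rate > 0.6) return "healthy (good indicator)";
  if (rate > 0.4) return "acceptable";
  if (rate >= 0.2) return "weak";
  return "red flag: network isn't being used";
}

classifyUtilization(utilizationRate({ rentedGpuHours: 680, availableGpuHours: 1000 }));
// => "healthy (good indicator)"
```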
### 2. Agent Economy Tokens (Autonomous AI Agents)
**What they do**: Enable AI agents to transact and coordinate autonomously
**Examples**: Fetch.ai, SingularityNET, Ocean Protocol
**Key Questions**:
- How many active agents are actually running?
- What's the agent-to-agent transaction ratio? (A2A vs human-initiated)
- What tasks can agents actually complete?
- Can agents operate profitably (earn more than they spend)?
**Good Indicators**:
- A2A transactions >50% (truly autonomous)
- High task success rate (>80%)
- Diverse agent capabilities
- Positive agent economics (profitable)
- Growing agent count independent of price
**Red Flags**:
- A2A ratio <20% (mostly human-driven)
- Low task success rate (<60%)
- No actual autonomous agents running
- Agents operating at a loss
- "Agent" is just marketing, not real AI
### 3. Model Access Tokens (AI Model Marketplaces)
**What they do**: Provide access to AI models or model training services
**Examples**: Bittensor, Gensyn, Prime Intellect
**Key Questions**:
- What models are available?
- How do model performance benchmarks compare to open-source alternatives?
- How many inference requests are run daily?
- Can users fine-tune models?
**Good Indicators**:
- Competitive benchmark scores (vs Llama, Mistral, etc.)
- High daily inference volume
- Growing fine-tuning requests
- Models not available elsewhere
- Reproducible benchmark results
**Red Flags**:
- Unverified or fabricated benchmarks
- Zero actual usage (no inference requests)
- Worse performance than free open-source models
- Models trained on stolen/unlicensed data
- No way to verify model quality
### 4. Data Marketplace Tokens (AI Training Data)
**What they do**: Facilitate buying/selling AI training data
**Examples**: Ocean Protocol, Streamr, The Graph
**Key Questions**:
- What datasets are available?
- Are datasets actually being used for training?
- Is data quality verified?
- Is data collection privacy-compliant (GDPR, CCPA)?
**Good Indicators**:
- High-quality, verified datasets
- Active training runs using the data
- Strong privacy compliance
- Fair revenue sharing with data contributors
- Growing dataset contributions
**Red Flags**:
- Unverified or poor-quality data
- No actual training runs (no demand)
- Privacy violations
- Biased or unrepresentative datasets
- Synthetic data sold as real data
- Unclear data ownership
### 5. AI Infrastructure Tokens (Enabling Services)
**What they do**: Provide storage, oracles, or other infrastructure for AI
**Examples**: Filecoin (storage), Arweave (permanent storage), Chainlink (oracles)
**Key Questions**:
- What specific AI use case does this enable?
- Is it being used by actual AI projects?
- How does it compare to centralized alternatives?
- Is the token necessary for the service?
**Good Indicators**:
- Clear AI use case (not just general blockchain)
- Integration with real AI projects
- Competitive pricing and performance
- Growing AI-specific usage
- Token essential to protocol operation
**Red Flags**:
- Vague "AI" claims with no specific use case
- No actual AI integrations
- More expensive or slower than Web2 alternatives
- Token is just a payment rail (could use anything)
- AI is just marketing, not core function
### 6. Individual AI Agent Tokens (Specific AI Entities)
**What they do**: Represent specific AI agents (traders, researchers, creators) with their own token economies
**Examples**: AI trading bots with performance tokens, AI research agents, AI content creators
**Key Questions**:
- What is this specific AI agent's track record?
- Is the agent's performance verifiable and auditable?
- Does the agent generate consistent value/revenue?
- How is token value tied to agent performance?
**Good Indicators**:
- Proven track record (months/years of performance data)
- Transparent performance metrics (P&L for traders, accuracy for analysts)
- Agent actually autonomous (not human-operated)
- Token value linked to agent success
- Agent has unique capabilities or advantages
- Revenue sharing with token holders
**Red Flags**:
- No verifiable performance history
- Agent is actually human-operated (fake AI)
- Inflated or fabricated performance claims
- Token has no relationship to agent's output
- Agent can't operate profitably
- No transparency into agent's decision-making
**Unique Considerations**:
- **Performance Risk**: Agent quality can degrade over time
- **Single Point of Failure**: One agent = concentrated risk
- **Accountability**: Who's responsible if agent makes bad decisions?
- **Reproducibility**: Can performance be verified independently?
### 7. AI Influencer/Creator Tokens (AI Personalities)
**What they do**: Represent AI-generated influencers, content creators, or virtual personalities
**Examples**: AI music artists, AI virtual influencers, AI content creators, AI streamers
**Key Questions**:
- How large and engaged is the AI's audience?
- Is the content actually AI-generated or human-assisted?
- What revenue does the AI generate?
- Is the personality/brand sustainable long-term?
**Good Indicators**:
- Large, growing, engaged audience (followers, subscribers)
- Verifiably AI-generated content (not human ghostwritten)
- Diversified revenue streams (sponsorships, content sales, appearances)
- Unique personality or niche
- Strong brand identity
- Token provides utility (exclusive content, governance, revenue share)
**Red Flags**:
- Bot followers or fake engagement
- Content is actually human-created
- No revenue generation (just followers)
- Personality is generic or easily replaceable
- Token provides no real utility
- Controversial content without moderation
**Unique Considerations**:
- **Virality Risk**: Influencer popularity can be fleeting
- **Platform Dependency**: Reliant on Instagram, TikTok, YouTube, etc.
- **Authenticity**: Is "AI influencer" just marketing for human creators?
- **Content Moderation**: AI can generate problematic content
- **Competition**: Easy for others to create similar AI influencers
**Revenue Model**:
```
Token Value = (Audience Size × Engagement Rate × Revenue per Follower)
× Brand Strength × Content Quality
```
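As a rough sketch, that model might be coded like this (the 0-1 scales for the two subjective factors are an assumption made here to keep the units simple, not part of the formula):
```typescript
// Sketch of the revenue model above. Audience, engagement, and revenue
// are measurable; brand strength and content quality are subjective
// 0-1 scores (an illustrative assumption).
function influencerTokenValue(
  audienceSize: number,       // followers/subscribers
  engagementRate: number,     // e.g. 0.04 for 4% engagement
  revenuePerFollower: number, // revenue per follower, in USD
  brandStrength: number,      // subjective, 0-1
  contentQuality: number      // subjective, 0-1
): number {
  return audienceSize * engagementRate * revenuePerFollower
    * brandStrength * contentQuality;
}
```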
### 8. Personal & Synthetic Data Tokens (Data Economy)
**What they do**: Broader data ecosystem beyond just marketplaces—includes personal data ownership, synthetic data generation, data validation
**Sub-Categories**:
**A. Personal Data Ownership Tokens**
- Users own and monetize their personal data
- Examples: Ad viewing rewards, search data sales, health data licensing
- Key: Privacy-preserving (zero-knowledge proofs, federated learning)
**B. Synthetic Data Generation Tokens**
- AI-generated training data
- Examples: Synthetic images, synthetic conversations, synthetic code
- Key: Quality verification, diversity scores, bias detection
**C. Data Validation & Labeling Tokens**
- Human or AI verification of data quality
- Examples: Image labeling, text annotation, data cleaning
- Key: Accuracy scores, consistency checks, dispute resolution
**D. Data Provenance Tokens**
- Track data lineage and usage rights
- Examples: Content attribution, royalty tracking, usage licensing
- Key: Immutable records, automatic royalties, copyright protection
**Key Questions**:
- What type of data economy is this?
- Is data ownership/provenance actually tracked?
- Are privacy regulations (GDPR, CCPA) respected?
- Is synthetic data distinguishable from real?
- How is data quality verified?
**Good Indicators**:
- Strong privacy protections (ZK proofs, encryption)
- Regulatory compliance
- High data quality scores
- Growing data contributors
- Fair revenue sharing
- Diverse use cases for data
**Red Flags**:
- Privacy violations
- Selling personal data without consent
- Synthetic data passed off as real
- No quality verification
- Unclear data ownership
- Exploitative revenue sharing
**Unique Considerations**:
- **Regulatory Risk**: Data privacy laws evolving rapidly
- **Quality Control**: Ensuring data isn't polluted or biased
- **Consent**: Proving data subjects agreed to monetization
- **Value Capture**: How does token capture data value?
### 9. Ultra-Intelligence & AGI Tokens (Frontier AI)
**What they do**: Represent access to cutting-edge, highly capable, or AGI-level AI systems
**Examples**: Advanced reasoning systems, multi-modal superintelligence, specialized expert systems
**Key Questions**:
- What makes this AI "ultra" intelligent?
- How does capability compare to GPT-4, Claude, or similar?
- Is the intelligence level verifiable through benchmarks?
- What problems can this AI solve that others can't?
- Is this actually novel, or repackaged existing AI?
**Good Indicators**:
- Exceeds GPT-4/Claude on multiple benchmarks
- Solves problems other AI can't (novel capabilities)
- Demonstrated in rigorous testing environments
- Academic papers or peer review
- Clear use cases requiring this level of intelligence
- Safe AI practices (alignment, interpretability)
**Red Flags**:
- Vague "super intelligence" claims without proof
- No benchmarks or only cherry-picked metrics
- Worse than existing free models
- "AGI" claims without evidence
- No safety measures or alignment research
- Grandiose claims with no working product
**Unique Considerations**:
- **Capability Verification**: Hard to verify "superintelligence"
- **Safety Risks**: Advanced AI carries existential risks
- **Regulatory Scrutiny**: Frontier AI attracts government attention
- **Alignment**: Is the AI aligned with human values?
- **Control**: Can the AI be controlled or shut down?
- **Monopoly Risk**: Breakthrough AI could dominate market
**Critical Evaluation**:
```
Ultra-Intelligence Score =
  (Benchmark Performance - SOTA) × Use Case Uniqueness × Safety Measures
  ÷ Risk Factor

Where SOTA = State of the Art (e.g., GPT-4, Claude 3.5)
```
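A literal-minded sketch of that score, purely to make its structure explicit (every input is a judgment call, and the 0-1 scales are assumptions for illustration):
```typescript
// Sketch only: the scoring formula above with explicit inputs.
function ultraIntelligenceScore(
  benchmarkDelta: number,    // avg benchmark score minus SOTA's (negative = worse)
  useCaseUniqueness: number, // subjective, 0-1
  safetyMeasures: number,    // subjective, 0-1
  riskFactor: number         // > 0; higher = riskier
): number {
  return (benchmarkDelta * useCaseUniqueness * safetyMeasures) / riskFactor;
}
// Note: a model that trails SOTA (negative delta) scores negative no
// matter how "unique" its claims are.
```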
**Warning**: This category has the highest bullshit ratio. Most "AGI" or "superintelligence" tokens are pure marketing. Demand extraordinary evidence for extraordinary claims.
## The Essential Question Framework
Before investing in any AI token, answer these 10 questions honestly:
### 1. **Is the AI real?**
Can you independently verify that AI compute, models, or services are actually being provided?
✅ **Good**: Verifiable proof of compute, reproducible benchmarks, public inference APIs
❌ **Bad**: Just claims in whitepaper, no way to verify, closed-source
### 2. **Is there real demand?**
Are people actively using the AI services, not just trading the token?
✅ **Good**: Growing inference volume, increasing compute hours, active agents
❌ **Bad**: Zero usage metrics, only trading volume on exchanges
### 3. **Is it cheaper than centralized alternatives?**
Compare the cost per inference/compute hour to OpenAI, AWS, or Google.
✅ **Good**: 30-70% cheaper than centralized options
❌ **Bad**: More expensive or same price as centralized
### 4. **Is the token necessary?**
Would this system work just as well with stablecoins or without a token?
✅ **Good**: Token deeply integrated into protocol mechanics, staking, governance
❌ **Bad**: Token is just a payment method, easily replaceable
### 5. **Is usage growing?**
Is AI usage increasing independent of token price?
✅ **Good**: Usage grows even when price is flat or down
❌ **Bad**: Usage only spikes when price pumps
### 6. **Are the metrics verifiable?**
Can you check the usage statistics yourself, or just trust the team?
✅ **Good**: On-chain metrics, public APIs, third-party verification
❌ **Bad**: Only team claims, no way to verify
### 7. **Who's the competition?**
For AI tokens, the competition is centralized AI (OpenAI, Anthropic, Google), not other tokens.
✅ **Good**: Clear competitive advantage (price, privacy, censorship-resistance)
❌ **Bad**: No advantage over centralized AI
### 8. **Is the tech proven?**
Are there working products, or just promises?
✅ **Good**: Live products, real users, months/years of operation
❌ **Bad**: "Coming soon," perpetual beta, no working product
### 9. **Are there safety measures?**
AI has risks—does the protocol address them?
✅ **Good**: Content moderation, ethics policy, dispute resolution, kill switches
❌ **Bad**: No safety measures, regulation nightmares waiting to happen
### 10. **Is the team credible?**
Do they have AI and crypto expertise?
✅ **Good**: Public team, relevant experience, successful track record
❌ **Bad**: Anonymous team, no AI experience, previous rug pulls
**Scoring**:
- 9-10 Yes: Strong candidate for investment
- 7-8 Yes: Promising, but higher risk
- 5-6 Yes: Speculative, small position only
- <5 Yes: Avoid
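If you want to script the screen, the verdict is just a yes-count over the 10 questions; a minimal sketch:
```typescript
// Sketch: tally the 10 framework questions and map to the bands above.
function questionVerdict(answers: boolean[]): string {
  if (answers.length !== 10) throw new Error("expected 10 answers");
  const yes = answers.filter(Boolean).length;
  if (yes >= 9) return "Strong candidate for investment";
  if (yes >= 7) return "Promising, but higher risk";
  if (yes >= 5) return "Speculative, small position only";
  return "Avoid";
}

questionVerdict([true, true, true, true, true, true, true, false, true, false]);
// 8 yes => "Promising, but higher risk"
```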
## Step-by-Step Analysis Process
### Phase 1: Quick Screen (5 minutes)
**Goal**: Filter out obvious bad projects
**Check**:
1. Is there a working product? (Not "coming soon")
2. Can you use the AI service yourself? (Public access)
3. Are there verifiable usage metrics? (On-chain or public dashboard)
4. Is the project on CoinGecko/CMC with basic info?
5. Any major red flags? (Known scams, rug pulls, fake team)
**Decision**: If all checks pass, proceed to deep dive. Otherwise, skip.
### Phase 2: Deep Dive Analysis (30-60 minutes)
#### A. Verify the AI Capability
**For Compute Networks:**
```
1. Find their public dashboard or stats page
2. Check:
- Total nodes/GPUs in network
- Current utilization rate (aim for >40%)
- GPU types available
- Cost per GPU-hour vs AWS/GCP
3. Try to rent compute yourself (if possible)
4. Look for proof-of-compute mechanism
```
**For Agent Tokens:**
```
1. Find agent marketplace or registry
2. Check:
- Number of active agents
- Agent-to-agent transaction %
- Task types supported
- Task success rates
3. Watch for actual autonomous behavior
4. Calculate if agents can be profitable
```
**For Model Tokens:**
```
1. Find model leaderboard or benchmarks
2. Check:
- Models available
- Benchmark scores vs open-source (Llama, Mistral)
- Inference volume
- Fine-tuning options
3. Try running inference yourself
4. Verify benchmarks are reproducible
```
**For Data Tokens:**
```
1. Browse data marketplace
2. Check:
- Datasets available
- Data quality scores
- Training runs using data
- Privacy compliance
3. Review sample data (if available)
4. Check licensing terms
```
#### B. Analyze Usage & Growth
**Key Metrics to Track** (30-day trends):
| Metric | How to Find | What's Good |
|--------|------------|-------------|
| Daily Active Users | Protocol dashboard, Dune Analytics | Growing +10% month-over-month |
| Transaction Volume | Blockchain explorer | Growing, with low volatility |
| Revenue (fees paid) | Protocol dashboard | Growing from actual usage, not trading |
| Network Size | Protocol stats | More nodes/agents/models over time |
| Unique Users | Analytics dashboards | Increasing unique addresses |
**Calculate the Key Ratio**:
```
Usage Growth Rate / Token Price Growth Rate
If > 1.0: Usage driving price (healthy)
If < 0.5: Price driven by speculation (risky)
```
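A small sketch of the ratio check (growth rates are fractional, e.g. 0.15 for +15%, measured over the same 30-day window):
```typescript
// Sketch: usage growth vs token price growth over the same window.
function usagePriceRatio(usageGrowth: number, priceGrowth: number): number {
  // Usage growing while price is flat or falling is the healthy case
  // by definition (see above), so short-circuit it.
  if (priceGrowth <= 0) return usageGrowth > 0 ? Infinity : 0;
  return usageGrowth / priceGrowth;
}

function interpretRatio(ratio: number): string {
  if (ratio > 1.0) return "usage driving price (healthy)";
  if (ratio < 0.5) return "price driven by speculation (risky)";
  return "mixed signal";
}

interpretRatio(usagePriceRatio(0.3, 0.2)); // => "usage driving price (healthy)"
```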
#### C. Competitive Analysis
**Compare to Centralized Alternatives**:
| Feature | Decentralized (Token) | Centralized (OpenAI/AWS) |
|---------|----------------------|--------------------------|
| Cost per inference | $X | $Y |
| Latency (ms) | X | Y |
| Uptime (%) | X | Y |
| Censorship resistance | Yes/No | No |
| Privacy | Strong/Weak | Weak |
| Model diversity | X models | Y models |
**Decision Logic**:
- If decentralized is **cheaper AND competitive quality**: Strong buy signal
- If decentralized is **more expensive but better privacy/censorship-resistance**: Niche use case
- If decentralized is **more expensive AND worse quality**: Avoid
#### D. Tokenomics Review
**Check Token Distribution**:
```
Team/Advisors: <20% (ideally <15%)
VCs/Investors: <30%
Public Sale: >20%
Liquidity: >10%
Community/Ecosystem: >20%
```
**Check Vesting**:
- Team tokens locked for 2-4 years? ✅
- Cliff before vesting starts (6-12 months)? ✅
- VCs locked for 1-2 years? ✅
- No lock-up? ❌ (dump risk)
**Check Token Utility**:
- Required for payments? ✅
- Staking for node operators? ✅
- Governance rights? ✅
- Just optional payment method? ❌
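To make the screen repeatable, the three checklists above can be folded into one helper; a minimal sketch with the thresholds hard-coded (the interface is illustrative):
```typescript
// Sketch: tokenomics screen using the thresholds above. Percentages
// are fractions of total supply; vesting periods are in months.
interface Tokenomics {
  teamPct: number;           // target <0.20, ideally <0.15
  vcPct: number;             // target <0.30
  publicSalePct: number;     // target >0.20
  teamVestingMonths: number; // target 24-48 months
  vcVestingMonths: number;   // target 12-24 months
  tokenEssential: boolean;   // payments/staking/governance, not optional
}

function tokenomicsFlags(t: Tokenomics): string[] {
  const flags: string[] = [];
  if (t.teamPct >= 0.2) flags.push("team allocation too high");
  if (t.vcPct >= 0.3) flags.push("VC allocation too high");
  if (t.publicSalePct <= 0.2) flags.push("public sale too small");
  if (t.teamVestingMonths < 24) flags.push("team vesting under 2 years");
  if (t.vcVestingMonths < 12) flags.push("VC vesting under 1 year (dump risk)");
  if (!t.tokenEssential) flags.push("token is just an optional payment method");
  return flags; // empty array = passes the screen
}
```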
#### E. Team & Development
**Team Check**:
```
□ Public team members (not anonymous)
□ Relevant AI expertise (PhDs, research papers, previous AI work)
□ Relevant crypto expertise (previous projects)
□ Active on social media and community
□ No rug pull history
```
**Development Activity** (GitHub):
```
□ Frequent commits (weekly)
□ Multiple contributors (not 1-2 people)
□ Open-source (not closed)
□ Active issue/PR management
□ Recent updates (within last month)
```
### Phase 3: Risk Assessment (15 minutes)
Calculate scores for each risk category (0-100, lower is better):
#### Technical Risks
```
Smart Contract Risk: ___/100
- Unaudited: +30
- Audited by tier-2 firm: +10
- Audited by top firm (Trail of Bits, etc): 0
- Known vulnerabilities: +20 per critical issue
Compute Reliability Risk: ___/100
- No proof mechanism: +50
- Uptime <90%: +30
- Uptime 90-95%: +15
- Uptime >95%: 0
AI Verification Risk: ___/100
- Can't verify AI is happening: +60
- Trusted third party verification: +20
- zkML / TEE verification: 0
```
#### Market Risks
```
Competition Risk: ___/100
- More expensive than centralized: +50
- Same price: +30
- 10-30% cheaper: +10
- >30% cheaper: 0
Adoption Risk: ___/100
- <100 daily active users: +40
- 100-1000 DAU: +20
- 1000-10,000 DAU: +10
- >10,000 DAU: 0
Token Necessity Risk: ___/100
- Token not needed for core function: +60
- Token optional but useful: +30
- Token essential: 0
```
#### Regulatory Risks
```
AI Safety Risk: ___/100
- No content moderation: +40
- No ethics policy: +30
- No kill switch: +20
- Controversial use cases: +30
Data Privacy Risk: ___/100
- No GDPR/CCPA compliance: +50
- Unclear data rights: +30
- Full compliance: 0
```
**Total Risk Score** = Average of all categories
```
0-30: Low risk
31-50: Medium risk
51-70: High risk
71-100: Very high risk (avoid)
```
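Since each category is a sum of penalties, the whole phase reduces to averaging; a minimal sketch:
```typescript
// Sketch: average the category scores above (each 0-100, lower is
// better) and map to the overall band.
function totalRiskScore(categoryScores: number[]): number {
  return categoryScores.reduce((sum, s) => sum + s, 0) / categoryScores.length;
}

function riskBand(score: number): string {
  if (score <= 30) return "Low risk";
  if (score <= 50) return "Medium risk";
  if (score <= 70) return "High risk";
  return "Very high risk (avoid)";
}

// e.g. technical 15, market 25, regulatory 20 (Example 1 below):
riskBand(totalRiskScore([15, 25, 20])); // => "Low risk" (20/100)
```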
## Red Flags & Warning Signs
### 🚩 Critical Red Flags (Immediate Disqualification)
1. **Team has rug pull history**: Check previous projects on rug pull databases
2. **Fake AI claims**: No verifiable AI happening, just marketing
3. **Unverified contracts**: Smart contracts not verified on blockchain explorer
4. **Massive token unlock coming**: Large team/VC unlock within 3 months
5. **No working product**: Only whitepaper and promises
6. **Centralized control**: Single entity controls majority of nodes/supply
7. **Privacy violations**: Clear GDPR/CCPA violations
8. **Copied codebase**: Forked code with no meaningful changes
### ⚠️ Major Red Flags (Strong Caution)
1. **Low utilization**: Network <20% utilized
2. **Declining usage**: AI usage declining month-over-month
3. **More expensive than Web2**: Costs more than OpenAI/AWS
4. **Anonymous team**: No public team members
5. **No code updates**: GitHub inactive for 3+ months
6. **Poor benchmarks**: AI performance worse than free alternatives
7. **High concentration**: Top 10 holders own >50% of supply
8. **Token not necessary**: Could easily work without token
### ⚡ Minor Red Flags (Proceed with Caution)
1. **Low A2A ratio** (<30%): For agent tokens, mostly human-driven
2. **Poor documentation**: Unclear how to use the service
3. **Closed source**: Can't verify the code
4. **Limited transparency**: Metrics not public or verifiable
5. **High inflation**: >20% annual token inflation
6. **Unclear roadmap**: No clear development plan
## Green Flags & Positive Indicators
### ✅ Strong Positive Signals
1. **Growing organic usage**: AI usage increasing independent of price
2. **Significant cost advantage**: 30-70% cheaper than centralized AI
3. **High utilization**: Network >60% utilized
4. **Verifiable proofs**: zkML, TEE, or other proof mechanisms
5. **Top-tier audits**: Audited by Trail of Bits, OpenZeppelin, etc.
6. **Strong team**: Public team with proven AI/crypto expertise
7. **Active development**: Weekly commits, growing contributor base
8. **High A2A ratio**: >60% agent-to-agent transactions (for agent tokens)
### 🟢 Good Indicators
1. **Competitive performance**: Similar quality to centralized AI
2. **Fair tokenomics**: Team <15%, 4-year vesting
3. **Decentralized network**: No single entity >20% control
4. **Strong partnerships**: Integrations with known AI/crypto projects
5. **Growing ecosystem**: Multiple projects building on top
6. **Token utility**: Token essential for staking, governance, payments
### 💚 Promising Signs
1. **Transparent metrics**: Public dashboards with verifiable data
2. **Active community**: Engaged community building projects
3. **Regular updates**: Weekly development updates
4. **Open source**: Code publicly available
5. **Privacy-first**: Strong privacy and compliance measures
6. **Diverse use cases**: Multiple applications using the protocol
## Real-World Examples
### Example 1: Evaluating a Compute Token
**Token**: Hypothetical "DecentralGPU" (DGPU)
**Quick Screen**:
- ✅ Working product (can rent GPUs)
- ✅ Public dashboard with metrics
- ✅ Listed on CoinGecko
- ✅ No obvious red flags
**Deep Dive Findings**:
| Metric | DGPU | AWS (comparable) | Score |
|--------|------|------------------|-------|
| H100 GPU/hour | $2.50 | $4.00 | ✅ 37.5% cheaper |
| Utilization | 68% | N/A | ✅ Healthy |
| Uptime | 97% | 99.9% | ⚠️ Slightly lower |
| Daily Active Users | 450 | N/A | ✅ Growing 15%/month |
| Proof of Compute | TEE-based | N/A | ✅ Verified |
**Tokenomics**:
- Team: 12% (4-year vest) ✅
- VCs: 25% (2-year vest) ✅
- Public: 30% ✅
- Token used for: Payments + Staking ✅
**Risk Scores**:
- Technical: 15/100 (Low)
- Market: 25/100 (Low)
- Regulatory: 20/100 (Low)
- **Overall: 20/100 (Low Risk)**
**Competitive Advantages**:
1. Significantly cheaper than AWS
2. Strong utilization shows real demand
3. Verifiable proof of compute
4. Fair tokenomics
**Concerns**:
1. Slightly lower uptime than centralized
2. Relatively small user base
**Decision**: **STRONG BUY** - Clear value proposition, real usage, fair tokenomics, low risk
### Example 2: Evaluating an Agent Token
**Token**: Hypothetical "AgentDAO" (AGENT)
**Quick Screen**:
- ✅ Working product
- ❌ No public metrics dashboard
- ✅ Listed on exchanges
- ⚠️ Team partially anonymous
**Deep Dive Findings**:
| Metric | AGENT | Assessment |
|--------|-------|------------|
| Active Agents | 150 claimed | ⚠️ Can't verify |
| A2A Transactions | 25% | ❌ Mostly human-driven |
| Task Success Rate | 55% | ❌ Low quality |
| Agent Economics | Agents losing money | ❌ Not sustainable |
| Safety Measures | None listed | ❌ Regulatory risk |
**Tokenomics**:
- Team: 30% (1-year vest) ❌ High allocation
- VCs: 40% (no lock-up!) ❌ Dump risk
- Public: 15% ❌ Low
- Token used for: Optional payment ❌ Not necessary
**Risk Scores**:
- Technical: 45/100 (Medium)
- Market: 65/100 (High)
- Regulatory: 70/100 (High)
- **Overall: 60/100 (High Risk)**
**Red Flags**:
1. Can't verify agent count
2. Low A2A ratio (not truly autonomous)
3. Agents can't operate profitably
4. No safety measures
5. Poor tokenomics (high team %, no VC lock-up)
6. Token not necessary for function
**Decision**: **AVOID** - Too many red flags, high risk, questionable utility
### Example 3: Evaluating a Model Token
**Token**: Hypothetical "OpenModels" (OMDL)
**Quick Screen**:
- ✅ Working product
- ✅ Public leaderboard
- ✅ Can test models
- ✅ Active development
**Deep Dive Findings**:
| Benchmark | OMDL Best Model | Llama 3 70B (Free) | GPT-4 |
|-----------|-----------------|---------------------|-------|
| MMLU | 78.5 | 82.0 | 86.4 |
| HumanEval | 65.2 | 81.7 | 88.0 |
| Cost per 1M tokens | $4.00 | Free | $30.00 |
**Analysis**:
- ⚠️ Worse than free open-source models (Llama 3)
- ❌ Paying for worse performance
- ❌ No models you can't get elsewhere
- ⚠️ Cheaper than GPT-4, but lower quality
**Tokenomics**:
- Fair distribution ✅
- Token required for access ✅
- Active development ✅
**Risk Score**: 40/100 (Medium)
**Critical Issue**: Why pay for models worse than free alternatives?
**Decision**: **AVOID** - Lacks competitive advantage. Unless models improve dramatically, no reason to use over free alternatives.
## Making the Investment Decision
### Decision Matrix
Use this matrix to make your final decision:
| Utility ↓ / Risk → | Low Risk (0-30) | Medium Risk (31-60) | High Risk (61-100) |
|--------------------|-----------------|---------------------|--------------------|
| **Strong Utility** (score >70) | STRONG BUY, 20-30% | BUY, 10-15% | SMALL POSITION, 2-5% |
| **Medium Utility** (score 50-70) | BUY, 10-15% | SMALL POSITION, 2-5% | AVOID |
| **Weak Utility** (score <50) | SMALL POSITION, 2-5% | AVOID | AVOID |

Percentages are suggested position sizes within your AI token allocation (see below).
**Utility Score Calculation**:
```typescript
// Each component is scored 0-1.
function utilityScore(
  realUsage: number,            // Are people actually using it?
  competitiveAdvantage: number, // Is it better than Web2?
  growthTrajectory: number,     // Is it growing?
  tokenNecessity: number        // Is the token actually needed?
): number {
  return (
    realUsage * 0.3 +
    competitiveAdvantage * 0.3 +
    growthTrajectory * 0.2 +
    tokenNecessity * 0.2
  ) * 100;
}
```
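Putting the score and the matrix together, a minimal sketch of the final lookup (band edges follow the matrix above):
```typescript
// Sketch: map (utility score, risk score) to the decision matrix above.
function decide(utility: number, risk: number): string {
  const utilityTier = utility > 70 ? 0 : utility >= 50 ? 1 : 2;
  const riskTier = risk <= 30 ? 0 : risk <= 60 ? 1 : 2;
  const matrix = [
    ["STRONG BUY (20-30%)", "BUY (10-15%)", "SMALL POSITION (2-5%)"],
    ["BUY (10-15%)", "SMALL POSITION (2-5%)", "AVOID"],
    ["SMALL POSITION (2-5%)", "AVOID", "AVOID"],
  ];
  return matrix[utilityTier][riskTier];
}

// Example: a token scoring utilityScore(0.8, 0.7, 0.6, 0.9) = 75 with
// overall risk 20/100 (like DecentralGPU in Example 1):
decide(75, 20); // => "STRONG BUY (20-30%)"
```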
### Position Sizing Guidelines
**Conservative Approach** (Recommended for most investors):
```
Portfolio Allocation to AI Tokens: 5-15% max
Within AI Token Allocation:
- Strong Buy positions: 30-40% each, max 2-3 positions
- Buy positions: 15-20% each, max 3-4 positions
- Small positions: 5-10% each, max 4-5 positions
```
**Aggressive Approach** (High risk tolerance):
```
Portfolio Allocation to AI Tokens: Up to 30%
Within AI Token Allocation:
- Strong Buy: Up to 50% in single position
- Buy: 20-30% positions
- Small: 10-15% positions
```
### Rebalancing Strategy
**Monthly Review**:
1. Check usage metrics (growing, flat, declining?)
2. Check competitive landscape (new centralized alternatives?)
3. Review risk scores (any new red flags?)
4. Reassess utility score
**Rebalance if**:
- Usage declining >20% month-over-month → Reduce position
- New major red flag appears → Exit immediately
- Utility score drops by >15 points → Reduce or exit
- Better alternative emerges → Rotate allocation
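These rules are mechanical enough to script; a minimal sketch (field names are illustrative):
```typescript
// Sketch: monthly rebalancing rules from the list above.
interface MonthlyReview {
  usageChangeMoM: number;      // month-over-month, e.g. -0.25 for -25%
  newCriticalRedFlag: boolean; // any new critical red flag?
  utilityScoreDelta: number;   // change in utility score, in points
  betterAlternative: boolean;  // has a better alternative emerged?
}

function rebalanceAction(r: MonthlyReview): string {
  if (r.newCriticalRedFlag) return "exit immediately";
  if (r.usageChangeMoM < -0.2) return "reduce position";
  if (r.utilityScoreDelta < -15) return "reduce or exit";
  if (r.betterAlternative) return "rotate allocation";
  return "hold";
}
```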
## Common Mistakes to Avoid
### 1. **Ignoring the Centralized Competition**
❌ **Wrong**: "This token powers decentralized AI, so it must be valuable!"
✅ **Right**: "Is this cheaper/better than OpenAI? If not, why would anyone use it?"
**Lesson**: AI tokens compete with OpenAI, Anthropic, and Google—not just other tokens.
### 2. **Trusting Unverifiable Claims**
❌ **Wrong**: "The team says they have 10,000 active agents, so they must!"
✅ **Right**: "Can I see the on-chain data or use a public dashboard to verify this?"
**Lesson**: If you can't verify it, don't trust it.
### 3. **Confusing Trading Volume with Usage**
❌ **Wrong**: "High trading volume on exchanges = successful project"
✅ **Right**: "High cycle volume / compute hours = real usage"
**Lesson**: Speculation ≠ Utility. Check actual AI usage metrics, not just token trading.
### 4. **Falling for "AI" Marketing**
❌ **Wrong**: "The whitepaper mentions AI 50 times, must be legit!"
✅ **Right**: "What specific AI service is this providing? Can I use it right now?"
**Lesson**: Many tokens slap "AI" on regular crypto projects. Verify the AI is real.
### 5. **Ignoring Token Necessity**
❌ **Wrong**: "The protocol works great, so the token will be valuable!"
✅ **Right**: "Could this protocol work just as well with stablecoins? If yes, why hold the token?"
**Lesson**: Token must be essential to the protocol, not just a payment option.
### 6. **Overlooking Tokenomics**
❌ **Wrong**: "I love the tech, tokenomics don't matter!"
✅ **Right**: "40% going to VCs with no lock-up? That's a dump risk."
**Lesson**: Bad tokenomics can kill good tech. Always check distribution and vesting.
### 7. **Comparing to Other Crypto, Not Web2**
❌ **Wrong**: "This is better than Ethereum gas fees!"
✅ **Right**: "This is 2x more expensive than AWS."
**Lesson**: For AI tokens, Web2 is the competition, not other blockchains.
### 8. **Ignoring Safety & Compliance**
❌ **Wrong**: "Decentralized = no regulations needed!"
✅ **Right**: "No content moderation or GDPR compliance? Regulatory nightmare incoming."
**Lesson**: AI regulations are coming. Projects without safety measures are at risk.
### 9. **Chasing Hype Over Fundamentals**
❌ **Wrong**: "Everyone is talking about this token, I need to buy!"
✅ **Right**: "Let me check the usage metrics and competitive positioning first."
**Lesson**: Hype fades. Utility doesn't. Focus on fundamentals.
### 10. **Not Diversifying**
❌ **Wrong**: "I'm 100% in this one AI token!"
✅ **Right**: "I'm spreading across 3-5 quality AI tokens in different categories."
**Lesson**: Even great tokens can fail. Diversify within AI tokens.
## Quick Reference Checklist
Use this before making any AI token investment:
### Essential Checks
```
□ Is there a working product I can use right now?
□ Can I verify the AI usage metrics independently?
□ Is it cheaper or better than centralized alternatives?
□ Is usage growing month-over-month?
□ Is the token necessary for the protocol to function?
□ Are the smart contracts audited?
□ Is the team public and credible?
□ Are tokenomics fair (team <20%, long vesting)?
□ Are there any critical red flags?
□ Do I understand how this makes money?
```
**If you can't check all 10 boxes, reconsider the investment.**
## Final Thoughts
AI tokens represent a fascinating intersection of artificial intelligence and cryptocurrency. When done right, they can democratize access to AI, reduce costs, and create new economic models. When done wrong, they're just another layer of speculation on top of overhyped tech.
**The key to evaluating AI tokens is ruthless pragmatism**:
- Does the AI actually work?
- Is it competitive with centralized alternatives?
- Is there real, growing demand?
- Is the token essential?
If you can't answer "yes" to all four questions, be very cautious.
**Remember**:
- AI tokens should provide value through **utility**, not just speculation
- Your competition is **OpenAI and Google**, not other crypto projects
- **Verifiable metrics** are everything—if you can't verify it, it's not real
- Most AI tokens will fail—be selective and position size appropriately
## Additional Resources
- **Crypto Ontology**: `/one/knowledge/ontology-crypto.md` - Technical specification
- **Analysis Strategy**: `/one/knowledge/crypto-analysis-strategy.md` - Detailed methodology
- **Deep Researcher Agent**: `/one/things/plans/deep-researcher-agent.md` - Automated analysis
- **Crypto Token Researcher**: `/one/things/plans/crypto-token-researcher.md` - Domain specialist
**Document Version**: 1.0.0
**Last Updated**: 2025-11-03
**Maintained by**: ONE Platform Research Team
**Disclaimer**: This framework is for educational purposes. Always do your own research. Cryptocurrency investments are high-risk. Only invest what you can afford to lose.