COMPUTE.md

Universal Compute Coordination for Autonomous Systems – unified access to distributed compute resources for real-time agent operations.

Part of the protocols.md network
💻 Draft v0.1 – Compute coordination framework. This specification explores how autonomous agents might efficiently discover and utilize compute resources. RFC stage.

Challenge

Real-time compute needs are distributed across incompatible systems:

  • Traditional providers use proprietary interfaces and billing models.
  • Distributed networks operate with custom protocols and endpoints.
  • Peer-to-peer systems require unique authentication and interaction patterns.
  • Edge computing resources remain largely inaccessible without standards.
  • Hybrid infrastructure lacks unified orchestration mechanisms.

Agents need seamless access to compute resources without managing this complexity.

Solution – Unified Compute Layer

GET https://compute.md/discover

compute.md provides a standard interface for all compute resources. Every provider – from distributed networks to edge devices – becomes accessible through one consistent protocol.

Core APIs

Discovery

GET /nodes?resource_type=GPU_A&max_price=2.50&latency_band=metro

Find available compute by resource type, price, location, and latency requirements – across all providers.
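Client-side, the same filters the query parameters express can be applied to a list of discovered nodes. A minimal sketch, with node field names (`resource_type`, `price_per_hour`, `latency_band`) assumed from the query string above rather than defined by this spec:

```python
def filter_nodes(nodes, resource_type, max_price, latency_band):
    """Filter discovered nodes the way the /nodes query parameters would."""
    matches = [
        n for n in nodes
        if n["resource_type"] == resource_type
        and n["price_per_hour"] <= max_price
        and n["latency_band"] == latency_band
    ]
    # Cheapest first, mirroring a hypothetical sort=price_asc option
    return sorted(matches, key=lambda n: n["price_per_hour"])

nodes = [
    {"resource_type": "GPU_A", "price_per_hour": 2.10, "latency_band": "metro"},
    {"resource_type": "GPU_A", "price_per_hour": 2.80, "latency_band": "metro"},
    {"resource_type": "GPU_B", "price_per_hour": 0.44, "latency_band": "regional"},
]
print(filter_nodes(nodes, "GPU_A", 2.50, "metro"))
```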

Provider Abstraction

{
  "endpoint": "/providers",
  "unified_access": true,
  "providers": [
    {
      "name": "distributed_gpu_network",
      "type": "decentralized_compute",
      "available_resources": 14000,
      "resource_types": ["GPU_A", "GPU_B", "GPU_C"],
      "price_range": "$0.79-2.49/hr",
      "payment": ["crypto", "fiat"],
      "latency_band": "metro"
    },
    {
      "name": "peer_network_alpha",
      "type": "decentralized_cloud",
      "active_deployments": 8400,
      "cpu_cores": 47000,
      "avg_savings": "85%",
      "payment": ["TOKEN_A", "STABLE_COIN"],
      "latency_band": "regional"
    },
    {
      "name": "marketplace_beta",
      "type": "p2p_marketplace",
      "hosts": 4700,
      "interruptible": true,
      "price_range": "$0.20-1.50/hr",
      "verification": "benchmark_based"
    },
    {
      "name": "serverless_provider",
      "type": "serverless_compute",
      "regions": 30,
      "cold_start": "2-5s",
      "autoscale": true,
      "spot_discount": "70%"
    }
  ]
}
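Normalization is what makes records as heterogeneous as the ones above comparable. One sketch of the client-side work involved – parsing the `price_range` string format shown above and filtering by payment method (the parsing rule is an assumption from the sample data, not a spec requirement):

```python
import re

def parse_price_range(price_range):
    """Parse a '$0.79-2.49/hr' style string into (low, high) floats."""
    low, high = re.match(r"\$([\d.]+)-([\d.]+)/hr", price_range).groups()
    return float(low), float(high)

def providers_accepting(providers, method):
    """Names of providers that list the given payment method."""
    return [p["name"] for p in providers if method in p.get("payment", [])]

providers = [
    {"name": "distributed_gpu_network", "payment": ["crypto", "fiat"],
     "price_range": "$0.79-2.49/hr"},
    {"name": "peer_network_alpha", "payment": ["TOKEN_A", "STABLE_COIN"]},
    {"name": "marketplace_beta", "price_range": "$0.20-1.50/hr"},
]
print(parse_price_range("$0.79-2.49/hr"))
print(providers_accepting(providers, "STABLE_COIN"))
```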

Unified Orchestration

POST /orchestrate
{
  "requester": "agent_model_trainer",
  "workload": {
    "type": "batch_processing",
    "compute_hours": 1000,
    "memory_gb": 80,
    "interconnect": "optional"
  },
  "constraints": {
    "max_price_per_hour": 2.00,
    "latency_band": "regional",
    "redundancy": 2
  }
}

// Returns optimized allocation
{
  "plan_id": "plan_7k3h9s",
  "allocations": [
    {
      "provider": "marketplace_beta",
      "nodes": 12,
      "resource_type": "GPU_C",
      "price": "$0.44/hr",
      "allocation": "60%"
    },
    {
      "provider": "distributed_network",
      "nodes": 4,
      "resource_type": "GPU_A", 
      "price": "$0.79/hr",
      "allocation": "40%"
    }
  ],
  "total_cost": "$580",
  "estimated_time": "14 hours",
  "sla": "99.5%"
}
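The total in the sample response follows from splitting the requested 1,000 compute-hours by allocation share: 600 hours at $0.44/hr plus 400 hours at $0.79/hr is $580. A sketch that reproduces this arithmetic from the response fields (assuming, as the sample implies, that `allocation` percentages apply to compute-hours):

```python
def plan_cost(compute_hours, allocations):
    """Total cost when compute-hours are split by allocation share."""
    total = 0.0
    for a in allocations:
        share = float(a["allocation"].rstrip("%")) / 100
        price = float(a["price"].lstrip("$").split("/")[0])
        total += compute_hours * share * price
    return round(total, 2)

allocations = [
    {"price": "$0.44/hr", "allocation": "60%"},
    {"price": "$0.79/hr", "allocation": "40%"},
]
print(plan_cost(1000, allocations))  # 580.0, matching the sample response
```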

Capability Vectors

{
  "endpoint": "/capabilities",
  "description": "Normalized performance metrics",
  "resource_profiles": {
    "GPU_TYPE_A": {
      "fp16_tflops": 989,
      "fp32_tflops": 67,
      "int8_tops": 3958,
      "memory_gb": 80,
      "bandwidth_gb_s": 3350,
      "interconnect": "HighSpeed_900GB/s",
      "typical_price": "$2.49/hr"
    },
    "GPU_TYPE_B": {
      "fp16_tflops": 82,
      "fp32_tflops": 82,
      "int8_tops": 660,
      "memory_gb": 24,
      "bandwidth_gb_s": 1008,
      "interconnect": "Standard_64GB/s",
      "typical_price": "$0.44/hr"
    }
  },
  "latency_bands": {
    "LAN": "1-3ms",
    "metro": "3-10ms",
    "regional": "10-30ms",
    "global": "30-100ms"
  }
}
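Because the profiles are normalized, price-performance comparisons fall out directly – e.g. fp16 TFLOPS per dollar-hour. A sketch using the numbers from the profiles above (the `typical_price` parsing is an assumption from the sample format):

```python
def tflops_per_dollar(profile):
    """fp16 TFLOPS delivered per $/hr, from a capability profile."""
    price = float(profile["typical_price"].lstrip("$").split("/")[0])
    return profile["fp16_tflops"] / price

profiles = {
    "GPU_TYPE_A": {"fp16_tflops": 989, "typical_price": "$2.49/hr"},
    "GPU_TYPE_B": {"fp16_tflops": 82, "typical_price": "$0.44/hr"},
}
best = max(profiles, key=lambda k: tflops_per_dollar(profiles[k]))
print(best, round(tflops_per_dollar(profiles[best]), 1))
```

By this metric GPU_TYPE_A comes out ahead (roughly 397 vs 186 TFLOPS per dollar-hour) despite its higher hourly price – which is exactly the kind of trade-off the orchestrator can weigh automatically.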

Active Compute Markets

| Provider Type | Scale | Price Range | Latency Band | Status |
| --- | --- | --- | --- | --- |
| Distributed GPU Network | 14,000+ GPUs | $0.79-2.49/hr | metro | Live |
| Decentralized Cloud | 8,400+ nodes | $0.10-0.50/hr | regional | Live |
| P2P Marketplace | 4,700+ hosts | $0.20-1.50/hr | regional | Live |
| Serverless Platform | 30+ regions | $0.39-3.89/hr | metro | Live |
| Traditional Cloud | Global | $1.20-8.00/hr | regional | Live |
| Edge Fleet | Experimental | TBD | LAN | Future |

All providers are normalized through one protocol. Edge and vehicle compute is marked as experimental and future-stage.

Agent Use Cases

Distributed Training

// Agent needs 1000 GPU-hours for model training

// Discover cost-effective options across all providers
const options = await fetch('https://compute.md/discover?gpu_hours=1000&min_memory_gb=24&max_price=1.00&sort=price_asc')
  .then(res => res.json());

// Book distributed allocation
const bookCompute = async () => {
  try {
    const response = await fetch('https://compute.md/book', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        plan: 'distributed',
        allocations: [
          { provider: 'marketplace_a', gpu_hours: 600, gpu_type: 'TYPE_B' },
          { provider: 'network_b', gpu_hours: 400, cpu_fallback: true }
        ],
        payment: { method: 'STABLE_COIN', stream: true }
      })
    });
    
    const booking = await response.json();
    console.log(`Booked ${booking.total_gpu_hours} hours`);
    console.log(`Total cost: $${booking.total_cost}`);
    console.log(`Savings vs traditional: ${booking.savings_percent}%`);
    // Output: "Savings vs traditional: 78%"
  } catch (error) {
    console.error('Error booking compute:', error);
  }
};

Multi-Provider Inference

# Load balance inference across providers for redundancy
POST /orchestrate
{
  "workload": "large_model_serve",
  "requests_per_second": 100,
  "redundancy": 3,
  "providers": ["network_a", "platform_b", "marketplace_c"]
}

# Automatic failover and rebalancing
{
  "endpoints": [
    "https://network-gateway.compute.md/infer",
    "https://platform-gateway.compute.md/infer",
    "https://marketplace-gateway.compute.md/infer"
  ],
  "load_balancer": "https://compute.md/lb/abc123",
  "health_check": "/health",
  "auto_failover": true
}
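The failover behavior can also be sketched client-side: try each endpoint in order and fall through on failure. The `send` callable below stands in for an actual HTTP request so the logic is shown in isolation; it is an illustrative stand-in, not part of the spec:

```python
def infer_with_failover(endpoints, send):
    """Try each endpoint in order; return the first successful result.

    `send` is any callable(endpoint) -> result that raises on failure.
    """
    last_error = None
    for endpoint in endpoints:
        try:
            return endpoint, send(endpoint)
        except Exception as err:  # unhealthy endpoint: fall through to next
            last_error = err
    raise RuntimeError(f"all endpoints failed: {last_error}")

endpoints = [
    "https://network-gateway.compute.md/infer",
    "https://platform-gateway.compute.md/infer",
]

def fake_send(endpoint):
    # Simulate the primary gateway being down
    if "network-gateway" in endpoint:
        raise ConnectionError("primary down")
    return {"ok": True}

print(infer_with_failover(endpoints, fake_send))
```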

Dynamic Resource Allocation

# Agent monitors and optimizes resource usage
import time
import requests

while True:
    prices = requests.get('https://compute.md/prices/spot').json()

    # Find optimal allocation opportunities
    for resource_type in ['GPU_A', 'GPU_B', 'GPU_C']:
        p2p_price = prices['p2p_market'][resource_type]
        cloud_price = prices['traditional'][resource_type]

        if cloud_price > p2p_price * 3:
            # Allocate on the P2P network (book_p2p and
            # optimize_allocation are illustrative helpers)
            capacity = book_p2p(resource_type, hours=10)
            listing = optimize_allocation(
                capacity,
                efficiency_target=0.95,
                marketplace='compute.md/exchange'
            )
            savings = cloud_price - p2p_price
            print(f"Optimization: ${savings:.2f} per hour saved")

    time.sleep(60)  # re-check spot prices each minute

Why This Matters

Currently, agents face a fragmented compute landscape with differing interfaces and requirements. Distributed networks offer significant cost efficiencies, but each requires its own technical integration.

With compute.md, compute becomes seamlessly accessible:

  • One API for all compute providers
  • 70-85% cost optimization through distributed markets
  • Automatic failover and redundancy
  • Flexible payment models – per second, per token, or per epoch
  • Verified execution with attestation

Security & Verification

{
  "security_model": {
    "sandboxing": ["WASM", "Container_Tech", "Microvm"],
    "attestation": ["TEE_Type_A", "TEE_Type_B", "TEE_Type_C"],
    "workload_signing": "ECDSA + DID",
    "resource_limits": "cgroups_v2",
    "network_isolation": "Secure_Mesh"
  },
  "verification": {
    "proof_of_compute": "periodic_checkpoints",
    "result_validation": "Cryptographic_Proofs",
    "payment_condition": "attestation_receipt",
    "dispute_resolution": "decentralized_arbitration"
  }
}
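One way to read `proof_of_compute: periodic_checkpoints` together with `payment_condition: attestation_receipt` is as a hash chain over checkpoint digests that the verifier recomputes before releasing payment. The sketch below is purely illustrative – the actual checkpoint and receipt formats are not defined by this draft:

```python
import hashlib

def chain_checkpoints(checkpoints):
    """Fold checkpoint payloads into a single hash-chain head (illustrative)."""
    head = b"\x00" * 32
    for cp in checkpoints:
        head = hashlib.sha256(head + hashlib.sha256(cp).digest()).digest()
    return head.hex()

def verify_receipt(checkpoints, claimed_head):
    """Release payment only if the recomputed chain matches the receipt."""
    return chain_checkpoints(checkpoints) == claimed_head

work = [b"epoch-0 results", b"epoch-1 results"]
receipt = chain_checkpoints(work)
print(verify_receipt(work, receipt))           # honest provider verifies
print(verify_receipt([b"tampered"], receipt))  # altered work fails
```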

Protocol Stack

transport: HTTP/3 + QUIC
encoding: MessagePack / Protobuf
discovery: DNS-SD + provider registries
auth: DID + UCAN + OAuth2 (legacy)
payment: Stablecoins / Lightning / provider tokens
orchestration: Container orchestrators + schedulers
monitoring: OpenTelemetry + Prometheus
settlement: streaming or epoch-based (per hour/day)
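The epoch-based settlement option can be sketched as bucketing usage events into fixed windows and billing each window at the hourly rate. Event shape and rounding below are assumptions for illustration, not part of the stack definition:

```python
from collections import defaultdict

def settle_epochs(usage_events, rate_per_hour, epoch_seconds=3600):
    """Group (timestamp, seconds_used) events into epochs and bill each.

    Illustrative epoch-based settlement: one invoice line per epoch.
    """
    epochs = defaultdict(float)
    for timestamp, seconds_used in usage_events:
        epochs[timestamp // epoch_seconds] += seconds_used
    return {
        epoch: round(seconds / 3600 * rate_per_hour, 4)
        for epoch, seconds in sorted(epochs.items())
    }

# 30 minutes in epoch 0, then two 15-minute bursts in epoch 1
events = [(10, 1800.0), (3700, 900.0), (4000, 900.0)]
print(settle_epochs(events, rate_per_hour=0.44))
```

Streaming settlement is the degenerate case of the same idea with a very small epoch, which is why the two modes can share one accounting path.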

Network Effects

Once compute.md becomes the standard compute gateway:

  • Price Discovery โ€“ Real-time pricing across all providers.
  • Liquidity โ€“ Instant access to global compute resources.
  • Provider Competition โ€“ Market-driven optimization of price and features.
  • Agent Composability โ€“ Any agent can access any compute source.
  • Payment Innovation โ€“ Granular billing models aligned with usage.
spec_version: 0.1.0-draft
published: 2025-09-07T14:22:17-07:00
content_hash: sha256:d9f2c8a7e4b1f5a6c3d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
status: exploratory
contact: proofmdorg@gmail.com


© 2025 compute.md authors · MIT License · Exploratory specification