Total Configurations: 56 framework-model pairs
Pareto Optimal: 2 frontier points
Best Score: 100.0 (Claude Code)
Cheapest Optimal: $0.02 (OpenClaw)
[Scatter plot: Score vs. Cost per task (USD), one point per framework-model configuration, colored by tier (Flagship, Standard, Economy, Open Source, Other); Pareto-optimal points are marked ★. The underlying data points are listed in the table below.]

Claw Ecosystem Data Points

| Framework | Model | Tier | Score | Cost / Task | Pareto Optimal |
|---|---|---|---|---|---|
| Claude Code | claude-opus-4-6 | Other | 100.0 | $0.10 | No |
| Manus | Manus-1.6-Lite | Standard | 100.0 | $0.10 | No |
| OpenClaw | MiniMax-M2.5 | Economy | 100.0 | $0.03 | No |
| OpenClaw | MiniMax-M2.5 | Economy | 100.0 | $0.03 | No |
| OpenClaw | MiniMax-M2.5 | Economy | 100.0 | $0.03 | No |
| OpenClaw | MiniMax-M2.5 | Economy | 100.0 | $0.03 | No |
| OpenClaw | auto | Other | 100.0 | $0.10 | No |
| OpenClaw | gpt-5.3-codex | Other | 100.0 | $0.10 | No |
| OpenClaw | gpt-5.4 | Other | 100.0 | $0.10 | No |
| OpenClaw | gpt-5.4 | Other | 100.0 | $0.10 | No |
| OpenClaw | k2p5 | Other | 100.0 | $0.10 | No |
| OpenClaw | miaoda-model-auto | Other | 100.0 | $0.10 | No |
| OpenClaw | qwen3.5-plus | Standard | 100.0 | $0.04 | No |
| OpenClaw | qwen3.5-plus | Standard | 100.0 | $0.04 | No |
| OpenClaw (Miaoda) | miaoda-model-auto | Other | 100.0 | $0.10 | No |
| OpenClaw (Miaoda) | miaoda-model-auto | Other | 100.0 | $0.10 | No |
| WorkBuddy | AI Assistant | Other | 100.0 | $0.10 | No |
| OpenClaw | deepseek-chat | Economy | 100.0 | $0.02 | Yes |
| WorkBuddy | WorkBuddy-Agent | Other | 95.8 | $0.10 | No |
| OpenClaw | MiniMax-M2.5 | Economy | 93.3 | $0.03 | No |
| OpenClaw | claude-opus-4.5 | Flagship | 92.0 | $0.58 | No |
| OpenClaw | claude-sonnet-4.6 | Flagship | 92.0 | $0.18 | No |
| OpenClaw | grok-4.20-beta | Flagship | 92.0 | $0.25 | No |
| OpenClaw | grok-4.20-beta | Flagship | 92.0 | $0.25 | No |
| OpenClaw | claude-sonnet-4 | Standard | 91.0 | $0.15 | No |
| OpenClaw | claude-sonnet-4.5 | Flagship | 91.0 | $0.18 | No |
| OpenClaw | deepseek-r1 | Standard | 90.0 | $0.04 | No |
| OpenClaw | gemini-2.5-pro | Flagship | 90.0 | $0.15 | No |
| OpenClaw | gemini-2.5-pro | Flagship | 90.0 | $0.15 | No |
| OpenClaw | claude-3.5-sonnet | Standard | 90.0 | $0.12 | No |
| OpenClaw | deepseek-v3.2 | Open Source | 89.0 | $0.03 | No |
| OpenClaw | glm-5 | Standard | 89.0 | $0.05 | No |
| OpenClaw | llama-4-maverick | Open Source | 89.0 | $0.02 | No |
| OpenClaw | llama-4-maverick | Open Source | 89.0 | $0.02 | No |
| OpenClaw | deepseek-reasoner | Standard | 88.0 | $0.04 | No |
| OpenClaw | gemini-2.5-flash | Economy | 88.0 | $0.05 | No |
| OpenClaw | qwen3-coder-plus | Standard | 88.0 | $0.04 | No |
| OpenClaw | qwen3.5-plus | Standard | 88.0 | $0.04 | No |
| OpenClaw | kimi-k2-thinking | Standard | 87.0 | $0.06 | No |
| OpenClaw | qwen3-max | Standard | 87.0 | $0.06 | No |
| OpenClaw | deepseek-chat | Economy | 86.7 | $0.02 | Yes |
| Claude Code | claude-opus-4-5 | Flagship | 86.7 | $0.58 | No |
| OpenClaw | glm-4.7 | Standard | 86.0 | $0.04 | No |
| OpenClaw | llama-3.3-70b-instruct | Open Source | 86.0 | $0.03 | No |
| OpenClaw | qvq-plus | Standard | 86.0 | $0.04 | No |
| OpenClaw | glm-4.5 | Economy | 85.0 | $0.03 | No |
| OpenClaw | kimi-k2.5 | Standard | 85.0 | $0.04 | No |
| OpenClaw | glm-4.5-air | Economy | 84.0 | $0.02 | No |
| OpenClaw | glm-4.6 | Economy | 84.0 | $0.03 | No |
| OpenClaw | glm-4-plus | Standard | 83.3 | $0.04 | No |
| OpenClaw | moonshot-v1-128k | Standard | 83.0 | $0.08 | No |
| OpenClaw | MiniMax-M2.5 | Economy | 82.0 | $0.03 | No |
| OpenClaw | moonshot-v1-auto | Economy | 80.0 | $0.04 | No |
| OpenClaw | qwen-max | Standard | 80.0 | $0.06 | No |
| OpenClaw | glm-4.7 | Standard | 73.3 | $0.04 | No |
| OpenClaw | MiniMax-M2.5 | Economy | 11.9 | $0.03 | No |
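The "Pareto Optimal" column flags configurations that no other configuration beats on both axes at once: a point is on the cost-score frontier if nothing else scores at least as high for at most the same cost (with at least one strict improvement). A minimal sketch of that dominance check, using a handful of rows sampled from the table above (the exact scoring pipeline behind these numbers is not shown here):

```python
# Sketch: identifying Pareto-optimal (cost, score) configurations.
# A config is dominated if some other config has score >= its score AND
# cost <= its cost, with at least one of the two comparisons strict.

rows = [
    # (name, score, cost per task in USD) - sampled from the table above
    ("OpenClaw / deepseek-chat",      100.0, 0.02),
    ("OpenClaw / MiniMax-M2.5",       100.0, 0.03),
    ("OpenClaw / qwen3.5-plus",       100.0, 0.04),
    ("Claude Code / claude-opus-4-6", 100.0, 0.10),
    ("OpenClaw / claude-opus-4.5",     92.0, 0.58),
]

def pareto_optimal(rows):
    """Return the names of rows not dominated by any other row."""
    frontier = []
    for name, score, cost in rows:
        dominated = any(
            s2 >= score and c2 <= cost and (s2 > score or c2 < cost)
            for n2, s2, c2 in rows
            if n2 != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_optimal(rows))  # -> ['OpenClaw / deepseek-chat']
```

On this sample only the cheapest top-scoring configuration survives, matching the ★ on deepseek-chat in the table: every other row is matched on score at a strictly lower cost.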