
ml-model-eval-benchmark

Compare model candidates using weighted metrics and deterministic ranking outputs. Use for benchmark leaderboards and model promotion decisions.

Author: admin | Source: ClawHub
Version: V 0.1.0
Security check: Passed
Downloads: 377
Favorites: 0
Overview
Installation
Version History

# ML Model Eval Benchmark

## Overview

Produce consistent model ranking outputs from metric-weighted evaluation inputs.

## Workflow

1. Define metric weights and accepted metric ranges.
2. Ingest model metrics for each candidate.
3. Compute weighted score and ranking.
4. Export leaderboard and promotion recommendation.

## Use Bundled Resources

- Run `scripts/benchmark_models.py` to generate benchmark outputs.
- Read `references/benchmarking-guide.md` for weighting and tie-break guidance.

## Guardrails

- Keep metric names and scales consistent across candidates.
- Record weighting assumptions in the output.
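The weighted-scoring and deterministic-ranking workflow above can be sketched as follows. This is an illustrative sketch, not the interface of the bundled `scripts/benchmark_models.py`: the function name, metric names, weights, and the name-based tie-break rule are all assumptions.

```python
# Minimal sketch of weighted metric scoring with deterministic ranking.
# All names and values here are illustrative assumptions, not the
# bundled script's actual API.

def rank_candidates(candidates, weights):
    """Return (name, score) pairs sorted by weighted score, descending.

    candidates: dict of model name -> dict of metric name -> value.
    weights: dict of metric name -> non-negative weight.
    Ties are broken deterministically by model name (ascending).
    """
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must sum to a positive value")

    def score(metrics):
        # A missing metric raises KeyError on purpose: metric names
        # must stay consistent across candidates (see Guardrails).
        return sum(weights[m] * metrics[m] for m in weights) / total

    scored = [(name, score(metrics)) for name, metrics in candidates.items()]
    # Negative score gives descending order; name breaks ties deterministically.
    scored.sort(key=lambda item: (-item[1], item[0]))
    return scored

leaderboard = rank_candidates(
    {
        "model-a": {"accuracy": 0.91, "latency_score": 0.70},
        "model-b": {"accuracy": 0.88, "latency_score": 0.95},
    },
    weights={"accuracy": 0.7, "latency_score": 0.3},
)
```

Sorting on the `(negated score, name)` tuple is what makes the ranking deterministic: two candidates with identical scores always come out in the same order, regardless of input order.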

Tags

skill ai

Install via Conversation

This skill can be installed via conversation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Option 1: Install SkillHub and the skill

Help me install SkillHub and the ml-model-eval-benchmark-1776296784 skill

Option 2: Set SkillHub as the preferred skill source

Set SkillHub as my preferred skill installation source, then help me install the ml-model-eval-benchmark-1776296784 skill

Install via Command Line

skillhub install ml-model-eval-benchmark-1776296784

Download Zip Package

⬇ Download ml-model-eval-benchmark v0.1.0

File size: 3.52 KB | Published: 2026-4-16 17:58

v0.1.0 (latest) 2026-4-16 17:58
- Initial release of ml-model-eval-benchmark.
- Supports weighted metric evaluation and deterministic model ranking.
- Enables benchmark leaderboard generation and model promotion decisions.
- Includes scripts and guides for consistent evaluation workflows.
- Enforces standardized metric names, scales, and explicit weighting documentation.
