
xcrawl-map

Use this skill for XCrawl map tasks, including site URL discovery, regex filtering, scope estimation, and crawl planning before full-site crawling.

Author: admin | Source: ClawHub
Version: V 1.0.2
Security check: passed
Downloads: 714
Favorites: 0

# XCrawl Map

## Overview

This skill uses the XCrawl Map API to discover URLs for a site. Default behavior is raw passthrough: return upstream API response bodies as-is.

## Required Local Config

Before using this skill, the user must create a local config file and write `XCRAWL_API_KEY` into it.

Path: `~/.xcrawl/config.json`

```json
{ "XCRAWL_API_KEY": "<your_api_key>" }
```

Read the API key from the local config file only. Do not require global environment variables.

## Credits and Account Setup

Using XCrawl APIs consumes credits. If the user does not have an account or available credits, guide them to register at `https://dash.xcrawl.com/`. After registration, they can activate the free `1000`-credit plan before running requests.

## Tool Permission Policy

Request runtime permissions for `curl` and `node` only. Do not request Python, shell helper scripts, or other runtime permissions.

## API Surface

- Start map task: `POST /v1/map`
- Base URL: `https://run.xcrawl.com`
- Required header: `Authorization: Bearer <XCRAWL_API_KEY>`

## Usage Examples

### cURL

```bash
API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")"
curl -sS -X POST "https://run.xcrawl.com/v1/map" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{"url":"https://example.com","filter":"/docs/.*","limit":2000,"include_subdomains":true,"ignore_query_parameters":false}'
```

### Node

```bash
node -e '
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
const body={url:"https://example.com",filter:"/docs/.*",limit:3000,include_subdomains:true,ignore_query_parameters:false};
fetch("https://run.xcrawl.com/v1/map",{
  method:"POST",
  headers:{"Content-Type":"application/json",Authorization:`Bearer ${apiKey}`},
  body:JSON.stringify(body)
}).then(async r=>{console.log(await r.text());});
'
```

## Request Parameters

### Request endpoint and headers

- Endpoint: `POST https://run.xcrawl.com/v1/map`
- Headers:
  - `Content-Type: application/json`
  - `Authorization: Bearer <api_key>`

### Request body: top-level fields

| Field | Type | Required | Default | Description |
|---|---|---:|---|---|
| `url` | string | Yes | - | Site entry URL |
| `filter` | string | No | - | Regex filter for URLs |
| `limit` | integer | No | `5000` | Max URLs (up to `100000`) |
| `include_subdomains` | boolean | No | `true` | Include subdomains |
| `ignore_query_parameters` | boolean | No | `true` | Ignore URLs with query parameters |

## Response Parameters

| Field | Type | Description |
|---|---|---|
| `map_id` | string | Task ID |
| `endpoint` | string | Always `map` |
| `version` | string | Version |
| `status` | string | `completed` |
| `url` | string | Entry URL |
| `data` | object | URL list data |
| `started_at` | string | Start time (ISO 8601) |
| `ended_at` | string | End time (ISO 8601) |
| `total_credits_used` | integer | Total credits used |

`data` fields:

- `links`: URL list
- `total_links`: URL count
- `credits_used`: credits used
- `credits_detail`: credit breakdown

## Workflow

1. Restate the mapping objective.
   - Discovery only, selective crawl planning, or structure analysis.
2. Build and execute `POST /v1/map`.
   - Keep filters explicit and reproducible.
3. Return the raw API response directly.
   - Do not synthesize URL-family summaries unless requested.

## Output Contract

Return:

- Endpoint used (`POST /v1/map`)
- `request_payload` used for the request
- Raw response body from the map call
- Error details when the request fails

Do not generate summaries unless the user explicitly requests one.

## Guardrails

- Do not claim full site coverage if `limit` is reached.
- Do not mix inferred URLs with returned URLs.
- Do not hardcode provider-specific tool schemas in core logic.
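The Required Local Config step above can be scripted rather than edited by hand. A minimal Node sketch that creates `~/.xcrawl/config.json` with the expected key (the `<your_api_key>` value is a placeholder to be replaced with a real key from the dashboard):

```javascript
// Create ~/.xcrawl/config.json in the shape the skill reads.
// "<your_api_key>" is a placeholder, not a real credential.
const fs = require("fs");
const path = require("path");
const os = require("os");

const dir = path.join(os.homedir(), ".xcrawl");
fs.mkdirSync(dir, { recursive: true }); // no-op if the directory exists

const configPath = path.join(dir, "config.json");
fs.writeFileSync(
  configPath,
  JSON.stringify({ XCRAWL_API_KEY: "<your_api_key>" }, null, 2) + "\n"
);
console.log("wrote", configPath);
```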
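The first Guardrail above (never claim full coverage when `limit` is reached) can be checked mechanically from the response. A hedged Node sketch, where `sample` mirrors the documented response shape with illustrative values rather than real API output:

```javascript
// Truncation check: compare the returned link count against the
// `limit` that was sent in the request body.
const requestedLimit = 2000; // assumed `limit` from the request payload

// Illustrative response, shaped like the Response Parameters table.
const sample = {
  map_id: "map_example",
  endpoint: "map",
  status: "completed",
  url: "https://example.com",
  data: {
    links: ["https://example.com/docs/a", "https://example.com/docs/b"],
    total_links: 2
  }
};

// If the count reached the limit, the map may be truncated and full
// site coverage must not be claimed.
const maybeTruncated = sample.data.total_links >= requestedLimit;
console.log(maybeTruncated ? "coverage may be partial" : "under limit");
```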

Tags

skill ai

Install via Conversation

This skill can be installed via conversation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Method 1: Install SkillHub and the skill

Help me install SkillHub and the xcrawl-map-1776122912 skill

Method 2: Set SkillHub as the preferred skill installation source

Set SkillHub as my preferred skill installation source, then help me install the xcrawl-map-1776122912 skill

Install via Command Line

skillhub install xcrawl-map-1776122912

Download Zip Package

⬇ Download xcrawl-map v1.0.2

File size: 2.57 KB | Published: 2026-4-15 15:02

v1.0.2 (latest) 2026-4-15 15:02
Initial release of the xcrawl-map skill for site URL discovery and crawl planning:

- Supports XCrawl Map API for site URL mapping with regex filtering, subdomain inclusion, and optional query ignoring.
- Reads API key only from `~/.xcrawl/config.json`; no global environment variables required.
- Requires `curl` or `node` tools; no Python or extra scripts.
- Includes setup guidance for XCrawl account and free credits activation.
- Provides sample Bash (curl) and Node usage examples.
- Raw API responses are returned as-is unless a summary is explicitly requested.

