
xcrawl-search

Use this skill for XCrawl search tasks, including keyword search request design, location and language controls, result analysis, and follow-up crawl or scrape planning.

Author: admin | Source: ClawHub
Version: V 1.0.2
Security check: passed
Downloads: 796
Favorites: 0

xcrawl-search

# XCrawl Search

## Overview

This skill uses the XCrawl Search API to retrieve query-based results. Default behavior is raw passthrough: return upstream API response bodies as-is.

## Required Local Config

Before using this skill, the user must create a local config file containing `XCRAWL_API_KEY`.

Path: `~/.xcrawl/config.json`

```json
{
  "XCRAWL_API_KEY": "<your_api_key>"
}
```

Read the API key from the local config file only. Do not require global environment variables.

## Credits and Account Setup

Using XCrawl APIs consumes credits. If the user has no account or no available credits, guide them to register at `https://dash.xcrawl.com/`. After registration, they can activate the free `1000`-credit plan before running requests.

## Tool Permission Policy

Request runtime permissions for `curl` and `node` only. Do not request Python, shell helper scripts, or other runtime permissions.

## API Surface

- Search endpoint: `POST /v1/search`
- Base URL: `https://run.xcrawl.com`
- Required header: `Authorization: Bearer <XCRAWL_API_KEY>`

## Usage Examples

### cURL

```bash
API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")"
curl -sS -X POST "https://run.xcrawl.com/v1/search" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{"query":"AI web crawler API","location":"US","language":"en","limit":20}'
```

### Node

```bash
node -e '
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
const body={query:"web scraping pricing",location:"DE",language:"de",limit:30};
fetch("https://run.xcrawl.com/v1/search",{
  method:"POST",
  headers:{"Content-Type":"application/json",Authorization:`Bearer ${apiKey}`},
  body:JSON.stringify(body)
}).then(async r=>{console.log(await r.text());});
'
```

## Request Parameters

### Request endpoint and headers

- Endpoint: `POST https://run.xcrawl.com/v1/search`
- Headers:
  - `Content-Type: application/json`
  - `Authorization: Bearer <api_key>`

### Request body: top-level fields

| Field | Type | Required | Default | Description |
|---|---|---:|---|---|
| `query` | string | Yes | - | Search query |
| `location` | string | No | `US` | Location (country/city/region name or ISO code; best effort) |
| `language` | string | No | `en` | Language (ISO 639-1) |
| `limit` | integer | No | `10` | Max results (`1-100`) |

## Response Parameters

| Field | Type | Description |
|---|---|---|
| `search_id` | string | Task ID |
| `endpoint` | string | Always `search` |
| `version` | string | Version |
| `status` | string | `completed` |
| `query` | string | Search query |
| `data` | object | Search result data |
| `started_at` | string | Start time (ISO 8601) |
| `ended_at` | string | End time (ISO 8601) |
| `total_credits_used` | integer | Total credits used |

`data` notes from the current API reference:

- The concrete result schema is implementation-defined.
- It includes billing fields such as `credits_used` and `credits_detail`.

## Workflow

1. Rewrite the request as a clear search objective.
   - Include entity, geography, language, and freshness intent.
2. Build and execute `POST /v1/search`.
   - Keep the request explicit and deterministic.
3. Return the raw API response directly.
   - Do not synthesize relevance summaries unless requested.

## Output Contract

Return:

- Endpoint used (`POST /v1/search`)
- `request_payload` used for the request
- Raw response body from the search call
- Error details when the request fails

Do not generate summaries unless the user explicitly requests one.

## Guardrails

- Do not claim ranking guarantees that the API does not expose.
- Do not fabricate unavailable filters or response fields.
- Do not hardcode provider-specific tool schemas in core logic.
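The defaults and bounds in the request-parameter table can be encoded in a small helper. This is a minimal sketch, not part of the skill itself: the function name `buildSearchRequest` and its defensive clamping are assumptions based only on the documented defaults (`US`, `en`, `10`) and the stated `1-100` range for `limit`.

```javascript
// Hypothetical helper: build a /v1/search request from the documented defaults.
const BASE_URL = "https://run.xcrawl.com";

function buildSearchRequest(apiKey, { query, location = "US", language = "en", limit = 10 }) {
  if (!query) throw new Error("query is required");
  // The table documents limit as an integer in 1-100; clamp defensively.
  const clamped = Math.min(100, Math.max(1, Math.trunc(limit)));
  return {
    url: `${BASE_URL}/v1/search`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ query, location, language, limit: clamped }),
  };
}

// Omitted fields fall back to the documented defaults.
const req = buildSearchRequest("sk_example", { query: "AI web crawler API" });
console.log(req.url, JSON.parse(req.body).limit); // https://run.xcrawl.com/v1/search 10
```

Keeping the payload construction in one place makes the request explicit and deterministic, as the workflow requires.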
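The four items of the Output Contract can likewise be captured as a small shaping function. `shapeOutput` and its field names are hypothetical illustrations, not part of the skill; the point is that the raw body is passed through untouched and error details are always surfaced.

```javascript
// Hypothetical sketch of the Output Contract: report the endpoint used,
// the request payload, the raw response body, and any error details.
function shapeOutput({ requestPayload, rawBody = null, error = null }) {
  return {
    endpoint: "POST /v1/search",         // endpoint used
    request_payload: requestPayload,     // payload sent upstream
    raw_response: rawBody,               // upstream body, passed through as-is
    error: error ? String(error) : null, // error details when the request fails
  };
}

// Success case: the raw body is returned verbatim and no summary is generated.
const ok = shapeOutput({
  requestPayload: { query: "web scraping pricing", limit: 30 },
  rawBody: '{"status":"completed"}',
});
console.log(ok.raw_response); // {"status":"completed"}
```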

Tags

skill ai

Install via conversation

This skill supports conversational installation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Method 1: install SkillHub and the skill

Help me install SkillHub and the xcrawl-search-1776122906 skill

Method 2: set SkillHub as the preferred skill installation source

Set SkillHub as my preferred skill installation source, then help me install the xcrawl-search-1776122906 skill

Install via command line

skillhub install xcrawl-search-1776122906

Download Zip package

⬇ Download xcrawl-search v1.0.2

File size: 2.64 KB | Published: 2026-4-15 15:03

v1.0.2 (latest) 2026-4-15 15:03
Initial release of xcrawl-search skill.

- Supports XCrawl Search API for keyword searches with location and language controls.
- Reads API key only from local config file at ~/.xcrawl/config.json.
- Guides users to sign up and activate free credits if needed.
- Provides cURL and Node usage examples for making search requests.
- Allows use of curl and node tools exclusively (no Python or shell helpers).
- Returns endpoint, input payload, and raw response for every request; no summaries unless requested.
