
corespeed-studio

Generate video, images, audio, and music using 40+ AI models via fal.ai. Use for video generation (Kling v3, Sora 2, Veo 3.1, LTX 2.3, Pixverse v5), image generation (Nano Banana 2, FLUX 2 Pro/Schnell, GPT Image 1.5, Qwen Image 2 Pro, Recraft V4, Seedream 5), text-to-speech (MiniMax Speech-02 HD), music/sound effects (Beatoven), and utilities (Topaz upscale, background removal, lipsync). Use when a user asks to create videos, generate images, produce voiceovers, create music/sound effects, upscale media, remove backgrounds, or add lipsync.

Author: admin | Source: ClawHub
Version: V 1.0.0
Security check: Passed
Downloads: 107
Favorites: 1


# Corespeed Art — Multi-Model AI Media via fal.ai

Auth: Set `FAL_KEY` with your fal.ai API key (get one at https://fal.ai/dashboard/keys).

## Workflow

1. Pick a model from the tables below
2. **Read its reference file** to get the exact endpoint and parameters
3. Run the command with the endpoint and JSON parameters

## Usage

```bash
uv run {baseDir}/scripts/fal.py ENDPOINT --json '{"param":"value"}' -f output.ext [-i input.ext]
```

- `ENDPOINT` — the fal.ai model path from the reference file (e.g. `fal-ai/nano-banana-2`)
- `--json` — model parameters as a JSON object
- `-f` — output filename
- `-i` — input file(s) to upload (repeat for multiple), auto-injected as `image_url`/`image_urls`/`start_image_url`/`video_url`
- `--audio` — audio input file (for lipsync)

## Image Generation

| Model | Best For | Reference |
|-------|----------|-----------|
| Nano Banana 2 | Pro quality, web search, thinking | Read [nanobanana.md](references/nanobanana.md) |
| FLUX 2 Pro | Photorealistic, zero-config | Read [flux.md](references/flux.md) |
| FLUX Schnell | ⚡ Fastest iteration | Read [flux.md](references/flux.md) |
| FLUX Pro v1.1 | Accelerated, commercial use | Read [flux.md](references/flux.md) |
| FLUX.1 Dev | 12B params, fine-tuning friendly | Read [flux.md](references/flux.md) |
| GPT Image 1.5 | Transparent bg, instruction following | Read [gpt.md](references/gpt.md) |
| Qwen Image 2 Pro | Chinese+English, typography, native 2K | Read [qwen.md](references/qwen.md) |
| Recraft V4 Pro | Design/marketing, color control | Read [recraft.md](references/recraft.md) |
| Seedream 5 Lite | Multi-image editing, reasoning | Read [seedream.md](references/seedream.md) |

## Video Generation

| Model | Best For | Reference |
|-------|----------|-----------|
| Kling v3 Pro I2V | Best I2V, multi-shot, audio, 3–15s | Read [kling.md](references/kling.md) |
| Sora 2 T2V | Long video up to 20s, characters | Read [sora.md](references/sora.md) |
| Sora 2 I2V | Image→video with Sora | Read [sora.md](references/sora.md) |
| Veo 3.1 T2V | Cinematic + native audio/dialogue | Read [veo.md](references/veo.md) |
| Veo 3.1 I2V | Image→video with audio | Read [veo.md](references/veo.md) |
| LTX 2.3 T2V Fast | ⚡ Fast, up to 2160p/20s, open source | Read [ltx.md](references/ltx.md) |
| LTX 2.3 I2V | Image→video, start+end frame | Read [ltx.md](references/ltx.md) |
| Pixverse v5 I2V | Anime, 3D, clay, cyberpunk styles | Read [pixverse.md](references/pixverse.md) |

## Audio / TTS

| Model | Best For | Reference |
|-------|----------|-----------|
| MiniMax Speech-02 HD | 30+ languages, loudness normalization | Read [minimax-speech.md](references/minimax-speech.md) |

## Music & Sound Effects

| Model | Best For | Reference |
|-------|----------|-----------|
| Beatoven Music | AI music, up to 90s | Read [beatoven-music.md](references/beatoven-music.md) |

## Utilities

| Tool | Best For | Reference |
|------|----------|-----------|
| Topaz Upscale | AI image/video upscale 2x–4x | Read [topaz.md](references/topaz.md) |
| BRIA RMBG | Professional background removal | Read [bria-rmbg.md](references/bria-rmbg.md) |
| Sync Lipsync | Audio-driven lip sync on video | Read [sync-lipsync.md](references/sync-lipsync.md) |

## Notes

- **No manual Python setup required.** The script uses [PEP 723 inline metadata](https://peps.python.org/pep-0723/). `uv run` automatically creates an isolated virtual environment and installs the `fal-client` dependency on first run.
- fal.ai uses a **queue** system — the script polls until generation completes.
- Video generation can take 30s–3min.
- Use timestamps in filenames: `yyyy-mm-dd-hh-mm-ss-name.ext`.
- The script prints a `MEDIA:` line for OpenClaw to auto-attach.
- Do not read generated media back; report the saved path only.

## Support

Built by [Corespeed](https://corespeed.io). If you need help or run into issues:

- 💬 Discord: [discord.gg/mAfhakVRnJ](https://discord.gg/mAfhakVRnJ)
- 🐦 X/Twitter: [@CoreSpeed_io](https://x.com/CoreSpeed_io)
- 🐙 GitHub: [github.com/corespeed-io/skills](https://github.com/corespeed-io/skills/issues)

Tags

skill ai

Install via Conversation

This skill can be installed via conversation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Option 1: Install SkillHub and the skill

Help me install SkillHub and the corespeed-studio-1776119950 skill

Option 2: Set SkillHub as the preferred skill installation source

Set SkillHub as my preferred skill installation source, then help me install the corespeed-studio-1776119950 skill

Install via Command Line

skillhub install corespeed-studio-1776119950

Download Zip Package

⬇ Download corespeed-studio v1.0.0

File size: 19.73 KB | Published: 2026-4-14 14:09

v1.0.0 (latest), 2026-4-14 14:09
Initial release of corespeed-studio — multi-model AI media generator via fal.ai.

- Supports video, image, audio, and music generation using 40+ fal.ai models.
- Easy command-line workflow with isolated environment and automatic dependency setup.
- Includes video generation (Kling, Sora, Veo, LTX, Pixverse), image generation (Nano Banana, FLUX, GPT Image, Recraft), text-to-speech, AI music, upscaling, background removal, and lipsync.
- Tables describe best use per model and link to reference files for parameters.
- No manual Python setup required; outputs ready-to-use media files.

