whisper-cli usually refers to the command-line tool shipped with whisper.cpp (the `main` executable). The notes below cover its core usage on Windows, aimed at lightweight, CPU-first, fully local speech recognition.

1. Basic command format

```bash
./main -m models/ggml-base.en.bin -f audio.wav
```

Core parameters:

- `-m <model-path>`: path to the model file (required; download a quantized model such as `ggml-base.en.bin` beforehand).
- `-f <audio-path>`: path of the audio file to transcribe (FLAC/MP3/OGG/WAV are supported; 16 kHz mono WAV is recommended).

2. Key optimization parameters (CPU-first, near-real-time use)

Quantization and low-resource settings:

```bash
./main -m models/ggml-tiny.en.bin -f audio.wav -ac 512 -t 4
```

- `-ac <n>` (`--audio-ctx`): audio context size; smaller values reduce memory use and speed up decoding at some accuracy cost (0 means the full context).
- `-t <n>` (`--threads`): number of CPU threads; roughly half the physical core count balances speed against resource consumption.
- Model choice: prefer the tiny/base English models (e.g. `ggml-tiny.en.bin`): small, fast, and adequate for narrow-domain voice commands.

Real-time recognition (microphone input): `main` only reads files; live microphone capture is handled by the separate `stream` tool built alongside it:

```bash
./stream -m models/ggml-base.en.bin --step 500 --length 5000
```

- `--step <ms>`: step size of the sliding window in milliseconds; smaller is closer to real time (500 is a sensible default).
- `--length <ms>`: amount of audio processed per pass (5000, i.e. 5 seconds, is recommended).
- `-c <id>` (`--capture`): capture-device (microphone) ID; the available devices are listed when `stream` starts.

Domain-specific optimization (business-service commands):

```bash
./main -m models/ggml-base.en.bin -f audio.wav --prompt "收款 配镜 验光 取镜" -ml 16
```

- `--prompt <text>`: initial prompt seeded with domain keywords (here "take payment", "fit glasses", "eye exam", "collect glasses"), steering the model toward business-service vocabulary.
- `-ml <n>` (`--max-len`): maximum segment length in characters, a good fit for short voice-command scenarios.

3. Output and format control

- `-otxt`: save the recognition result to a .txt file.
- `-oj`: emit JSON, convenient for programmatic consumption.

```bash
./main -m models/ggml-base.en.bin -f audio.wav -oj
```
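Since `-oj` writes the transcript as JSON, the result is easy to post-process from a script. A minimal Python sketch follows; the embedded sample and its field names (`transcription`, `offsets`, `text`) are modeled on whisper.cpp's JSON output and are an assumption here, so verify them against a file you actually generate:

```python
import json

# Hypothetical sample mirroring the layout whisper.cpp emits with -oj:
# a "transcription" array whose entries carry "text" plus millisecond
# "offsets". The field names are an assumption; check a real output file.
sample = """
{
  "transcription": [
    {"offsets": {"from": 0, "to": 2500}, "text": " take payment"},
    {"offsets": {"from": 2500, "to": 5000}, "text": " eye exam"}
  ]
}
"""

def collect_text(raw: str) -> str:
    """Join the text of every transcribed segment into one string."""
    doc = json.loads(raw)
    return "".join(seg["text"] for seg in doc.get("transcription", [])).strip()

print(collect_text(sample))  # prints: take payment eye exam
```

For short command-style utterances the per-segment `offsets` can also be used to drop segments outside an expected duration window.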
Full command-line reference:

```
usage: D:\ai\asr\whisper64\whisper-cli.exe [options] file0 file1 ...

supported audio formats: flac, mp3, ogg, wav

options:
  -h,        --help                [default] show this help message and exit
  -t N,      --threads N           [4      ] number of threads to use during computation
  -p N,      --processors N        [1      ] number of processors to use during computation
  -ot N,     --offset-t N          [0      ] time offset in milliseconds
  -on N,     --offset-n N          [0      ] segment index offset
  -d N,      --duration N          [0      ] duration of audio to process in milliseconds
  -mc N,     --max-context N       [-1     ] maximum number of text context tokens to store
  -ml N,     --max-len N           [0      ] maximum segment length in characters
  -sow,      --split-on-word       [false  ] split on word rather than on token
  -bo N,     --best-of N           [5      ] number of best candidates to keep
  -bs N,     --beam-size N         [5      ] beam size for beam search
  -ac N,     --audio-ctx N         [0      ] audio context size (0 - all)
  -wt N,     --word-thold N        [0.01   ] word timestamp probability threshold
  -et N,     --entropy-thold N     [2.40   ] entropy threshold for decoder fail
  -lpt N,    --logprob-thold N     [-1.00  ] log probability threshold for decoder fail
  -nth N,    --no-speech-thold N   [0.60   ] no speech threshold
  -tp,       --temperature N       [0.00   ] The sampling temperature, between 0 and 1
  -tpi,      --temperature-inc N   [0.20   ] The increment of temperature, between 0 and 1
  -debug,    --debug-mode          [false  ] enable debug mode (eg. dump log_mel)
  -tr,       --translate           [false  ] translate from source language to english
  -di,       --diarize             [false  ] stereo audio diarization
  -tdrz,     --tinydiarize         [false  ] enable tinydiarize (requires a tdrz model)
  -nf,       --no-fallback         [false  ] do not use temperature fallback while decoding
  -otxt,     --output-txt          [false  ] output result in a text file
  -ovtt,     --output-vtt          [false  ] output result in a vtt file
  -osrt,     --output-srt          [false  ] output result in a srt file
  -olrc,     --output-lrc          [false  ] output result in a lrc file
  -owts,     --output-words        [false  ] output script for generating karaoke video
  -fp,       --font-path           [/System/Library/Fonts/Supplemental/Courier New Bold.ttf] path to a monospace font for karaoke video
  -ocsv,     --output-csv          [false  ] output result in a CSV file
  -oj,       --output-json         [false  ] output result in a JSON file
  -ojf,      --output-json-full    [false  ] include more information in the JSON file
  -of FNAME, --output-file FNAME   [       ] output file path (without file extension)
  -np,       --no-prints           [false  ] do not print anything other than the results
  -ps,       --print-special       [false  ] print special tokens
  -pc,       --print-colors        [false  ] print colors
             --print-confidence    [false  ] print confidence
  -pp,       --print-progress      [false  ] print progress
  -nt,       --no-timestamps       [false  ] do not print timestamps
  -l LANG,   --language LANG       [en     ] spoken language (auto for auto-detect)
  -dl,       --detect-language     [false  ] exit after automatically detecting language
             --prompt PROMPT       [       ] initial prompt (max n_text_ctx/2 tokens)
             --carry-initial-prompt [false ] always prepend initial prompt
  -m FNAME,  --model FNAME         [models/ggml-base.en.bin] model path
  -f FNAME,  --file FNAME          [       ] input audio file path
  -oved D,   --ov-e-device DNAME   [CPU    ] the OpenVINO device used for encode inference
  -dtw MODEL --dtw MODEL           [       ] compute token-level timestamps
  -ls,       --log-score           [false  ] log best decoder scores of tokens
  -ng,       --no-gpu              [false  ] disable GPU
  -fa,       --flash-attn          [true   ] enable flash attention
  -nfa,      --no-flash-attn       [false  ] disable flash attention
  -sns,      --suppress-nst        [false  ] suppress non-speech tokens
             --suppress-regex REGEX [      ] regular expression matching tokens to suppress
             --grammar GRAMMAR     [       ] GBNF grammar to guide decoding
             --grammar-rule RULE   [       ] top-level GBNF grammar rule name
             --grammar-penalty N   [100.0  ] scales down logits of nongrammar tokens

Voice Activity Detection (VAD) options:
             --vad                 [false  ] enable Voice Activity Detection (VAD)
  -vm FNAME, --vad-model FNAME     [       ] VAD model path
  -vt N,     --vad-threshold N     [0.50   ] VAD threshold for speech recognition
  -vspd N,   --vad-min-speech-duration-ms  N [250    ] VAD min speech duration (0.0-1.0)
  -vsd N,    --vad-min-silence-duration-ms N [100    ] VAD min silence duration (to split segments)
  -vmsd N,   --vad-max-speech-duration-s   N [FLT_MAX] VAD max speech duration (auto-split longer)
  -vp N,     --vad-speech-pad-ms N [30     ] VAD speech padding (extend segments)
  -vo N,     --vad-samples-overlap N [0.10 ] VAD samples overlap (seconds between segments)
```
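When embedding whisper-cli in a larger program, it is safer to assemble the argument vector than to concatenate a shell string. The sketch below uses only flags documented in the help text above; `build_cmd` is a hypothetical helper, and the resulting list is meant to be handed to your process-spawning API of choice (e.g. `subprocess.run`):

```python
def build_cmd(model, audio, threads=4, prompt=None, json_out=True):
    """Assemble a whisper-cli argv list from the flags documented above."""
    cmd = ["./main", "-m", str(model), "-f", str(audio), "-t", str(threads)]
    if prompt:
        # --prompt biases decoding toward the given domain keywords
        cmd += ["--prompt", prompt]
    if json_out:
        cmd.append("-oj")  # write the transcript as a JSON file
    return cmd

print(" ".join(build_cmd("models/ggml-base.en.bin", "audio.wav", prompt="收款 配镜")))
```

Passing a list rather than a single string also avoids quoting problems when the prompt contains spaces or non-ASCII keywords.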