Bridge the gap between "what I want to do" and "how do I write that command?" with an intelligent terminal assistant that translates natural language into executable bash commands.
Whether you're a terminal veteran or a newcomer, hi-shell provides a fast, AI-powered way to generate and execute commands safely.
Run models locally using candle with hardware acceleration (Metal/CUDA). Supports Llama, Phi-3, and Qwen2 architectures.
Connect to your own Ollama or LM Studio instance for complete privacy and control.
Seamless integration with OpenRouter, Gemini, and Anthropic for powerful cloud-based models.
A dedicated shell environment for continuous assistance and iterative command building.
Dangerous commands are flagged, and confirmation is required before execution. Your system stays safe.
Optimized for speed with hardware acceleration support. Get your commands in milliseconds.
Choose your preferred installation method. We detect your operating system automatically.
Install script (Linux/macOS):

```bash
curl -sSL https://raw.githubusercontent.com/tufantunc/hi-shell/main/install.sh | bash
```

Homebrew:

```bash
brew tap tufantunc/tap && brew install hi-shell
```

Scoop (Windows):

```bash
scoop bucket add hi-shell https://github.com/tufantunc/scoop-bucket && scoop install hi-shell
```

Cargo:

```bash
cargo install hi-shell
```

Run the initialization command to set up your preferred LLM provider:

```bash
hi-shell --init
```

Just prefix your natural language request with `hi-shell` and let the magic happen.
Get quick answers directly from your command line. Just describe what you want to do in natural language and get the exact command you need.
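For instance, a request like the one in the comment below might resolve to a one-line `find` invocation. Both the request and the generated command here are illustrative, not captured from hi-shell's output (which depends on your configured model):

```bash
# You would ask something like:
#   hi-shell "find files larger than 100 MB under the current directory"
# The kind of command an assistant like this would hand back:
find . -type f -size +100M
```

You review the suggested command before it runs, so a slightly-off suggestion costs you a glance, not a broken system.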
Start a dedicated shell environment for continuous assistance. The context is preserved between commands, so you can refine your requests naturally like a conversation.
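Conceptually, context preservation means threading earlier turns into every new prompt. A minimal sketch of that idea, assuming nothing about hi-shell's real internals (the `ask` function and the prompt layout are hypothetical):

```bash
#!/usr/bin/env bash
# Toy model of a context-preserving session: each request is appended to a
# running transcript, and the whole transcript accompanies every new prompt.
HISTORY=""

ask() {
  local request="$1"
  HISTORY="${HISTORY}User: ${request}\n"
  # Build the prompt the model would see: all prior turns plus the new one.
  printf "Context so far:\n%b" "$HISTORY"
}

ask "list all docker containers" > /dev/null
ask "now only the stopped ones"   # the model also sees the earlier request
```

Because the second prompt carries the first request, a follow-up like "now only the stopped ones" is meaningful on its own, which is what makes refining requests feel like a conversation.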
Dangerous commands are automatically detected and flagged with a warning. Confirmation is always required before execution, keeping your system safe.
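The gate can be pictured as a pattern check that runs before anything executes. This is a toy sketch, not hi-shell's actual detection logic; the pattern list and function names are made up for illustration:

```bash
#!/usr/bin/env bash
# Toy safety gate: flag commands matching known-destructive patterns
# and require explicit confirmation before running them.
DANGEROUS_PATTERNS='rm -rf /|mkfs|dd .*of=/dev/'

is_dangerous() {
  # True (exit 0) when the command matches a destructive pattern.
  echo "$1" | grep -Eq "$DANGEROUS_PATTERNS"
}

confirm_and_run() {
  local cmd="$1"
  if is_dangerous "$cmd"; then
    echo "WARNING: '$cmd' looks destructive."
    read -r -p "Run anyway? [y/N] " answer
    [ "$answer" = "y" ] || { echo "Aborted."; return 1; }
  fi
  eval "$cmd"
}

confirm_and_run "echo hello"   # safe command runs without prompting
```

A real implementation would need a far richer ruleset than a three-entry regex, but the shape is the same: detect, warn, and block until the user explicitly opts in.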
Stop searching for commands. Start describing what you want to do.