Witsy is a BYOK (Bring Your Own Keys) AI application: you need API keys for the LLM providers you want to use. Alternatively,
you can use Ollama to run models locally on your machine for free and use them in Witsy.
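To make this concrete, here is a minimal sketch (not Witsy's own code) of pointing an OpenAI-compatible client at a local Ollama server; the same pattern underpins the compatibility layer mentioned below for other providers. The model name is an assumption and must have been pulled with Ollama beforehand.

```ts
import OpenAI from 'openai'

// A local Ollama server exposes an OpenAI-compatible API at http://localhost:11434/v1.
// For hosted providers you would instead pass their base URL and your real API key (BYOK).
const client = new OpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama', // any non-empty string is accepted by a local Ollama server
})

async function main() {
  const completion = await client.chat.completions.create({
    model: 'llama3.2', // assumption: pulled locally with `ollama pull llama3.2`
    messages: [{ role: 'user', content: 'Say hello from a local model.' }],
  })
  console.log(completion.choices[0].message.content)
}

main().catch(console.error)
```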
Connect other providers (Together AI, SiliconFlow, Fireworks...) through the OpenAI compatibility layer
Chat completion with vision models support (describe an image)
Text-to-image and text-to-video with OpenAI, HuggingFace and Replicate
Scratchpad to interactively create the best content with any model!
Prompt Anywhere lets you generate content directly in any application
AI commands runnable on highlighted text in almost any application
Expert prompts to specialize your bot on a specific topic
LLM plugins to augment the LLM: execute Python code, search the Internet...
Long-term memory plugin to increase relevance of LLM answers
Read assistant messages aloud (requires OpenAI or ElevenLabs API key)
Read aloud any text in other applications (requires OpenAI or ElevenLabs API key)
Chat with your local files and documents (RAG)
Transcription/Dictation (Speech-to-Text)
Realtime Chat aka Voice Mode
Anthropic Computer Use support
Local history of conversations (with automatic titles)
Formatting and copy to clipboard of generated code
Conversation PDF export
Image copy and download
Prompt Anywhere
Generate content in any application:
From any editable content in any application
Hit the Prompt Anywhere shortcut (Shift+Control+Space / ^⇧Space)
Enter your prompt in the window that pops up
Watch Witsy enter the text directly in your application!
On Mac, you can define an expert that is automatically triggered depending on the foreground application. For instance, if you have an expert used to generate Linux commands, you can have it selected automatically when you trigger Prompt Anywhere from the Terminal application!
AI Commands
AI commands are quick helpers, accessible from a shortcut, that leverage an LLM to boost your productivity:
Select any text in any application
Hit the AI command shortcut (Alt+Control+Space / ⌃⌥Space)
Select one of the commands and let the LLM do its magic!
You can also create custom commands with the prompt of your liking!
Chat with your Documents (RAG)
You can connect each chat with a document repository: Witsy will first search your local files for relevant documents and provide them to the LLM (a rough sketch of this retrieval flow follows the steps below). To do so:
Click on the database icon on the left of the prompt
Click Manage and then create a document repository
OpenAI embeddings require an API key; Ollama requires an embedding model
Add documents by clicking the + button on the right hand side of the window
Once your document repository is created, click on the database icon once more and select the document repository you want to use. The icon should turn blue.
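For readers curious about what happens under the hood, here is a rough, hypothetical sketch of the retrieve-then-answer flow: embed the question, rank stored chunks by similarity, and pass the best matches to the model. This is not Witsy's implementation; the in-memory Doc store and the model names are assumptions for illustration.

```ts
import OpenAI from 'openai'

const client = new OpenAI() // reads OPENAI_API_KEY from the environment

// Hypothetical document store: in Witsy the repository is built through the UI,
// but conceptually each document chunk is stored alongside an embedding vector.
type Doc = { text: string; embedding: number[] }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

async function embed(text: string): Promise<number[]> {
  const res = await client.embeddings.create({ model: 'text-embedding-3-small', input: text })
  return res.data[0].embedding
}

// Retrieve the most relevant chunks for a question and hand them to the model.
async function askWithContext(question: string, docs: Doc[]): Promise<string> {
  const q = await embed(question)
  const top = docs
    .map(d => ({ d, score: cosine(q, d.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
  const context = top.map(t => t.d.text).join('\n---\n')
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: `Answer using this context:\n${context}` },
      { role: 'user', content: question },
    ],
  })
  return completion.choices[0].message.content ?? ''
}
```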
Transcription / Dictation (Speech-to-Text)
You can transcribe audio recorded from the microphone into text. Transcription can be done with the OpenAI Whisper online model (requires an API key) or with a local Whisper model (requires downloading large files); a minimal sketch of the online option follows the list below. Once the text is transcribed you can:
Copy it to your clipboard
Insert it in the application that was running before you activated the dictation
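As a rough illustration of the online option (not Witsy's own code), this sketch sends a recorded audio file to OpenAI's hosted Whisper model; the file path is a placeholder for whatever the microphone captured.

```ts
import fs from 'node:fs'
import OpenAI from 'openai'

const client = new OpenAI() // requires OPENAI_API_KEY in the environment

// Transcribe a recorded audio file with the hosted Whisper model.
async function transcribe(path: string): Promise<string> {
  const result = await client.audio.transcriptions.create({
    file: fs.createReadStream(path),
    model: 'whisper-1',
  })
  return result.text
}

transcribe('recording.wav').then(text => console.log(text)).catch(console.error)
```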