Releases: codeofdusk/gptcmd
v2.3.2
v2.3.1
Welcome to the 14 November 2025 release of Gptcmd!
Changes
- Cost estimation has been added for `gpt-5.1`.
v2.3.0
Welcome to the 30 October 2025 release of Gptcmd!
New features
- Gptcmd can now notify you of updates to both itself and some external providers on startup.
  - This feature can be disabled by setting the `check_for_updates` config option to `false`.
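As a sketch, turning the startup update check off should look something like this in the Gptcmd configuration file (the option name comes from the note above; its exact placement in the file is assumed here, so consult the default configuration if in doubt):

```toml
# Disable Gptcmd's startup check for updates to itself and external providers
check_for_updates = false
```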
Changes
- Cost estimation has been added for the `gpt-5` model family.
v2.2.0
Welcome to the 7 July 2025 release of Gptcmd! This release introduces new automation and data management features, including a macro system, message-level metadata, and support for audio attachments.
Important notes
Python 3.7 support has been removed
Gptcmd now requires Python 3.8.6 or later.
New features
- Gptcmd now supports user-defined command macros, making it easier and more efficient to enter complex or frequently-used commands.
  - Macros are defined in the `[macros]` table of the Gptcmd configuration file. Each key in `[macros]` creates a custom command; each value lists the underlying commands to run, one per line.
  - Positional arguments (`{1}`, `{2}`, `{*}`), default parameters (`{1?default}`), and built-in variables (`{account}`, `{model}`, `{thread}`) are supported.
  - For more information on the macro format, consult Gptcmd's default configuration.
- Messages can now store arbitrary key–value metadata. Use the `meta` and `unmeta` commands to add, view, or remove metadata on any message. This is useful for keeping personal notes or for interacting with third-party `LLMProvider`s that make special parameters available.
- Gptcmd now supports audio: the `audio` command allows local or remote audio files to be attached to messages, similar to the existing `image` command.
  - Audio requires support from the active `LLMProvider`: OpenAI (via an audio-specific model) and Gemini (when configured as an "OpenAI-like" provider) are known to work.
- A `grep` command has been added to search the active thread using regular expressions. This is especially useful if you forget the index of a message and want to find it again!
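To illustrate the macro format described above, a sketch of a `[macros]` table follows. The macro names and command text here are invented for illustration; only the substitution syntax (`{1}`, `{1?default}`, `{*}`, `{model}`) and the commands `grep` and `say` come from the notes above, so consult Gptcmd's default configuration for the authoritative format:

```toml
[macros]
# Each key defines a custom command; each value lists the underlying
# commands to run, one per line.

# "find" greps the thread for its first argument, defaulting to "TODO"
# when no argument is given (hypothetical macro).
find = "grep {1?TODO}"

# {*} expands to all arguments; {model} is a built-in variable
# (hypothetical macro).
ask = "say As {model}, answer briefly: {*}"
```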
Changes
- Gptcmd now loads non-default accounts on demand, improving start-up time for users with many configured accounts.
- Cost estimation has been updated for the `o3` model family and added for the `gpt-4o-audio-preview` model.
Bug fixes
- Crash dumps should now be saved in a broader range of exceptional situations, improving data recovery.
v2.1.0
Welcome to the 27 April 2025 release of Gptcmd! This release focuses on improving model handling and compatibility (especially for "OpenAI-like" provider configurations), increasing stability, and refining the user experience.
Important notes
Python 3.7 deprecation
Python 3.7 reached end-of-life on 27 June 2023. This is the final release with 3.7 compatibility. Future releases will require Python 3.8.6 or later.
New features
- Gptcmd can now disambiguate partial model names (for instance, resolving `4o` to `gpt-4o` if appropriate), simplifying model selection.
- Ongoing API requests can now be cancelled by pressing Ctrl+c.
- Added cost estimation support for the `gpt-4.1` model family, `o3`, and `o4-mini`.
- `LLMProvider` implementations can now opt out of model validation, improving support, for instance, for "OpenAI-like" providers that don't implement a model listing endpoint.
- The automatic creation of new named threads by the `retry` command is now configurable via the `create_new_thread_on_retry` configuration setting (set to `ask` by default). To restore the previous behaviour, set this option to `always`.
- To prevent data loss, Gptcmd now attempts to save messages and some application state to a JSON file in the current directory when an unexpected application failure occurs.
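Based on the option name and values above, restoring the previous `retry` behaviour might look like this in the configuration file (a sketch; the option's exact placement in the file is assumed):

```toml
# "ask" (the default) prompts before creating a new named thread on retry;
# "always" restores the pre-2.1 behaviour of always creating one.
create_new_thread_on_retry = "always"
```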
Changes
- On OpenAI accounts, `gpt-4.1` is now used by default if available.
Bug fixes
- Improved handling of edge cases in OpenAI responses, enhancing stability especially when interacting with "OpenAI-like" providers.
- General improvements to stability and performance.
v2.0.3
Welcome to the 7 March 2025 release of Gptcmd!
New features
- Added cost estimation support for `gpt-4.5-preview`.
Changes
- Enabled streaming on `o1`.
v2.0.2
Welcome to the 9 February 2025 release of Gptcmd! This minor update keeps Gptcmd current with OpenAI's latest changes, particularly for the o-series models.
New features
- Gptcmd's cost estimator now supports the latest available `o1`, `o3-mini`, and `gpt-4o` models.
- Added support for system messages on `o1` and `o3` series models.
Changes
- Gptcmd now shows a simpler error message when an `OpenAI` account fails to initialize from configuration. (#1)
- Since the default model is now vision-capable and to reduce false positives, the "this model may not support vision" warning has been removed.
- Messages can now be moved backwards within a thread (for example, `move -1 1` is equivalent to `flip` in Gptcmd 1.x).
v2.0.1
v2.0.0
Welcome to the 29 November 2024 release of Gptcmd! This is a very substantial release that introduces multi-provider and multi-account support, adds the ability to attach images to messages for use with vision models, implements a new configuration system and message editor, and enhances cost estimation, streamed responses, and the general command-line experience.
Important notes
Python 3.7 deprecation
Python 3.7 reached end-of-life on 27 June 2023. While Gptcmd currently maintains best-effort compatibility with Python 3.7, this support is deprecated and will be removed in the next release. Updating to Python 3.8.6 or later is strongly recommended.
JSON file compatibility
JSON files created with older versions of Gptcmd can be loaded with this version, but any files saved with Gptcmd version 2.0.0 or later will be incompatible with previous versions due to changes in the JSON format. Users attempting to load the newer JSON format using a previous release will be instructed to update Gptcmd to version 2.0.0.
New features
- An `image` command has been added, which allows images to be attached to messages by file or URL. Consult the readme for instructions on using Gptcmd with vision models.
- Gptcmd can now be used with additional providers besides OpenAI:
  - Azure AI.
  - OpenAI-compatible APIs, such as OpenRouter and Ollama.
  - Anthropic Claude.
  - A custom provider of your own design.
- Cost estimation has been completely rewritten:
  - The new cost estimator supports nearly all OpenAI models.
  - Gptcmd now provides cost estimates for streamed responses that complete successfully.
  - Incomplete cost estimates (estimates for sessions where not every response has a cost estimate available) can optionally be enabled.
  - Gptcmd now takes the discount on cached prompt tokens into account when calculating estimated OpenAI costs.
- Gptcmd can now display prompt and sampled token usage on streamed responses.
- Gptcmd now has a configuration system for setting application options and specifying credentials for large language model provider accounts. Consult the readme for more information about the configuration format.
  - An `account` command has been added to switch between configured large language model provider accounts.
- The `flip` command has been replaced with a `move` command that allows for arbitrary message reordering.
- The `slice` command has been replaced with a `copy` command that appends copies of a message range to a specified thread.
- Gptcmd now supports the use of an external text editor for some operations:
  - With no arguments, the `user`, `assistant`, `system`, and `say` commands now open an external editor for message composition.
  - An `edit` command has been added, which opens the selected message in an external editor so its content can be modified.
- Command feedback across the application has been significantly improved.
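As a sketch of the configuration system introduced in this release, an account definition might look something like the following. The table and key names here (`accounts`, `provider`, `api_key`) are hypothetical illustrations, not the confirmed format; consult the readme for the actual configuration reference:

```toml
# Hypothetical account configuration: one table per named account.
[accounts.default]
provider = "openai"     # which LLMProvider backs this account (assumed key)
api_key = "sk-..."      # placeholder credential

[accounts.claude]
provider = "anthropic"  # assumed provider name
api_key = "..."
```

Named accounts like these would then be selectable at runtime with the `account` command described above.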
Changes
- Gptcmd now displays token usage by request, not by session.
- The `n` API parameter, which controls the number of responses generated by OpenAI models, is no longer supported.
- The default `temperature` setting of 0.6 has been removed, so no temperature value is sent with API requests unless explicitly set. This means that the default `temperature` for OpenAI requests is now effectively set to 1.
- Gptcmd now works with API parameters and the selected model on a per-account (not per-thread) basis and no longer saves these parameters to JSON files.
- By default, Gptcmd now streams responses when possible.
- Gptcmd now uses `gpt-4o` by default in OpenAI sessions.
- The `retry` command now deletes from the end up to the last assistant message, not the entire span of messages after the last user message, before resending. In conversations of alternating user and assistant messages, this change has no effect. However, this greatly simplifies the use of `retry` with models that allow the generated assistant response to be constrained with a custom prefix.
Bug fixes
- Gptcmd is now much more stable when streaming responses.
- Gptcmd's command parsing has been improved, including better handling of quoted file paths and more predictable behaviour with message ranges containing a single negative index.
- General improvements to stability and performance have been introduced as part of a larger refactoring effort.