Commit 853787c

feat: add support for OpenAI Responses API
1 parent 17cab1d commit 853787c

12 files changed, +452 -295 lines changed

CHANGELOG.md

Lines changed: 4 additions & 0 deletions
@@ -1,5 +1,9 @@
 # CHANGELOG
 
+## v4.9.3
+
+* Add support for OpenAI Responses API
+
 ## v4.9.2
 
 * Add prompt-based tool calling

README.md

Lines changed: 18 additions & 3 deletions
@@ -187,6 +187,7 @@ For Azure OpenAI Service, apiBaseUrl should be set to format `https://[YOUR-ENDP
 [Github Copilot](https://github.com/features/copilot) is supported with built-in authentication (a popup will ask your permission when using Github Copilot models).
 
 **Supported Models:**
+
 - **OpenAI Models**: `gpt-3.5-turbo`, `gpt-4`, `gpt-4-turbo`, `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.5`
 - **Reasoning Models**: `o1-ga`, `o3-mini`, `o3`, `o4-mini`
 - **Claude Models**: `claude-3.5-sonnet`, `claude-3.7-sonnet`, `claude-3.7-sonnet-thought`, `claude-sonnet-4`, `claude-opus-4`
@@ -253,6 +254,7 @@ The extension provides various commands accessible through the Command Palette (
 <summary> Context Menu Commands </summary>
 
 ### **Context Menu Commands** (Right-click on selected code)
+
 | Command | Keyboard Shortcut | Description |
 | ------- | ----------------- | ----------- |
 | **Generate Code** | `Ctrl+Shift+A` / `Cmd+Shift+A` | Generate code based on comments or requirements |
@@ -268,11 +270,11 @@ The extension provides various commands accessible through the Command Palette (
 
 </details>
 
-
 <details>
 <summary> General Commands </summary>
 
 ### **General Commands**
+
 | Command | Description |
 | ------- | ----------- |
 | `ChatGPT: Ask anything` | Open input box to ask any question |
@@ -284,14 +286,14 @@ The extension provides various commands accessible through the Command Palette (
 | `Add Current File to Chat Context` | Add the currently open file to chat context |
 | `ChatGPT: Open MCP Servers` | Manage Model Context Protocol servers |
 
-
 </details>
 
 <details>
 
 <summary> Prompt Management </summary>
 
 ### **Prompt Management**
+
 - Use `#` followed by prompt name to search and apply saved prompts
 - Use `@` to add files to your conversation context
 - Access the Prompt Manager through the sidebar for full prompt management
@@ -309,12 +311,14 @@ The extension supports the **Model Context Protocol (MCP)**, allowing you to ext
 ### **What is MCP?**
 
 MCP enables AI models to securely connect to external data sources and tools, providing:
+
 - **Custom Tools**: Integrate your own tools and APIs
 - **Data Sources**: Connect to databases, file systems, APIs, and more
 - **Secure Execution**: Sandboxed tool execution environment
 - **Multi-Step Workflows**: Agent-like behavior with tool chaining
 
 ### **MCP Server Types**
+
 The extension supports three types of MCP servers:
 
 | Type | Description | Use Case |
@@ -323,14 +327,14 @@ The extension supports three types of MCP servers:
 | **sse** | Server-Sent Events over HTTP | Web-based tools and APIs |
 | **streamable-http** | HTTP streaming communication | Real-time data sources |
 
-
 </details>
 
 <details>
 
 <summary> How to configure MCP? </summary>
 
 ### **MCP Configuration**
+
 1. **Access MCP Manager**: Use `ChatGPT: Open MCP Servers` command or click the MCP icon in the sidebar
 2. **Add MCP Server**: Configure your MCP servers with:
    - **Name**: Unique identifier for the server
@@ -343,6 +347,7 @@ The extension supports three types of MCP servers:
 ### **Example MCP Configurations**
 
 **File System Access (stdio):**
+
 ```json
 {
   "name": "filesystem",
@@ -354,6 +359,7 @@ The extension supports three types of MCP servers:
 ```
 
 **Web Search (sse):**
+
 ```json
 {
   "name": "web-search",
@@ -372,6 +378,7 @@ The extension supports three types of MCP servers:
 ### **Agent Mode**
 
 When MCP servers are enabled, the extension operates in **Agent Mode**:
+
 - **Max Steps**: Configure up to 15 tool execution steps
 - **Tool Chaining**: Automatic multi-step workflows
 - **Error Handling**: Robust error recovery and retry logic
@@ -386,6 +393,7 @@ When MCP servers are enabled, the extension operates in **Agent Mode**:
 <summary> Full list of configuration options </summary>
 
 ### **Core Configuration**
+
 | Setting | Default | Description |
 | ------- | ------- | ----------- |
 | `chatgpt.gpt3.provider` | `Auto` | AI Provider: Auto, OpenAI, Azure, AzureAI, Anthropic, GitHubCopilot, Google, Mistral, xAI, Together, DeepSeek, Groq, Perplexity, OpenRouter, Ollama |
@@ -396,6 +404,7 @@ When MCP servers are enabled, the extension operates in **Agent Mode**:
 | `chatgpt.gpt3.organization` | | Organization ID (OpenAI only) |
 
 ### **Model Parameters**
+
 | Setting | Default | Description |
 | ------- | ------- | ----------- |
 | `chatgpt.gpt3.maxTokens` | `0` (unlimited) | Maximum tokens to generate in completion |
@@ -404,6 +413,7 @@ When MCP servers are enabled, the extension operates in **Agent Mode**:
 | `chatgpt.systemPrompt` | | System prompt for the AI assistant |
 
 ### **DeepClaude (Reasoning + Chat) Configuration**
+
 | Setting | Default | Description |
 | ------- | ------- | ----------- |
 | `chatgpt.gpt3.reasoning.provider` | `Auto` | Provider for reasoning model (Auto, OpenAI, Azure, AzureAI, Google, DeepSeek, Groq, OpenRouter, Ollama) |
@@ -413,17 +423,21 @@ When MCP servers are enabled, the extension operates in **Agent Mode**:
 | `chatgpt.gpt3.reasoning.organization` | | Organization ID for reasoning model (OpenAI only) |
 
 ### **Agent & MCP Configuration**
+
 | Setting | Default | Description |
 | ------- | ------- | ----------- |
 | `chatgpt.gpt3.maxSteps` | `15` | Maximum steps for agent mode when using MCP servers |
 
 ### **Feature Toggles**
+
 | Setting | Default | Description |
 | ------- | ------- | ----------- |
 | `chatgpt.gpt3.generateCode-enabled` | `true` | Enable code generation context menu |
 | `chatgpt.gpt3.searchGrounding.enabled` | `false` | Enable search grounding (Gemini models only) |
+| `chatgpt.gpt3.responsesAPI.enabled` | `false` | Enable OpenAI Responses API. Only available for OpenAI/AzureOpenAI models |
 
 ### **Prompt Prefixes & Context Menu**
+
 | Setting | Default | Description |
 | ------- | ------- | ----------- |
 | `chatgpt.promptPrefix.addTests` | `Implement tests for the following code` | Prompt for generating unit tests |
@@ -445,6 +459,7 @@ When MCP servers are enabled, the extension operates in **Agent Mode**:
 | `chatgpt.promptPrefix.customPrompt2-enabled` | `false` | Enable second custom prompt in context menu |
 
 ### **User Interface**
+
 | Setting | Default | Description |
 | ------- | ------- | ----------- |
 | `chatgpt.response.showNotification` | `false` | Show notification when AI responds |

package.json

Lines changed: 28 additions & 26 deletions
@@ -4,7 +4,7 @@
   "displayName": "ChatGPT Copilot",
   "icon": "images/ai-logo.png",
   "description": "An VS Code ChatGPT Copilot Extension",
-  "version": "4.9.2",
+  "version": "4.9.3",
   "aiKey": "",
   "repository": {
     "url": "https://github.com/feiskyer/chatgpt-copilot"
@@ -361,6 +361,7 @@
           "o1-preview",
           "o3",
           "o3-mini",
+          "o3-pro",
           "o4",
           "o4-mini",
           "claude-3-5-sonnet-20240620",
@@ -372,18 +373,13 @@
           "claude-3-7-sonnet-latest",
           "claude-opus-4-20250514",
           "claude-sonnet-4-20250514",
-          "gemini-1.5-flash",
-          "gemini-1.5-pro",
-          "gemini-exp-1206",
           "gemini-2.0-flash",
-          "gemini-2.0-flash-exp",
           "gemini-2.0-flash-lite",
-          "gemini-2.0-pro-exp-02-05",
-          "gemini-2.0-flash-thinking-exp-01-21",
-          "gemini-2.5-flash-preview-05-20",
-          "gemini-2.5-pro-exp-03-25",
+          "gemini-2.0-flash-preview-image-generation",
+          "gemini-2.5-flash",
+          "gemini-2.5-flash-lite-preview-06-17",
+          "gemini-2.5-pro",
           "gemini-2.5-pro-preview-05-06",
-          "learnlm-1.5-pro-experimental",
           "grok-2-1212",
           "grok-2-vision-1212",
           "grok-vision-beta",
@@ -455,6 +451,12 @@
          "description": "Enable search grounding for Gemini model. Only available for Google Gemini models.",
          "order": 11
        },
+       "chatgpt.gpt3.responsesAPI.enabled": {
+         "type": "boolean",
+         "default": false,
+         "description": "Enable OpenAI Responses API. Only available for OpenAI/AzureOpenAI models.",
+         "order": 12
+       },
        "chatgpt.promptPrefix.addTests": {
          "type": "string",
          "default": "Implement tests for the following code",
@@ -646,7 +648,7 @@
     "build": "yarn esbuild-base --sourcemap",
     "watch": "yarn esbuild-base --sourcemap --watch",
     "fmt": "prettier --write \"src/**/*.ts\" && yarn test",
-    "update": "yarn add -D npm-check-updates && ncu -u && yarn install",
+    "update": "yarn add -D npm-check-updates && ncu -u -t latest && yarn install",
     "test": "eslint src --ext ts && tsc --noEmit",
     "package": "yarn vsce package",
     "publish": "yarn vsce publish"
@@ -655,18 +657,18 @@
     "@types/glob": "^8.1.0",
     "@types/isomorphic-fetch": "^0.0.39",
     "@types/mocha": "^10.0.10",
-    "@types/node": "^22.15.29",
+    "@types/node": "^24.0.3",
     "@types/react": "^19",
     "@types/react-dom": "^19",
     "@types/uuid": "^10.0.0",
     "@types/vscode": "1.96.0",
     "@types/vscode-webview": "^1.57.5",
     "@vscode/test-electron": "^2.5.2",
-    "@vscode/vsce": "^3.4.2",
+    "@vscode/vsce": "^3.5.1-3",
     "esbuild": "^0.25.5",
-    "eslint": "^9.28.0",
-    "glob": "^11.0.2",
-    "mocha": "^11.5.0",
+    "eslint": "^9.29.0",
+    "glob": "^11.0.3",
+    "mocha": "^11.7.0",
     "npm-check-updates": "^18.0.1",
     "prettier": "^3.5.3",
     "rimraf": "^6.0.1",
@@ -678,7 +680,7 @@
     "@ai-sdk/anthropic": "^1.2.12",
     "@ai-sdk/azure": "^1.3.23",
     "@ai-sdk/deepseek": "^0.2.14",
-    "@ai-sdk/google": "^1.2.18",
+    "@ai-sdk/google": "^1.2.19",
     "@ai-sdk/groq": "^1.2.9",
     "@ai-sdk/mistral": "^1.2.8",
     "@ai-sdk/openai": "^1.3.22",
@@ -687,25 +689,25 @@
     "@ai-sdk/replicate": "^0.2.8",
     "@ai-sdk/togetherai": "^0.2.14",
     "@ai-sdk/xai": "^1.2.16",
-    "@modelcontextprotocol/sdk": "^1.12.1",
-    "@openrouter/ai-sdk-provider": "^0.7.0",
+    "@modelcontextprotocol/sdk": "^1.13.0",
+    "@openrouter/ai-sdk-provider": "^0.7.2",
     "@quail-ai/azure-ai-provider": "1.2.0",
     "@types/minimatch": "^5.1.2",
     "ai": "^4.3.16",
     "ajv": "^8.17.1",
-    "axios": "^1.9.0",
-    "cheerio": "^1.0.0",
+    "axios": "^1.10.0",
+    "cheerio": "^1.1.0",
     "delay": "^6.0.0",
     "eventsource-parser": "^3.0.2",
     "gpt3-tokenizer": "^1.1.5",
     "isomorphic-fetch": "^3.0.0",
-    "keyv": "^5.3.3",
-    "minimatch": "^10.0.1",
+    "keyv": "^5.3.4",
+    "minimatch": "^10.0.3",
     "ollama-ai-provider": "^1.2.0",
-    "openai": "^5.0.1",
+    "openai": "^5.5.1",
     "p-timeout": "^6.1.4",
     "punycode": "^2.3.1",
-    "puppeteer": "^24.9.0",
+    "puppeteer": "^24.10.2",
     "puppeteer-extra": "^3.3.6",
     "puppeteer-extra-plugin-stealth": "^2.11.2",
     "puppeteer-extra-plugin-user-data-dir": "^2.4.1",
@@ -717,7 +719,7 @@
     "strip-markdown": "^6.0.0",
     "uri-js": "^4.4.1",
     "uuid": "^11.1.0",
-    "zod": "^3.25.46"
+    "zod": "^3.25.67"
   },
   "resolutions": {
     "punycode": "2.3.1",

src/chatgpt-view-provider.ts

Lines changed: 5 additions & 0 deletions
@@ -599,6 +599,9 @@ export default class ChatGptViewProvider implements vscode.WebviewViewProvider {
     const searchGrounding = configuration.get(
       "gpt3.searchGrounding.enabled",
     ) as boolean;
+    const enableResponsesAPI = configuration.get(
+      "gpt3.responsesAPI.enabled",
+    ) as boolean;
 
     let systemPrompt = configuration.get("systemPrompt") as string;
     if (this.systemPromptOverride != "") {
@@ -675,6 +678,7 @@ export default class ChatGptViewProvider implements vscode.WebviewViewProvider {
       organization,
       systemPrompt,
       searchGrounding,
+      enableResponsesAPI,
       isReasoning: false,
     });
     if (this.reasoningModel != "") {
@@ -697,6 +701,7 @@ export default class ChatGptViewProvider implements vscode.WebviewViewProvider {
       organization,
      systemPrompt: "",
       searchGrounding,
+      enableResponsesAPI,
       isReasoning: true,
     });
   }
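
The hunks above only read the new `chatgpt.gpt3.responsesAPI.enabled` setting and thread the resulting `enableResponsesAPI` flag into both model-initialization calls; the provider-side wiring lives in files not shown in this excerpt. A minimal sketch of what such a flag typically switches when using `@ai-sdk/openai` (the helper `buildOpenAIModel` and its option bag are illustrative assumptions, not code from this commit):

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import type { LanguageModel } from "ai";

// Illustrative helper (not from this commit): choose between the Responses API
// and the classic Chat Completions API based on the enableResponsesAPI flag.
function buildOpenAIModel(opts: {
  apiKey: string;
  baseURL?: string;
  organization?: string;
  model: string;
  enableResponsesAPI: boolean;
}): LanguageModel {
  const openai = createOpenAI({
    apiKey: opts.apiKey,
    baseURL: opts.baseURL,
    organization: opts.organization,
  });
  // openai.responses() targets the newer /v1/responses endpoint;
  // openai.chat() keeps the existing Chat Completions behavior.
  return opts.enableResponsesAPI
    ? openai.responses(opts.model)
    : openai.chat(opts.model);
}
```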

src/deepclaude.ts

Lines changed: 7 additions & 0 deletions
@@ -197,6 +197,13 @@ export async function reasoningChat(
       temperature: provider.modelConfig.temperature,
       // topP: provider.modelConfig.topP,
     }),
+    // ...(provider.provider === "Google" && provider.modelConfig.searchGrounding && {
+    //   providerOptions: {
+    //     google: {
+    //       useSearchGrounding: true,
+    //     },
+    //   },
+    // }),
   });
   for await (const part of result.fullStream) {
     // logger.appendLine(`INFO: deepclaude.model: ${provider.model} deepclaude.question: ${question} response: ${JSON.stringify(part, null, 2)}`);

src/llms.ts

Lines changed: 2 additions & 2 deletions
@@ -76,7 +76,7 @@ export async function initGeminiModel(
   if (config.isReasoning) {
     const model = viewProvider.reasoningModel
       ? viewProvider.reasoningModel
-      : "gemini-1.5-flash-latest";
+      : "gemini-2.5-pro";
     viewProvider.apiReasoning = wrapLanguageModel({
       model: ai(model),
       middleware: extractReasoningMiddleware({ tagName: "think" }),
@@ -93,7 +93,7 @@
   } else {
     const model = viewProvider.model
       ? viewProvider.model
-      : "gemini-1.5-flash-latest";
+      : "gemini-2.5-pro";
     viewProvider.apiChat = ai(model);
     if (config.searchGrounding) {
       viewProvider.apiChat = ai(model, {
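
These two hunks only swap the fallback model ID from `gemini-1.5-flash-latest` to `gemini-2.5-pro`. For readability, here is the reasoning-branch pattern they sit in, reassembled as a standalone sketch (the `createGoogleGenerativeAI` setup and the helper name are assumptions about surrounding code not shown in the hunk):

```typescript
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { extractReasoningMiddleware, wrapLanguageModel } from "ai";

// Illustrative standalone version of the pattern the hunk touches: fall back to
// gemini-2.5-pro when no reasoning model is configured, then surface <think>
// tags as reasoning content via the extract-reasoning middleware.
export function buildGeminiReasoningModel(apiKey: string, reasoningModel?: string) {
  const ai = createGoogleGenerativeAI({ apiKey });
  const model = reasoningModel ? reasoningModel : "gemini-2.5-pro";
  return wrapLanguageModel({
    model: ai(model),
    middleware: extractReasoningMiddleware({ tagName: "think" }),
  });
}
```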

src/mcp.ts

Lines changed: 1 addition & 6 deletions
@@ -133,12 +133,7 @@ export async function createToolSet(config: MCPServerConfig): Promise<ToolSet> {
     if (parameters.jsonSchema.additionalProperties == null) {
       parameters.jsonSchema.additionalProperties = false;
     }
-    // const parameters = jsonSchema(
-    //   Object.fromEntries(
-    //     Object.entries(t.inputSchema as JSONSchema7)
-    //       .filter(([key]) => key !== "additionalProperties" && key !== "$schema")
-    //   )
-    // );
+
     toolset.tools[toolName] = tool({
       description: t.description || toolName,
       parameters: parameters,
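
The normalization kept above defaults `additionalProperties` to `false` on the MCP tool's JSON schema before handing it to the AI SDK, which matters for providers with strict tool schemas (OpenAI's strict mode, for example, requires an explicit `additionalProperties: false` on object schemas). A minimal sketch of that pattern, with a simplified MCP tool shape and an illustrative `toAiTool` helper (both assumptions, not code from this commit):

```typescript
import { jsonSchema, tool } from "ai";

// Simplified MCP tool shape, for illustration only.
interface McpToolLike {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

// Illustrative helper mirroring the normalization kept in mcp.ts: wrap the raw
// JSON schema and force an explicit additionalProperties: false when absent.
function toAiTool(t: McpToolLike) {
  const parameters = jsonSchema(t.inputSchema as any);
  if (parameters.jsonSchema.additionalProperties == null) {
    parameters.jsonSchema.additionalProperties = false;
  }
  return tool({
    description: t.description || t.name,
    parameters,
    // execute is omitted here; in mcp.ts the actual call is forwarded to the MCP client.
  });
}
```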
