customization logic reference #830
Closed
Conversation
* update contributing
* openai v5.6.0
* create schema for responses API
* add skeleton route to conversations router
* add some basic tests
* put package back
* put package lock back
* add case for message array input
* feat: complete create response test suite
* add minimum to max_output_tokens
* add more test cases
* add test case
* set zod version to 3.25 for v4 support
* start of pr feedback
* adjust schema and tests to not allow empty message strings/arrays
* move create responses
* create new (very basic) router for responses API
* basic test for responses router, will be expanded once middleware is added
* update test
* update func name
* create route for responses API -- ensure certain options are set via config
* clean up tests by moving MONGO_CHAT_MODEL string to test config
* use GenerateResponse type
* update comments in router
* update reqId stuff
* update comment
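For orientation, here is a minimal sketch of the kind of zod request-body schema the commits above describe (schema for the Responses API, a minimum on `max_output_tokens`, rejection of empty message strings/arrays). The name `CreateResponseRequestBody` and the exact fields are assumptions, not the repo's actual code:

```ts
import { z } from "zod";

// Illustrative only: field names and constraints are assumptions.
const InputMessage = z.object({
  role: z.enum(["user", "assistant", "system"]),
  content: z.string().min(1), // reject empty message strings
});

export const CreateResponseRequestBody = z.object({
  model: z.string(),
  // Input may be a plain string or a non-empty array of messages.
  input: z.union([z.string().min(1), z.array(InputMessage).min(1)]),
  max_output_tokens: z.number().int().min(1).optional(),
  previous_response_id: z.string().optional(),
  stream: z.boolean().optional(),
  store: z.boolean().optional(),
});

export type CreateResponseRequest = z.infer<typeof CreateResponseRequestBody>;
```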
* create errors helper
* update errors to include http code
* add sendErrorResponse helper
* add enum for error codes
* update createResponse to use new error helpers
* handle input validation in createResponse vs middleware
* mostly update tests
* improve error messages from zod
* update tests
* adjust variable names/exports
* add test case for unknown errors
* remove openai dep from mongodb-chat-server in favor of importing from rag-core
* update errors to use openai types and classes
* update tests with new validation errors
* improve router test
* Add rate limiting to responses api (#792)
* add rate limit and global slowdown to responses router
* basic working rate limit with test for responses router
* add more test assertions for openai error
* configure rateLimit middleware properly with makeRateLimitError helper
* update test case
* update test case
* use sendErrorResponse helper within rateLimit middleware to ensure we get logging there too
* remove extra comment
* update error message
* abstract error strings for tests
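A rough sketch of how the error helper and rate-limit middleware mentioned above could fit together, using express-rate-limit. The helper name `sendErrorResponse` echoes the commit messages, but the signature and the OpenAI-style error body are assumptions:

```ts
import { rateLimit } from "express-rate-limit";
import type { Request, Response } from "express";

// Assumed shape: send an OpenAI-style error body with an explicit HTTP code.
function sendErrorResponse(
  res: Response,
  httpCode: number,
  message: string,
  type = "invalid_request_error"
) {
  res.status(httpCode).json({ error: { message, type, code: null, param: null } });
}

// Rate limiter whose handler reuses the same helper so rejected requests are
// formatted (and logged) like every other error. Window and limit values are
// illustrative, not the project's real configuration.
export const responsesRateLimit = rateLimit({
  windowMs: 60 * 1000,
  limit: 100,
  handler: (_req: Request, res: Response) => {
    sendErrorResponse(
      res,
      429,
      "Too many requests, please try again later.",
      "rate_limit_exceeded"
    );
  },
});
```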
* cleanup err msg constants
* add failing tests to rag-core for message_id helper
* add findByMessageId to conversation service
* remove extra types from conversation service
* add indexes to conversationsDb
* add test for new getByMessageId service
* remove duplicate export
* add logic for getting conversation to createResponse
* update configs for createResponse -- includes some cleanup
* fix test for successful previous_message_id input
* cleanup test variables
* even cleaner tests
* add logic to catch bad object ids
* add more tests
* more tests
* last test
* fix broken mock
* cleanup tests
* share logic for reaching maxUserMessages in a Conversation
* bump
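The message-id lookup described above might look roughly like this. The database and collection names, the field path, and the use of `ObjectId.isValid` to catch bad object ids are assumptions based on the commit messages, not the actual conversation service:

```ts
import { MongoClient, ObjectId } from "mongodb";

// Assumed database/collection names and document shape.
export async function findByMessageId(client: MongoClient, messageId: string) {
  // Catch malformed ids up front rather than letting the driver throw.
  if (!ObjectId.isValid(messageId)) {
    return null;
  }
  const conversations = client.db("chatbot").collection("conversations");
  // Assumes an index on "messages.id" so the lookup stays efficient.
  return conversations.findOne({ "messages.id": new ObjectId(messageId) });
}
```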
* skeleton for addMessagesToConversation helper
* more skeleton
* better name
* increment
* add check for previousResponse and input array
* store metadata on conversation and create array of final messages
* add call to save messages
* add logic for checking userId changed
* add test for conversation user id logic
* update logic for adding messages to conversation
* remove unneeded error case
* remove test case
* don't filter, just map
* create helper for convertInputToDBMessages
* update store logic
* save store data on conversation, check for previous message id and no store
* add case to handle mismatched conversation storage settings
* cleanup logic for checking if conversation is stored
* basic spy
* implement tests
* update
* add final spies
* safeParse > safeParseAsync because we don't need to await any refines, etc.
* add comment
* add userId and storeMessageContent fields to Conversation
* adjust logic for response api to use new convo fields
* fix bug in conversation service logic
* update userId check
* update tests
* test name tweak
* test naming
* add jest mock cleanup
* abstract testMessageContent helper
* update test helper
* test for function call and outputs message storage
* update tests
* cleanup test
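A hypothetical sketch of the `convertInputToDBMessages` idea from the commits above: map every input message (rather than filtering), and honor the conversation's `storeMessageContent` flag when persisting. All type shapes here are assumed:

```ts
// All types here are assumed for illustration.
type ResponsesInput =
  | string
  | { role: "user" | "assistant" | "system"; content: string }[];

interface DbMessage {
  role: "user" | "assistant" | "system";
  content: string;
  createdAt: Date;
}

export function convertInputToDbMessages(
  input: ResponsesInput,
  { storeMessageContent }: { storeMessageContent: boolean }
): DbMessage[] {
  const messages =
    typeof input === "string" ? [{ role: "user" as const, content: input }] : input;
  // Map (rather than filter) every input message to a DB message; when the
  // conversation is not stored, persist an empty string instead of the content.
  return messages.map((m) => ({
    role: m.role,
    content: storeMessageContent ? m.content : "",
    createdAt: new Date(),
  }));
}
```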
* ensure stream is configurable in chatbot-server-public layer
* setup data streamer helper
* add skeleton data streaming to createResponse service
* move StreamFunction type to chat-server, share with public
* fix test
* start streaming in createResponse
* add stream disconnect
* move in progress stream message
* stream event from verifiedAnswer helper
* create helper for addMessageToConversationVerifiedAnswerStream
* apply stream configs to chat-public
* update test to use openAI client
* update input schema for openai client call to responses
* add test helper for making local server
* almost finish converting createResponse tests to use openai client
* more tests
* disconnect data streamer on error
* update data stream logic for create response
* update data streamer sendResponsesEvent write type
* I think this still needs to be a string
* mapper for streamError
* export openai shim types
* mostly working tests with reading the response stream from openai client
* create test helper
* fix test helper for reading entire stream
* don't send normal HTTP message at end (maybe need this when we support non-streaming version)
* improved tests
* more test improvement -- proper use of conversation service, additional conversation testing
* fix test for too many messages
* remove skipped tests
* mostly working responses tests
* abstract helpers for openai client requests
* use helpers in create response tests
* fix tests by passing responseId
* skip problematic test
* skip problematic test
* create baseResponseData helper
* pass zod-validated req body
* add tests for all responses fields
* remove log
* abstract helper for formatOpenaiError
* replace helper
* await server closing properly
* basic working responses tests with openai client
* update rate limit test
* fix testing port
* update test type related to responses streaming
* apply type to data streamer
* cleanup shared type
* fix router tests
* fix router tests
* update errors to be proper openai stream errors
* ensure format message cleans customData as well
* add comment
* update tests per review
* update test utils
* fix test type
* update openai rag-core to 5.9
* fix data streamer for responses events to be SSE compliant
* cleanup responses tests
* cleanup createResponse tests
* cleanup error handling to match openai spec
* fix tests for standard openai exceptions
* cleanup
* add "required" as an option for tool_choice
* cleanup dataStreamer test globals
* add test to dataStreamer for streamResponses
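The SSE-compliance work above suggests a data-streamer write along these lines. The helper name `sendResponsesEvent` appears in the commits, but the implementation below is an assumption, not the repo's code:

```ts
import type { Response } from "express";

// Assumed payload shape; real Responses API events carry more fields.
export function sendResponsesEvent(
  res: Response,
  event: { type: string; [key: string]: unknown }
) {
  // Server-sent events: an "event:"/"data:" pair terminated by a blank line,
  // with the payload serialized as JSON.
  res.write(`event: ${event.type}\n`);
  res.write(`data: ${JSON.stringify(event)}\n\n`);
}

// Typical headers set before streaming begins (illustrative).
export function openEventStream(res: Response) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
}
```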
Closing because this is just a reference implementation.
Jira: https://jira.mongodb.org/browse/EAI-
Changes
Notes