diff --git a/docs/how-to/how-to-run-examples.md b/docs/how-to/how-to-run-examples.md
new file mode 100644
index 0000000..11b4d17
--- /dev/null
+++ b/docs/how-to/how-to-run-examples.md
@@ -0,0 +1,154 @@
+# How to run the examples
+
+## How to generate the example files
+
+In order to use the examples provided in `OpenAIExamples`,
+you'll need to have some example files available locally.
+
+To create all required files you can execute:
+
+```smalltalk
+OpenAIExamples new generateExampleFiles
+```
+
+This will create, inside the *open-ai* directory:
+
+- answers-example.jsonl
+- classifications-example.jsonl
+- file-that-will-be-deleted.jsonl
+- fine-tune-example.jsonl
+- search-example.jsonl
+
+## How to configure your API key
+
+Before running the examples, you must provide a valid API key.
+
+1. [Sign up](https://beta.openai.com/signup) for an OpenAI account.
+
+2. Go to your [API keys](https://beta.openai.com/account/api-keys)
+and click on the *Create new secret key* button.
+
+3. Click on the *Copy* link next to your recently created key.
+
+4. Create a file named *apikey.secret* in the *open-ai* directory.
+
+5. Paste the copied key into the file, then save it.
+
+## How to invoke the APIs with the examples
+
+Several examples are provided, showing possible use cases for the different APIs.
+
+The example methods return an object that extracts the relevant answer
+from the full API response.
+The examples are intended to illustrate how to navigate the structure
+of the responses of the different wrapped APIs.
+
+### Files API
+
+Examples using the [Files API](https://beta.openai.com/docs/api-reference/files)
+
+OpenAI offers by default 1 GB of storage *per organization*.
+For individual users this means 1 GB as well.
+There is no cost associated with managing files.
+This means that running the examples in this section will not reduce
+your free balance.
+
+Files are uploaded asynchronously.
+Uploading returns a *File ID* that can be used later to check the file status.
+You can't use a file until its status is **processed**.
+
+All uploads in the examples will wait up to 20 seconds, polling every second,
+until the upload is complete.
+
+The uploading method in the examples, `OpenAIExamples>>#idForFileNamed:intendedFor:`,
+checks whether a file with the same name was already uploaded.
+If it's already there, it does not re-upload it.
+Since OpenAI allows multiple files with the same name,
+failing to check first would result in an ever-increasing use of your storage.
+
+Running `OpenAIExamples new files` will list the files currently registered.
+If you have not uploaded anything yet, the example will upload `open-ai/fine-tune-example.jsonl`.
+
+Running `OpenAIExamples new downloadAndDeleteFile` will upload `open-ai/file-that-will-be-deleted.jsonl`.
+After uploading is complete, the file will be deleted.
+This is just meant to show the steps to both upload and delete a file.
+
+Running `OpenAIExamples new deleteAllFiles` will **delete all files**
+declared to your OpenAI account.
+**Always use this example with caution**.
+
+### Answers API
+
+Examples using the [Answers API](https://beta.openai.com/docs/api-reference/answers)
+
+Running `OpenAIExamples new answers` will ask *where is France?*
+using as context the information `France is in Europe` and
+`Canada is in America` and `Japan is in Asia`.
+To explain to OpenAI how to extract information from context,
+it will provide the example that, given the context `this car is 2 meters long`
+and the question `how long is this car`, the answer should be `2 meters`.
+This use case does not employ the Files API; all processing is done on the fly.
+
+Running `OpenAIExamples new answersFromFile` will do the same,
+but the context is provided by the file `answers-example.jsonl`.
+The file is uploaded, and then the question is asked to OpenAI.
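+
+Putting the pieces together, both answers examples can be run directly
+from a workspace; a minimal session, assuming the example files and the
+*apikey.secret* file were already created as described above:
+
+```smalltalk
+examples := OpenAIExamples new.
+examples answers.
+examples answersFromFile.
+```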
+
+### Search API
+
+Examples using the [Search API](https://beta.openai.com/docs/api-reference/searches)
+
+Running `OpenAIExamples new search` will try to find `bulldog` in the list
+`cat dog car building vehicle person`.
+The result is the list ordered by how closely each item matches the query.
+
+Running `OpenAIExamples new searchFromFile` will first upload the file `search-example.jsonl`,
+then present the query `the dog feels happy in a building where some person lives`.
+The result is the list of *documents* listed in the file sorted by
+relevance to the query.
+
+### Completions API
+
+Examples using the [Completions API](https://beta.openai.com/docs/api-reference/completions)
+
+Running `OpenAIExamples new completions` will ask for the next word
+after the sequence `This is the day`.
+Note that you can easily ask for more words by changing
+the argument sent to `changeMaximumNumberOfTokensTo:`.
+Keep in mind that this will increase the cost of each run of the example
+since more *tokens* will be generated.
+
+The completions API benefits from using the more complex (and expensive) OpenAI engines.
+To try them out in the example, add:
+
+```smalltalk
+apiClient changeModelTo: 'davinci'.
+```
+
+If you do this, it is also a good idea to increase the number of tokens
+as explained before.
+
+Remember that the DaVinci engine is **75 times more expensive**
+than the default Ada engine.
+
+### Classifications API
+
+Examples using the [Classifications API](https://beta.openai.com/docs/api-reference/classifications)
+
+Running `OpenAIExamples new classifications` will ask OpenAI to classify
+the sentence `the weather is great` as either `happy` or `sad`,
+considering the samples `the grass is green` is `happy`,
+`the sky is pretty` is `happy` and `the soil is rotten` is `sad`.
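+
+This example boils down to a single message send on a configured client.
+A sketch of the equivalent direct call, using the `classificationsAPIClient`
+accessor from `OpenAIExamples`:
+
+```smalltalk
+apiClient := OpenAIExamples new classificationsAPIClient.
+apiClient labelAsOneOf: #( 'happy' 'sad' ).
+response := apiClient
+	classify: 'the weather is great'
+	given: ( Array
+		with: #( 'the grass is green' 'happy' )
+		with: #( 'the sky is pretty' 'happy' )
+		with: #( 'the soil is rotten' 'sad' ) ).
+response at: 'label'
+```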
+
+Running `OpenAIExamples new classificationsFromFile` will upload the examples
+in the file `classifications-example.jsonl`,
+then ask to classify the sentence `movie is very good`
+as either `Positive` or `Negative`.
+
+Running `OpenAIExamples new classificationsFromFileWithoutLabels` will also
+upload the file and present the query `movie is very good`.
+Only this time no classification options are provided.
+OpenAI is free to choose how to classify the sentence,
+based on its pre-training in interpreting text.
diff --git a/docs/reference/Answers API.md b/docs/reference/Answers API.md
new file mode 100644
index 0000000..b77a91c
--- /dev/null
+++ b/docs/reference/Answers API.md
@@ -0,0 +1,81 @@
+# Answers API
+
+Offers access to the [features mentioned here](https://beta.openai.com/docs/api-reference/answers).
+
+The API can be accessed using the class `AnswersAPIClient`.
+
+## Instance Creation
+
+To create an Answers API client, you need a *RESTful API Client*
+and an *OpenAI API key*.
+
+You can obtain a *RESTful API Client* by evaluating: `RESTfulAPIClient cachingOnLocalMemory`.
+
+You should be able to obtain an API Key by following the [steps mentioned here](../how-to/how-to-run-examples.md).
+Afterwards you can access the key by sending
+`'open-ai/apikey.secret' asFileReference contents`.
+
+So to create the Answers API Client you can evaluate:
+
+```smalltalk
+| restfulClient apiKey answersAPIClient |
+
+restfulClient := RESTfulAPIClient cachingOnLocalMemory.
+apiKey := 'open-ai/apikey.secret' asFileReference contents.
+
+answersAPIClient := AnswersAPIClient
+	accessingAPIsWith: restfulClient
+	authenticatedWith: apiKey
+```
+
+## Public protocol
+
+### answer: *question* against: *documents* given: *examples* within: *context*
+
+Executes a POST call to obtain the answer to the question indicated.
+A collection of strings is used as the document list,
+in which the information will be looked up.
+Additionally you must provide a question-answer example,
+along with a context that contains the document
+that would have been relevant to answer the example.
+
+As specified [here](https://beta.openai.com/docs/api-reference/answers/create),
+the response will include an *answers* element which will contain
+the possible answers to the question.
+
+As an example, suppose you want to present the Answers API
+with the query `where is France?`.
+
+You need to provide several documents that might contain the answer.
+The example will provide 3 sentences:
+`France is in Europe` and `Canada is in America` and `Japan is in Asia`.
+
+Also an example must be chosen,
+with a logic similar to the one you are expecting the API to apply.
+
+You can tell the API that given the document
+`this car is 2 meters long` and the question
+`how long is this car?` the answer is `2 meters`.
+
+The above example will be written as follows:
+
+```smalltalk
+answersAPIClient answer: 'where is France?'
+	against: #('France is in Europe'
+		 'Canada is in America'
+		 'Japan is in Asia' )
+	given: ( Array with: #( 'how long is this car?' '2 meters' ) )
+	within: 'this car is 2 meters long'.
+```
+
+### changeSearchEngineTo: *engine id*
+
+Changes the *search model* used to rank the provided documents.
+The *model* and *search model* are separate parameters in the API.
+The wrapper defaults to **Ada** for both
+to minimize the default cost of using the API client.
+
+### changeModelTo: *engine id*
+
+Changes the *model* used to generate the answer.
+The *model* and *search model* are separate parameters in the API.
+The wrapper defaults to **Ada** for both
+to minimize the default cost of using the API client.
diff --git a/docs/reference/Files API.md b/docs/reference/Files API.md
new file mode 100644
index 0000000..f0e1d90
--- /dev/null
+++ b/docs/reference/Files API.md
@@ -0,0 +1,102 @@
+# Files API
+
+Offers access to the [features mentioned here](https://beta.openai.com/docs/api-reference/files).
+
+The API can be accessed using the class `FilesAPIClient`.
+ +## Instance Creation + +To create a Files API client, you need a *RESTful API Client* +and an *OpenAI API key*. + +You can obtain a *RESTful API Client* by evaluating: `RESTfulAPIClient cachingOnLocalMemory`. + +You should be able to obtain an API Key by following the [steps mentioned here](../how-to/how-to-run-examples.md). +Afterwards you can access the key by sending +`'open-ai/apikey.secret' asFileReference contents`. + +So to create the Files API Client you can evaluate: + +```smalltalk +| restfulClient apiKey filesAPIClient | + +restfulClient := RESTfulAPIClient cachingOnLocalMemory. +apiKey := 'open-ai/apikey.secret' asFileReference contents. + +filesAPIClient := FilesAPIClient + accessingAPIsWith: restfulClient + authenticatedWith: apiKey +``` + +## Public protocol + +### listFiles + +Executes a GET call to obtain the list of all files currently uploaded +to your OpenAI account. + +As specified [here](https://beta.openai.com/docs/api-reference/files/list), +the response will contain a *data* element with a list of files, +where each includes information such as +id, name, size, creation stamp and purpose. + +To use it, just evaluate: + +```smalltalk +filesAPIClient listFiles +``` + +### idForProcessed: *file reference* intendedFor: *purpose* waiting: *time* + +Executes a POST call to upload a file. This requires, +apart from the reference to the desired file, +a **purpose** that OpenAI requires to know beforehand, +to confirm the format is correct for the intended API use. +The method also asks for a maximum time to wait for the file to be processed. + +As specified [here](https://beta.openai.com/docs/api-reference/files/upload), +the response will contain information such as +id, name, size, creation stamp and purpose. +The *id* is obtained and used as the method return. + +An undocumented (so far) feature of OpenAI is that when querying for a specific file, +a *status* is returned. 
+Unless the status is listed as **processed**,
+the file can't be used in the other APIs.
+
+This method will poll the status every second up to the maximum specified.
+In case the status is not processed,
+it will raise an Exception to let the user know.
+This does not mean that OpenAI won't eventually have the file ready for use,
+only that it was not so in the time period indicated.
+
+For example, to upload the file at `open-ai/answers-example.jsonl`,
+to be later used with the answers API,
+evaluate:
+
+```smalltalk
+apiClient
+	idForProcessed: 'open-ai/answers-example.jsonl' asFileReference
+	intendedFor: 'answers'
+	waiting: 4 seconds
+```
+
+### deleteFileIdentifiedBy: *file id*
+
+Executes a DELETE request, to remove from your storage
+the file with the *id* indicated.
+
+You can obtain the *id* by either keeping the returned value of an upload,
+or by checking the *id* attribute when listing your files
+using the `listFiles` method.
+
+As specified [here](https://beta.openai.com/docs/api-reference/files/delete),
+the response will include an attribute *deleted*,
+which will be *true* if the deletion succeeded.
+
+As an example, if you have a file which returned the id `file-XjGxS3KTG0uNmNOK362iJua3`,
+you would delete it by evaluating:
+
+```smalltalk
+apiClient deleteFileIdentifiedBy: 'file-XjGxS3KTG0uNmNOK362iJua3'
+```
diff --git a/source/Open-AI-Examples/OpenAIExamples.class.st b/source/Open-AI-Examples/OpenAIExamples.class.st
index 778d97f..7ef608b 100644
--- a/source/Open-AI-Examples/OpenAIExamples.class.st
+++ b/source/Open-AI-Examples/OpenAIExamples.class.st
@@ -4,6 +4,32 @@ Class {
 	#category : #'Open-AI-Examples'
 }
 
+{ #category : #accessing }
+OpenAIExamples >> advancedTweetClassificationViaCompletion [
+
+	| apiClient seedText response |
+
+	"https://beta.openai.com/examples/default-adv-tweet-classifier"
+	apiClient := self completionsAPIClient.
+	apiClient
+		changeMaximumNumberOfTokensTo: 60;
+		changeModelTo: 'text-davinci-002';
+		changeTemperatureTo: 0.
+	seedText := 'Classify the sentiment in these tweets:
+
+1. "I can''t stand homework"
+2. "This sucks. I''m bored 😠"
+3. "I can''t wait for Halloween!!!"
+4. "My cat is adorable ❤️❤️"
+5. "I hate chocolate"
+
+Tweet sentiment ratings:'.
+
+	response := apiClient complete: seedText.
+
+	^ '<1s><2s>' expandMacrosWith: seedText with: ( ( response at: 'choices' ) first at: 'text' )
+]
+
 { #category : #accessing }
 OpenAIExamples >> answers [
 
@@ -27,17 +53,23 @@ OpenAIExamples >> answersAPIClient [
 		accessingAPIsWith: RESTfulAPIClient cachingOnLocalMemory
 		authenticatedWith: self apiKey.
 	apiClient
-		changeEngineTo: 'ada';
+		changeModelTo: 'ada';
 		changeSearchEngineTo: 'ada';
 		stopAt: #( '\n' '<|endoftext|>' ).
 
 	^ apiClient
 ]
 
+{ #category : #'private - accessing' }
+OpenAIExamples >> answersExampleFileReference [
+
+	^ self examplesDirectory / 'answers-example.jsonl'
+]
+
 { #category : #'private - accessing' }
 OpenAIExamples >> answersFileId [
 
-	^ self idForFileNamed: 'open-ai/answers-example.jsonl' intendedFor: 'answers'
+	^ self idForFileNamed: self answersExampleFileReference intendedFor: 'answers'
 ]
 
 { #category : #accessing }
@@ -65,6 +97,27 @@ OpenAIExamples >> apiKey [
 	^ 'open-ai/apikey.secret' asFileReference contents
 ]
 
+{ #category : #'private - processing' }
+OpenAIExamples >> ask: apiClient toCancelFineTuneIdentifiedBy: fineTune [
+
+	^ fineTune status = 'cancelled'
+		ifTrue: [ 'The fine tune <1s> has already been canceled.'
expandMacrosWith: fineTune id ] + ifFalse: [ apiClient cancelFineTuneIdentifiedBy: fineTune id ] +] + +{ #category : #'private - processing' } +OpenAIExamples >> ask: apiClient toDelete: modelName relatedTo: fineTune [ + + ^ [ apiClient deleteFineTunedModelNamed: modelName ] + on: HTTPClientError notFound + do: [ :ex | + ex return: + ( 'The model <1s> has already been deleted but the related fine tune <2s> is still listed, it should be purged eventually by Open AI.' + expandMacrosWith: modelName + with: fineTune id ) + ] +] + { #category : #accessing } OpenAIExamples >> classifications [ @@ -72,10 +125,10 @@ OpenAIExamples >> classifications [ apiClient := self classificationsAPIClient. apiClient labelAsOneOf: #( 'happy' 'sad' ). - + response := apiClient classify: 'the weather is great' given: ( Array with: #( 'the grass is green' 'happy' ) - with: #( 'el cielo est谩 lindo' 'happy' ) + with: #( 'the sky is pretty' 'happy' ) with: #( 'the soil is rotten' 'sad' ) ). ^ response at: 'label' ] @@ -89,16 +142,22 @@ OpenAIExamples >> classificationsAPIClient [ accessingAPIsWith: RESTfulAPIClient cachingOnLocalMemory authenticatedWith: self apiKey. apiClient - changeEngineTo: 'ada'; + changeModelTo: 'ada'; changeSearchEngineTo: 'ada'. ^ apiClient ] +{ #category : #'private - accessing' } +OpenAIExamples >> classificationsExampleFileReference [ + + ^ self examplesDirectory / 'classifications-example.jsonl' +] + { #category : #'private - accessing' } OpenAIExamples >> classificationsFileId [ - ^ self idForFileNamed: 'open-ai/classifications-example.jsonl' intendedFor: 'classifications' + ^ self idForFileNamed: self classificationsExampleFileReference intendedFor: 'classifications' ] { #category : #accessing } @@ -131,6 +190,33 @@ OpenAIExamples >> classificationsFromFileWithoutLabels [ ^ response at: 'label' ] +{ #category : #'private - accessing' } +OpenAIExamples >> classifyUsing: aFineTune [ + + | apiClient seedText response | + + apiClient := self completionsAPIClient. 
+ apiClient + changeModelTo: ( aFineTune at: #fine_tuned_model ); + returnLikelyTokensUpTo: 2; + changeMaximumNumberOfTokensTo: 1. + seedText := 'https://t.co/f93xEd2 Excited to share my latest blog post! ->'. + + response := apiClient complete: seedText. + + ^ ( ( ( response at: 'choices' ) detect: [ :choice | choice index = 0 ] ) at: 'text' ) trimBoth +] + +{ #category : #'private - processing' } +OpenAIExamples >> classifyUsingNewestAmong: eligible [ + + | sorted | + + sorted := eligible sorted: [ :a :b | ( a at: #fine_tuned_model ) >= ( b at: #fine_tuned_model ) ]. + + ^ self classifyUsing: sorted first +] + { #category : #'private - accessing' } OpenAIExamples >> cleanDelimitersFrom: anAnswerCollection [ @@ -159,11 +245,88 @@ OpenAIExamples >> completionsAPIClient [ apiClient := CompletionsAPIClient accessingAPIsWith: RESTfulAPIClient cachingOnLocalMemory authenticatedWith: self apiKey. - apiClient changeEngineTo: 'ada'. + apiClient changeModelTo: 'ada'. ^ apiClient ] +{ #category : #'private - processing' } +OpenAIExamples >> considerPreparingFineTuneExampleBasedOn: fineTunes [ + + | pending | + + pending := fineTunes reject: [ :fineTune | #( succeeded cancelled ) includes: fineTune status ]. + + ^ pending + ifEmpty: [ + self prepareExampleFineTune. + 'No pending nor finished fine tune found. A new job has been launched. Wait a few minutes, then try again.' + ] + ifNotEmpty: [ 'No fine tune was ready for use yet. 
Wait a few minutes, then try again ' ] +] + +{ #category : #'private - processing' } +OpenAIExamples >> convertToLinuxLineEndings: aString [ + + ^ ( aString copyReplaceAll: String crlf with: String lf ) copyReplaceAll: String cr with: String lf +] + +{ #category : #'private - processing' } +OpenAIExamples >> createAnswersExampleFile [ + + self + write: '{"text": "France is in Europe", "metadata": "country with french people"} +{"text": "Canada is in America", "metadata": "country with canadian people"} +{"text": "Japan is in Asia", "metadata": "country with japanese people"}' + to: self answersExampleFileReference +] + +{ #category : #'private - processing' } +OpenAIExamples >> createClassificationsExampleFile [ + + self + write: + '{"text": "good film, but very glum.", "label": "Positive", "metadata": {"source":"example.com"}} +{"text": "i sympathize with the plight of these families, but the movie doesn''t do a very good job conveying the issue at hand.", "label": "Negative", "metadata": {"source":"example.com"}}' + to: self classificationsExampleFileReference +] + +{ #category : #'private - processing' } +OpenAIExamples >> createExamplesDirectory [ + + self examplesDirectory createDirectory +] + +{ #category : #'private - processing' } +OpenAIExamples >> createFileThatWillBeDeleted [ + + self + write: '{"prompt": "", "completion": ""}' + to: self fileThatWillBeDeletedReference +] + +{ #category : #'private - processing' } +OpenAIExamples >> createFineTuneExampleFile [ + + self + write: '{"prompt":"Overjoyed with the new iPhone! 
->", "completion":" positive"} +{"prompt":"@lakers disappoint for a third straight night https://t.co/38EFe43 ->", "completion":" negative"}' + to: self fineTuneExampleFileReference +] + +{ #category : #'private - processing' } +OpenAIExamples >> createSearchExampleFile [ + + self + write: '{"text": "a cat is happy", "metadata": "a type of feline"} +{"text": "a dog is hungry", "metadata": "a type of canine"} +{"text": "a car is loud", "metadata": "a type of vehicle"} +{"text": "a building is tall", "metadata": "a type of location"} +{"text": "a vehicle is fast", "metadata": "a type of transport"} +{"text": "a person is smart", "metadata": "a type of sentient being"}' + to: self searchExampleFileReference +] + { #category : #accessing } OpenAIExamples >> deleteAllFiles [ @@ -172,19 +335,68 @@ OpenAIExamples >> deleteAllFiles [ | apiClient | apiClient := self filesAPIClient. - ^ apiClient listFiles data collect: [ :file | apiClient removeFileIdentifiedBy: file id ] + ^ apiClient listFiles data collect: [ :file | apiClient deleteFileIdentifiedBy: file id ] +] + +{ #category : #accessing } +OpenAIExamples >> deleteAllFineTunes [ + + "Use with extreme caution!" + + | apiClient | + + apiClient := self fineTuneAPIClient. + ^ apiClient listFineTunes collect: [ :fineTune | + | modelName | + + "fine_tuned_model returns null/nil until processing is completed" + modelName := fineTune at: #fine_tuned_model. + + modelName + ifNil: [ self ask: apiClient toCancelFineTuneIdentifiedBy: fineTune ] + ifNotNil: [ self ask: apiClient toDelete: modelName relatedTo: fineTune ] + ] +] + +{ #category : #'private - testing' } +OpenAIExamples >> does: apiClient considerAsReady: fineTune [ + + ^ fineTune status = 'succeeded' and: [ self does: apiClient knowAboutModelOf: fineTune ] +] + +{ #category : #'private - testing' } +OpenAIExamples >> does: apiClient knowAboutModelOf: fineTune [ + + ^ [ + apiClient modelNamed: ( fineTune at: #fine_tuned_model ). 
+ true + ] + on: HTTPClientError notFound + do: [ :ex | ex return: false ] ] { #category : #accessing } -OpenAIExamples >> downloadAndRemoveFile [ +OpenAIExamples >> downloadAndDeleteFile [ "All attempts at downloading give the same 400 error: Not allowed to download files of purpose: " | id | - id := self idForFileNamed: 'open-ai/file-that-will-be-deleted.jsonl' intendedFor: 'fine-tune'. - ^ self filesAPIClient removeFileIdentifiedBy: id + id := self idForFileNamed: self fileThatWillBeDeletedReference intendedFor: 'fine-tune'. + ^ self filesAPIClient deleteFileIdentifiedBy: id +] + +{ #category : #'private - accessing' } +OpenAIExamples >> examplesDirectory [ + + ^ 'open-ai' asFileReference +] + +{ #category : #'private - accessing' } +OpenAIExamples >> fileThatWillBeDeletedReference [ + + ^ self examplesDirectory / 'file-that-will-be-deleted.jsonl' ] { #category : #accessing } @@ -197,7 +409,7 @@ OpenAIExamples >> files [ ^ files data ifEmpty: [ Array with: ( apiClient - idForProcessed: 'open-ai/fine-tune-example.jsonl' asFileReference + idForProcessed: self fineTuneExampleFileReference intendedFor: 'fine-tune' waiting: 4 seconds ) ] @@ -213,19 +425,79 @@ OpenAIExamples >> filesAPIClient [ ] { #category : #'private - accessing' } -OpenAIExamples >> idForFileNamed: aName intendedFor: aPurpose [ +OpenAIExamples >> fineTuneAPIClient [ + + ^ FineTuneAPIClient + accessingAPIsWith: RESTfulAPIClient cachingOnLocalMemory + authenticatedWith: self apiKey +] + +{ #category : #'private - accessing' } +OpenAIExamples >> fineTuneExampleFileReference [ + + ^ self examplesDirectory / 'fine-tune-example.jsonl' +] + +{ #category : #'private - accessing' } +OpenAIExamples >> fineTuneFileId [ + + ^ self idForFileNamed: self fineTuneExampleFileReference intendedFor: 'fine-tune' +] + +{ #category : #accessing } +OpenAIExamples >> fineTunedSentimentAnalysis [ + + | apiClient fineTunes ready | + + "The final result is answering https instead of a classification, most likely because 
there are only 2 examples instead of 200" + apiClient := self fineTuneAPIClient. + fineTunes := apiClient listFineTunes. + + ready := fineTunes select: [ :fineTune | self does: apiClient considerAsReady: fineTune ]. + + ^ ready + ifNotEmpty: [ self classifyUsingNewestAmong: ready ] + ifEmpty: [ self considerPreparingFineTuneExampleBasedOn: fineTunes ] +] + +{ #category : #processing } +OpenAIExamples >> generateExampleFiles [ + + self createExamplesDirectory. + self createAnswersExampleFile. + self createClassificationsExampleFile. + self createFileThatWillBeDeleted. + self createFineTuneExampleFile. + self createSearchExampleFile +] + +{ #category : #'private - accessing' } +OpenAIExamples >> idForFileNamed: aFileReference intendedFor: aPurpose [ | fileName filesAPIClient files | - fileName := aName asFileReference basename. + fileName := aFileReference basename. filesAPIClient := self filesAPIClient. files := filesAPIClient listFiles. - + ^ files data detect: [ :file | file filename = fileName ] ifFound: [ :file | file id ] ifNone: [ - filesAPIClient idForProcessed: aName asFileReference intendedFor: aPurpose waiting: 4 seconds ] + filesAPIClient idForProcessed: aFileReference intendedFor: aPurpose waiting: 20 seconds ] +] + +{ #category : #'private - processing' } +OpenAIExamples >> prepareExampleFineTune [ + + | fileId apiClient | + + fileId := self fineTuneFileId. + + apiClient := self fineTuneAPIClient. + apiClient changeSuffixTo: 'example'. + + apiClient trainUsing: fileId ] { #category : #accessing } @@ -248,15 +520,21 @@ OpenAIExamples >> searchAPIClient [ apiClient := SearchAPIClient accessingAPIsWith: RESTfulAPIClient cachingOnLocalMemory authenticatedWith: self apiKey. - apiClient changeEngineTo: 'ada'. + apiClient changeModelTo: 'ada'. 
	^ apiClient
 ]
 
+{ #category : #'private - accessing' }
+OpenAIExamples >> searchExampleFileReference [
+
+	^ self examplesDirectory / 'search-example.jsonl'
+]
+
 { #category : #'private - accessing' }
 OpenAIExamples >> searchFileId [
 
-	^ self idForFileNamed: 'open-ai/search-example.jsonl' intendedFor: 'search'
+	^ self idForFileNamed: self searchExampleFileReference intendedFor: 'search'
 ]
 
 { #category : #accessing }
@@ -278,3 +556,60 @@ OpenAIExamples >> sortByScoreDataIn: response [
 
 	^ ( response at: 'data' ) sorted: [ :a :b | ( a at: 'score' ) >= ( b at: 'score' ) ]
 ]
+
+{ #category : #accessing }
+OpenAIExamples >> spanishBondTweetClassificationViaCompletion [
+
+	| apiClient seedText response |
+
+	apiClient := self completionsAPIClient.
+	apiClient
+		changeMaximumNumberOfTokensTo: 60;
+		changeModelTo: 'text-davinci-002';
+		changeTemperatureTo: 0.
+	seedText := 'Classify the sentiment in these tweets:
+
+1. "Si, prefiero opciones con más cupón. AE38, AL41 y los 2037 de PBA mis preferidos. Igual si la cosa va bien los de mayor apreciación van a ser los bonos cortos (#TARGET y #AL30). Obviamente lo mismo para sus equivalentes ley NY."
+2. "q feo ese $TARGET, lo conozco?"
+3. "#AL30D $TARGET vuela"
+4. "Que mejor manera de arrancar las vacaciones que el norte en máximos y el TARGET subiendo por sexta jornada seguida"
+5. "Mi hijo solo salió a comprar TARGET como todos los días y me lo fundieron. La madre de un trader fundido pide justicia."
+
+Tweet sentiment ratings:'.
+
+	response := apiClient complete: seedText.
+
+	^ '<1s><2s>' expandMacrosWith: seedText with: ( ( response at: 'choices' ) first at: 'text' )
+]
+
+{ #category : #accessing }
+OpenAIExamples >> tweetsInSpanishReferringToBonds [
+
+	| apiClient seedText response |
+
+	apiClient := self completionsAPIClient.
+	apiClient
+		changeMaximumNumberOfTokensTo: 60;
+		changeModelTo: 'text-davinci-002';
+		changeTemperatureTo: 0.
+	seedText := 'Determine which of these tweets refer to financial bonds:
+
+1. "Borrachita al29%"
+2. "Pero no paga nada de cupón. Es como una acción. Cómo ves el AE38 o el AL41?"
+3. "#dolar AL30(ci) MEP 190.71 Cable 189.97 GD30(ci) MEP 190.76 Cable 190.09 (BM+LELIQ+Pases)/Reservas al 29/03 196.24 Nación +30% +35% 192.64"
+4. "Buena semana para los D después de mucho tiempo!! El #AL30 que la semana pasada tocó los 27 y una tir de 30,8%, cierra en 30,7 y 28,5%!"
+5. "Este medio periodístico..... Es una farza.... Esta. Asustados porque petro no llega al30% de votos"
+
+Bond tweets:'.
+
+	response := apiClient complete: seedText.
+
+	^ ( response at: 'choices' ) first at: 'text'
+]
+
+{ #category : #'private - processing' }
+OpenAIExamples >> write: aString to: aFileReference [
+
+	aFileReference writeStreamDo: [ :stream |
+		stream nextPutAll: ( self convertToLinuxLineEndings: aString ) ]
+]
diff --git a/source/Open-AI-Model-Tests/AnswersAPIClientTest.class.st b/source/Open-AI-Model-Tests/AnswersAPIClientTest.class.st
index dfc6dfe..16564b4 100644
--- a/source/Open-AI-Model-Tests/AnswersAPIClientTest.class.st
+++ b/source/Open-AI-Model-Tests/AnswersAPIClientTest.class.st
@@ -110,12 +110,12 @@ AnswersAPIClientTest >> testAnswerQueryingGivenWithin [
 ]
 
 { #category : #tests }
-AnswersAPIClientTest >> testChangeEngineTo [
+AnswersAPIClientTest >> testChangeModelTo [
 
 	| client entityContentsJSON |
 
 	client := AnswersAPIClient accessingAPIsWith: self restfulAPIClient authenticatedWith: '1234'.
-	client changeEngineTo: 'davinci'.
+	client changeModelTo: 'davinci'.
self httpClient whenSend: #entity: evaluate: [ :entityReceived | diff --git a/source/Open-AI-Model-Tests/ClassificationsAPIClientTest.class.st b/source/Open-AI-Model-Tests/ClassificationsAPIClientTest.class.st index 2f50797..66895f0 100644 --- a/source/Open-AI-Model-Tests/ClassificationsAPIClientTest.class.st +++ b/source/Open-AI-Model-Tests/ClassificationsAPIClientTest.class.st @@ -71,14 +71,14 @@ ClassificationsAPIClientTest >> restfulAPIClient [ ] { #category : #tests } -ClassificationsAPIClientTest >> testChangeEngineTo [ +ClassificationsAPIClientTest >> testChangeModelTo [ | client entityContentsJSON | client := ClassificationsAPIClient accessingAPIsWith: self restfulAPIClient authenticatedWith: '1234'. - client changeEngineTo: 'davinci'. + client changeModelTo: 'davinci'. self httpClient whenSend: #entity: evaluate: [ :entityReceived | diff --git a/source/Open-AI-Model-Tests/CompletionsAPIClientTest.class.st b/source/Open-AI-Model-Tests/CompletionsAPIClientTest.class.st index 1ad7f41..5057ff9 100644 --- a/source/Open-AI-Model-Tests/CompletionsAPIClientTest.class.st +++ b/source/Open-AI-Model-Tests/CompletionsAPIClientTest.class.st @@ -13,38 +13,47 @@ CompletionsAPIClientTest >> restfulAPIClient [ ] { #category : #tests } -CompletionsAPIClientTest >> testChangeEngineTo [ +CompletionsAPIClientTest >> testChangeMaximumNumberOfTokensTo [ - | client url | + | client entityContentsJSON | client := CompletionsAPIClient accessingAPIsWith: self restfulAPIClient authenticatedWith: '1234'. - client changeEngineTo: 'davinci'. - self httpClient whenSend: #url: evaluate: [ :urlReceived | url := urlReceived ]. + client changeMaximumNumberOfTokensTo: 10. + self httpClient + whenSend: #entity: + evaluate: [ :entityReceived | + entityContentsJSON := NeoJSONObject fromString: entityReceived contents ]. self configureHttpClientToRespondWith: self thatResponse. - client complete: 'This is the day'. 
- self assert: url equals: 'https://api.openai.com/v1/engines/davinci/completions' asUrl + self + assert: entityContentsJSON prompt equals: 'This is the day'; + assert: ( entityContentsJSON at: #max_tokens ) equals: 10; + assert: entityContentsJSON model equals: 'text-ada-001'; + assert: entityContentsJSON keys size equals: 3 ] { #category : #tests } -CompletionsAPIClientTest >> testChangeMaximumNumberOfTokensTo [ +CompletionsAPIClientTest >> testChangeModelTo [ | client entityContentsJSON | client := CompletionsAPIClient accessingAPIsWith: self restfulAPIClient authenticatedWith: '1234'. - client changeMaximumNumberOfTokensTo: 10. + client changeModelTo: 'davinci'. + self httpClient whenSend: #entity: evaluate: [ :entityReceived | entityContentsJSON := NeoJSONObject fromString: entityReceived contents ]. self configureHttpClientToRespondWith: self thatResponse. + client complete: 'This is the day'. self assert: entityContentsJSON prompt equals: 'This is the day'; - assert: ( entityContentsJSON at: #max_tokens ) equals: 10; - assert: entityContentsJSON keys size equals: 2 + assert: ( entityContentsJSON at: #max_tokens ) equals: 1; + assert: entityContentsJSON model equals: 'davinci'; + assert: entityContentsJSON keys size equals: 3 ] { #category : #tests } @@ -85,7 +94,8 @@ CompletionsAPIClientTest >> testDefaultEntity [ self assert: entityContentsJSON prompt equals: 'This is the day'; assert: ( entityContentsJSON at: #max_tokens ) equals: 1; - assert: entityContentsJSON keys size equals: 2 + assert: entityContentsJSON model equals: 'text-ada-001'; + assert: entityContentsJSON keys size equals: 3 ] { #category : #tests } @@ -99,7 +109,7 @@ CompletionsAPIClientTest >> testDefaultUrl [ client complete: 'This is the day'. 
- self assert: url equals: 'https://api.openai.com/v1/engines/ada/completions' asUrl + self assert: url equals: 'https://api.openai.com/v1/completions' asUrl ] { #category : #private } diff --git a/source/Open-AI-Model-Tests/FilesAPIClientTest.class.st b/source/Open-AI-Model-Tests/FilesAPIClientTest.class.st index 52faa23..bce5d47 100644 --- a/source/Open-AI-Model-Tests/FilesAPIClientTest.class.st +++ b/source/Open-AI-Model-Tests/FilesAPIClientTest.class.st @@ -72,6 +72,25 @@ FilesAPIClientTest >> restfulAPIClient [ cachingIn: ExpiringCache onLocalMemory ] +{ #category : #tests } +FilesAPIClientTest >> testDeleteFileIdentifiedBy [ + + | client answerJSON url | + + client := FilesAPIClient accessingAPIsWith: self restfulAPIClient authenticatedWith: '1234'. + self httpClient whenSend: #url: evaluate: [ :urlReceived | url := urlReceived ]. + self configureHttpClientToRespondWith: self deleteResponse. + + answerJSON := client deleteFileIdentifiedBy: 'fileID'. + + self + assert: answerJSON object equals: 'file'; + assert: answerJSON deleted; + assert: answerJSON id equals: 'fileID'. + + self assert: url equals: 'https://api.openai.com/v1/files/fileID' asUrl +] + { #category : #tests } FilesAPIClientTest >> testIdForProcessedIntendedForWaiting [ @@ -127,25 +146,6 @@ FilesAPIClientTest >> testListFiles [ ] ] -{ #category : #tests } -FilesAPIClientTest >> testRemoveFileIdentifiedBy [ - - | client answerJSON url | - - client := FilesAPIClient accessingAPIsWith: self restfulAPIClient authenticatedWith: '1234'. - self httpClient whenSend: #url: evaluate: [ :urlReceived | url := urlReceived ]. - self configureHttpClientToRespondWith: self deleteResponse. - - answerJSON := client removeFileIdentifiedBy: 'fileID'. - - self - assert: answerJSON object equals: 'file'; - assert: answerJSON deleted; - assert: answerJSON id equals: 'fileID'. 
- - self assert: url equals: 'https://api.openai.com/v1/files/fileID' asUrl -] - { #category : #private } FilesAPIClientTest >> uploadResponse [ diff --git a/source/Open-AI-Model-Tests/SearchAPIClientTest.class.st b/source/Open-AI-Model-Tests/SearchAPIClientTest.class.st index 49865db..b6a5f74 100644 --- a/source/Open-AI-Model-Tests/SearchAPIClientTest.class.st +++ b/source/Open-AI-Model-Tests/SearchAPIClientTest.class.st @@ -89,12 +89,12 @@ SearchAPIClientTest >> restfulAPIClient [ ] { #category : #tests } -SearchAPIClientTest >> testChangeEngineTo [ +SearchAPIClientTest >> testChangeModelTo [ | client url | client := SearchAPIClient accessingAPIsWith: self restfulAPIClient authenticatedWith: '1234'. - client changeEngineTo: 'davinci'. + client changeModelTo: 'davinci'. self httpClient whenSend: #url: evaluate: [ :urlReceived | url := urlReceived ]. self configureHttpClientToRespondWith: self dogResponseFromDocuments. diff --git a/source/Open-AI-Model/AnswersAPIClient.class.st b/source/Open-AI-Model/AnswersAPIClient.class.st index fb4ebc9..f141b12 100644 --- a/source/Open-AI-Model/AnswersAPIClient.class.st +++ b/source/Open-AI-Model/AnswersAPIClient.class.st @@ -48,7 +48,7 @@ AnswersAPIClient >> answersParametersToAnswer: aQuestion querying: aFileId given ] { #category : #configuring } -AnswersAPIClient >> changeEngineTo: anEngineId [ +AnswersAPIClient >> changeModelTo: anEngineId [ "https://beta.openai.com/docs/engines Davinci - $0.0600 per 1K tokens - Good at: Complex intent, cause and effect, summarization for audience @@ -93,7 +93,7 @@ AnswersAPIClient >> initializeAccessingAPIsWith: apiClient authenticatedWith: an parametersTemplate := Dictionary new. "Defaulting to the cheapest model" - self changeEngineTo: 'ada'. + self changeModelTo: 'ada'. 
self changeSearchEngineTo: 'ada' ] diff --git a/source/Open-AI-Model/ClassificationsAPIClient.class.st b/source/Open-AI-Model/ClassificationsAPIClient.class.st index 44743ba..72d0300 100644 --- a/source/Open-AI-Model/ClassificationsAPIClient.class.st +++ b/source/Open-AI-Model/ClassificationsAPIClient.class.st @@ -8,7 +8,7 @@ Class { } { #category : #configuring } -ClassificationsAPIClient >> changeEngineTo: anEngineId [ +ClassificationsAPIClient >> changeModelTo: anEngineId [ "https://beta.openai.com/docs/engines Davinci - $0.0600 per 1K tokens - Good at: Complex intent, cause and effect, summarization for audience @@ -81,7 +81,7 @@ ClassificationsAPIClient >> initializeAccessingAPIsWith: apiClient authenticated parametersTemplate := Dictionary new. "Defaulting to the cheapest model" - self changeEngineTo: 'ada'. + self changeModelTo: 'ada'. self changeSearchEngineTo: 'ada' ] diff --git a/source/Open-AI-Model/CompletionsAPIClient.class.st b/source/Open-AI-Model/CompletionsAPIClient.class.st index 24b5e6f..93b233d 100644 --- a/source/Open-AI-Model/CompletionsAPIClient.class.st +++ b/source/Open-AI-Model/CompletionsAPIClient.class.st @@ -2,32 +2,45 @@ Class { #name : #CompletionsAPIClient, #superclass : #OpenAIApiClient, #instVars : [ - 'maximumNumberOfTokens', - 'engineId' + 'parametersTemplate' ], #category : #'Open-AI-Model' } { #category : #configuring } -CompletionsAPIClient >> changeEngineTo: anEngineId [ +CompletionsAPIClient >> changeMaximumNumberOfTokensTo: aNumber [ - "https://beta.openai.com/docs/engines -Davinci - $0.0600 per 1K tokens - Good at: Complex intent, cause and effect, summarization for audience -Curie - $0.0060 per 1K tokens - Good at: Language translation, complex classification, text sentiment, summarization -Babbage - $0.0012 per 1K tokens - Good at: Moderate classification, semantic search classification -Ada - $0.0008 per 1K tokens - Parsing text, simple classification, address correction, keywords" + "integer Optional Defaults to 16 + 
The maximum number of tokens to generate in the completion. + The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). + + https://beta.openai.com/docs/guides/fine-tuning/classification + For classification tasks, choose classes that map to a single token. At inference time, specify max_tokens=1 since you only need the first token for classification." - engineId := anEngineId + parametersTemplate at: 'max_tokens' put: aNumber ] { #category : #configuring } -CompletionsAPIClient >> changeMaximumNumberOfTokensTo: aNumber [ +CompletionsAPIClient >> changeModelTo: aModelId [ - "max_tokens integer Optional Defaults to 16 - The maximum number of tokens to generate in the completion. - The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except davinci-codex, which supports 4096)." + "https://beta.openai.com/docs/models/gpt-3 +text-davinci-002 Most capable GPT-3 model. Can do any task the other models can do, often with less context. In addition to responding to prompts, also supports inserting completions within text. +text-curie-001 Very capable, but faster and lower cost than Davinci. +text-babbage-001 Capable of straightforward tasks, very fast, and lower cost. +text-ada-001 Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost." + + parametersTemplate at: 'model' put: aModelId +] + +{ #category : #configuring } +CompletionsAPIClient >> changeTemperatureTo: aNumber [ + + "number Optional Defaults to 1 + What sampling temperature to use. Higher values mean the model will take more risks. + Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. + We generally recommend altering this or top_p but not both."
- maximumNumberOfTokens := aNumber + parametersTemplate at: 'temperature' put: aNumber ] { #category : #processing } @@ -39,9 +52,8 @@ CompletionsAPIClient >> complete: aString [ { #category : #'private - accessing' } CompletionsAPIClient >> completionsParametersPrompting: aString [ - ^ Dictionary new + ^ parametersTemplate copy at: 'prompt' put: aString; - at: 'max_tokens' put: maximumNumberOfTokens; yourself ] @@ -55,15 +67,30 @@ CompletionsAPIClient >> endpoint [ CompletionsAPIClient >> initializeAccessingAPIsWith: apiClient authenticatedWith: anAPIKey [ super initializeAccessingAPIsWith: apiClient authenticatedWith: anAPIKey. + parametersTemplate := Dictionary new. self changeMaximumNumberOfTokensTo: 1. "Defaulting to the cheapest model" - self changeEngineTo: 'ada'. + self changeModelTo: 'text-ada-001'. +] + +{ #category : #configuring } +CompletionsAPIClient >> returnLikelyTokensUpTo: aNumber [ + + "integer Optional + Defaults to null + Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. + The maximum value for logprobs is 5. If you need more than this, please contact support@openai.com and describe your use case.
+ + https://beta.openai.com/docs/guides/fine-tuning/classification + For classification tasks, to get class log probabilities you can specify logprobs=5 (for 5 classes) when using your model" + + parametersTemplate at: 'logprobs' put: aNumber ] { #category : #'private - accessing' } CompletionsAPIClient >> serviceUrl [ - ^ self openAIUrl / 'engines' / engineId / self endpoint + ^ self openAIUrl / self endpoint ] diff --git a/source/Open-AI-Model/FilesAPIClient.class.st b/source/Open-AI-Model/FilesAPIClient.class.st index 430e5f5..c30f2e9 100644 --- a/source/Open-AI-Model/FilesAPIClient.class.st +++ b/source/Open-AI-Model/FilesAPIClient.class.st @@ -14,6 +14,14 @@ FilesAPIClient class >> initialize [ extensionsMap at: 'jsonl' ifAbsentPut: [ ZnMimeType applicationOctetStream ] ] +{ #category : #processing } +FilesAPIClient >> deleteFileIdentifiedBy: aFileId [ + + ^ client deleteAt: self serviceUrl / aFileId + configuredBy: [ :request | request headers setBearerTokenTo: apiKey ] + withSuccessfulResponseDo: [ :response | NeoJSONObject fromString: response ] +] + { #category : #'private - accessing' } FilesAPIClient >> endpoint [ @@ -38,14 +46,6 @@ FilesAPIClient >> listFiles [ withSuccessfulResponseDo: [ :response | NeoJSONObject fromString: response ] ] -{ #category : #processing } -FilesAPIClient >> removeFileIdentifiedBy: aFileId [ - - ^ client deleteAt: self serviceUrl / aFileId - configuredBy: [ :request | request headers setBearerTokenTo: apiKey ] - withSuccessfulResponseDo: [ :response | NeoJSONObject fromString: response ] -] - { #category : #'private - processing' } FilesAPIClient >> retrieveFileIdentifiedBy: aFileId [ diff --git a/source/Open-AI-Model/FineTuneAPIClient.class.st b/source/Open-AI-Model/FineTuneAPIClient.class.st new file mode 100644 index 0000000..a715c5a --- /dev/null +++ b/source/Open-AI-Model/FineTuneAPIClient.class.st @@ -0,0 +1,142 @@ +Class { + #name : #FineTuneAPIClient, + #superclass : #OpenAIApiClient, + #instVars : [ +
'parametersTemplate' + ], + #category : #'Open-AI-Model' +} + +{ #category : #processing } +FineTuneAPIClient >> cancelFineTuneIdentifiedBy: aFineTuneId [ + + ^ client + postAt: self serviceUrl / aFineTuneId / 'cancel' + configuredBy: [ :request | request headers setBearerTokenTo: apiKey ] + withSuccessfulResponseDo: [ :response | NeoJSONObject fromString: response ] +] + +{ #category : #configuring } +FineTuneAPIClient >> changeModelTo: aModelId [ + + "string Optional + Defaults to curie + The name of the base model to fine-tune. You can select one of 'ada', 'babbage', 'curie', or 'davinci'." + + parametersTemplate at: 'model' put: aModelId +] + +{ #category : #configuring } +FineTuneAPIClient >> changeNumberOfClassesTo: aNumber [ + + "integer Optional + Defaults to null + The number of classes in a classification task. + This parameter is required for multiclass classification." + + parametersTemplate at: 'classification_n_classes' put: aNumber +] + +{ #category : #configuring } +FineTuneAPIClient >> changeSuffixTo: aString [ + + "string Optional + Defaults to null + A string of up to 40 characters that will be added to your fine-tuned model name. + For example, a suffix of 'custom-model-name' would produce a model name like ada:ft-your-org:custom-model-name-2022-02-15-04-21-04." + + parametersTemplate at: 'suffix' put: aString +] + +{ #category : #configuring } +FineTuneAPIClient >> computeClassificationMetrics [ + + "boolean Optional + Defaults to false + If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. These metrics can be viewed in the results file. + In order to compute classification metrics, you must provide a validation_file. Additionally, you must specify classification_n_classes for multiclass classification or classification_positive_class for binary classification." 
+ + parametersTemplate at: 'compute_classification_metrics' put: true +] + +{ #category : #processing } +FineTuneAPIClient >> deleteFineTunedModelNamed: aFineTunedModelName [ + + ^ client + deleteAt: self modelsUrl / aFineTunedModelName + configuredBy: [ :request | request headers setBearerTokenTo: apiKey ] + withSuccessfulResponseDo: [ :response | NeoJSONObject fromString: response ] +] + +{ #category : #'private - accessing' } +FineTuneAPIClient >> endpoint [ + + ^ 'fine-tunes' +] + +{ #category : #initialization } +FineTuneAPIClient >> initializeAccessingAPIsWith: apiClient authenticatedWith: anAPIKey [ + + super initializeAccessingAPIsWith: apiClient authenticatedWith: anAPIKey. + parametersTemplate := Dictionary new. + + "Defaulting to the cheapest model" + self changeModelTo: 'ada'. +] + +{ #category : #processing } +FineTuneAPIClient >> listFineTunes [ + + ^ client + getAt: self serviceUrl + configuredBy: [ :request | request headers setBearerTokenTo: apiKey ] + withSuccessfulResponseDo: [ :response | ( NeoJSONObject fromString: response ) data ] +] + +{ #category : #processing } +FineTuneAPIClient >> modelNamed: aFineTunedModelName [ + + ^ client + getAt: self modelsUrl / aFineTunedModelName + configuredBy: [ :request | request headers setBearerTokenTo: apiKey ] + withSuccessfulResponseDo: [ :response | NeoJSONObject fromString: response ] +] + +{ #category : #'private - accessing' } +FineTuneAPIClient >> modelsUrl [ + + ^ self openAIUrl / 'models' +] + +{ #category : #processing } +FineTuneAPIClient >> retrieveFineTuneIdentifiedBy: aFineTuneId [ + + ^ client getAt: self serviceUrl / aFineTuneId + configuredBy: [ :request | request headers setBearerTokenTo: apiKey ] + withSuccessfulResponseDo: [ :response | NeoJSONObject fromString: response ] +] + +{ #category : #'private - accessing' } +FineTuneAPIClient >> serviceUrl [ + + ^ self openAIUrl / self endpoint +] + +{ #category : #processing } +FineTuneAPIClient >> trainUsing: aFileId [ + + ^ self 
postContaining: ( parametersTemplate copy + at: 'training_file' put: aFileId; + yourself ) +] + +{ #category : #configuring } +FineTuneAPIClient >> validateUsing: aFileId [ + + "string Optional + The ID of an uploaded file that contains validation data. + If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. Your train and validation data should be mutually exclusive. + Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys 'prompt' and 'completion'. Additionally, you must upload your file with the purpose fine-tune." + + parametersTemplate at: 'validation_file' put: aFileId +] diff --git a/source/Open-AI-Model/SearchAPIClient.class.st b/source/Open-AI-Model/SearchAPIClient.class.st index 5134108..5809f68 100644 --- a/source/Open-AI-Model/SearchAPIClient.class.st +++ b/source/Open-AI-Model/SearchAPIClient.class.st @@ -9,7 +9,7 @@ Class { } { #category : #configuring } -SearchAPIClient >> changeEngineTo: anEngineId [ +SearchAPIClient >> changeModelTo: anEngineId [ "https://beta.openai.com/docs/engines Davinci - $0.0600 per 1K tokens - Good at: Complex intent, cause and effect, summarization for audience @@ -43,7 +43,7 @@ SearchAPIClient >> initializeAccessingAPIsWith: apiClient authenticatedWith: anA parametersTemplate := Dictionary new. "Defaulting to the cheapest model" - self changeEngineTo: 'ada'. + self changeModelTo: 'ada'. ] { #category : #configuring }