⛓️ Langflow

Fine-Tuning GPT Models with Langflow

Alexandre

Oct 31, 2023

A guide to tailoring Large Language Models for specific tasks

Large Language Models (LLMs) excel at a wide variety of tasks thanks to their extensive pre-training on massive datasets. However, they still struggle with tasks that require domain-specific data or a particular output format, which is where fine-tuning comes in.

Fine-tuning retrains an LLM on data specific to a desired task. It has proven to be a powerful strategy for tasks such as information retrieval, especially when responses must follow a pre-defined format.

Langflow provides a user-friendly interface to prototype and develop AI pipelines, and now it’s possible to fine-tune a GPT model — no coding required!

In this tutorial, you will learn how to use Langflow to fine-tune a GPT model.

Langflow components for fine-tuning (or using fine-tuned models) are available to download at the end of this article.

Fine-Tuning GPT Models with Langflow

To fine-tune a GPT model in Langflow, first connect a ChatOpenAI or an OpenAI component to the fine-tuning component.

Currently, OpenAI allows you to fine-tune the following models: gpt-3.5-turbo, babbage-002, and davinci-002. Other models, such as gpt-4, will be available soon.

Invalid model choices will raise an error.
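The model check behaves roughly like the small validation helper below (an illustrative sketch, not the component's exact code):

```python
# Models OpenAI currently accepts for fine-tuning (at the time of writing).
SUPPORTED_MODELS = {"gpt-3.5-turbo", "babbage-002", "davinci-002"}

def validate_model(model_name: str) -> str:
    """Raise a ValueError for models that cannot be fine-tuned yet."""
    if model_name not in SUPPORTED_MODELS:
        raise ValueError(
            f"Invalid model {model_name!r}! "
            "Options: 'gpt-3.5-turbo', 'babbage-002', 'davinci-002'"
        )
    return model_name
```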

Parameters in the Fine-Tuning Component

The parameters that can be modified in the Fine Tune GPT Model component are:


  • Job ID path: Local file path where the fine-tuning job ID is saved and later loaded from (required).

  • Training data: Local training data in .jsonl or .csv format (required). A CSV table must contain ‘user’ and ‘assistant’ columns; .jsonl files must follow the OpenAI specification.

  • System content: The system prompt prepended to each training example. Applies only when fine-tuning a gpt-3.5-turbo model (optional).

  • Epochs: The number of training rounds. Higher values can cause overfitting and hurt generalization. If unspecified, it is auto-calculated from the number of samples.
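Under the hood, the component converts each CSV row into OpenAI's chat fine-tuning format (for gpt-3.5-turbo) or prompt/completion format (for babbage-002 and davinci-002). A minimal sketch of that conversion, using only the standard library:

```python
import csv
import json
from io import StringIO

def rows_to_openai_format(rows, model_type, system_content=None):
    """Convert (user, assistant) rows to OpenAI fine-tuning records."""
    dataset = []
    for row in rows:
        if model_type == "gpt-3.5-turbo":
            messages = []
            if system_content:
                messages.append({"role": "system", "content": system_content})
            messages.append({"role": "user", "content": row["user"]})
            messages.append({"role": "assistant", "content": row["assistant"]})
            dataset.append({"messages": messages})
        elif model_type in ("babbage-002", "davinci-002"):
            dataset.append({"prompt": row["user"], "completion": row["assistant"]})
        else:
            raise ValueError("Invalid model type!")
    return dataset

# Example: turn a tiny in-memory CSV into JSONL lines.
csv_text = "user,assistant\nWhat is Langflow?,A visual framework for LLM apps.\n"
rows = list(csv.DictReader(StringIO(csv_text)))
jsonl = "\n".join(json.dumps(r) for r in rows_to_openai_format(rows, "gpt-3.5-turbo"))
```

Each line of the resulting `.jsonl` file is one training example in the format OpenAI expects.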

The Training Process

After you select the training data and build the flow, the training process starts. According to OpenAI, a fine-tuning job may take a while to complete: the job may be queued behind others, and training itself can take minutes or even hours depending on the model and dataset size. Once training finishes, OpenAI notifies the account owner by email.
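While the job runs, the component polls its event list until the completion message appears. A simplified sketch of that loop, with the API call injected as a callable so the control flow is clear (the real component wraps the pre-1.0 `openai.FineTuningJob.list_events` endpoint):

```python
import time

def wait_for_completion(fetch_latest_message, poll_interval=5.0, on_update=print):
    """Poll until the fine-tuning job reports success.

    fetch_latest_message: callable returning the newest job event message,
    e.g. a wrapper around openai.FineTuningJob.list_events(id=job_id, limit=1).
    """
    while True:
        message = fetch_latest_message()
        on_update(message)  # surface progress to the user
        if message == "The job has successfully completed":
            return message
        time.sleep(poll_interval)
```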

Note that if you cancel the flow build or close the application, the training job will continue running on OpenAI's servers.

Using the Fine-Tuned Model

After the job completes, the model should be available for inference right away. If requests time out or the model name cannot be found, try again in a few minutes.

Use a flow to chat with your trained model just like any regular OpenAI model. Enter the same Job ID path used during fine-tuning to retrieve the trained model.
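The two flows share state through the job ID file: the fine-tuning component pickles the job ID to the given path, and the inference component unpickles it and asks OpenAI for the resulting model name (via the pre-1.0 `openai.FineTuningJob.retrieve(job_id).fine_tuned_model` call). The persistence round trip alone looks like this:

```python
import pickle

def save_job_id(job_id: str, path: str) -> None:
    """Persist the fine-tuning job ID so another flow can load it later."""
    with open(path, "wb") as f:
        pickle.dump(job_id, f)

def load_job_id(path: str) -> str:
    """Load a previously saved fine-tuning job ID."""
    with open(path, "rb") as f:
        return pickle.load(f)

# After loading, the inference component fetches the model name with the
# pre-1.0 openai SDK:
#   model_name = openai.FineTuningJob.retrieve(job_id, api_key=key).fine_tuned_model
```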

Example

As a simple use case, let's fine-tune gpt-3.5-turbo on the Langflow documentation. The playground dataset (a CSV table) consists of 10 samples, with user inputs of the form:

“Describe [some concept] in Langflow”

The corresponding assistant answer is the full documentation page for that concept.
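A dataset in that shape can be assembled in a few lines. The concept names and answers below are placeholders, not the actual documentation text:

```python
import csv

# Hypothetical concept/answer pairs standing in for real docs pages.
samples = [
    ("Components", "Full documentation page text for Components..."),
    ("Flows", "Full documentation page text for Flows..."),
]

with open("fine_tune_docs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["user", "assistant"])  # column names the component expects
    for concept, page in samples:
        writer.writerow([f"Describe {concept} in Langflow", page])
```

The resulting file can be selected directly as the Training data parameter of the fine-tuning component.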

Before fine-tuning, the model provided a generic answer to the question “What is Langflow?”.

After fine-tuning, however, the model exhibited a much better understanding of the Langflow library.

Download Links (Gists)

{"name":"Fine-tune GPT","description":"Building Intelligent Interactions.","data":{"nodes":[{"width":384,"height":623,"id":"ChatOpenAI-do85j","type":"genericNode","position":{"x":-1113.0013036522153,"y":322.9219767794165},"data":{"type":"ChatOpenAI","node":{"template":{"callbacks":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"callbacks","advanced":false,"dynamic":false,"info":"","type":"langchain.callbacks.base.BaseCallbackHandler","list":true},"cache":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"cache","advanced":false,"dynamic":false,"info":"","type":"bool","list":false},"client":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"client","advanced":false,"dynamic":false,"info":"","type":"Any","list":false},"max_retries":{"required":false,"placeholder":"","show":false,"multiline":false,"value":6,"password":false,"name":"max_retries","advanced":false,"dynamic":false,"info":"","type":"int","list":false},"max_tokens":{"required":false,"placeholder":"","show":true,"multiline":false,"password":true,"name":"max_tokens","advanced":false,"dynamic":false,"info":"","type":"int","list":false,"value":""},"metadata":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"metadata","advanced":false,"dynamic":false,"info":"","type":"dict","list":false},"model_kwargs":{"required":false,"placeholder":"","show":true,"multiline":false,"password":false,"name":"model_kwargs","advanced":true,"dynamic":false,"info":"","type":"dict","list":false},"model_name":{"required":false,"placeholder":"","show":true,"multiline":false,"value":"gpt-3.5-turbo","password":false,"options":["gpt-3.5-turbo-0613","gpt-3.5-turbo","gpt-3.5-turbo-16k-0613","gpt-3.5-turbo-16k","gpt-4-0613","gpt-4-32k-0613","gpt-4","gpt-4-32k"],"name":"model_name","advanced":false,"dynamic":false,"info":"","type":"str","list":true},"n":{"required":false,"placeholder
":"","show":false,"multiline":false,"value":1,"password":false,"name":"n","advanced":false,"dynamic":false,"info":"","type":"int","list":false},"openai_api_base":{"required":false,"placeholder":"","show":true,"multiline":false,"password":false,"name":"openai_api_base","display_name":"OpenAI API Base","advanced":false,"dynamic":false,"info":"\nThe base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\n\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\n","type":"str","list":false,"value":""},"openai_api_key":{"required":false,"placeholder":"","show":true,"multiline":false,"value":"","password":true,"name":"openai_api_key","display_name":"OpenAI API Key","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"openai_organization":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"openai_organization","display_name":"OpenAI Organization","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"openai_proxy":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"openai_proxy","display_name":"OpenAI 
Proxy","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"request_timeout":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"request_timeout","advanced":false,"dynamic":false,"info":"","type":"float","list":false},"streaming":{"required":false,"placeholder":"","show":false,"multiline":false,"value":false,"password":false,"name":"streaming","advanced":false,"dynamic":false,"info":"","type":"bool","list":false},"tags":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"tags","advanced":false,"dynamic":false,"info":"","type":"str","list":true},"temperature":{"required":false,"placeholder":"","show":true,"multiline":false,"value":0.7,"password":false,"name":"temperature","advanced":false,"dynamic":false,"info":"","type":"float","list":false},"tiktoken_model_name":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"tiktoken_model_name","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"verbose":{"required":false,"placeholder":"","show":false,"multiline":false,"value":false,"password":false,"name":"verbose","advanced":false,"dynamic":false,"info":"","type":"bool","list":false},"_type":"ChatOpenAI"},"description":"`OpenAI` Chat large language models 
API.","base_classes":["BaseChatModel","BaseLanguageModel","ChatOpenAI","BaseLLM"],"display_name":"ChatOpenAI","custom_fields":{},"output_types":[],"documentation":"https://python.langchain.com/docs/modules/model_io/models/chat/integrations/openai","beta":false,"error":null},"id":"ChatOpenAI-do85j"},"selected":false,"positionAbsolute":{"x":-1113.0013036522153,"y":322.9219767794165},"dragging":false},{"width":384,"height":727,"id":"CustomComponent-SNGd7","type":"genericNode","position":{"x":-592.8543094958106,"y":323.07654372746487},"data":{"type":"CustomComponent","node":{"template":{"code":{"dynamic":true,"required":true,"placeholder":"","show":true,"multiline":true,"value":"from typing import Optional\nfrom langflow import CustomComponent\nfrom langflow.template.field.base import TemplateField\nfrom langchain.llms import HuggingFaceEndpoint\nfrom langchain.llms.base import BaseLLM\nimport openai\nimport pickle\nimport pandas as pd\nimport json\nimport time\n\nclass FineTuneGPTComponent(CustomComponent):\n    display_name: str = \"Fine Tune GPT Model\"\n    description: str = \"Fine Tune a GPT Model with loaded training data. Data must be in .jsonl format or .csv with 'user' and 'assistant' columns. 
User can specify also the column 'system'.\"\n \n    def convert_table_to_openai_format(self, table, model_type, system_content=None):\n        dataset = []\n        for _, row in table.iterrows():\n            if model_type == \"gpt-3.5-turbo\":\n                if system_content:\n                    dataset.append(\n                    {\"messages\": [\n                        {\"role\": \"system\", \"content\": system_content},\n                        {\"role\": \"user\", \"content\": row['user']},\n                        {\"role\": \"assistant\", \"content\": row['assistant']}\n                        ]}\n                )\n                else:\n                    dataset.append(\n                        {\"messages\": [\n                            {\"role\": \"user\", \"content\": row['user']},\n                            {\"role\": \"assistant\", \"content\": row['assistant']}\n                            ]}\n                    )\n            elif model_type in [\"babbage-002\", \"davinci-002\"]:\n                dataset.append(\n                    {\"prompt\": row['user'], \"completion\": row['assistant']}\n                )\n            else:\n                raise ValueError(\"Invalid model type! 
\\n Options: 'gpt-3.5-turbo', 'babbage-002', 'davinci-002'\")\n            self.repr_value = \"OK\"\n            table.to_csv(\"TESTING.csv\")\n        return dataset\n    \n    def build_config(self):\n         return {\n             \"training_data_path\": {\n                 \"display_name\": \"Training data\",\n                 \"required\": True,\n                 \"file_types\": [\"jsonl\",\"csv\"],\n                 \"field_type\": \"file\", \n                 \"suffixes\": [\".jsonl\",\".csv\"],\n             },\n             \"system_content\":{\n                \"display_name\": \"System Content\",\n                \"required\": False,  \n             },\n             \"job_id_save_path\":{\n                \"display_name\": \"job ID path\",\n                \"required\": True,  \n             },\n             \"n_epochs\":{\n                \"display_name\": \"Epochs\",\n                \"required\": False, \n            },\n              \"code\": {\"show\": True},\n         }\n        \n    def build(\n        self,\n        training_data_path: str,\n        LLM: BaseLLM,\n        job_id_save_path: str=\"job_id.txt\",\n        system_content = None,\n        n_epochs = None,\n    ) -> BaseLLM:\n        \n        openai_api_key = LLM.openai_api_key\n        model_type = LLM.model_name\n        \n        try:\n            # Check training data format\n            file_type = training_data_path.split(\".\")[-1]\n            raw_data_path = training_data_path.split(\".\")[0]\n            \n            if file_type == \"csv\":\n                table = pd.read_csv(training_data_path)\n                dataset = self.convert_table_to_openai_format(\n                        table, \n                        model_type=model_type,\n                        system_content=system_content,\n                       ) \n                with open(raw_data_path+\".jsonl\", 'w') as outfile:\n                    for entry in dataset:\n                        
json.dump(entry, outfile)\n                        outfile.write('\\n')\n            elif file_type == \"json\":\n                pass\n            else:\n                raise ValueError(\"Invalid training data format \\nOptions: .jsonl or .csv\")\n            \n            file_id = openai.File.create(\n                file=open(raw_data_path+\".jsonl\", \"rb\"),\n                purpose='fine-tune',\n                api_key=openai_api_key,\n                ).id\n            job = openai.FineTuningJob.create(\n                training_file=file_id, \n                model=model_type,\n                api_key=openai_api_key,\n                hyperparameters={\"n_epochs\":n_epochs,}\n                )\n            \n            # Save job id\n            job_id = job.id\n            with open(job_id_save_path,\"wb\") as file:\n                pickle.dump(job_id, file)\n            \n            running = True\n            while(running):\n                status = openai.FineTuningJob.list_events(\n                    id=job_id, \n                    api_key=openai_api_key, \n                    limit=1\n                    )\n                message = status['data'][0]['message']\n                self.repr_value = message\n                if message == \"The job has successfully completed\":\n                    running = False\n                time.sleep(5)\n            \n        except Exception as e:\n            #raise ValueError(\"Could not connect to OpeanAI.\") from e\n            raise ValueError(e) from e\n            self.repr_value = e\n        return 
0","password":false,"name":"code","advanced":false,"type":"code","list":false},"_type":"CustomComponent","LLM":{"required":true,"placeholder":"","show":true,"multiline":false,"password":false,"name":"LLM","display_name":"LLM","advanced":false,"dynamic":false,"info":"","type":"BaseLLM","list":false},"job_id_save_path":{"required":true,"placeholder":"","show":true,"multiline":false,"value":"job_id.txt","password":false,"name":"job_id_save_path","display_name":"job ID path","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"n_epochs":{"required":false,"placeholder":"","show":true,"multiline":false,"password":false,"name":"n_epochs","display_name":"Epochs","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"system_content":{"required":false,"placeholder":"","show":true,"multiline":false,"password":false,"name":"system_content","display_name":"System Content","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"training_data_path":{"required":true,"placeholder":"","show":true,"multiline":false,"suffixes":[".jsonl",".csv"],"password":false,"name":"training_data_path","display_name":"Training data","advanced":false,"dynamic":false,"info":"","type":"file","list":false,"fileTypes":["jsonl","csv"],"file_path":"C:\\Users\\xande\\AppData\\Local\\langflow\\langflow\\Cache\\e426367d-67aa-413a-8ec6-e0d119340d12\\53ddd5e193322b52ce474aa708aeb88a38eccba3e17694ec264fd80ea444cb14.csv","value":"fine_tune_mdx.csv"}},"description":"Fine Tune a GPT Model with loaded training data. Data must be in .jsonl format or .csv with 'user' and 'assistant' columns. 
User can specify also the column 'system'.","base_classes":["BaseLanguageModel","BaseLLM"],"display_name":"Fine Tune GPT Model","custom_fields":{"LLM":null,"job_id_save_path":null,"n_epochs":null,"system_content":null,"training_data_path":null},"output_types":[],"documentation":"","beta":true,"error":null},"id":"CustomComponent-SNGd7"},"selected":false,"dragging":false,"positionAbsolute":{"x":-592.8543094958106,"y":323.07654372746487}}],"edges":[{"source":"ChatOpenAI-do85j","target":"CustomComponent-SNGd7","sourceHandle":"ChatOpenAI|ChatOpenAI-do85j|BaseChatModel|BaseLanguageModel|ChatOpenAI|BaseLLM","targetHandle":"BaseLLM|LLM|CustomComponent-SNGd7","id":"reactflow__edge-ChatOpenAI-do85jChatOpenAI|ChatOpenAI-do85j|BaseChatModel|BaseLanguageModel|ChatOpenAI|BaseLLM-CustomComponent-SNGd7BaseLLM|LLM|CustomComponent-SNGd7","style":{"stroke":"#555"},"className":"stroke-gray-900 ","animated":false,"selected":false}],"viewport":{"x":726.1502238317959,"y":-165.92230938004224,"zoom":0.5908314342817417}},"id":"e426367d-67aa-413a-8ec6-e0d119340d12","user_id":"09e0d67d-3765-43ba-9d5f-9dc205202340"}

Custom Component to fine-tune a GPT model

{"name":"using_fine_tuned_gpt_langflow","description":"Flow using a Custom Component in Langflow to load a fine-tuned GPT model in a chat conversation.","data":{"nodes":[{"width":384,"height":372,"id":"PromptTemplate-5GnSD","type":"genericNode","position":{"x":1002.2909888285608,"y":-28.810498723906477},"data":{"type":"PromptTemplate","node":{"template":{"output_parser":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"output_parser","advanced":false,"dynamic":false,"info":"","type":"BaseOutputParser","list":false},"input_variables":{"required":true,"placeholder":"","show":false,"multiline":false,"password":false,"name":"input_variables","advanced":false,"dynamic":false,"info":"","type":"str","list":true,"value":["var"]},"partial_variables":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"partial_variables","advanced":false,"dynamic":false,"info":"","type":"dict","list":false},"template":{"required":true,"placeholder":"","show":true,"multiline":true,"password":false,"name":"template","advanced":false,"dynamic":false,"info":"","type":"prompt","list":false,"value":"{var}"},"template_format":{"required":false,"placeholder":"","show":false,"multiline":false,"value":"f-string","password":false,"name":"template_format","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"validate_template":{"required":false,"placeholder":"","show":false,"multiline":false,"value":true,"password":false,"name":"validate_template","advanced":false,"dynamic":false,"info":"","type":"bool","list":false},"_type":"PromptTemplate","var":{"required":false,"placeholder":"","show":true,"multiline":true,"value":"","password":false,"name":"var","display_name":"var","advanced":false,"input_types":["Document","BaseOutputParser"],"dynamic":false,"info":"","type":"str","list":false}},"description":"A prompt template for a language 
model.","base_classes":["PromptTemplate","BasePromptTemplate","StringPromptTemplate"],"name":"","display_name":"PromptTemplate","documentation":"https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/","custom_fields":{"":["var"],"template":["var"]},"output_types":[],"field_formatters":{"formatters":{"openai_api_key":{}},"base_formatters":{"kwargs":{},"optional":{},"list":{},"dict":{},"union":{},"multiline":{},"show":{},"password":{},"default":{},"headers":{},"dict_code_file":{},"model_fields":{"MODEL_DICT":{"OpenAI":["text-davinci-003","text-davinci-002","text-curie-001","text-babbage-001","text-ada-001"],"ChatOpenAI":["gpt-3.5-turbo-0613","gpt-3.5-turbo","gpt-3.5-turbo-16k-0613","gpt-3.5-turbo-16k","gpt-4-0613","gpt-4-32k-0613","gpt-4","gpt-4-32k"],"Anthropic":["claude-v1","claude-v1-100k","claude-instant-v1","claude-instant-v1-100k","claude-v1.3","claude-v1.3-100k","claude-v1.2","claude-v1.0","claude-instant-v1.1","claude-instant-v1.1-100k","claude-instant-v1.0"],"ChatAnthropic":["claude-v1","claude-v1-100k","claude-instant-v1","claude-instant-v1-100k","claude-v1.3","claude-v1.3-100k","claude-v1.2","claude-v1.0","claude-instant-v1.1","claude-instant-v1.1-100k","claude-instant-v1.0"]}}}},"beta":false,"error":null},"id":"PromptTemplate-5GnSD"},"selected":true,"positionAbsolute":{"x":1002.2909888285608,"y":-28.810498723906477},"dragging":false},{"width":384,"height":338,"id":"LLMChain-8C9XJ","type":"genericNode","position":{"x":1024.2195688898594,"y":410.82691722912193},"data":{"type":"LLMChain","node":{"template":{"callbacks":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"callbacks","advanced":false,"dynamic":false,"info":"","type":"langchain.callbacks.base.BaseCallbackHandler","list":true},"llm":{"required":true,"placeholder":"","show":true,"multiline":false,"password":false,"name":"llm","advanced":false,"dynamic":false,"info":"","type":"BaseLanguageModel","list":false},"memory":{"required":false,"pl
aceholder":"","show":true,"multiline":false,"password":false,"name":"memory","advanced":false,"dynamic":false,"info":"","type":"BaseMemory","list":false},"output_parser":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"output_parser","advanced":false,"dynamic":false,"info":"","type":"BaseLLMOutputParser","list":false},"prompt":{"required":true,"placeholder":"","show":true,"multiline":false,"password":false,"name":"prompt","advanced":false,"dynamic":false,"info":"","type":"BasePromptTemplate","list":false},"llm_kwargs":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"llm_kwargs","advanced":false,"dynamic":false,"info":"","type":"dict","list":false},"metadata":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"metadata","advanced":false,"dynamic":false,"info":"","type":"dict","list":false},"output_key":{"required":true,"placeholder":"","show":true,"multiline":false,"value":"text","password":false,"name":"output_key","advanced":true,"dynamic":false,"info":"","type":"str","list":false},"return_final_only":{"required":false,"placeholder":"","show":false,"multiline":false,"value":true,"password":false,"name":"return_final_only","advanced":false,"dynamic":false,"info":"","type":"bool","list":false},"tags":{"required":false,"placeholder":"","show":false,"multiline":false,"password":false,"name":"tags","advanced":false,"dynamic":false,"info":"","type":"str","list":true},"verbose":{"required":false,"placeholder":"","show":false,"multiline":false,"value":false,"password":false,"name":"verbose","advanced":true,"dynamic":false,"info":"","type":"bool","list":false},"_type":"LLMChain"},"description":"Chain to run queries against 
LLMs.","base_classes":["LLMChain","Chain","function"],"display_name":"LLMChain","custom_fields":{},"output_types":[],"documentation":"https://python.langchain.com/docs/modules/chains/foundational/llm_chain","beta":false,"error":null},"id":"LLMChain-8C9XJ"},"selected":false,"positionAbsolute":{"x":1024.2195688898594,"y":410.82691722912193},"dragging":false},{"width":384,"height":705,"id":"CustomComponent-5XYPW","type":"genericNode","position":{"x":569.2530158574887,"y":-24.122112513296386},"data":{"type":"CustomComponent","node":{"template":{"code":{"dynamic":true,"required":true,"placeholder":"","show":true,"multiline":true,"value":"from langflow import CustomComponent\r\n\r\nfrom langchain.llms.base import BaseLLM\r\nfrom langchain.chains import LLMChain\r\nfrom langchain import PromptTemplate\r\nfrom langchain.schema import Document\r\nfrom typing import Optional\r\nimport openai\r\nimport pickle\r\n\r\nfrom langchain.chat_models import ChatOpenAI\r\n\r\nimport requests\r\n\r\nclass FineTunedModel(CustomComponent):\r\n    display_name: str = \"Use Fine-Tuned GPT Model\"\r\n    description: str = \"OpenAI model. 
Can be a standard or a fine tuned one.\"\r\n\r\n    def build_config(self):\r\n        return { \r\n                \"max_tokens\":{\r\n                    \"display_name\": \"Max Tokens\",\r\n                    \"required\": False,\r\n                },\r\n                \"job_id_path\": {\r\n                    \"required\": True,\r\n                    \"password\": False,\r\n                    \"display_name\": \"Job ID path\",\r\n                    \"value\": \"job_id.txt\",\r\n                },\r\n                \"openai_api_base\": {\r\n                    \"required\": False,\r\n                    \"password\": False,\r\n                    \"display_name\": \"OpenAI API Base\",\r\n                },\r\n                \"openai_api_key\": {\r\n                    \"required\": True,\r\n                    \"password\": True,\r\n                    \"display_name\": \"OpenAI API Key\",\r\n                },\r\n                \"temperature\": {\r\n                    \"required\": False,\r\n                    \"value\": 0.7,\r\n                    \"password\": False,\r\n                    \"name\": \"Temperature\",\r\n                },\r\n            }\r\n            \r\n\r\n    def build(\r\n            self,      \r\n            temperature,\r\n            max_tokens = None, \r\n            job_id_path = None, \r\n            openai_api_base = None, \r\n            openai_api_key = None,\r\n            ) -> BaseLLM:\r\n        \r\n        try:\r\n            with open(job_id_path,\"rb\") as file:\r\n                job_id = pickle.load(file)\r\n            model_name = openai.FineTuningJob.retrieve(\r\n                job_id,\r\n                api_key=openai_api_key,\r\n                ).fine_tuned_model\r\n        except Exception as e:\r\n            raise ValueError(\"For fine-tuned models please insert the Job ID path\") from e\r\n            self.repr_value = e\r\n        \r\n        chat = ChatOpenAI(\r\n            model=model_name,\r\n  
          temperature=temperature,\r\n            openai_api_key=openai_api_key,\r\n            max_tokens=max_tokens,\r\n            openai_api_base=openai_api_base,\r\n        )\r\n        return chat\r\n","password":false,"name":"code","advanced":false,"type":"code","list":false},"_type":"CustomComponent","job_id_path":{"required":true,"placeholder":"","show":true,"multiline":false,"value":"job_id.txt","password":false,"name":"job_id_path","display_name":"Job ID path","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"max_tokens":{"required":false,"placeholder":"","show":true,"multiline":false,"password":false,"name":"max_tokens","display_name":"Max Tokens","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"openai_api_base":{"required":false,"placeholder":"","show":true,"multiline":false,"password":false,"name":"openai_api_base","display_name":"OpenAI API Base","advanced":false,"dynamic":false,"info":"","type":"str","list":false},"openai_api_key":{"required":true,"placeholder":"","show":true,"multiline":false,"password":true,"name":"openai_api_key","display_name":"OpenAI API Key","advanced":false,"dynamic":false,"info":"","type":"str","list":false,"value":""},"temperature":{"required":false,"placeholder":"","show":true,"multiline":false,"value":0.7,"password":false,"name":"temperature","display_name":"temperature","advanced":false,"dynamic":false,"info":"","type":"str","list":false}},"description":"OpenAI model. 
Can be a standard or a fine tuned one.","base_classes":["BaseLanguageModel","BaseLLM"],"display_name":"Use Fine-Tuned GPT Model","custom_fields":{"job_id_path":null,"max_tokens":null,"openai_api_base":null,"openai_api_key":null,"temperature":null},"output_types":[],"documentation":"","beta":true,"error":null},"id":"CustomComponent-5XYPW"},"selected":false,"positionAbsolute":{"x":569.2530158574887,"y":-24.122112513296386},"dragging":false}],"edges":[{"source":"PromptTemplate-5GnSD","sourceHandle":"PromptTemplate|PromptTemplate-5GnSD|PromptTemplate|BasePromptTemplate|StringPromptTemplate","target":"LLMChain-8C9XJ","targetHandle":"BasePromptTemplate|prompt|LLMChain-8C9XJ","style":{"stroke":"#555"},"className":"","animated":false,"id":"reactflow__edge-PromptTemplate-5GnSDPromptTemplate|PromptTemplate-5GnSD|PromptTemplate|BasePromptTemplate|StringPromptTemplate-LLMChain-8C9XJBasePromptTemplate|prompt|LLMChain-8C9XJ"},{"source":"CustomComponent-5XYPW","sourceHandle":"CustomComponent|CustomComponent-5XYPW|BaseLanguageModel|BaseLLM","target":"LLMChain-8C9XJ","targetHandle":"BaseLanguageModel|llm|LLMChain-8C9XJ","style":{"stroke":"#555"},"className":"","animated":false,"id":"reactflow__edge-CustomComponent-5XYPWCustomComponent|CustomComponent-5XYPW|BaseLanguageModel|BaseLLM-LLMChain-8C9XJBaseLanguageModel|llm|LLMChain-8C9XJ"}],"viewport":{"x":-203.3065974595986,"y":72.07039269509912,"zoom":0.630407321211217}},"id":"a3e094e2-cf7e-49e4-8667-6fa2d362a17a","user_id":"aa32ab4f-da83-4937-966e-36f98082564c"}

Custom Component to use a fine-tuned model

If you enjoyed this article, try out these components and fine-tune your own model using Langflow!