Choose this use-case if you would like to develop a model that provides human-level responses specialized to your internal document corpus. It uses a Large Language Model (LLM) to answer natural-language questions and engage in conversational dialogue about institutional knowledge bases, such as training/onboarding resources, policies and procedures, product documentation, and intellectual property documentation. Additionally, ChatLLM can be combined with DataLLM to query structured data and provide conversational explanations.
Dataset and Feature Group Requirements
This section specifies the Dataset and Feature Group requirements for successfully training a ChatLLM model. It also includes recommendations on additional datasets that may enhance model performance.
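As a rough illustration of the kind of document corpus those requirements describe, the sketch below assembles a minimal table with an identifier column and a free-text column. The column names (doc_id, page_text) and the CSV upload step are illustrative assumptions, not the required schema; refer to the requirements section for the exact columns ChatLLM expects.

```python
import pandas as pd

# Hypothetical document corpus: one row per document (or per page/chunk).
# Column names are illustrative assumptions; see the Dataset and Feature
# Group Requirements section for the exact schema expected by ChatLLM.
corpus = pd.DataFrame(
    {
        "doc_id": ["onboarding-001", "policy-017", "product-guide-042"],
        "page_text": [
            "New hires must complete security training within 30 days...",
            "Expense reports are submitted through the finance portal...",
            "The admin console exposes role-based access controls...",
        ],
    }
)

# Writing to CSV is one common way to stage the corpus as a dataset upload.
corpus.to_csv("document_corpus.csv", index=False)
```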
Training Models - Training Options and Metrics
This section describes the training options available for creating a ChatLLM model. The accompanying metric explanations help you understand how each metric measures the performance of the trained model.
Evaluating Predictions
This section contains a quick model evaluation guide that helps you understand how well your model is performing.
Prediction API
This section describes the prediction API method so that you can generate predictions from your deployed model.
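As a hedged sketch of what querying a deployed model tends to look like, the snippet below posts a question to a hypothetical REST endpoint using a deployment token and deployment ID. The URL, parameter names, and response shape are illustrative assumptions rather than the documented interface, so defer to the Prediction API section for the actual method and signature.

```python
import requests

# All names below are illustrative assumptions, not the documented API;
# consult the Prediction API section for the exact endpoint and parameters.
API_ENDPOINT = "https://example.com/api/predict"   # hypothetical URL
DEPLOYMENT_TOKEN = "your-deployment-token"         # issued for the deployment
DEPLOYMENT_ID = "your-deployment-id"


def ask_chatllm(question: str) -> str:
    """Send a natural-language question to the deployed ChatLLM model."""
    response = requests.post(
        API_ENDPOINT,
        json={
            "deploymentToken": DEPLOYMENT_TOKEN,
            "deploymentId": DEPLOYMENT_ID,
            "messages": [{"is_user": True, "text": question}],
        },
        timeout=30,
    )
    response.raise_for_status()
    # The response shape is an assumption; adjust to the documented schema.
    return response.json().get("result", {}).get("answer", "")


if __name__ == "__main__":
    print(ask_chatllm("What is our remote work policy?"))
```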