Training Parameters and Accuracy Measures

Our platform provides the flexibility to adjust a set of training parameters. There are general training parameters and advanced training options that can influence the model's predictions. Prediction quality is measured using a set of accuracy measures, or metrics, which are also discussed in this section.


Training Options

Once you have fulfilled all the feature group requirements for the use case, you can set the following general and advanced training configuration options to train your ML model:

| Training Option Name | Description | Possible Values |
| --- | --- | --- |
| Name | The name you would like to give to the model to be trained. The system generates a default name based on the name of the project the model belongs to. | Any alphanumeric string between 5 and 60 characters long. |
| Set Refresh Schedule (UTC) | The schedule on which your dataset is replaced by an updated copy of the dataset from your storage bucket location. The value is a CRON time string describing the schedule in the UTC time zone (see the example after this table). | A string in CRON format. If you're unfamiliar with cron syntax, Crontab Guru can help translate it into natural language. |
| Anomaly Fraction | The expected fraction of values in the dataset that are anomalies. If set to automatic, the system determines a fraction itself (see the thresholding sketch after this table). | A floating-point value between 0 and 1. |
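As an illustration of the CRON format, the sketch below uses the third-party `croniter` package (an assumption for illustration only; it is not part of the platform) to show what a schedule string means in practice:

```python
from datetime import datetime, timezone
from croniter import croniter  # third-party package: pip install croniter

# "0 4 * * 1" means: at minute 0, hour 4 (UTC), any day of month,
# any month, on Monday (day-of-week 1).
schedule = "0 4 * * 1"

start = datetime(2024, 1, 1, tzinfo=timezone.utc)  # Jan 1, 2024 is a Monday
it = croniter(schedule, start)
print(it.get_next(datetime))  # 2024-01-01 04:00:00+00:00
print(it.get_next(datetime))  # 2024-01-08 04:00:00+00:00 (the following Monday)
```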
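To make the Anomaly Fraction option concrete, here is a minimal sketch of one way a fraction can be turned into a score threshold. This is an illustration only, not the platform's actual thresholding logic:

```python
import numpy as np

def flag_anomalies(scores, anomaly_fraction=0.05):
    """Flag the highest-scoring `anomaly_fraction` of events as anomalies.

    A minimal illustration of what an anomaly fraction means; the
    platform's own logic may differ.
    """
    scores = np.asarray(scores, dtype=float)
    # The score at the (1 - fraction) quantile becomes the threshold.
    threshold = np.quantile(scores, 1.0 - anomaly_fraction)
    return scores >= threshold

scores = [0.1, 0.2, 0.15, 0.95, 0.3, 0.05, 0.88, 0.12, 0.2, 0.1]
print(flag_anomalies(scores, anomaly_fraction=0.2))
# Roughly the top 20% of scores (here 0.95 and 0.88) are flagged.
```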

Metrics

Our AI engine will calculate the following metrics for this use case:

| Metric Name | Description |
| --- | --- |
| Recall | The fraction of all relevant instances (here, true anomalies) that the model correctly identifies. It ranges from 0 to 1; the closer to 1, the better (see the worked example after this table). |
| Area Under ROC Curve (AUC) | AUC describes a model's ability to distinguish between two or more classes: a higher AUC indicates better performance at predicting positive instances as positive and negative instances as negative, while an AUC close to 0 suggests the model is classifying negatives as positives and vice versa. A value between 0.6 and 1 signifies that the model has learned meaningful patterns rather than making random guesses. AUC serves as an aggregate performance measure for classification problems across all threshold settings. It is desirable because it is scale-invariant (it assesses the ranking quality of predictions) and classification-threshold-invariant (it evaluates prediction quality regardless of the chosen classification threshold). |
| Mean Event Score | The average anomaly score across all events in the Event Log (the summary-statistics sketch after this table illustrates these score metrics). |
| Mean Anomaly Score | The average score across all detected anomalies in the Event Log. |
| Median Event Score | The median anomaly score across all events in the Event Log. |
| Median Anomaly Score | The median score across all detected anomalies in the Event Log. |
| Number of Anomalies Detected | The number of anomalies the model detected in the Event Log. |
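The difference between the two classification metrics can be seen in a short sketch using scikit-learn (an assumption for illustration; the platform computes these metrics internally). Recall depends on a fixed classification threshold, while AUC evaluates the ranking produced by the raw scores:

```python
from sklearn.metrics import recall_score, roc_auc_score

# Hypothetical labels and model scores: 1 = anomaly, 0 = normal.
y_true   = [0, 0,    1,   0,   1,   0,   0,   1]
y_scores = [0.1, 0.65, 0.9, 0.2, 0.7, 0.4, 0.1, 0.6]

# Recall = TP / (TP + FN), computed at a fixed threshold (0.5 here).
y_pred = [1 if s >= 0.5 else 0 for s in y_scores]
print(recall_score(y_true, y_pred))    # 1.0: all 3 true anomalies are flagged

# AUC is threshold-independent: it measures how well the raw scores
# rank anomalies above normal events.
print(roc_auc_score(y_true, y_scores)) # ~0.933: one normal event (0.65)
                                       # outranks one anomaly (0.6)
```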
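The remaining metrics are summary statistics over anomaly scores. The sketch below shows how they relate, assuming a hypothetical list of per-event scores and a 0.5 flagging threshold:

```python
from statistics import mean, median

# Hypothetical per-event anomaly scores from the Event Log.
event_scores   = [0.05, 0.10, 0.12, 0.15, 0.20, 0.88, 0.95]
anomaly_scores = [s for s in event_scores if s >= 0.5]  # flagged anomalies

print("Mean Event Score:    ", mean(event_scores))      # over all events
print("Mean Anomaly Score:  ", mean(anomaly_scores))    # over anomalies only
print("Median Event Score:  ", median(event_scores))
print("Median Anomaly Score:", median(anomaly_scores))
print("Number of Anomalies: ", len(anomaly_scores))
```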