5. Experiments, Models, Predictions

Goal

Train models, compare results, register the best candidate, and run prediction workflows.

Create an experiment

  1. Open Experiments.

  2. Click New Experiment.

  3. Configure:
     - Dataset/version.
     - Target column.
     - Problem type (classification/regression/time series, if available).
     - Validation strategy and objective metric.

  4. Start the run and monitor its status.
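
The configuration fields in step 3 can be captured in a small structure and sanity-checked before starting a run. This is a minimal sketch; the class and field names are illustrative, not the platform's actual API.

```python
# Hypothetical experiment configuration; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    dataset: str
    dataset_version: int
    target_column: str
    problem_type: str          # "classification", "regression", or "time_series"
    validation_strategy: str   # e.g. "5-fold-cv" or "holdout-20"
    objective_metric: str      # e.g. "auc", "rmse"

    def validate(self) -> list[str]:
        """Return a list of configuration problems; an empty list means OK."""
        errors = []
        if self.problem_type not in {"classification", "regression", "time_series"}:
            errors.append(f"unknown problem type: {self.problem_type}")
        if not self.target_column:
            errors.append("target column is required")
        return errors

cfg = ExperimentConfig("churn", 3, "churned", "classification", "5-fold-cv", "auc")
print(cfg.validate())  # []
```

Validating the configuration up front catches the most common cause of early experiment failure (a missing or wrong target column) before any compute is spent.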

Review results and model registry

  1. Open leaderboard when run completes.

  2. Confirm primary metric and rank ordering.

  3. Open the best model's detail view and inspect:
     - Metrics.
     - Feature importance/explainability (if enabled).
     - Artifact/version metadata.

  4. Register or pin the model according to your workflow.
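
Picking the registration candidate from the leaderboard reduces to a ranking over the objective metric. A minimal sketch, assuming the leaderboard can be exported as a list of records (the rows and metric names below are invented for illustration):

```python
# Hypothetical leaderboard rows; in practice these come from the
# platform's export or API, and the metric name matches the objective.
leaderboard = [
    {"model": "gbm_1", "auc": 0.91},
    {"model": "rf_2", "auc": 0.88},
    {"model": "glm_3", "auc": 0.85},
]

def best_model(rows, metric, higher_is_better=True):
    """Return the top-ranked model name for the given objective metric."""
    key = lambda r: r[metric] if higher_is_better else -r[metric]
    return max(rows, key=key)["model"]

print(best_model(leaderboard, "auc"))  # gbm_1
```

Note the `higher_is_better` flag: error metrics such as RMSE rank in the opposite direction, which is a common source of wrong "best model" picks.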

Run predictions

  1. Open Predictions.

  2. Run single-record prediction from UI form.

  3. Run batch prediction using file upload if available.

  4. Validate output fields:
     - Predicted value/class.
     - Confidence/probability (if applicable).
     - Request timestamp/run reference.
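
The field check in step 4 can be automated against each prediction response. This is a sketch only; the required and optional field names below are placeholders to be replaced with the platform's actual response schema.

```python
# Validate prediction output fields; names are illustrative assumptions.
REQUIRED = {"prediction", "timestamp"}
OPTIONAL = {"probability", "run_reference"}

def validate_output(record: dict) -> list[str]:
    """Return a list of schema problems; an empty list means well-formed."""
    missing = REQUIRED - record.keys()
    unexpected = record.keys() - REQUIRED - OPTIONAL
    problems = [f"missing field: {f}" for f in sorted(missing)]
    problems += [f"unexpected field: {f}" for f in sorted(unexpected)]
    return problems

print(validate_output({"prediction": "churn", "probability": 0.87,
                       "timestamp": "2024-01-01T00:00:00Z"}))  # []
```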

Functional validation checklist

  1. Experiment transitions to a terminal state (completed or failed) without silent failure.

  2. Metrics shown in model detail match leaderboard values.

  3. Prediction output schema is stable across repeated calls.

  4. Batch result row count matches input record count.

  5. Prediction errors return actionable messages.
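
Checklist items 3 and 4 are mechanical enough to express as checks. A minimal sketch, assuming responses are available as dictionaries (shapes shown are invented for illustration):

```python
# Checklist items 3 and 4 as reusable checks; response shapes are illustrative.
def schema_is_stable(responses: list[dict]) -> bool:
    """True if every response exposes the same set of output fields."""
    return len({frozenset(r.keys()) for r in responses}) <= 1

def batch_counts_match(input_rows: list, output_rows: list) -> bool:
    """True if batch prediction returned exactly one row per input record."""
    return len(input_rows) == len(output_rows)

calls = [{"prediction": 0.4, "timestamp": "t1"},
         {"prediction": 0.6, "timestamp": "t2"}]
print(schema_is_stable(calls))  # True
```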

Expected result

  1. At least one model is ready for deployment.

  2. Prediction workflow returns consistent, traceable outputs.

Common errors and recovery

  1. Experiment fails early:
     - Validate the target column and missing-value handling.

  2. Metric looks inconsistent:
     - Confirm the same split/seed settings were used.

  3. Prediction input rejected:
     - Align field names/types with the model's input schema.
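
Aligning a request with the model's input schema (recovery step 3) can be done before submission. This sketch assumes the schema is expressible as a field-to-type mapping; the field names and coercion rules are hypothetical.

```python
# Hypothetical model input schema: field name -> expected Python type.
schema = {"age": int, "income": float, "plan": str}

def align(record: dict) -> dict:
    """Coerce known fields to schema types; raise on missing fields."""
    missing = schema.keys() - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {name: typ(record[name]) for name, typ in schema.items()}

print(align({"age": "42", "income": "55000", "plan": "pro"}))
# {'age': 42, 'income': 55000.0, 'plan': 'pro'}
```

Coercing types client-side surfaces mismatches (e.g. numbers arriving as strings from a CSV upload) with a clear message instead of an opaque rejection.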

Screenshots

Experiments list and run status

Experiment execution and status monitoring.

Model registry and model details

Model registry with metric and artifact metadata.

Prediction execution view

Prediction UI for single and batch inference.