Resource Reference
This page documents every resource class and method in the CorePlexML Python SDK. All methods return Python dictionaries parsed from the server’s JSON responses unless noted otherwise.
ProjectsResource
- class coreplexml.projects.ProjectsResource(http)[source]
Bases: object
Manage CorePlexML projects.
Projects are the top-level organizational unit. Each project contains datasets, experiments, models, and deployments, and each of these resources belongs to exactly one project.
- Parameters:
http (HTTPClient)
- list(limit=50, offset=0, search=None)[source]
List all projects accessible to the authenticated user.
- Parameters:
limit (int) – Maximum number of projects to return (default 50).
offset (int) – Number of projects to skip for pagination.
search (Optional[str]) – Optional search query to filter by name.
- Return type:
dict
- Returns:
Dictionary with items list and total count.
- Raises:
AuthenticationError – If the API key is invalid.
- create(name, description='')[source]
Create a new project.
- Parameters:
name (str) – Project name (must be non-empty).
description (str) – Optional project description.
- Return type:
dict
- Returns:
Created project dictionary with id, name, etc.
- Raises:
ValidationError – If the name is empty.
- get(project_id)[source]
Get project details by ID.
- Parameters:
project_id (str) – UUID of the project.
- Return type:
dict
- Returns:
Project dictionary.
- Raises:
NotFoundError – If the project does not exist.
- update(project_id, name=None, description=None)[source]
Update project name and/or description.
If name is not provided, the current name is preserved automatically (the API requires name on every update).
Methods
list
list(limit=50, offset=0, search=None) -> dict
List all projects accessible to the authenticated user.
- param limit:
Maximum number of projects to return (default 50).
- param offset:
Number of projects to skip for pagination.
- param search:
Optional search query to filter by name.
- returns:
Dictionary with items list and total count.
- raises AuthenticationError:
If the API key is invalid.
projects = client.projects.list(search="churn")
for p in projects["items"]:
    print(f"{p['name']} ({p['id']})")
create
create(name, description="") -> dict
Create a new project.
- param name:
Project name (must be non-empty).
- param description:
Optional project description.
- returns:
Created project dictionary with id, name, created_at, etc.
- raises ValidationError:
If the name is empty.
project = client.projects.create("Fraud Detection", description="Q1 analysis")
get
get(project_id) -> dict
Get project details by ID.
- param project_id:
UUID of the project.
- returns:
Project dictionary.
- raises NotFoundError:
If the project does not exist.
update
update(project_id, name=None, description=None) -> dict
Update a project’s name and/or description. If name is not provided, the
current name is preserved automatically (the API requires name on every
update, and the SDK handles this transparently).
- param project_id:
UUID of the project.
- param name:
New project name (optional).
- param description:
New project description (optional).
- returns:
Updated project dictionary.
client.projects.update(project_id, description="Updated description")
delete
delete(project_id) -> dict
Delete a project and all associated resources (datasets, experiments, models, deployments).
- param project_id:
UUID of the project.
- returns:
Empty dictionary on success.
members
members(project_id) -> dict
List all members of a project.
- param project_id:
UUID of the project.
- returns:
Dictionary with paginated items list plus total, limit, and offset.
members = client.projects.members(project_id)
for m in members["items"]:
    print(f"{m.get('email')} ({m.get('role')})")
add_member
add_member(project_id, email, role="editor") -> dict
Add a user to a project by email address.
- param project_id:
UUID of the project.
- param email:
Email address of the user to add.
- param role:
Member role –
viewer, editor, or admin (default editor).
- returns:
Created membership dictionary.
client.projects.add_member(project_id, "alice@example.com", role="admin")
remove_member
remove_member(project_id, member_id) -> dict
Remove a member from a project.
- param project_id:
UUID of the project.
- param member_id:
UUID of the membership record.
- returns:
Empty dictionary on success.
timeline
timeline(project_id) -> dict
Get the project activity timeline (recent events such as dataset uploads, experiment runs, deployments).
- param project_id:
UUID of the project.
- returns:
Dictionary with events list.
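The events payload can be consumed as shown below. The call is shown as a comment; the event field names (type, created_at) are illustrative assumptions, not confirmed by this reference:

```python
# events = client.projects.timeline(project_id)
# Illustrative payload; event field names are assumptions.
timeline = {
    "events": [
        {"type": "dataset.uploaded", "created_at": "2024-03-01T12:00:00Z"},
        {"type": "experiment.completed", "created_at": "2024-03-01T13:30:00Z"},
    ]
}
for event in timeline["events"]:
    print(f"{event['created_at']}  {event['type']}")
```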
DatasetsResource
- class coreplexml.datasets.DatasetsResource(http)[source]
Bases: object
Manage datasets and dataset versions.
Datasets are the foundation for training experiments. Upload CSV files and CorePlexML will version, profile, and analyze them automatically.
- Parameters:
http (HTTPClient)
Methods
list
list(project_id=None, limit=50, offset=0) -> dict
List datasets, optionally filtered by project.
- param project_id:
Filter by project UUID (optional).
- param limit:
Maximum results (default 50).
- param offset:
Pagination offset.
- returns:
Dictionary with items list and total count.
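Iterating the paginated response follows the same items/total shape as the other list endpoints. The call is shown as a comment and the payload below is illustrative:

```python
# resp = client.datasets.list(project_id=project_id, limit=10)
# Illustrative response using the documented items/total shape:
resp = {
    "items": [
        {"id": "ds-1", "name": "Training Data"},
        {"id": "ds-2", "name": "Holdout"},
    ],
    "total": 2,
}
for ds in resp["items"]:
    print(f"{ds['name']} ({ds['id']})")
```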
upload
upload(project_id, file_path, name, description="") -> dict
Upload a CSV file as a new dataset. The platform automatically detects column types and creates an initial dataset version.
- param project_id:
UUID of the owning project.
- param file_path:
Local path to the CSV file.
- param name:
Display name for the dataset.
- param description:
Optional description.
- returns:
Created dataset dictionary with id, version_id, name, etc.
ds = client.datasets.upload(
    project_id, "data/train.csv", "Training Data"
)
print(f"Dataset {ds['id']}, version {ds['version_id']}")
get
get(dataset_id) -> dict
Get dataset details by ID.
- param dataset_id:
UUID of the dataset.
- returns:
Dataset dictionary.
versions
versions(dataset_id) -> dict
List all versions of a dataset.
- param dataset_id:
UUID of the dataset.
- returns:
Dictionary with paginated items list plus total, limit, and offset.
vers = client.datasets.versions(dataset_id)
for v in vers["items"]:
    print(f" v{v['version']}: {v['row_count']} rows ({v['created_at']})")
quality
quality(dataset_id) -> dict
Get a data quality report for a dataset, including missing-value counts, outlier detection, and completeness scores.
- param dataset_id:
UUID of the dataset.
- returns:
Quality metrics dictionary.
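A quality report might be inspected like this. The call is shown as a comment; the completeness and missing_values keys are assumptions based on the description above, not confirmed field names:

```python
# report = client.datasets.quality(dataset_id)
# Illustrative payload; key names are assumptions.
report = {"completeness": 0.97, "missing_values": {"age": 12, "income": 0}}
worst = max(report["missing_values"], key=report["missing_values"].get)
print(f"completeness={report['completeness']:.0%}, most missing values: {worst}")
```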
columns
columns(dataset_id) -> dict
Get column metadata (names, types, statistics) for a dataset.
- param dataset_id:
UUID of the dataset.
- returns:
Dictionary with columns list.
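Column metadata can be filtered by type, for example to find numeric features. The per-column fields (name, type) below are illustrative assumptions:

```python
# cols = client.datasets.columns(dataset_id)
# Illustrative payload; per-column field names are assumptions.
cols = {"columns": [{"name": "age", "type": "int"},
                    {"name": "income", "type": "float"},
                    {"name": "city", "type": "string"}]}
numeric = [c["name"] for c in cols["columns"] if c["type"] in ("int", "float")]
print(f"numeric columns: {numeric}")
```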
analyze
analyze(dataset_id) -> dict
Run statistical analysis on a dataset (distributions, correlations, etc.).
- param dataset_id:
UUID of the dataset.
- returns:
Analysis results dictionary.
delete
delete(dataset_id) -> dict
Delete a dataset and all its versions.
- param dataset_id:
UUID of the dataset.
- returns:
Empty dictionary on success.
download
download(dataset_id, output_path, format="csv") -> str
Download a dataset to a local file.
- param dataset_id:
UUID of the dataset.
- param output_path:
Local path to save the file.
- param format:
Output format –
csv or parquet (default csv).
- returns:
The output_path string on success.
path = client.datasets.download(dataset_id, "/tmp/export.csv")
print(f"Downloaded to {path}")
ExperimentsResource
- class coreplexml.experiments.ExperimentsResource(http)[source]
Bases: object
Create and manage AutoML experiments.
Experiments run AutoML (H2O by default) to train multiple models on a dataset, automatically selecting the best model based on a chosen metric.
- Parameters:
http (HTTPClient)
- list(project_id=None, limit=50, offset=0)[source]
List experiments, optionally filtered by project.
- create(project_id, dataset_version_id, target_column, name='Experiment', problem_type='classification', config=None, engine=None, engines=None, execution_mode='single', use_gpu=False)[source]
Create a new AutoML experiment.
- Parameters:
project_id (str) – UUID of the owning project.
dataset_version_id (str) – UUID of the dataset version to train on.
target_column (str) – Name of the target (label) column.
name (str) – Experiment name (default Experiment).
problem_type (str) – classification or regression (default classification).
config (Optional[dict]) – Optional training configuration overrides.
engine (Optional[str]) – Preferred engine for single mode (e.g. "h2o" or "flaml").
engines (Optional[list[str]]) – Engine list for single/parallel mode. In single mode, the first item is used.
execution_mode (str) – "single" (default) or "parallel".
use_gpu (bool) – Request a GPU-capable worker when available.
- Return type:
dict
- Returns:
Created experiment dictionary with id and status.
- wait(experiment_id, interval=5.0, timeout=3600.0)[source]
Poll experiment until training completes.
Blocks until the experiment reaches succeeded, failed, or error status.
- Parameters:
experiment_id (str) – UUID of the experiment.
interval (float) – Seconds between polls (default 5.0).
timeout (float) – Maximum seconds to wait (default 3600.0).
- Return type:
dict
- Returns:
Final experiment status dictionary.
- Raises:
CorePlexMLError – If the experiment times out.
Methods
list
list(project_id=None, limit=50, offset=0) -> dict
List experiments, optionally filtered by project.
- param project_id:
Filter by project UUID (optional).
- param limit:
Maximum results (default 50).
- param offset:
Pagination offset.
- returns:
Dictionary with items list and total count.
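A listing can be filtered client-side by status, for example to find failed runs. The call is commented and the payload is illustrative; status values mirror those documented for wait():

```python
# exps = client.experiments.list(project_id=project_id)
# Illustrative response payload:
exps = {
    "items": [
        {"id": "e1", "name": "Price Regressor", "status": "succeeded"},
        {"id": "e2", "name": "Churn v2", "status": "failed"},
    ],
    "total": 2,
}
failed = [e["name"] for e in exps["items"] if e["status"] == "failed"]
print(f"{exps['total']} experiments, failed: {failed}")
```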
create
create(project_id, dataset_version_id, target_column, name="Experiment",
       problem_type="classification", config=None, engine=None, engines=None,
       execution_mode="single", use_gpu=False) -> dict
Create and start a new AutoML experiment.
- param project_id:
UUID of the owning project.
- param dataset_version_id:
UUID of the dataset version to train on.
- param target_column:
Name of the target (label) column.
- param name:
Experiment name (default "Experiment").
- param problem_type:
"classification" or "regression" (default "classification").
- param config:
Optional training configuration overrides (e.g., max_models, max_runtime_secs).
- param engine:
Preferred engine for single mode, e.g. "h2o" or "flaml" (optional).
- param engines:
Engine list for single/parallel mode; in single mode the first item is used (optional).
- param execution_mode:
"single" (default) or "parallel".
- param use_gpu:
Request a GPU-capable worker when available (default False).
- returns:
Created experiment dictionary with id and status.
exp = client.experiments.create(
    project_id=project_id,
    dataset_version_id=version_id,
    target_column="price",
    name="Price Regressor",
    problem_type="regression",
    config={"max_models": 20, "max_runtime_secs": 600},
)
get
get(experiment_id) -> dict
Get experiment details.
- param experiment_id:
UUID of the experiment.
- returns:
Experiment dictionary.
wait
wait(experiment_id, interval=5.0, timeout=3600.0) -> dict
Block until the experiment reaches succeeded, failed, or error
status. Polls the server at the specified interval.
- param experiment_id:
UUID of the experiment.
- param interval:
Seconds between polls (default 5.0).
- param timeout:
Maximum seconds to wait (default 3600.0).
- returns:
Final experiment status dictionary.
- raises CorePlexMLError:
If the experiment times out.
result = client.experiments.wait(exp["id"], interval=10.0, timeout=7200.0)
if result["status"] == "succeeded":
    print("Training completed successfully")
else:
    print(f"Training failed: {result.get('error')}")
delete
delete(experiment_id) -> dict
Delete an experiment and all its trained models.
- param experiment_id:
UUID of the experiment.
- returns:
Empty dictionary on success.
explain
explain(experiment_id) -> dict
Get model explainability data for the experiment’s best model, including feature importance and SHAP values.
- param experiment_id:
UUID of the experiment.
- returns:
Explainability data dictionary.
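Explainability output might be ranked as shown below. The call is commented; the feature_importance key is an assumption based on the description above:

```python
# data = client.experiments.explain(experiment_id)
# Illustrative payload; the feature_importance key is an assumption.
data = {"feature_importance": {"income": 0.41, "age": 0.23, "region": 0.08}}
ranked = sorted(data["feature_importance"].items(), key=lambda kv: -kv[1])
for feature, score in ranked:
    print(f"{feature}: {score:.2f}")
```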
logs
logs(experiment_id) -> dict
Get training logs for an experiment.
- param experiment_id:
UUID of the experiment.
- returns:
Dictionary with logs list.
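The logs list can be printed directly. The call is commented and the log lines below are illustrative, not real server output:

```python
# resp = client.experiments.logs(experiment_id)
# Illustrative response with the documented logs list:
resp = {"logs": ["AutoML started", "Trained model 1/20", "AutoML finished"]}
print("\n".join(resp["logs"]))
```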
ModelsResource
- class coreplexml.models.ModelsResource(http)[source]
Bases: object
Access trained models and make predictions.
Models are produced by AutoML experiments. Each experiment generates one or more models ranked by performance on the validation set.
- Parameters:
http (HTTPClient)
- list(project_id=None, experiment_id=None, limit=50, offset=0)[source]
List models, optionally filtered by project or experiment.
Methods
list
list(project_id=None, experiment_id=None, limit=50, offset=0) -> dict
List models, optionally filtered by project or experiment.
- param project_id:
Filter by project UUID (optional).
- param experiment_id:
Filter by experiment UUID (optional).
- param limit:
Maximum results (default 50).
- param offset:
Pagination offset.
- returns:
Dictionary with items list and total count.
models = client.models.list(experiment_id=exp_id)
for m in models["items"]:
    print(f"{m['algorithm']}: AUC={m['metrics'].get('auc', 'N/A')}")
get
get(model_id) -> dict
Get model details including metrics and algorithm information.
- param model_id:
UUID of the model.
- returns:
Model dictionary.
predict
predict(model_id, inputs, options=None) -> dict
Make predictions directly with a model (without a deployment).
- param model_id:
UUID of the model.
- param inputs:
Feature values – a dict for a single prediction or a list of dicts for batch.
- param options:
Optional prediction options.
- returns:
Prediction results dictionary.
# Single prediction
result = client.models.predict(model_id, {"age": 35, "income": 75000})
# Batch prediction
batch = client.models.predict(model_id, [
    {"age": 35, "income": 75000},
    {"age": 28, "income": 52000},
])
explain
explain(model_id) -> dict
Get feature importance and SHAP values for a model.
- param model_id:
UUID of the model.
- returns:
Explainability data dictionary.
Note
predict_contributions() (SHAP) is not supported for H2O
StackedEnsemble models. The SDK returns an appropriate message in that
case.
parameters
parameters(model_id) -> dict
Get the hyperparameters used to train a model.
- param model_id:
UUID of the model.
- returns:
Dictionary of model hyperparameters.
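The hyperparameter dictionary can be printed as key/value pairs. The call is commented; the parameter names below are illustrative (typical gradient-boosting settings), not guaranteed keys:

```python
# params = client.models.parameters(model_id)
# Illustrative payload; hyperparameter names are assumptions.
params = {"ntrees": 50, "max_depth": 5, "learn_rate": 0.1}
for name, value in sorted(params.items()):
    print(f"{name} = {value}")
```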
delete
delete(model_id) -> dict
Delete a model.
- param model_id:
UUID of the model.
- returns:
Empty dictionary on success.
DeploymentsResource
- class coreplexml.deployments.DeploymentsResource(http)[source]
Bases: object
Deploy models to production endpoints.
Deployments create REST API endpoints for real-time predictions, with support for staging/production stages and canary rollouts.
- Parameters:
http (HTTPClient)
- create(project_id, model_id, name, stage='staging', config=None)[source]
Create a new deployment.
- Parameters:
project_id (str) – UUID of the project.
model_id (str) – UUID of the model to deploy.
name (str) – Deployment name.
stage (str) – "staging" or "production" (default "staging").
config (Optional[dict]) – Optional deployment configuration dictionary.
- Return type:
dict
- Returns:
Created deployment dictionary.
- predict(deployment_id, inputs, options=None)[source]
Make predictions via a deployed model endpoint.
- rollback(deployment_id, to_deployment_id=None, to_model_id=None)[source]
Rollback a deployment to the previous version.
Methods
list
list(project_id, limit=50, offset=0) -> dict
List deployments for a project.
- param project_id:
UUID of the project.
- param limit:
Maximum results (default 50).
- param offset:
Pagination offset.
- returns:
Dictionary with items list and total count.
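The listing can be filtered by stage, for example to find live endpoints. The call is commented; the payload is illustrative, with stage values following the create() documentation below:

```python
# deps = client.deployments.list(project_id)
# Illustrative response payload:
deps = {
    "items": [
        {"id": "dp-1", "name": "Fraud Detector v2", "stage": "production"},
        {"id": "dp-2", "name": "Fraud Detector v3", "stage": "staging"},
    ],
    "total": 2,
}
prod = [d["name"] for d in deps["items"] if d["stage"] == "production"]
print(f"production deployments: {prod}")
```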
create
create(project_id, model_id, name, stage="staging", config=None) -> dict
Create a new deployment for a trained model.
- param project_id:
UUID of the project.
- param model_id:
UUID of the model to deploy.
- param name:
Deployment name.
- param stage:
"staging" or "production" (default "staging").
- param config:
Optional deployment configuration dictionary.
- returns:
Created deployment dictionary with id, stage, status, etc.
dep = client.deployments.create(
    project_id=project_id,
    model_id=best_model_id,
    name="Fraud Detector v2",
    stage="staging",
)
get
get(deployment_id) -> dict
Get deployment details.
- param deployment_id:
UUID of the deployment.
- returns:
Deployment dictionary.
predict
predict(deployment_id, inputs, options=None) -> dict
Make predictions through a deployed model endpoint. This is the recommended method for production inference.
- param deployment_id:
UUID of the deployment.
- param inputs:
Feature values – a dict for a single prediction or a list of dicts for batch.
- param options:
Optional prediction options.
- returns:
Prediction results dictionary.
pred = client.deployments.predict(deployment_id, {
    "amount": 1500.00,
    "merchant_category": "electronics",
    "hour_of_day": 3,
})
print(f"Fraud probability: {pred.get('probabilities', {}).get('1', 'N/A')}")
promote
promote(deployment_id) -> dict
Promote a staging deployment to production.
- param deployment_id:
UUID of the deployment.
- returns:
Updated deployment dictionary.
client.deployments.promote(deployment_id)
print("Promoted to production")
rollback
rollback(deployment_id, to_deployment_id=None, to_model_id=None) -> dict
Rollback a deployment to the previous model version, or to a specific deployment or model if one is given.
- param deployment_id:
UUID of the deployment.
- param to_deployment_id:
UUID of a specific deployment to roll back to (optional).
- param to_model_id:
UUID of a specific model to roll back to (optional).
- returns:
Updated deployment dictionary.
deactivate
deactivate(deployment_id) -> dict
Deactivate a deployment. The endpoint will stop serving predictions.
- param deployment_id:
UUID of the deployment.
- returns:
Updated deployment dictionary.
drift
drift(deployment_id) -> dict
Get drift detection results for a deployment. Returns statistical measures comparing the current input distribution to the training data.
- param deployment_id:
UUID of the deployment.
- returns:
Drift metrics dictionary.
drift = client.deployments.drift(deployment_id)
for feature, score in drift.get("features", {}).items():
    print(f" {feature}: drift_score={score}")
ReportsResource
- class coreplexml.reports.ReportsResource(http)[source]
Bases: object
Generate and manage reports.
Reports provide downloadable PDF/HTML summaries of experiments, models, deployments, project status, and SynthGen analysis.
- Parameters:
http (HTTPClient)
- list(project_id=None, kind=None, limit=50, offset=0)[source]
List reports, optionally filtered by project or kind.
- create(project_id, kind, entity_id, options=None)[source]
Create a new report.
- Parameters:
project_id (str) – UUID of the project.
kind (str) – Report type – "project", "experiment", "model", "deployment", or "synthgen".
entity_id (str) – UUID of the entity to report on.
options (Optional[dict]) – Optional report configuration.
- Return type:
dict
- Returns:
Created report dictionary with id and status.
- wait(report_id, interval=3.0, timeout=300.0)[source]
Poll report until generation completes.
- Parameters:
report_id (str) – UUID of the report.
interval (float) – Seconds between polls (default 3.0).
timeout (float) – Maximum seconds to wait (default 300.0).
- Return type:
dict
- Returns:
Final report status dictionary.
- Raises:
CorePlexMLError – If the report times out.
Methods
list
list(project_id=None, kind=None, limit=50, offset=0) -> dict
List reports, optionally filtered by project or kind.
- param project_id:
Filter by project UUID (optional).
- param kind:
Filter by report type –
"project", "experiment", "model", "deployment", or "synthgen" (optional).
- param limit:
Maximum results (default 50).
- param offset:
Pagination offset.
- returns:
Dictionary with items list and total count.
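Filtering the listing for finished reports might look like this. The call is commented; the payload is illustrative and the "succeeded" status value is an assumption, not a documented constant:

```python
# reports = client.reports.list(project_id=project_id, kind="experiment")
# Illustrative response; the "succeeded" status value is an assumption.
reports = {
    "items": [
        {"id": "r1", "kind": "experiment", "status": "succeeded"},
        {"id": "r2", "kind": "model", "status": "running"},
    ],
    "total": 2,
}
ready = [r["id"] for r in reports["items"] if r["status"] == "succeeded"]
print(f"downloadable reports: {ready}")
```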
create
create(project_id, kind, entity_id, options=None) -> dict
Create a new report. Report generation runs as a background job.
- param project_id:
UUID of the project.
- param kind:
Report type –
"project", "experiment", "model", "deployment", or "synthgen".
- param entity_id:
UUID of the entity to report on.
- param options:
Optional report configuration (e.g., {"llm_insights": True}).
- returns:
Created report dictionary with id and status.
report = client.reports.create(
    project_id=project_id,
    kind="experiment",
    entity_id=experiment_id,
    options={"llm_insights": True},
)
get
get(report_id) -> dict
Get report details and current generation status.
- param report_id:
UUID of the report.
- returns:
Report dictionary.
wait
wait(report_id, interval=3.0, timeout=300.0) -> dict
Poll report until generation completes.
- param report_id:
UUID of the report.
- param interval:
Seconds between polls (default 3.0).
- param timeout:
Maximum seconds to wait (default 300.0).
- returns:
Final report status dictionary.
- raises CorePlexMLError:
If report generation times out.
download
download(report_id, output_path) -> str
Download a generated report to a local file.
- param report_id:
UUID of the report.
- param output_path:
Local path to save the file (e.g., "report.pdf").
- returns:
The output_path string on success.
status = client.reports.wait(report_id)
client.reports.download(report_id, "experiment_report.pdf")
PrivacyResource
- class coreplexml.privacy.PrivacyResource(http)[source]
Bases: object
Privacy Suite – PII detection and data transformation.
Supports 72+ PII types across HIPAA, GDPR, PCI-DSS, and CCPA compliance profiles. Create policies, scan datasets for PII, and apply transformations (masking, hashing, redaction, etc.).
- Parameters:
http (HTTPClient)
- detect(session_id, wait=True, interval=2.0, timeout=300.0)[source]
Run PII detection on a session.
- Parameters:
session_id (str) – UUID of the privacy session.
wait (bool) – Wait for detection to complete (default True).
interval (float) – Poll interval in seconds (default 2.0).
timeout (float) – Maximum wait time in seconds (default 300.0).
- Return type:
dict
- Returns:
Detection results with PII findings.
- transform(session_id, wait=True, interval=2.0, timeout=300.0)[source]
Apply privacy transformations to a session.
- Parameters:
session_id (str) – UUID of the privacy session.
wait (bool) – Wait for transformation to complete (default True).
interval (float) – Poll interval in seconds (default 2.0).
timeout (float) – Maximum wait time in seconds (default 300.0).
- Return type:
dict
- Returns:
Transformation results.
- anonymize(session_id, wait=True, interval=2.0, timeout=300.0)[source]
Apply anonymization transformations to a session.
Alias for transform() – provided for API consistency.
The typical workflow is:
1. Create a privacy policy with a compliance profile.
2. Create a session linking the policy to a dataset.
3. Run detection to find PII.
4. Apply transformations (masking, hashing, redaction, etc.).
5. Retrieve results.
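The steps above can be sketched end to end; the client calls are shown as comments, and the detection payload is an illustrative example of the findings shape shown under detect():

```python
# 1. policy    = client.privacy.create_policy(project_id, "GDPR Scan", profile="gdpr")
# 2. session   = client.privacy.create_session(policy["id"], dataset_id)
# 3. detection = client.privacy.detect(session["id"])
# 4. client.privacy.transform(session["id"])
# 5. results   = client.privacy.results(session["id"])
# Illustrative detection payload:
detection = {"findings": [{"column": "ssn", "pii_type": "US_SSN", "count": 120}]}
flagged = [f["column"] for f in detection["findings"]]
print(f"PII found in columns: {flagged}")
```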
Methods
list_policies
list_policies(project_id=None, limit=50, offset=0) -> dict
List privacy policies, optionally filtered by project.
- param project_id:
Filter by project UUID (optional).
- param limit:
Maximum results (default 50).
- param offset:
Pagination offset.
- returns:
Dictionary with items list and total count.
create_policy
create_policy(project_id, name, profile=None, description="") -> dict
Create a new privacy policy.
- param project_id:
UUID of the project.
- param name:
Policy name.
- param profile:
Compliance profile –
"hipaa", "gdpr", "pci_dss", or "ccpa" (optional).
- param description:
Optional description.
- returns:
Created policy dictionary.
policy = client.privacy.create_policy(
    project_id=project_id,
    name="HIPAA Compliance",
    profile="hipaa",
    description="Scan patient data for PHI",
)
get_policy
get_policy(policy_id) -> dict
Get privacy policy details.
- param policy_id:
UUID of the policy.
- returns:
Policy dictionary.
delete_policy
delete_policy(policy_id) -> dict
Delete a privacy policy.
- param policy_id:
UUID of the policy.
- returns:
Empty dictionary on success.
create_session
create_session(policy_id, dataset_id) -> dict
Create a privacy processing session linking a policy to a dataset.
- param policy_id:
UUID of the privacy policy.
- param dataset_id:
UUID of the dataset to scan.
- returns:
Created session dictionary.
detect
detect(session_id) -> dict
Run PII detection on a session. Scans all columns in the linked dataset using the policy’s compliance profile rules.
- param session_id:
UUID of the privacy session.
- returns:
Detection results with PII findings per column.
detection = client.privacy.detect(session_id)
for finding in detection.get("findings", []):
    print(f" Column '{finding['column']}': {finding['pii_type']} ({finding['count']} occurrences)")
transform
transform(session_id) -> dict
Apply privacy transformations to a session based on the policy rules (masking, hashing, redaction, generalization, etc.).
- param session_id:
UUID of the privacy session.
- returns:
Transformation results dictionary.
results
results(session_id) -> dict
Get complete results for a privacy session, including both detection findings and transformation outputs.
- param session_id:
UUID of the privacy session.
- returns:
Full session results dictionary.
SynthGenResource
- class coreplexml.synthgen.SynthGenResource(http)[source]
Bases: object
Synthetic data generation with deep learning models.
Train CTGAN, CopulaGAN, TVAE, or Gaussian Copula models on real datasets to generate statistically similar synthetic data.
- Parameters:
http (HTTPClient)
- create_model(project_id, dataset_version_id, name, model_type='ctgan', config=None)[source]
Train a new synthetic data model.
- Parameters:
project_id (str) – UUID of the project.
dataset_version_id (str) – UUID of the dataset version to train on.
name (str) – Model name.
model_type (str) – Model architecture – "ctgan", "copulagan", "tvae", or "gaussian_copula" (default "ctgan").
config (Optional[dict]) – Optional training configuration.
- Return type:
dict
- Returns:
Created model dictionary with id and status.
- generate(model_id, num_rows=1000, seed=None, wait=True, interval=2.0, timeout=300.0)[source]
Generate synthetic data rows.
- Parameters:
model_id (str) – UUID of the trained SynthGen model.
num_rows (int) – Number of synthetic rows to generate (default 1000).
seed (Optional[int]) – Random seed for reproducibility (optional).
wait (bool) – Wait for async generation job completion.
interval (float) – Poll interval in seconds when wait=True.
timeout (float) – Maximum wait time in seconds when wait=True.
- Return type:
dict
- Returns:
Generation results dictionary.
Methods
list_models
list_models(project_id=None, limit=50, offset=0) -> dict
List synthetic data models.
- param project_id:
Filter by project UUID (optional).
- param limit:
Maximum results (default 50).
- param offset:
Pagination offset.
- returns:
Dictionary with items list and total count.
create_model
create_model(project_id, dataset_version_id, name, model_type="ctgan",
config=None) -> dict
Train a new synthetic data model. Training runs as a background job.
- param project_id:
UUID of the project.
- param dataset_version_id:
UUID of the dataset version to train on.
- param name:
Model name.
- param model_type:
Model architecture –
"ctgan", "copulagan", "tvae", or "gaussian_copula" (default "ctgan").
- param config:
Optional training configuration (e.g., {"epochs": 300, "batch_size": 500}).
- returns:
Created model dictionary with id and status.
synth = client.synthgen.create_model(
    project_id=project_id,
    dataset_version_id=version_id,
    name="Transaction Generator",
    model_type="ctgan",
    config={"epochs": 500},
)
get_model
get_model(model_id) -> dict
Get synthetic data model details.
- param model_id:
UUID of the SynthGen model.
- returns:
Model dictionary.
generate
generate(model_id, num_rows=1000, seed=None, wait=True, interval=2.0, timeout=300.0) -> dict
Generate synthetic data rows from a trained model.
- param model_id:
UUID of the trained SynthGen model.
- param num_rows:
Number of synthetic rows to generate (default 1000).
- param seed:
Random seed for reproducibility (optional).
- param wait:
Wait for the asynchronous generation job to complete (default True).
- param interval:
Poll interval in seconds when wait=True (default 2.0).
- param timeout:
Maximum wait time in seconds when wait=True (default 300.0).
- returns:
Generation results dictionary.
result = client.synthgen.generate(synth_model_id, num_rows=5000, seed=42)
print(f"Generated {result.get('num_rows', 0)} rows")
delete_model
delete_model(model_id) -> dict
Delete a synthetic data model.
- param model_id:
UUID of the SynthGen model.
- returns:
Empty dictionary on success.
StudioResource
- class coreplexml.studio.StudioResource(http)[source]
Bases: object
What-If Analysis – scenario-based model exploration.
Create sessions tied to a deployed model, define alternative scenarios by changing input features, and compare predictions side-by-side.
- Parameters:
http (HTTPClient)
- create_session(project_id, deployment_id, baseline_input)[source]
Create a new What-If Analysis session.
Methods
create_session
create_session(project_id, deployment_id, baseline_input) -> dict
Create a new What-If Analysis session with a baseline input.
- param project_id:
UUID of the project.
- param deployment_id:
UUID of the deployment to analyze.
- param baseline_input:
Feature values for the baseline scenario (dict).
- returns:
Created session dictionary.
session = client.studio.create_session(
    project_id=project_id,
    deployment_id=deployment_id,
    baseline_input={"age": 30, "income": 60000, "credit_score": 700},
)
get_session
get_session(session_id) -> dict
Get session details, including the baseline and all scenarios.
- param session_id:
UUID of the studio session.
- returns:
Session dictionary.
create_scenario
create_scenario(session_id, name, changes=None) -> dict
Add a new scenario to a session. A scenario overrides one or more features from the baseline to explore counterfactual predictions.
- param session_id:
UUID of the studio session.
- param name:
Scenario name.
- param changes:
Feature overrides compared to baseline (optional dict).
- returns:
Created scenario dictionary.
scenario = client.studio.create_scenario(
    session_id=session["id"],
    name="Higher Income",
    changes={"income": 120000},
)
run_scenario
run_scenario(scenario_id) -> dict
Execute a scenario to get its prediction from the deployed model.
- param scenario_id:
UUID of the scenario.
- returns:
Scenario results with prediction values.
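A scenario result might be compared against the baseline like this. The call is commented; the prediction and baseline_prediction field names are assumptions about the result shape:

```python
# outcome = client.studio.run_scenario(scenario["id"])
# Illustrative payload; field names are assumptions.
outcome = {"prediction": 0.82, "baseline_prediction": 0.65}
delta = outcome["prediction"] - outcome["baseline_prediction"]
print(f"prediction changed by {delta:+.2f} vs baseline")
```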
compare
compare(session_id) -> dict
Compare all scenarios in a session side by side.
- param session_id:
UUID of the studio session.
- returns:
Comparison results dictionary with all scenario predictions.
comparison = client.studio.compare(session["id"])
for scenario in comparison.get("scenarios", []):
    print(f" {scenario['name']}: {scenario['prediction']}")