Resource Reference
==================

This page documents every resource class and method in the CorePlexML Python SDK. All methods return Python dictionaries parsed from the server's JSON responses unless noted otherwise.

.. contents::
   :local:
   :depth: 2

ProjectsResource
----------------

.. autoclass:: coreplexml.projects.ProjectsResource
   :members:
   :undoc-members:
   :show-inheritance:

Projects are the top-level organizational unit. Every dataset, experiment, model, and deployment belongs to exactly one project.

Methods
^^^^^^^

list
""""

.. code-block:: python

   list(limit=50, offset=0, search=None) -> dict

List all projects accessible to the authenticated user.

:param limit: Maximum number of projects to return (default 50).
:param offset: Number of projects to skip for pagination.
:param search: Optional search query to filter by name.
:returns: Dictionary with ``items`` list and ``total`` count.
:raises AuthenticationError: If the API key is invalid.

.. code-block:: python

   projects = client.projects.list(search="churn")
   for p in projects["items"]:
       print(f"{p['name']} ({p['id']})")

create
""""""

.. code-block:: python

   create(name, description="") -> dict

Create a new project.

:param name: Project name (must be non-empty).
:param description: Optional project description.
:returns: Created project dictionary with ``id``, ``name``, ``created_at``, etc.
:raises ValidationError: If the name is empty.

.. code-block:: python

   project = client.projects.create("Fraud Detection", description="Q1 analysis")

get
"""

.. code-block:: python

   get(project_id) -> dict

Get project details by ID.

:param project_id: UUID of the project.
:returns: Project dictionary.
:raises NotFoundError: If the project does not exist.

update
""""""

.. code-block:: python

   update(project_id, name=None, description=None) -> dict

Update a project's name and/or description.
If ``name`` is not provided, the current name is preserved automatically (the API requires ``name`` on every update, and the SDK handles this transparently).

:param project_id: UUID of the project.
:param name: New project name (optional).
:param description: New project description (optional).
:returns: Updated project dictionary.

.. code-block:: python

   client.projects.update(project_id, description="Updated description")

delete
""""""

.. code-block:: python

   delete(project_id) -> dict

Delete a project and all associated resources (datasets, experiments, models, deployments).

:param project_id: UUID of the project.
:returns: Empty dictionary on success.

members
"""""""

.. code-block:: python

   members(project_id) -> dict

List all members of a project.

:param project_id: UUID of the project.
:returns: Dictionary with paginated ``items`` list plus ``total``, ``limit``, and ``offset``.

.. code-block:: python

   members = client.projects.members(project_id)
   for m in members["items"]:
       print(f"{m.get('email')} ({m.get('role')})")

add_member
""""""""""

.. code-block:: python

   add_member(project_id, email, role="editor") -> dict

Add a user to a project by email address.

:param project_id: UUID of the project.
:param email: Email address of the user to add.
:param role: Member role -- ``viewer``, ``editor``, or ``admin`` (default ``editor``).
:returns: Created membership dictionary.

.. code-block:: python

   client.projects.add_member(project_id, "alice@example.com", role="admin")

remove_member
"""""""""""""

.. code-block:: python

   remove_member(project_id, member_id) -> dict

Remove a member from a project.

:param project_id: UUID of the project.
:param member_id: UUID of the membership record.
:returns: Empty dictionary on success.

timeline
""""""""

.. code-block:: python

   timeline(project_id) -> dict

Get the project activity timeline (recent events such as dataset uploads, experiment runs, deployments).

:param project_id: UUID of the project.
:returns: Dictionary with ``events`` list.

----

DatasetsResource
----------------

.. autoclass:: coreplexml.datasets.DatasetsResource
   :members:
   :undoc-members:
   :show-inheritance:

Datasets are the foundation for training experiments. Upload CSV files and CorePlexML will version, profile, and analyze them automatically.

Methods
^^^^^^^

list
""""

.. code-block:: python

   list(project_id=None, limit=50, offset=0) -> dict

List datasets, optionally filtered by project.

:param project_id: Filter by project UUID (optional).
:param limit: Maximum results (default 50).
:param offset: Pagination offset.
:returns: Dictionary with ``items`` list and ``total`` count.

upload
""""""

.. code-block:: python

   upload(project_id, file_path, name, description="") -> dict

Upload a CSV file as a new dataset. The platform automatically detects column types and creates an initial dataset version.

:param project_id: UUID of the owning project.
:param file_path: Local path to the CSV file.
:param name: Display name for the dataset.
:param description: Optional description.
:returns: Created dataset dictionary with ``id``, ``version_id``, ``name``, etc.

.. code-block:: python

   ds = client.datasets.upload(
       project_id, "data/train.csv", "Training Data"
   )
   print(f"Dataset {ds['id']}, version {ds['version_id']}")

get
"""

.. code-block:: python

   get(dataset_id) -> dict

Get dataset details by ID.

:param dataset_id: UUID of the dataset.
:returns: Dataset dictionary.

versions
""""""""

.. code-block:: python

   versions(dataset_id) -> dict

List all versions of a dataset.

:param dataset_id: UUID of the dataset.
:returns: Dictionary with paginated ``items`` list plus ``total``, ``limit``, and ``offset``.

.. code-block:: python

   vers = client.datasets.versions(dataset_id)
   for v in vers["items"]:
       print(f"  v{v['version']}: {v['row_count']} rows ({v['created_at']})")

quality
"""""""
.. code-block:: python

   quality(dataset_id) -> dict

Get a data quality report for a dataset, including missing-value counts, outlier detection, and completeness scores.

:param dataset_id: UUID of the dataset.
:returns: Quality metrics dictionary.

columns
"""""""

.. code-block:: python

   columns(dataset_id) -> dict

Get column metadata (names, types, statistics) for a dataset.

:param dataset_id: UUID of the dataset.
:returns: Dictionary with ``columns`` list.

analyze
"""""""

.. code-block:: python

   analyze(dataset_id) -> dict

Run statistical analysis on a dataset (distributions, correlations, etc.).

:param dataset_id: UUID of the dataset.
:returns: Analysis results dictionary.

delete
""""""

.. code-block:: python

   delete(dataset_id) -> dict

Delete a dataset and all its versions.

:param dataset_id: UUID of the dataset.
:returns: Empty dictionary on success.

download
""""""""

.. code-block:: python

   download(dataset_id, output_path, format="csv") -> str

Download a dataset to a local file.

:param dataset_id: UUID of the dataset.
:param output_path: Local path to save the file.
:param format: Output format -- ``csv`` or ``parquet`` (default ``csv``).
:returns: The ``output_path`` string on success.

.. code-block:: python

   path = client.datasets.download(dataset_id, "/tmp/export.csv")
   print(f"Downloaded to {path}")

----

ExperimentsResource
-------------------

.. autoclass:: coreplexml.experiments.ExperimentsResource
   :members:
   :undoc-members:
   :show-inheritance:

Experiments run H2O AutoML to train multiple models on a dataset, automatically selecting the best model based on a chosen metric.

Methods
^^^^^^^

list
""""

.. code-block:: python

   list(project_id=None, limit=50, offset=0) -> dict

List experiments, optionally filtered by project.

:param project_id: Filter by project UUID (optional).
:param limit: Maximum results (default 50).
:param offset: Pagination offset.
:returns: Dictionary with ``items`` list and ``total`` count.

create
""""""
.. code-block:: python

   create(project_id, dataset_version_id, target_column,
          name="Experiment", problem_type="classification",
          config=None) -> dict

Create and start a new AutoML experiment.

:param project_id: UUID of the owning project.
:param dataset_version_id: UUID of the dataset version to train on.
:param target_column: Name of the target (label) column.
:param name: Experiment name (default ``"Experiment"``).
:param problem_type: ``"classification"`` or ``"regression"`` (default ``"classification"``).
:param config: Optional training configuration overrides (e.g., ``max_models``, ``max_runtime_secs``).
:returns: Created experiment dictionary with ``id`` and ``status``.

.. code-block:: python

   exp = client.experiments.create(
       project_id=project_id,
       dataset_version_id=version_id,
       target_column="price",
       name="Price Regressor",
       problem_type="regression",
       config={"max_models": 20, "max_runtime_secs": 600},
   )

get
"""

.. code-block:: python

   get(experiment_id) -> dict

Get experiment details.

:param experiment_id: UUID of the experiment.
:returns: Experiment dictionary.

wait
""""

.. code-block:: python

   wait(experiment_id, interval=5.0, timeout=3600.0) -> dict

Block until the experiment reaches ``succeeded``, ``failed``, or ``error`` status. Polls the server at the specified interval.

:param experiment_id: UUID of the experiment.
:param interval: Seconds between polls (default 5.0).
:param timeout: Maximum seconds to wait (default 3600.0).
:returns: Final experiment status dictionary.
:raises CorePlexMLError: If the experiment times out.

.. code-block:: python

   result = client.experiments.wait(exp["id"], interval=10.0, timeout=7200.0)
   if result["status"] == "succeeded":
       print("Training completed successfully")
   else:
       print(f"Training failed: {result.get('error')}")

delete
""""""

.. code-block:: python

   delete(experiment_id) -> dict

Delete an experiment and all its trained models.

:param experiment_id: UUID of the experiment.
:returns: Empty dictionary on success.
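The calls above compose into a simple train-and-select routine. The sketch below is illustrative only: ``train_best_model`` is a hypothetical helper name, it uses only the documented SDK calls, and it assumes that ``models.list`` returns items ranked best-first (per the ModelsResource description of models being ranked by validation performance).

```python
def train_best_model(client, project_id, version_id, target_column):
    """Run an AutoML experiment end to end and return the top-ranked model.

    Hypothetical helper; assumes ``models.list`` returns items
    ranked best-first.
    """
    # Kick off training on the given dataset version.
    exp = client.experiments.create(
        project_id=project_id,
        dataset_version_id=version_id,
        target_column=target_column,
    )
    # Block until training finishes (polling every 10 seconds).
    result = client.experiments.wait(exp["id"], interval=10.0)
    if result["status"] != "succeeded":
        raise RuntimeError(f"Training failed: {result.get('error')}")
    # Fetch the experiment's models and keep the top-ranked one.
    models = client.models.list(experiment_id=exp["id"])
    return models["items"][0]
```

Error statuses raise rather than return, so callers can treat a returned model as ready for deployment.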
explain
"""""""

.. code-block:: python

   explain(experiment_id) -> dict

Get model explainability data for the experiment's best model, including feature importance and SHAP values.

:param experiment_id: UUID of the experiment.
:returns: Explainability data dictionary.

logs
""""

.. code-block:: python

   logs(experiment_id) -> dict

Get training logs for an experiment.

:param experiment_id: UUID of the experiment.
:returns: Dictionary with ``logs`` list.

----

ModelsResource
--------------

.. autoclass:: coreplexml.models.ModelsResource
   :members:
   :undoc-members:
   :show-inheritance:

Models are produced by AutoML experiments. Each experiment generates one or more models ranked by performance on the validation set.

Methods
^^^^^^^

list
""""

.. code-block:: python

   list(project_id=None, experiment_id=None, limit=50, offset=0) -> dict

List models, optionally filtered by project or experiment.

:param project_id: Filter by project UUID (optional).
:param experiment_id: Filter by experiment UUID (optional).
:param limit: Maximum results (default 50).
:param offset: Pagination offset.
:returns: Dictionary with ``items`` list and ``total`` count.

.. code-block:: python

   models = client.models.list(experiment_id=exp_id)
   for m in models["items"]:
       print(f"{m['algorithm']}: AUC={m['metrics'].get('auc', 'N/A')}")

get
"""

.. code-block:: python

   get(model_id) -> dict

Get model details including metrics and algorithm information.

:param model_id: UUID of the model.
:returns: Model dictionary.

predict
"""""""

.. code-block:: python

   predict(model_id, inputs, options=None) -> dict

Make predictions directly with a model (without a deployment).

:param model_id: UUID of the model.
:param inputs: Feature values -- a ``dict`` for a single prediction or a ``list`` of dicts for batch.
:param options: Optional prediction options.
:returns: Prediction results dictionary.
.. code-block:: python

   # Single prediction
   result = client.models.predict(model_id, {"age": 35, "income": 75000})

   # Batch prediction
   batch = client.models.predict(model_id, [
       {"age": 35, "income": 75000},
       {"age": 28, "income": 52000},
   ])

explain
"""""""

.. code-block:: python

   explain(model_id) -> dict

Get feature importance and SHAP values for a model.

:param model_id: UUID of the model.
:returns: Explainability data dictionary.

.. note::

   ``predict_contributions()`` (SHAP) is not supported for H2O StackedEnsemble models. The SDK returns an appropriate message in that case.

parameters
""""""""""

.. code-block:: python

   parameters(model_id) -> dict

Get the hyperparameters used to train a model.

:param model_id: UUID of the model.
:returns: Dictionary of model hyperparameters.

delete
""""""

.. code-block:: python

   delete(model_id) -> dict

Delete a model.

:param model_id: UUID of the model.
:returns: Empty dictionary on success.

----

DeploymentsResource
-------------------

.. autoclass:: coreplexml.deployments.DeploymentsResource
   :members:
   :undoc-members:
   :show-inheritance:

Deployments create REST API endpoints for real-time predictions, with support for staging/production stages and canary rollouts.

Methods
^^^^^^^

list
""""

.. code-block:: python

   list(project_id, limit=50, offset=0) -> dict

List deployments for a project.

:param project_id: UUID of the project.
:param limit: Maximum results (default 50).
:param offset: Pagination offset.
:returns: Dictionary with ``items`` list and ``total`` count.

create
""""""

.. code-block:: python

   create(project_id, model_id, name, stage="staging", config=None) -> dict

Create a new deployment for a trained model.

:param project_id: UUID of the project.
:param model_id: UUID of the model to deploy.
:param name: Deployment name.
:param stage: ``"staging"`` or ``"production"`` (default ``"staging"``).
:param config: Optional deployment configuration dictionary.
:returns: Created deployment dictionary with ``id``, ``stage``, ``status``, etc.

.. code-block:: python

   dep = client.deployments.create(
       project_id=project_id,
       model_id=best_model_id,
       name="Fraud Detector v2",
       stage="staging",
   )

get
"""

.. code-block:: python

   get(deployment_id) -> dict

Get deployment details.

:param deployment_id: UUID of the deployment.
:returns: Deployment dictionary.

predict
"""""""

.. code-block:: python

   predict(deployment_id, inputs, options=None) -> dict

Make predictions through a deployed model endpoint. This is the recommended method for production inference.

:param deployment_id: UUID of the deployment.
:param inputs: Feature values -- a ``dict`` for a single prediction or a ``list`` of dicts for batch.
:param options: Optional prediction options.
:returns: Prediction results dictionary.

.. code-block:: python

   pred = client.deployments.predict(deployment_id, {
       "amount": 1500.00,
       "merchant_category": "electronics",
       "hour_of_day": 3,
   })
   print(f"Fraud probability: {pred.get('probabilities', {}).get('1', 'N/A')}")

promote
"""""""

.. code-block:: python

   promote(deployment_id) -> dict

Promote a staging deployment to production.

:param deployment_id: UUID of the deployment.
:returns: Updated deployment dictionary.

.. code-block:: python

   client.deployments.promote(deployment_id)
   print("Promoted to production")

rollback
""""""""

.. code-block:: python

   rollback(deployment_id) -> dict

Roll back a deployment to the previous model version.

:param deployment_id: UUID of the deployment.
:returns: Updated deployment dictionary.

deactivate
""""""""""

.. code-block:: python

   deactivate(deployment_id) -> dict

Deactivate a deployment. The endpoint will stop serving predictions.

:param deployment_id: UUID of the deployment.
:returns: Updated deployment dictionary.

drift
"""""

.. code-block:: python

   drift(deployment_id) -> dict

Get drift detection results for a deployment. Returns statistical measures comparing the current input distribution to the training data.
:param deployment_id: UUID of the deployment.
:returns: Drift metrics dictionary.

.. code-block:: python

   drift = client.deployments.drift(deployment_id)
   for feature, score in drift.get("features", {}).items():
       print(f"  {feature}: drift_score={score}")

----

ReportsResource
---------------

.. autoclass:: coreplexml.reports.ReportsResource
   :members:
   :undoc-members:
   :show-inheritance:

Reports provide downloadable PDF summaries of experiments, models, deployments, project status, and SynthGen outputs.

Methods
^^^^^^^

list
""""

.. code-block:: python

   list(project_id=None, kind=None, limit=50, offset=0) -> dict

List reports, optionally filtered by project or kind.

:param project_id: Filter by project UUID (optional).
:param kind: Filter by report type -- ``"project"``, ``"experiment"``, ``"model"``, ``"deployment"``, or ``"synthgen"`` (optional).
:param limit: Maximum results (default 50).
:param offset: Pagination offset.
:returns: Dictionary with ``items`` list and ``total`` count.

create
""""""

.. code-block:: python

   create(project_id, kind, entity_id, options=None) -> dict

Create a new report. Report generation runs as a background job.

:param project_id: UUID of the project.
:param kind: Report type -- ``"project"``, ``"experiment"``, ``"model"``, ``"deployment"``, or ``"synthgen"``.
:param entity_id: UUID of the entity to report on.
:param options: Optional report configuration (e.g., ``{"llm_insights": True}``).
:returns: Created report dictionary with ``id`` and ``status``.

.. code-block:: python

   report = client.reports.create(
       project_id=project_id,
       kind="experiment",
       entity_id=experiment_id,
       options={"llm_insights": True},
   )

get
"""

.. code-block:: python

   get(report_id) -> dict

Get report details and current generation status.

:param report_id: UUID of the report.
:returns: Report dictionary.

wait
""""

.. code-block:: python

   wait(report_id, interval=3.0, timeout=300.0) -> dict

Poll a report until generation completes.

:param report_id: UUID of the report.
:param interval: Seconds between polls (default 3.0).
:param timeout: Maximum seconds to wait (default 300.0).
:returns: Final report status dictionary.
:raises CorePlexMLError: If report generation times out.

download
""""""""

.. code-block:: python

   download(report_id, output_path) -> str

Download a generated report to a local file.

:param report_id: UUID of the report.
:param output_path: Local path to save the file (e.g., ``"report.pdf"``).
:returns: The ``output_path`` string on success.

.. code-block:: python

   status = client.reports.wait(report_id)
   client.reports.download(report_id, "experiment_report.pdf")

----

PrivacyResource
---------------

.. autoclass:: coreplexml.privacy.PrivacyResource
   :members:
   :undoc-members:
   :show-inheritance:

Privacy Suite provides PII detection and data transformation capabilities. It supports 72+ PII types across HIPAA, GDPR, PCI-DSS, and CCPA compliance profiles.

The typical workflow is:

1. Create a privacy policy with a compliance profile.
2. Create a session linking the policy to a dataset.
3. Run detection to find PII.
4. Apply transformations (masking, hashing, redaction, etc.).
5. Retrieve results.

Methods
^^^^^^^

list_policies
"""""""""""""

.. code-block:: python

   list_policies(project_id=None, limit=50, offset=0) -> dict

List privacy policies, optionally filtered by project.

:param project_id: Filter by project UUID (optional).
:param limit: Maximum results (default 50).
:param offset: Pagination offset.
:returns: Dictionary with ``items`` list and ``total`` count.

create_policy
"""""""""""""

.. code-block:: python

   create_policy(project_id, name, profile=None, description="") -> dict

Create a new privacy policy.

:param project_id: UUID of the project.
:param name: Policy name.
:param profile: Compliance profile -- ``"hipaa"``, ``"gdpr"``, ``"pci_dss"``, or ``"ccpa"`` (optional).
:param description: Optional description.
:returns: Created policy dictionary.
.. code-block:: python

   policy = client.privacy.create_policy(
       project_id=project_id,
       name="HIPAA Compliance",
       profile="hipaa",
       description="Scan patient data for PHI",
   )

get_policy
""""""""""

.. code-block:: python

   get_policy(policy_id) -> dict

Get privacy policy details.

:param policy_id: UUID of the policy.
:returns: Policy dictionary.

delete_policy
"""""""""""""

.. code-block:: python

   delete_policy(policy_id) -> dict

Delete a privacy policy.

:param policy_id: UUID of the policy.
:returns: Empty dictionary on success.

create_session
""""""""""""""

.. code-block:: python

   create_session(policy_id, dataset_id) -> dict

Create a privacy processing session linking a policy to a dataset.

:param policy_id: UUID of the privacy policy.
:param dataset_id: UUID of the dataset to scan.
:returns: Created session dictionary.

detect
""""""

.. code-block:: python

   detect(session_id) -> dict

Run PII detection on a session. Scans all columns in the linked dataset using the policy's compliance profile rules.

:param session_id: UUID of the privacy session.
:returns: Detection results with PII findings per column.

.. code-block:: python

   detection = client.privacy.detect(session_id)
   for finding in detection.get("findings", []):
       print(f"  Column '{finding['column']}': {finding['pii_type']} "
             f"({finding['count']} occurrences)")

transform
"""""""""

.. code-block:: python

   transform(session_id) -> dict

Apply privacy transformations to a session based on the policy rules (masking, hashing, redaction, generalization, etc.).

:param session_id: UUID of the privacy session.
:returns: Transformation results dictionary.

results
"""""""

.. code-block:: python

   results(session_id) -> dict

Get complete results for a privacy session, including both detection findings and transformation outputs.

:param session_id: UUID of the privacy session.
:returns: Full session results dictionary.

----

SynthGenResource
----------------
.. autoclass:: coreplexml.synthgen.SynthGenResource
   :members:
   :undoc-members:
   :show-inheritance:

SynthGen trains generative models (CTGAN, CopulaGAN, TVAE, or Gaussian Copula) on real datasets to produce statistically similar synthetic data.

Methods
^^^^^^^

list_models
"""""""""""

.. code-block:: python

   list_models(project_id=None, limit=50, offset=0) -> dict

List synthetic data models.

:param project_id: Filter by project UUID (optional).
:param limit: Maximum results (default 50).
:param offset: Pagination offset.
:returns: Dictionary with ``items`` list and ``total`` count.

create_model
""""""""""""

.. code-block:: python

   create_model(project_id, dataset_version_id, name,
                model_type="ctgan", config=None) -> dict

Train a new synthetic data model. Training runs as a background job.

:param project_id: UUID of the project.
:param dataset_version_id: UUID of the dataset version to train on.
:param name: Model name.
:param model_type: Model architecture -- ``"ctgan"``, ``"copulagan"``, ``"tvae"``, or ``"gaussian_copula"`` (default ``"ctgan"``).
:param config: Optional training configuration (e.g., ``{"epochs": 300, "batch_size": 500}``).
:returns: Created model dictionary with ``id`` and ``status``.

.. code-block:: python

   synth = client.synthgen.create_model(
       project_id=project_id,
       dataset_version_id=version_id,
       name="Transaction Generator",
       model_type="ctgan",
       config={"epochs": 500},
   )

get_model
"""""""""

.. code-block:: python

   get_model(model_id) -> dict

Get synthetic data model details.

:param model_id: UUID of the SynthGen model.
:returns: Model dictionary.

generate
""""""""

.. code-block:: python

   generate(model_id, num_rows=1000, seed=None) -> dict

Generate synthetic data rows from a trained model.

:param model_id: UUID of the trained SynthGen model.
:param num_rows: Number of synthetic rows to generate (default 1000).
:param seed: Random seed for reproducibility (optional).
:returns: Generation results dictionary.
.. code-block:: python

   result = client.synthgen.generate(synth_model_id, num_rows=5000, seed=42)
   print(f"Generated {result.get('num_rows', 0)} rows")

delete_model
""""""""""""

.. code-block:: python

   delete_model(model_id) -> dict

Delete a synthetic data model.

:param model_id: UUID of the SynthGen model.
:returns: Empty dictionary on success.

----

StudioResource
--------------

.. autoclass:: coreplexml.studio.StudioResource
   :members:
   :undoc-members:
   :show-inheritance:

What-If Analysis allows you to create sessions tied to a deployed model, define alternative scenarios by changing input features, and compare predictions side by side.

Methods
^^^^^^^

create_session
""""""""""""""

.. code-block:: python

   create_session(project_id, deployment_id, baseline_input) -> dict

Create a new What-If Analysis session with a baseline input.

:param project_id: UUID of the project.
:param deployment_id: UUID of the deployment to analyze.
:param baseline_input: Feature values for the baseline scenario (dict).
:returns: Created session dictionary.

.. code-block:: python

   session = client.studio.create_session(
       project_id=project_id,
       deployment_id=deployment_id,
       baseline_input={"age": 30, "income": 60000, "credit_score": 700},
   )

get_session
"""""""""""

.. code-block:: python

   get_session(session_id) -> dict

Get session details, including the baseline and all scenarios.

:param session_id: UUID of the studio session.
:returns: Session dictionary.

create_scenario
"""""""""""""""

.. code-block:: python

   create_scenario(session_id, name, changes=None) -> dict

Add a new scenario to a session. A scenario overrides one or more features from the baseline to explore counterfactual predictions.

:param session_id: UUID of the studio session.
:param name: Scenario name.
:param changes: Feature overrides compared to the baseline (optional dict).
:returns: Created scenario dictionary.
.. code-block:: python

   scenario = client.studio.create_scenario(
       session_id=session["id"],
       name="Higher Income",
       changes={"income": 120000},
   )

run_scenario
""""""""""""

.. code-block:: python

   run_scenario(scenario_id) -> dict

Execute a scenario to get its prediction from the deployed model.

:param scenario_id: UUID of the scenario.
:returns: Scenario results with prediction values.

compare
"""""""

.. code-block:: python

   compare(session_id) -> dict

Compare all scenarios in a session side by side.

:param session_id: UUID of the studio session.
:returns: Comparison results dictionary with all scenario predictions.

.. code-block:: python

   comparison = client.studio.compare(session["id"])
   for scenario in comparison.get("scenarios", []):
       print(f"  {scenario['name']}: {scenario['prediction']}")
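The scenario methods compose naturally: create each scenario, run it, and collect its prediction. The sketch below is a minimal illustration built only from the documented calls; ``run_what_if`` is a hypothetical helper name, not part of the SDK.

```python
def run_what_if(client, session_id, scenarios):
    """Create and execute a batch of What-If scenarios.

    Hypothetical helper: ``scenarios`` maps scenario names to
    feature-override dicts; returns a mapping of scenario name to
    its prediction result.
    """
    results = {}
    for name, changes in scenarios.items():
        # Register the scenario against the session's baseline...
        sc = client.studio.create_scenario(session_id, name, changes=changes)
        # ...then execute it to get the deployed model's prediction.
        results[name] = client.studio.run_scenario(sc["id"])
    return results
```

For side-by-side output, ``compare(session_id)`` remains the simpler choice; a helper like this is useful when you want the raw per-scenario result dictionaries.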