API Endpoint Options

What API endpoints are available for data ingestion, model training, and inference?

The available API endpoints for data ingestion, model training, and inference are as follows:

  1. Data Ingestion:

    • The batches endpoint provides a simpler way to upload and manage input files. A batch is a RESTful object containing a user-defined list of files; each batch has a unique batch ID that can be passed as the input to an API-based app run. This endpoint replaces the filesystem endpoint as the preferred approach to API-based file operations (a minimal sketch of this workflow appears at the end of this answer).
  2. Model Training:

    • With AI Hub, we use zero-shot, fully pretrained models, so there is no model training for customers to perform. As such, there are no model training API endpoints.
  3. Inference:

    • The runs endpoints provide a streamlined way to create an app run, check its status, and retrieve app run results. They replace the previous run apps, job status, and run results endpoints and are the preferred approach to running AI Hub apps by API (see the sketch after this list).
    • The Get run results endpoint can return classification confidence scores when the include_confidence_scores query parameter is set to true.
    • The conversations endpoint allows creating and managing conversations, including uploading and querying documents.
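
To make the inference flow concrete, the sketch below creates an app run, polls its status, and then fetches results with the include_confidence_scores query parameter. The base URL, endpoint paths, header names, and payload/response field names are assumptions made for illustration only; consult the AI Hub API reference for the exact routes and schemas.

```python
import time
import requests

# Assumption: base URL, paths, headers, and field names below are illustrative
# placeholders, not the documented AI Hub routes.
BASE_URL = "https://aihub.example.com/api/v2"
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

# 1. Create an app run against a previously created batch of input files.
create_resp = requests.post(
    f"{BASE_URL}/runs",
    headers=HEADERS,
    json={"app_name": "my-app", "batch_id": "<BATCH_ID>"},  # hypothetical payload
)
create_resp.raise_for_status()
run_id = create_resp.json()["id"]  # hypothetical response field

# 2. Poll the run status until it reports completion.
while True:
    status_resp = requests.get(f"{BASE_URL}/runs/{run_id}", headers=HEADERS)
    status_resp.raise_for_status()
    if status_resp.json().get("status") in ("COMPLETE", "FAILED"):
        break
    time.sleep(5)

# 3. Fetch results, asking for classification confidence scores.
results_resp = requests.get(
    f"{BASE_URL}/runs/{run_id}/results",
    headers=HEADERS,
    params={"include_confidence_scores": "true"},
)
results_resp.raise_for_status()
print(results_resp.json())
```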

Together, these endpoints cover uploading input data, running apps, and retrieving inference results.
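
For data ingestion, a similarly hedged sketch shows creating a batch, uploading a file into it, and obtaining the batch ID to use as app run input. Again, the URL, routes, and field names are placeholders assumed for illustration, not the documented API surface.

```python
import requests

# Assumption: base URL, paths, and field names below are illustrative placeholders;
# check the AI Hub API reference for the exact batch routes.
BASE_URL = "https://aihub.example.com/api/v2"
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

# 1. Create an empty batch; the returned batch ID identifies this set of files.
batch_resp = requests.post(
    f"{BASE_URL}/batches",
    headers=HEADERS,
    json={"name": "invoices-2024"},  # hypothetical payload
)
batch_resp.raise_for_status()
batch_id = batch_resp.json()["id"]  # hypothetical response field

# 2. Upload a file into the batch.
with open("invoice_001.pdf", "rb") as f:
    upload_resp = requests.put(
        f"{BASE_URL}/batches/{batch_id}/files/invoice_001.pdf",  # hypothetical route
        headers={**HEADERS, "Content-Type": "application/octet-stream"},
        data=f,
    )
upload_resp.raise_for_status()

# 3. The batch ID can now be passed as input when creating an app run
#    (see the runs sketch above).
print(f"Batch {batch_id} is ready for use as app run input")
```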