Predictor Machine Learning

Enterprise Feature in beta

This feature is available only to customers with an enterprise Platform subscription. Contact us to discuss enabling it on your account.

Predictor™ is where you get, train, and deploy dedicated machine learning solutions in the EVRYTHNG Platform. It offers several pre-packaged models for complex supply-chain integrity problems, which we are developing with internal and external domain experts. You can select and install models that Predictor™ then trains with the data in your EVRYTHNG account, so each model is tailored to your processes and products. You can read more about it in our walkthrough.

A successfully trained model transparently runs predictions whenever a user-generated action is created, for example when a user scans a product.

Note that this API is in early development and is subject to frequent changes.

Change in API URL

For this API, the beta domain is https://ml.evrythng.io

API Status Beta:
/machineLearning/models
/machineLearning/models/{modelType}
/machineLearning/models/{modelType}/{modelId}
/machineLearning/models/{modelType}/{modelId}/datasets
/machineLearning/models/{modelType}/{modelId}/datasets/{datasetId}
/machineLearning/models/{modelType}/{modelId}/deployments
/machineLearning/models/{modelType}/{modelId}/deployments/{deploymentId}
/machineLearning/models/{modelType}/{modelId}/deployments/{deploymentId}/predict

Callback payload sent after the dataset is downloaded.

.secret (string, required)
    Secret key used to authorize calling the callback

.state (string, required, one of 'downloaded', 'running', 'failed')
    State of the dataset

Object containing URLs to datasets.

.datasetUrls (array of strings, required)
    One or more URLs of EVRYTHNG resources that make up the training set

Object containing the prediction data items.

.data (array of objects, required)
    Input on which the model generates a prediction.

An object containing a list of datasets.

.datasets (array of strings, required)
    The list of datasets on which the model will be trained.

Object containing the name of the model type.

.name (string, required)
    The name of the new type of model.


To create a model type, send a POST request to the /machineLearning/models endpoint with the ModelTypePayloadDocument in the body.
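A minimal sketch of this call in Python with the requests library, assuming the API key is sent in the Authorization header and that the ModelTypePayloadDocument is the name object described above; the model type name is a placeholder:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"  # assumed: API key passed in the Authorization header

# ModelTypePayloadDocument: the name of the new model type (placeholder value)
payload = {"name": "YOUR_MODEL_TYPE_NAME"}

response = requests.post(
    f"{BASE_URL}/machineLearning/models",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())
```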


To get a list of model types that can be activated by customers, send a GET request to the /machineLearning/models endpoint.
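A sketch of the same request in Python, under the same authentication assumption:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"  # assumed: API key passed in the Authorization header

# List the model types available for activation
response = requests.get(
    f"{BASE_URL}/machineLearning/models",
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print(response.json())
```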


To get a list of model instances, send a GET request to the /machineLearning/models endpoint with the model type in the path. If the context query parameter is set to true (?context=true), the response includes information about the model type.
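For example, in Python (the model type value is a placeholder and the Authorization header is an assumption):

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"        # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"  # placeholder model type

# List model instances; context=true also requests model type information
response = requests.get(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}",
    headers={"Authorization": API_KEY},
    params={"context": "true"},
)
response.raise_for_status()
print(response.json())
```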


To get the metadata for a deployed model, send a GET request to the /machineLearning/models endpoint with the model type and model ID in the path.
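A possible request in Python, with placeholder identifiers and the assumed Authorization header:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"        # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"  # placeholder values
MODEL_ID = "YOUR_MODEL_ID"

response = requests.get(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}",
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print(response.json())
```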


To create a training dataset, send a POST request to the /machineLearning/models/datasets endpoint with the model type and model ID in the path and the DatasetDefinitionDocument in the body. The dataset is described by one or more EVRYTHNG resource URLs.
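A sketch in Python, assuming the DatasetDefinitionDocument is the datasetUrls object described above; the resource URL, model type, and model ID are placeholders:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"        # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"  # placeholder values
MODEL_ID = "YOUR_MODEL_ID"

# DatasetDefinitionDocument: EVRYTHNG resource URLs that make up the training set
payload = {"datasetUrls": ["https://api.evrythng.com/YOUR_RESOURCE_PATH"]}

response = requests.post(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/datasets",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())
```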


To get a list of training datasets, send a GET request to the /machineLearning/models/datasets endpoint with the model type and model ID in the path.
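In Python, with placeholder identifiers and the same authentication assumption:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"        # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"  # placeholder values
MODEL_ID = "YOUR_MODEL_ID"

response = requests.get(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/datasets",
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print(response.json())
```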


To get a training dataset, send a GET request to the /machineLearning/models/datasets endpoint with the model type, model ID, and dataset ID in the path.
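For example, in Python (placeholder identifiers, assumed Authorization header):

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"          # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"    # placeholder values
MODEL_ID = "YOUR_MODEL_ID"
DATASET_ID = "YOUR_DATASET_ID"

response = requests.get(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/datasets/{DATASET_ID}",
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print(response.json())
```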


To update a dataset, send a PUT request to the /machineLearning/models/datasets endpoint with the model type, model ID, and dataset ID in the path and the DatasetCallbackDocument in the body.
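A sketch in Python, assuming the DatasetCallbackDocument is the secret/state object described above; the secret and identifiers are placeholders:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"          # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"    # placeholder values
MODEL_ID = "YOUR_MODEL_ID"
DATASET_ID = "YOUR_DATASET_ID"

# DatasetCallbackDocument: secret key and new state ('downloaded', 'running', or 'failed')
payload = {"secret": "YOUR_CALLBACK_SECRET", "state": "downloaded"}

response = requests.put(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/datasets/{DATASET_ID}",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())
```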


To create a model deployment of the specified model type, send a POST request to the /machineLearning/models/deployments endpoint with the model type and model ID in the path and the DeploymentConfigDocument in the body.
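A sketch in Python, assuming the DeploymentConfigDocument is the datasets-list object described above; the dataset ID and other identifiers are placeholders:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"        # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"  # placeholder values
MODEL_ID = "YOUR_MODEL_ID"

# DeploymentConfigDocument: the datasets the model will be trained on
payload = {"datasets": ["YOUR_DATASET_ID"]}

response = requests.post(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/deployments",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())
```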


To get a list of deployed models, send a GET request to the /machineLearning/models/deployments endpoint with the model type and model ID in the path.
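In Python, with the same assumptions as above:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"        # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"  # placeholder values
MODEL_ID = "YOUR_MODEL_ID"

response = requests.get(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/deployments",
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print(response.json())
```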


To get metadata about a model deployment, send a GET request to the /machineLearning/models/deployments endpoint with the model type, model ID, and deployment ID in the path.
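For example, in Python (placeholder identifiers, assumed Authorization header):

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"              # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"        # placeholder values
MODEL_ID = "YOUR_MODEL_ID"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"

response = requests.get(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/deployments/{DEPLOYMENT_ID}",
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print(response.json())
```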


To create a prediction, send a POST request to the /machineLearning/models/deployments/predict endpoint with the model type, model ID, and deployment ID in the path and the PredictionPayloadDocument in the body.
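A sketch in Python, assuming the PredictionPayloadDocument is the data object described above; the shape of each data item depends on the model, so the item shown is a placeholder:

```python
import requests

BASE_URL = "https://ml.evrythng.io"
API_KEY = "YOUR_API_KEY"              # assumed authentication header
MODEL_TYPE = "YOUR_MODEL_TYPE"        # placeholder values
MODEL_ID = "YOUR_MODEL_ID"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"

# PredictionPayloadDocument: the input items to score; each item's fields are model-specific
payload = {"data": [{"field": "MODEL_SPECIFIC_INPUT"}]}

response = requests.post(
    f"{BASE_URL}/machineLearning/models/{MODEL_TYPE}/{MODEL_ID}/deployments/{DEPLOYMENT_ID}/predict",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())
```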