Inference
POST /v2/models/${MODEL_NAME}/versions/${MODEL_VERSION}/infer
An inference request is made with an HTTP POST to an inference endpoint. The HTTP request body contains the Inference Request JSON Object; the corresponding response body contains either the Inference Response JSON Object or the Inference Response JSON Error Object. See Inference Request Examples for sample HTTP/REST requests and responses.
Request
Path Parameters
MODEL_NAME (string, required)
MODEL_VERSION (string, required)
Content-Type: application/json
Body
id (string)
parameters (object)
inputs (object[], required)
outputs (object[])
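A minimal sketch of assembling the request body in Python. The tensor name, shape, datatype, and data below are illustrative assumptions, not values prescribed by the protocol:

```python
import json

def build_infer_request(inputs, request_id=None, parameters=None):
    """Assemble an Inference Request JSON Object.

    `inputs` is a list of tensor dicts, each carrying the required
    fields: name, shape, datatype, and data.
    """
    body = {"inputs": inputs}
    if request_id is not None:
        body["id"] = request_id
    if parameters:
        body["parameters"] = parameters
    return body

# Hypothetical input tensor: a 1x4 FP32 vector.
request = build_infer_request(
    inputs=[{
        "name": "input-0",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [0.1, 0.2, 0.3, 0.4],
    }],
    request_id="req-1",
)
payload = json.dumps(request)  # body for the HTTP POST
```

`id` and `parameters` are optional, so the helper only emits them when supplied; the serialized `payload` is what goes into the POST body.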
Responses

200 OK
Content-Type: application/json

Schema:
model_name (string, required)
model_version (string)
id (string)
parameters (object)
outputs (object[], required)
Example:

{
  "model_name": "string",
  "model_version": "string",
  "id": "string",
  "parameters": {},
  "outputs": [
    {
      "name": "string",
      "shape": [0],
      "datatype": "string",
      "parameters": {},
      "data": [null, 0, "string", true]
    }
  ]
}
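The `data` field of each output tensor is a flattened array; a reader reconstructs the tensor using `shape`. A sketch in Python (the sample response below is invented for illustration):

```python
import json

# Hypothetical 200 response body with one 2x2 FP32 output tensor.
sample = json.loads("""
{
  "model_name": "my-model",
  "model_version": "1",
  "outputs": [
    {"name": "output-0", "shape": [2, 2], "datatype": "FP32",
     "data": [1.0, 2.0, 3.0, 4.0]}
  ]
}
""")

def reshape(flat, shape):
    """Fold a flat data list into nested lists matching shape (row-major)."""
    if len(shape) <= 1:
        return list(flat)
    step = len(flat) // shape[0]
    return [reshape(flat[i * step:(i + 1) * step], shape[1:])
            for i in range(shape[0])]

out = sample["outputs"][0]
tensor = reshape(out["data"], out["shape"])
# tensor == [[1.0, 2.0], [3.0, 4.0]]
```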
400 Bad Request
Content-Type: application/json

Schema:
error (string)
Example:

{
  "error": "string"
}
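On failure the server returns a non-200 status whose body is the Inference Response JSON Error Object above, so a caller can branch on the status code and surface `error`. A sketch assuming the status and body text have already been obtained from the HTTP client (the helper itself is hypothetical):

```python
import json

def parse_infer_response(status_code, body_text):
    """Return the outputs list on success; raise on an error response."""
    body = json.loads(body_text)
    if status_code == 200:
        return body["outputs"]
    # Inference Response JSON Error Object: {"error": "<message>"}
    raise RuntimeError(
        f"inference failed ({status_code}): {body.get('error', 'unknown')}")

try:
    parse_infer_response(400, '{"error": "model not ready"}')
except RuntimeError as exc:
    message = str(exc)
```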
Example request (C#, HttpClient). The host, model name, and input tensor below are placeholders; substitute your server's values:

var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Post,
    "http://localhost:8000/v2/models/my-model/versions/1/infer");
request.Headers.Add("Accept", "application/json");
var content = new StringContent(
    "{\"inputs\": [{\"name\": \"input-0\", \"shape\": [1, 4], \"datatype\": \"FP32\", \"data\": [0.1, 0.2, 0.3, 0.4]}]}");
content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
request.Content = content;
var response = await client.SendAsync(request);
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());