Endpoints

Our API lets you retrieve results from predefined elements (datasets or scripts) as JSON data. You can also apply custom filters to your requests. Note that the API currently returns at most 1 million rows of data per query.

For an API request you need three things:
  1. SF user account to generate an API access token
  2. SF platform backend URL of your environment
  3. ID of the predefined element (dataset/script)
First, you need an API access token, which can be generated by each user. This token is equivalent to your credentials so you will have the same permissions as the user who created this API access token. For information on how to generate such an API access token see Get your access token.
Second, you need the backend URL of your environment. You can see this URL on your login screen. You only need the first part of the URL, which usually looks like exampleapi.senseforce.io
SF platform backend URL
The third and last thing you need for your API request is the ID of the element you want to request. For a dataset, for example, you can see this ID when you open the desired dataset in the SF platform: it is the last part of the URL shown in the address bar (see example below).
Dataset ID
Double-check the headers before making a request. Request libraries (like requests in Python) and tools like Postman auto-generate some headers, so you usually don't have to set them manually, but this is not always the case, especially for Host or Content-Type.

Below you can find information about endpoints structure and required parameters.
POST https://<your senseforce backend platform url>/api/dataset/execute/<id>
Execute a dataset through Senseforce API

To get all data from the dataset, send a request with an empty array as the body. Below you can find a sample Python script to do so.
import requests
import json
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer <your API access token>'}

# An empty filter array returns all data from the dataset
filters = []
response = requests.post(url, headers=headers, json=filters)

# Parse the JSON response into a pandas DataFrame
parsed_data = json.loads(response.text)
df = DataFrame(parsed_data)
Requesting a dataset with original labels
By default, all response element property names (dataset column labels) are converted to first letter lower case.
To avoid this behavior and keep the original label letter case, add the parameter useOriginalLabels to the request and set it to true (/api/dataset/execute/<id>?useOriginalLabels=true).
To use converted column labels, you can explicitly set useOriginalLabels to false, or simply omit the parameter.
Example of useOriginalLabels parameter usage.
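As a sketch of how the query parameter is assembled (the backend URL and dataset ID below are placeholders, not real values):

```python
from urllib.parse import urlencode

# Hypothetical backend URL and dataset ID, for illustration only
base_url = "https://exampleapi.senseforce.io"
dataset_id = "00000000-0000-0000-0000-000000000000"

# Append useOriginalLabels=true to keep the original column-label letter case
query = urlencode({"useOriginalLabels": "true"})
url = f"{base_url}/api/dataset/execute/{dataset_id}?{query}"
print(url)
# A POST request to this URL (with the usual headers and an empty-array body)
# then returns the dataset with its original column labels.
```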

All filters defined in the dataset itself are always applied; on top of those, you can apply additional filters in the request body. To define a filter, create a structure like the one shown in the example below.
In the example below, a filter on the column named "device" is applied using the Like operator (7), so the result only contains values matching "vienna-prater-ferris-wheel-motor1".
[{
    "clause": {
        "type": "string",
        "operator": 7,
        "parameters": [{
            "value": "vienna-prater-ferris-wheel-motor1"
        }]
    },
    "columnName": "device"
}]
Within these clause objects (filter definitions), all operators that are available on the SF Platform can be used, with some restrictions. Not every operator can be applied to every column, because of the column's datatype. The operators also differ in their number of parameters: most take exactly one, some take none (e.g. IsEmpty, IsNotEmpty), and some take two (e.g. Between).
A summary of all available filter operators and their restrictions is given in the table below:

| Operator | Operator id | # params | Required parameter datatype | Required column datatype |
| --- | --- | --- | --- | --- |
| LessThan | 1 | 1 | same as column | integer \| long \| double |
| GreaterThan | 2 | 1 | same as column | integer \| long \| double |
| LessThanOrEqualTo | 3 | 1 | same as column | integer \| long \| double |
| GreaterThanOrEqualTo | 4 | 1 | same as column | integer \| long \| double |
| Equal | 5 | 1 | same as column | integer \| long \| double \| string |
| NotEqual | 6 | 1 | same as column | integer \| long \| double \| string |
| Like | 7 | 1 | string | string |
| RegExpMatch | 8 | 1 | string | string |
| NotRegExpMatch | 9 | 1 | string | string |
| Between | 10 | 2 | same as column | integer \| long \| double \| timestamp |
| In | 11 | 1...n | same as column | each datatype allowed |
| IsEmpty | 12 | 0 | - | each datatype allowed |
| IsNotEmpty | 13 | 0 | - | each datatype allowed |
| CustomToday | 14 | 0 | - | timestamp |
| CustomThisWeek | 15 | 0 | - | timestamp |
| CustomThisMonth | 16 | 0 | - | timestamp |
| CustomLastThreeMonths | 17 | 0 | - | timestamp |
| CustomLastXMinutes | 18 | 1 | integer | timestamp |
| CustomDay | 19 | 1 | timestamp | timestamp |
| NotLike | 20 | 1 | string | string |
| CustomYesterday | 21 | 0 | - | timestamp |
| CustomLastXDays | 22 | 1 | integer | timestamp |
| CustomLastXWeeks | 23 | 1 | integer | timestamp |
| CustomLastXMonths | 24 | 1 | integer | timestamp |
| CustomLastWeek | 25 | 0 | - | timestamp |
| CustomLastMonth | 26 | 0 | - | timestamp |
| CustomRelativeBetween | 27 | 2 | integer | timestamp |
| NotIn | 28 | 1...n | same as column | each datatype allowed |
NOTE: "timestamp" values are integers representing the Unix timestamp in milliseconds (not seconds).
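Since the API expects millisecond Unix timestamps, a small helper can convert Python datetimes. The values below reproduce the two Between parameters used in the extended example (the column name is taken from that example; the helper name is our own):

```python
from datetime import datetime, timezone

def to_unix_ms(dt: datetime) -> int:
    # The API expects Unix timestamps in milliseconds, not seconds
    return int(dt.timestamp() * 1000)

start = to_unix_ms(datetime(2020, 10, 30, 7, 43, 49, tzinfo=timezone.utc))
end = to_unix_ms(datetime(2020, 10, 30, 10, 30, 29, tzinfo=timezone.utc))

# A Between filter (operator 10) takes two timestamp parameters
timestamp_filter = {
    "clause": {
        "type": "timestamp",
        "operator": 10,
        "parameters": [{"value": start}, {"value": end}],
    },
    "columnName": "(sf-messages) Timestamp",
}
print(start, end)  # 1604043829000 1604053829000
```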
Below you can find an extended version of the example Python script where additional filters are applied.
import requests
import json
from pandas import DataFrame

# Operator IDs as listed in the filter operators table
class operator:
    LessThan = 1
    GreaterThan = 2
    LessThanOrEqualTo = 3
    GreaterThanOrEqualTo = 4
    Equal = 5
    NotEqual = 6
    Like = 7
    RegExpMatch = 8
    NotRegExpMatch = 9
    Between = 10
    In = 11
    IsEmpty = 12
    IsNotEmpty = 13
    CustomToday = 14
    CustomThisWeek = 15
    CustomThisMonth = 16
    CustomLastThreeMonths = 17
    CustomLastXMinutes = 18
    CustomDay = 19
    NotLike = 20
    CustomYesterday = 21
    CustomLastXDays = 22
    CustomLastXWeeks = 23
    CustomLastXMonths = 24
    CustomLastWeek = 25
    CustomLastMonth = 26
    CustomRelativeBetween = 27
    NotIn = 28

class datatype:
    timestamp = "timestamp"
    string = "string"
    number = "number"

url = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer <your API access token>'}

filters = [{
    "clause": {
        "type": datatype.number,  # GreaterThan requires a numeric column
        "operator": operator.GreaterThan,
        "parameters": [{
            "value": 2000
        }]
    },
    "columnName": "compressed size"
},
{
    "clause": {
        "type": datatype.string,
        "operator": operator.NotEqual,
        "parameters": [{
            "value": "Thing XY"
        }]
    },
    "columnName": "(sf-messages) Thing"
},
{
    "clause": {
        "type": datatype.timestamp,
        "operator": operator.Between,
        "parameters": [{
            "value": 1604043829000
        },
        {
            "value": 1604053829000
        }]
    },
    "columnName": "(sf-messages) Timestamp"
}]

response = requests.post(url, headers=headers, json=filters)
parsed_data = json.loads(response.text)
df = DataFrame(parsed_data)
POST https://<your senseforce backend platform url>/api/script/execute/{scriptId}
Execute a script through Senseforce API
The body parameters ("scriptFilters" and "datasetFilters") both contain lists of the same filter object structure; see the filter object structure below. "datasetFilters" can contain filter objects associated with dataset columns, and these filters can target columns from multiple datasets (a script can work with one or more datasets).
Filter object structure

Let's use a demo script and see how it can be executed via the Senseforce API. The following script has a "value" variable and outputs this variable as its result.
To execute a script without any filters, send a request with an empty object as the body. Below you can find a C# request sample:
var client = new RestClient("https://<your senseforce backend platform url>/api/script/execute/7157b7c7-dd56-46ee-93e2-2955cd91cfdc");
var request = new RestRequest(Method.POST);
request.AddHeader("Authorization", "Bearer <your API access token>");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
Console.WriteLine(response.Content);
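For comparison, the same no-filter script execution can be sketched in Python using only the standard library (the backend URL and token below are placeholders; the script ID mirrors the C# sample):

```python
import json
from urllib.request import Request, urlopen

# Placeholder backend URL, script ID and token, mirroring the C# sample above
url = "https://example.senseforce.io/api/script/execute/7157b7c7-dd56-46ee-93e2-2955cd91cfdc"

req = Request(
    url,
    data=json.dumps({}).encode("utf-8"),  # empty object: run the script without filters
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your API access token>",
    },
    method="POST",
)
# with urlopen(req) as response:      # uncomment with a real URL and token
#     print(json.load(response))
```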

In the example below, a script filter for the column named "value" is applied using the Like operator (7), so the result only contains values matching "hello from script". The request body will look like:
request.AddParameter("application/json",
    @"{
        ""scriptFilters"": [
            {
                ""clause"": {
                    ""type"": ""string"",
                    ""operator"": 7,
                    ""parameters"": [{ ""value"": ""hello from script"" }]
                },
                ""columnName"": ""value""
            }
        ]
    }", ParameterType.RequestBody);
A summary of all available filter operators and their restrictions is given in the filter operators table above.

Let's use a demo script that uses two datasets, so you can see how dataset filters can be applied.
In the example below, two dataset filters are set, targeting different datasets. The request body will look like:
request.AddParameter("application/json",
    @"{
        ""datasetFilters"": [
            {
                ""clause"": {
                    ""type"": ""string"",
                    ""operator"": 7,
                    ""parameters"": [{ ""value"": ""virtual_event"" }]
                },
                ""columnName"": ""(another_test) Thing""
            },
            {
                ""clause"": {
                    ""type"": ""string"",
                    ""operator"": 7,
                    ""parameters"": [{ ""value"": ""virtual_event"" }]
                },
                ""columnName"": ""(test) Thing""
            }
        ]
    }", ParameterType.RequestBody);

Let's use a demo script that uses two datasets, so you can see how dataset filters and script filters can be combined.
In the example below, one script filter and two dataset filters are set. The request body will look like:
request.AddParameter("application/json",
    @"{
        ""scriptFilters"": [
            {
                ""clause"": {
                    ""type"": ""number"",
                    ""operator"": 5,
                    ""parameters"": [{ ""value"": ""285"" }]
                },
                ""columnName"": ""size""
            }
        ],
        ""datasetFilters"": [
            {
                ""clause"": {
                    ""type"": ""string"",
                    ""operator"": 7,
                    ""parameters"": [{ ""value"": ""virtual_event"" }]
                },
                ""columnName"": ""(another_test) Thing""
            },
            {
                ""clause"": {
                    ""type"": ""string"",
                    ""operator"": 7,
                    ""parameters"": [{ ""value"": ""virtual_event"" }]
                },
                ""columnName"": ""(test) Thing""
            }
        ]
    }", ParameterType.RequestBody);
Dataset filters are applied first; the script filter is applied afterwards, at the end.
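The same combined body can be built as a plain Python dict and serialized with json.dumps, which avoids the quote-escaping needed in the C# string literal (column names and values are the same illustrative placeholders as above):

```python
import json

# Combined script and dataset filters, mirroring the C# sample above
body = {
    "scriptFilters": [
        {
            "clause": {
                "type": "number",
                "operator": 5,  # Equal
                "parameters": [{"value": "285"}],
            },
            "columnName": "size",
        }
    ],
    "datasetFilters": [
        {
            "clause": {
                "type": "string",
                "operator": 7,  # Like
                "parameters": [{"value": "virtual_event"}],
            },
            "columnName": "(another_test) Thing",
        },
        {
            "clause": {
                "type": "string",
                "operator": 7,
                "parameters": [{"value": "virtual_event"}],
            },
            "columnName": "(test) Thing",
        },
    ],
}
payload = json.dumps(body)
# response = requests.post(url, headers=headers, data=payload)  # with real url/headers
```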