How to debug your scripts locally
Please check the previous page, Endpoints, for further details on how to obtain data from our API.
Prerequisites
An IDE set up for your script language, e.g.:
VS Code with the plugins for the respective script language
The script opened in the SF platform
Generated variable names
In the SF platform, the execution of datasets for scripts and the assignment of their results to variables happen in the background. These variable names are sanitized and deduplicated, which means they depend on the names of all the datasets, the column names, and the order in which they are assigned to the script. It is therefore no simple task to "guess" the variable name that is created in the background.
Suggested approach
To be able to switch easily between local development and the SF platform, you should start the script with intermediate variables and assign the dataset variables to them.
Assigning variables in SF
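As a rough sketch of this pattern (shown in Python; the generated names below are only placeholders, since the real names are created by the platform as described above), the start of a script in SF could look like this:

# Sketch of the variable setup inside the SF script editor.
# "generated_variable_1" and "generated_variable_2" are placeholders for the
# variable names that the platform generates in the background.
script_variable_1 = generated_variable_1  # e.g. a timestamp column
script_variable_2 = generated_variable_2  # e.g. some other column

# From here on, only work with script_variable_1 and script_variable_2.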
Retrieving data from SF
Here is a minimal example of how to retrieve data from SF and assign the columns to variables.
R

library(httr)
library(jsonlite)

rest_url <- "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
header_auth <- c("Authorization" = "Bearer <your API access token>")
header_type <- c("Content-Type" = "application/json")
headers <- add_headers(header_auth, header_type)
req <- POST(rest_url, body = "[]", headers)
stop_for_status(req)
res_df <- data.frame(fromJSON(content(req, "text", "application/json")))
script_variable_1 <- res_df$timestamp
script_variable_2 <- res_df$someColumn
Python

import requests
import json
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer <your API access token>"}
filters = []
response = requests.post(url, headers=headers, json=filters)
data = response.text
parsed_data = json.loads(data)
df = DataFrame(parsed_data)
script_variable_1 = df["timestamp"]
script_variable_2 = df["someColumn"]
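Optionally, when debugging locally on a regular basis, you can keep the backend URL and the API token out of the script body, for example by reading them from environment variables. This is purely a local convenience and not required by the platform; the names SF_URL and SF_TOKEN below are illustrative.

import os
import requests

# Optional local convenience (sketch): read the connection details from the
# environment instead of hardcoding them in the script.
url = os.environ["SF_URL"]      # https://<your senseforce backend platform url>/api/dataset/execute/<id>
token = os.environ["SF_TOKEN"]  # your API access token
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + token}
response = requests.post(url, headers=headers, json=[])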
Interchangeable code
With this setup, you can now use these variables the same way locally and in SF. Therefore you can copy the interchangeable part of your local script to SF and vice versa.
For flawless copying, it is also advisable to make use of the result variables in the local script as well.
The interchangeable part of the code is everything after the setup of the variables and before any debug code of the local implementation (e.g. print(variable)).
Senseforce platform
Local code
R

library(httr)
library(jsonlite)

rest_url <- "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
header_auth <- c("Authorization" = "Bearer <your API access token>")
header_type <- c("Content-Type" = "application/json")
headers <- add_headers(header_auth, header_type)
req <- POST(rest_url, body = "[]", headers)
stop_for_status(req)
res_df <- data.frame(fromJSON(content(req, "text", "application/json")))
# variable setup
script_variable_1 <- res_df$timestamp
script_variable_2 <- res_df$someColumn
# start - interchangeable code
# ======================================================================
script_variable_2 <- script_variable_2 + 1000
result1 <- script_variable_1
result2 <- script_variable_2
# ======================================================================
# end - interchangeable code
print(result1)
print(result2)
Python

import requests
import json
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer <your API access token>"}
filters = []
response = requests.post(url, headers=headers, json=filters)
data = response.text
parsed_data = json.loads(data)
df = DataFrame(parsed_data)
# variable setup
script_variable_1 = df["timestamp"]
script_variable_2 = df["someColumn"]
# start - interchangeable code
# ======================================================================
script_variable_2 = [x + 1000 for x in script_variable_2]
result1 = script_variable_1
result2 = script_variable_2
# ======================================================================
# end - interchangeable code
print(result1)
print(result2)
More complex examples
The following shows a more complex example. Further details can be found in the Endpoints section.
Multiple datasets: Some scripts use multiple datasets; for this, it is advisable to abstract the loading of datasets into a helper function.
Filters: When it is necessary to simulate the filters from dashboards or automations, the API offers the possibility to add filters to a dataset request.
R

library(httr)
library(jsonlite)

url <- "https://<your senseforce backend platform url>/api/dataset/execute/"
auth <- "Bearer <your API access token>"

loadDataset <- function(id, filters, limit, offset) {
  rest_url <- paste(url, id, sep = "")
  rest_url <- paste(rest_url, "?limit=", limit, "&offset=", offset, sep = "")
  header_auth <- c("Authorization" = auth)
  header_type <- c("Content-Type" = "application/json")
  headers <- add_headers(header_auth, header_type)
  req <- POST(rest_url, body = filters, headers)
  stop_for_status(req)
  res_str <- content(req, "text", "application/json")
  res_f_j <- fromJSON(res_str)
  res_df <- data.frame(res_f_j)
  return(res_df)
}

filters <- '[{
  "clause": {
    "type": "long",
    "operator": 5,
    "parameters": [{
      "value": 5
    }]
  },
  "columnName": "SomeColumn"
}]'

data1 <- loadDataset("<datasetId1>", filters, 100, 0)
data2 <- loadDataset("<datasetId2>", "[]", 100, 0)
# variable setup
script_variable_1 <- data1$someColumn
script_variable_2 <- data2$someColumn
# start - interchangeable code
script_variable_2 <- script_variable_2 + 1000
result1 <- script_variable_1
result2 <- script_variable_2
# end - interchangeable code
print(result1)
print(result2)
Python

import requests
import json
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/"
auth = "Bearer <your API access token>"

def load_dataset(dataset_id, filters, limit, offset):
    rest_url = url + dataset_id + "?limit=" + str(limit) + "&offset=" + str(offset)
    headers = {"Content-Type": "application/json", "Authorization": auth}
    response = requests.post(rest_url, headers=headers, json=filters)
    data = response.text
    parsed_data = json.loads(data)
    df = DataFrame(parsed_data)
    return df

filters = [
    {
        "clause": {
            "type": "long",
            "operator": 5,
            "parameters": [{
                "value": 5
            }]
        },
        "columnName": "SomeColumn"
    }
]

data1 = load_dataset("<datasetId1>", filters, 100, 0)
data2 = load_dataset("<datasetId2>", [], 100, 0)
# variable setup
script_variable_1 = data1["someColumn"]
script_variable_2 = data2["someColumn"]
# start - interchangeable code
script_variable_2 = [x + 1000 for x in script_variable_2]
result1 = script_variable_1
result2 = script_variable_2
# end - interchangeable code
print(result1)
print(result2)
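Since load_dataset already takes a limit and an offset, the same helper can also be used to fetch a large dataset page by page. The following Python sketch reuses the load_dataset function from the example above and assumes that the endpoint simply returns an empty result once the offset is past the last row; adjust it if your dataset behaves differently.

from pandas import DataFrame, concat

# Fetch a dataset in pages (sketch, assuming an empty response marks the end).
def load_full_dataset(dataset_id, filters, page_size=100):
    pages = []
    offset = 0
    while True:
        page = load_dataset(dataset_id, filters, page_size, offset)
        if page.empty:
            break
        pages.append(page)
        offset += page_size
    return concat(pages, ignore_index=True) if pages else DataFrame()

# Example usage:
# full_data = load_full_dataset("<datasetId1>", [], 100)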