Senseforce 2.0 Manual
Debugging scripts


Last updated 3 years ago


How to debug your scripts locally

Please check the previous page, Endpoints, for further detail on how to obtain data from our API.

Prerequisites

  1. All prerequisites from Endpoints fulfilled

  2. An IDE set up for your script language, e.g.:

    • VS Code with the plugins for the script language

      • R and RTools

      • Python

    • RStudio

    • PyCharm

  3. Open the script in the SF platform
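
Since the same script will move back and forth between your machine and the SF platform, you may prefer not to hard-code the access token into it. A minimal sketch in Python (the SENSEFORCE_TOKEN variable name is just an example, not something the platform defines):

```python
import os

# Read the API access token from an environment variable so it never
# lands in the script you copy into the SF platform.
# SENSEFORCE_TOKEN is a hypothetical name; pick any you like.
token = os.environ.get("SENSEFORCE_TOKEN", "<your API access token>")
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + token,
}
```

The same headers dictionary can then be reused by every request in the examples below.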

Generated variable names

In the SF platform, the execution of datasets for scripts and the assignment of their results to variables happens in the background. These variable names are sanitized and deduplicated, which means they depend on the names of all datasets, the column names, and the order in which they are assigned to the script. It is therefore no simple task to "guess" the variable name that is created in the background.

Suggested approach

To be able to switch easily between local development and the SF platform, you should start the script with intermediate variables and assign the dataset variables to them.

Assigning variables in SF

Retrieving data from SF

Here is a minimal example of how to retrieve data from SF and assign the columns to variables.

R:

library(httr)
library(jsonlite)

rest_url <- "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
header_auth <- c("Authorization" = "Bearer <your API access token>")
header_type <- c("Content-Type" = "application/json")
headers <- add_headers(header_auth, header_type)

req <- POST(rest_url, body = "[]", headers)
stop_for_status(req)
res_df <- data.frame(fromJSON(content(req, "text", "application/json")))

script_variable_1 <- res_df$timestamp
script_variable_2 <- res_df$someColumn
Python:

import requests
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer <your API access token>"}
filters = []

response = requests.post(url, headers=headers, json=filters)
response.raise_for_status()  # fail early on HTTP errors, like stop_for_status in R
df = DataFrame(response.json())

script_variable_1 = df["timestamp"]
script_variable_2 = df["someColumn"]

Interchangeable code

With this setup, you can now use these variables the same way locally and in SF. You can therefore copy the interchangeable part of your local script to SF and vice versa.

For flawless copying, it is also advisable to make use of the result variables in the local script.

The interchangeable part of the code is everything after the setup of the variables and before any debug code of the local implementation (e.g. print(variable)).

Senseforce platform

Local code

R:

library(httr)
library(jsonlite)

rest_url <- "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
header_auth <- c("Authorization" = "Bearer <your API access token>")
header_type <- c("Content-Type" = "application/json")
headers <- add_headers(header_auth, header_type)

req <- POST(rest_url, body = "[]", headers)
stop_for_status(req)
res_df <- data.frame(fromJSON(content(req, "text", "application/json")))

# variable setup
script_variable_1 <- res_df$timestamp
script_variable_2 <- res_df$someColumn

# start - interchangeable code
# ======================================================================
script_variable_2 <- script_variable_2 + 1000

result1 <- script_variable_1
result2 <- script_variable_2
# ======================================================================
# end - interchangeable code

print(result1)
print(result2)
Python:

import requests
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer <your API access token>"}
filters = []

response = requests.post(url, headers=headers, json=filters)
response.raise_for_status()  # fail early on HTTP errors, like stop_for_status in R
df = DataFrame(response.json())

# variable setup
script_variable_1 = df["timestamp"]
script_variable_2 = df["someColumn"]

# start - interchangeable code
# ======================================================================
script_variable_2 = [x+1000 for x in script_variable_2]

result1 = script_variable_1
result2 = script_variable_2
# ======================================================================
# end - interchangeable code

print(result1)
print(result2)

More complex examples

The following shows a more complex example. Further details can be found in the Endpoints section.

  • Multiple datasets: Some scripts use multiple datasets; for these, it is advisable to abstract the loading of datasets into a function.

  • Filters: When it is necessary to simulate the filters from dashboards or automations, the API offers the possibility to add filters to a dataset request.
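
The filter payload can also be built programmatically instead of written as a literal. A sketch of a small helper that produces the same structure as the filter literals in the examples (make_filter is a hypothetical name, not part of the API; the operator codes are described in the Endpoints section):

```python
import json

def make_filter(column_name, operator, value, value_type="long"):
    # Builds one filter clause in the shape the dataset
    # execute endpoint expects.
    return {
        "clause": {
            "type": value_type,
            "operator": operator,
            "parameters": [{"value": value}],
        },
        "columnName": column_name,
    }

filters = [make_filter("SomeColumn", 5, 5)]
print(json.dumps(filters, indent=2))
```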

R:

library(httr)
library(jsonlite)

url <- "https://<your senseforce backend platform url>/api/dataset/execute/"
auth <- "Bearer <your API access token>"

loadDataset <- function(id, filters, limit, offset) {
  rest_url <- paste(url, id, sep = "")
  rest_url <- paste(rest_url, "?limit=", limit, "&offset=", offset, sep = "")
  header_auth <- c("Authorization" = auth)
  header_type <- c("Content-Type" = "application/json")
  headers <- add_headers(header_auth, header_type)
  req <- POST(rest_url, body = filters, headers)
  stop_for_status(req)
  res_str <- content(req, "text", "application/json")
  res_f_j <- fromJSON(res_str)
  res_df <- data.frame(res_f_j)
  return(res_df)
}

filters <- '[{
	"clause": {
		"type": "long",
		"operator": 5,
		"parameters": [{
				"value": 5
		}]
	},
	"columnName": "SomeColumn"
}]'

data1 <- loadDataset("<datasetId1>", filters, 100, 0)
data2 <- loadDataset("<datasetId2>", "[]", 100, 0)

# variable setup
script_variable_1 <- data1$someColumn
script_variable_2 <- data2$someColumn

# start - interchangeable code
script_variable_2 <- script_variable_2 + 1000

result1 <- script_variable_1
result2 <- script_variable_2
# end - interchangeable code

print(result1)
print(result2)
Python:

import requests
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/"
auth = "Bearer <your API access token>"

def load_dataset(id, filters, limit, offset):
    rest_url = url + id + "?limit=" + str(limit) + "&offset=" + str(offset)
    headers = {"Content-Type": "application/json", "Authorization": auth}
    response = requests.post(rest_url, headers=headers, json=filters)
    response.raise_for_status()  # fail early on HTTP errors
    return DataFrame(response.json())

filters = [
    {
        "clause": {
            "type": "long",
            "operator": 5,
            "parameters": [{
                "value": 5
            }
            ]
        },
        "columnName": "SomeColumn"
    }]

data1 = load_dataset("<datasetId1>", filters, 100, 0)
data2 = load_dataset("<datasetId2>", [], 100, 0)

# variable setup
script_variable_1 = data1["someColumn"]
script_variable_2 = data2["someColumn"]

# start - interchangeable code
script_variable_2 = [x+1000 for x in script_variable_2]

result1 = script_variable_1
result2 = script_variable_2
# end - interchangeable code

print(result1)
print(result2)
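
Since load_dataset already takes limit and offset, larger datasets can be fetched page by page. A sketch of a generic pagination loop (iter_pages is a hypothetical helper, not part of the API; it assumes a page shorter than page_size, or an empty page, marks the end of the dataset):

```python
def iter_pages(load_fn, dataset_id, filters, page_size=100):
    # Repeatedly call load_fn with a growing offset and yield each
    # page, stopping once a page comes back empty or short.
    offset = 0
    while True:
        page = load_fn(dataset_id, filters, page_size, offset)
        if len(page) == 0:
            return
        yield page
        if len(page) < page_size:
            return
        offset += page_size

# Example with a fake loader standing in for load_dataset, so the
# loop can be tried without network access:
fake_rows = list(range(250))

def fake_load(dataset_id, filters, limit, offset):
    return fake_rows[offset:offset + limit]

pages = list(iter_pages(fake_load, "<datasetId1>", [], page_size=100))
# pages now holds three pages of 100, 100, and 50 rows
```

Locally, load_dataset from the example above can be passed in directly as load_fn; in SF, the pagination is not needed because the dataset is provided to the script by the platform.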