Creating a Virtual Event
Attributes
When you create a new virtual event, the Attributes page lets you choose which schema to use: a new one or an existing one that was created through another virtual event. If you want to create the virtual event with an existing schema, you can select it via the dropdown.
Only published virtual event schemas are visible in the list.
The following pictures show the corresponding view for each option.
When you select an existing virtual event schema, you can add new attributes to that schema through the virtual event; these attributes are then also visible in the other virtual events that share this schema.
Attributes in the dataset builder
The following picture shows how the attributes affect the structure of the virtual event in the dataset builder.
You do not need to create attributes for metadata. Metadata attributes are created automatically. For more information on metadata see here.
Note that a column can only be assigned to an attribute with an identical data type.
Put some thought into the design of the attributes; once saved, attributes cannot be deleted.
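Because attributes are fixed after saving, it can help to write the intended schema down before creating it. The following minimal sketch (with hypothetical attribute names and types, not the platform's actual API) illustrates the kind of type check that happens when columns are assigned to attributes:

```python
# Hypothetical illustration: a virtual event schema is a fixed set of
# attributes, each with a name and a data type. Columns can only be mapped
# onto attributes of the same type, so plan the types up front.
schema = {
    "machine_type": "string",     # will later hold the (virtual) thing reference
    "avg_temperature": "double",  # aggregated value per execution
    "sample_count": "int",
}

# Columns produced by the data source, with a deliberate type mismatch.
column_types = {"avg_temperature": "double", "sample_count": "string"}

for attribute, attr_type in schema.items():
    col_type = column_types.get(attribute)
    if col_type is not None and col_type != attr_type:
        print(f"Column '{attribute}' ({col_type}) cannot be mapped to "
              f"attribute '{attribute}' ({attr_type})")
```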
Set Data Source
As a data source, you can use either a dataset or a script.
Dataset as a Data Source
If you select a dataset, you can add filters. The "thing" selection is applied like the global thing filter on dashboards, and every additional filter is applied to the column it targets. If you do not add any filters, the filters defined in the dataset stay as they are.
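How these filters combine can be pictured with the small sketch below. It uses pandas and made-up thing and column names purely for illustration, not the platform's actual implementation: the thing selection narrows the rows to the selected things, and each additional filter is then applied to its column.

```python
import pandas as pd

# Hypothetical rows as they would arrive from the selected dataset.
rows = pd.DataFrame({
    "thing": ["press_01", "press_02", "press_01"],
    "temperature": [71.2, 65.4, 80.9],
})

# The "thing" selection behaves like the global thing filter on dashboards ...
selected_things = ["press_01"]
# ... and every additional filter is applied to the column it targets.
additional_filters = [("temperature", lambda v: v > 70)]

filtered = rows[rows["thing"].isin(selected_things)]
for column, predicate in additional_filters:
    filtered = filtered[filtered[column].map(predicate)]

print(filtered)  # only press_01 rows with temperature > 70 remain
```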
Script as a Data Source
If you select a script, you can add filters that are applied to the script results (+ Add Filter) and filters that are applied to the datasets used in the script (+ Add Dataset Filter). The "thing" selection is applied like the global thing filter on dashboards to all datasets used in the script. If you do not add any filters, the filters defined in the datasets stay as they are.
The row limit of the datasets used in the script is applied in the same manner as if the dataset were used elsewhere. If your script needs to import a lot of data because it considers data from many machines, you can use the "run per thing" feature.
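The shape of a script-type data source is sketched below. The `load_dataset` helper and the dataset name are hypothetical stand-ins for however your platform exposes datasets to scripts; the point is that the script imports (filtered, row-limited) dataset rows and returns summarized rows that the virtual event can map in the next step.

```python
import pandas as pd

def virtual_event_script(load_dataset):
    """Sketch of a script-type data source (hypothetical helper API).

    Dataset filters and the row limit apply to whatever `load_dataset`
    returns, exactly as they would anywhere else the dataset is used.
    """
    raw = load_dataset("machine_temperatures")  # subject to dataset filters / row limit
    # Summarize the imported data; the returned rows feed the data mapping step.
    summary = (
        raw.groupby("thing", as_index=False)
           .agg(avg_temperature=("temperature", "mean"),
                sample_count=("temperature", "size"))
    )
    return summary

# Local stand-in so the sketch is runnable without the platform:
demo = lambda name: pd.DataFrame({"thing": ["press_01", "press_01", "press_02"],
                                  "temperature": [71.2, 80.9, 65.4]})
print(virtual_event_script(demo))
```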
Run per Thing Option
To enable the "run per thing" option, you need to select a script as your data source type.
If you activate "run per thing", the script is executed once for every selected thing (if you do not select a thing, the script runs for all things separately), with the global thing filter set to that thing. Therefore you can import up to 1,000,000 rows of your dataset into your script per thing (depending on the row limit settings in your script).
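Conceptually, "run per thing" behaves like the loop sketched below (hypothetical names, not the platform's actual code): the same script runs once per selected thing with the global thing filter pinned to that thing, and the row limit applies to each run separately.

```python
# Conceptual sketch of "run per thing": one script execution per selected
# thing, each with the global thing filter set to that single thing.
selected_things = ["press_01", "press_02", "press_03"]  # empty selection = all things

def run_script(thing_filter):
    # Stand-in for the actual script execution; every dataset the script
    # imports is filtered down to `thing_filter` before the row limit applies.
    print(f"running script with global thing filter = {thing_filter}")

for thing in selected_things:
    run_script(thing_filter=[thing])
```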
Data Mapping
With the data mapping you define which column gets mapped onto which attribute of your virtual event (data types need to match). You do not need to assign a column to every attribute; only the metadata thing and timestamp are mandatory. For those two you have the additional options Virtual Thing and Execution Timestamp: Virtual Thing simply writes "virtual_thing" into every row of your virtual event, and Execution Timestamp assigns the timestamp of the execution to the timestamp column.
You can map any string column onto the metadata thing attribute. The entries will be registered as virtual things.
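The following sketch (hypothetical column and attribute names, pandas used purely for illustration) shows what the mapping amounts to: result columns are renamed to their target attributes, and the Execution Timestamp option fills the timestamp column with the execution time.

```python
import pandas as pd

# Result rows produced by the data source (dataset or script).
result = pd.DataFrame({
    "machine_type": ["press", "mill"],
    "avg_temperature": [76.1, 64.8],
})

# Hypothetical mapping: result column -> virtual event attribute.
# Data types must match; not every attribute needs a column, but the metadata
# thing and timestamp always need a value.
mapping = {"machine_type": "thing", "avg_temperature": "avg_temperature"}

event_rows = result.rename(columns=mapping)
# "Execution Timestamp" writes the execution time into the timestamp column;
# "Virtual Thing" would instead write the literal value "virtual_thing".
event_rows["timestamp"] = pd.Timestamp.now(tz="UTC")
print(event_rows)  # the string values mapped to "thing" become virtual things
```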
Virtual Things
The idea behind virtual things is that not all data generated by virtual events is associated with only one thing. For example, you may have benchmarking values for a specific type of machine. You do not want to assign these values to a specific thing, but to a new virtual thing representing the machine type. See how to handle access to virtual things here.
Schedule
Since virtual events are meant to save values that summarize long time frames, the highest execution frequency is once per hour. The scheduling defines when and how often the defined action is executed.
Note that the execution time and the scheduled time might differ slightly, especially if multiple virtual events are scheduled for the same time. Also pay attention to the timestamp filters you selected, since the filters are applied at execution time. For example, if you schedule something to execute daily at 23:59 with a "today" filter applied, then depending on the traffic on your platform the execution may slip past midnight and the filter will cover a different day.
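The "today" example can be made concrete with a small sketch (illustrative only; the platform's actual filter semantics may differ in detail): because the filter window is resolved at execution time, a delayed run that slips past midnight covers a different day.

```python
from datetime import datetime, time, timedelta

# The "today" window is resolved when the virtual event actually executes,
# not when it was scheduled.
def today_window(executed_at: datetime):
    start = datetime.combine(executed_at.date(), time.min)
    return start, start + timedelta(days=1)

scheduled = datetime(2024, 5, 6, 23, 59)      # scheduled daily at 23:59
delayed = scheduled + timedelta(minutes=2)    # heavy load delays execution past midnight

print(today_window(scheduled))  # 2024-05-06 00:00 .. 2024-05-07 00:00
print(today_window(delayed))    # 2024-05-07 00:00 .. 2024-05-08 00:00 -> a different day
```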
For details on the different scheduling options see here.
Manual Execution
On the schedule page, you can execute a virtual event on demand by pressing the "Execute" button.
Once the user clicks on the "Execute" button, a popup dialog with filter options shows up.
Filter options work in the same way as the filters in the "Set Data Source" step of the virtual event editor and consist of the following:
Thing
Script (only if the data source is a script)
Dataset
If the virtual event data source type is set to:
Dataset: the "Add Filter" option is shown
Script: the "Add Filter", "Add Dataset Filter" and "Run per thing" options are shown
The filters from the popup override the filters saved in the virtual event itself. Any changes made to the filter options are applied only once and are not saved.
To execute the virtual event, the user needs to press the "Execute" button. After pressing "Execute", the virtual event starts executing and a loading indicator is shown. The popup dialog remains open and all filters are kept until the user closes the popup.
The Execute option is visible only if the user has virtual event write permissions, and it is disabled if there are any unsaved changes in the virtual event.
Virtual event execution might take a while, especially when executing a virtual event containing a lot of data. Once the user triggers the execution, the actual execution of the virtual event can still be running in the background for some time.
If the user tries to execute a virtual event which is still running in the background, an error message will appear: "This virtual event is already running. Wait a moment and execute it again later."