CloudEngine Overview

Powerful scripting capabilities

Designing event flows in scripting mode is powered by the EnCoScript language, which is very similar to JavaScript but made secure and resilient. High-level modules let you handle various data formats and convert between them, such as binary, array, XML, JSON, Base64, and geo coordinates. Persistent data structures allow variance and change detection between flow executions.

CloudEngine is the connecting hub for EnCo assets

CloudEngine is free to use whenever a script triggers or is triggered by another EnCo asset, even in production. Head to our plans & pricing page for complete details. Once registered to CloudEngine, you can access its microsite.

Visual Mode

Flow sections

A visual CloudEngine flow consists of 4 sections:

  • Input
  • Parser
  • Logic
  • Output

A valid flow has at least one input and one output. More complex flows can have multiple inputs and/or outputs. Each input can have a parser attached to it, though this is not required. In the same manner, each output can have logic attached to it, though this too is optional.


CloudEngine accepts incoming requests over:

  • HTTP
  • Proximus LoRaWAN
    • MyThings - Data : sensor data
    • MyThings - Thing : sensor events (create, update, provision, …)
    • LoRa4Makers : sensor data
  • MQTT
  • RTCM (Real Time Crowd Management)
  • SMS - Inbound
  • SMS - Status
  • CRON : to trigger script execution based on predefined schedules

These inputs can have one or several (custom) tags attached to them. That way, an API call can be made to only those flows containing your (custom) tag.

For example, for an HTTP input with an added custom tag 'TemperatureCheck', the flow can be invoked by

curl -X POST -H "Authorization: Bearer <access-token>" ""

Input configuration


If the input sends JSON data, it can be parsed by attaching a parser. That way, the parsed variables will be usable in both the Logic section and the Output section.

For example, on receiving following JSON data from the input:

    {
        "temperature": 20,
        "placeOfOrigin": "living room",
        "timestamp": "Tuesday 24 April 2018 14:42:34"
    }

and only being interested in using the temperature and the placeOfOrigin, a parser like this could be used:

json parser setup


On receiving input, some logic can be applied to decide whether the flow should let the data through. When no logic is provided for an output, all data will pass through.

For example, if an output should only be sent when a previously parsed temperature variable exceeds some thresholds, the following Logic could be used:
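A sketch of what such a threshold condition could look like (the values here are illustrative):

    temperature > 21.5 || temperature <= 19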



CloudEngine can send to the following outputs:

  • Azure Event Hub or Azure IoT Hub
  • SAP IoT Cloud
  • HTTP
  • MQTT client or MQTT broker
  • Mail
  • SMS
  • SMTP
  • sFTP
  • Waylay
  • BotService (based on Microsoft LUIS)

While configuring these outputs, it is possible to use one or more of the previously parsed variables by putting them in between double curly braces: {{myVariable}}

For example, an SMS message could be sent containing parsed variables like this:

Parsed variables example
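As an illustration, a hypothetical SMS template referencing the parsed variables could look like this:

    Warning: the temperature in {{placeOfOrigin}} is {{temperature}} degrees.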

Video tutorial


Suppose you have a device or service that outputs temperature data over HTTP for different rooms in your house. CloudEngine could then use this data to send an SMS warning when the temperature is too high or too low. CloudEngine could also forward all your output data to a logging service; that way your own device/service can save storage by not keeping its own logs.


video thumb

Testing your first flow

You will need to have a Bearer access-token to invoke your flow.

The flow that was created in above video tutorial can be invoked by

curl -X POST -H "Authorization: Bearer <access-token>" -H "Content-Type: application/json" "" -d '{"temperature": 22.5, "placeOfOrigin": "kitchen"}'

When you provide a temperature above 21.5 degrees or at or below 19 degrees (and you are subscribed to the SMS API), you will receive a warning SMS.

Convert to script

In the bottom left corner a button can be found to convert the visual flow to a script flow. Script flows support complicated parsing and logic and are created with EnCoScript.

When a visual flow is converted to a script flow, it is still possible to go back to the visual flow by clicking the Cancel button. But this is only possible if the script flow was not saved. Once a script flow is saved, it is impossible to go back to its visual flow.

Visual Mode to Script Mode



convert button


EnCoScript verbosity

Automatically generated code is usually overly complex and/or verbose, and EnCoScript is no different in that regard. Depending on your use case, some code can be removed or omitted.

Some cases you might frequently come across:

  • If you have only selected one input: the automatically generated script will always contain if statements to execute parsers conditionally based on the received input. But when only one input is selected, this code always needs to be executed and the if statement becomes redundant:


  • Not all input might be required: the automatically generated script will sometimes parse more than needed and use multiple objects to store that data. Data that is not required for your script does not need to be parsed:


  • Static modules can be imported: As described in the EnCoScript documentation, static modules can be imported in a less verbose manner by using the import statement:

    static imports
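For instance, the two forms below are equivalent (a minimal sketch using the Json and Debug modules mentioned elsewhere on this page):

    # Verbose form
    object json = create("Json");
    object debug = create("Debug");

    # Equivalent, less verbose form for static modules
    import Json, Debug;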


The automatically generated script from the video tutorial looks like this:

Automatically generated script

But has the exact same functionality when simplified to this:

simplified script

Script Mode


The script mode divides the screen into three sections.

  • Input: lets you select and/or edit the input channels.
  • Script editor: lets you modify the script.
  function run(object data, object tags, string asset) {
     # code
  }

Important note: the run function **must always be present**.

  • Menu bar: provides debug options as well as some script settings and module documentation.


The debug tab will show you any logs or dumps you created with the Debug Module. It will also show the reason for failure when your script fails.

The debug tab


The modules tab gives an overview of all EnCo Modules that can be used inside EnCo scripts. A description and code samples can be seen by clicking on the question mark of a specific module. A module's API can be consulted by clicking its purple lightning button.

The modules tab


Some settings can be tweaked from the settings tab:

  • Enabling/disabling the Debug logs from the first tab.

  • Toggle the priority of your script between Fast and Eco. Fast guarantees that a script is executed immediately, while Eco mode queues the script until resources are available to run it.

  • Script font size speaks for itself: it lets you change the size of the font used in the script editor.

  • ZEnCo mode will enlarge the script editor to fullscreen.

    The settings tab





A quick rundown of EncoScript features:

  • Five supported data types (string, integer, double, boolean, object)
  • Conditional execution ('if' statements)
  • Loops ('while' and 'foreach')
  • Functions (including recursive functions)
  • Local & global variable scope
  • The usual range of logic and math operators.
  • Support for additional Enco Modules

The EncoScript language is very simple. It supports five data types:

  • int (Integer) Note: Integers in EncoScript are 64 bit, instead of the usual 32 bit in many languages.
  • string (String)
  • double (Double)
  • boolean either 'true' or 'false', interchangeable with 1 and 0
  • object (see later for documentation on the 'object' type)

If possible, EncoScript will try to coerce types to match, in situations where it would otherwise cause issues.

The language is case sensitive, and both variables and functions need to be declared before use.

Literal keywords

There are a couple of literal keywords available inside EncoScript:

  • true
  • false
  • null

true and false are of type boolean, and can be used to check against (x == true), or they can be assigned to a variable of type boolean.

null is used to represent a value that does not exist, or is not available. Its main use case is to check the availability of an array key (if(arr["mykey"] == null), but it can also be used in other situations.
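A short sketch of this null check (the array keys are hypothetical):

    object readings["temperature"] = 21;

    if (readings["humidity"] == null) {
        # the key "humidity" is not available in the array
    }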


Comments

Comments are indicated by a '#'. It may either start a line, in which case the entire line is regarded as a comment, or be placed anywhere in the line, in which case any characters after the '#' are ignored.
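For example:

    # this entire line is a comment
    int count = 0; # everything after the '#' is ignored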

Variable Declaration

Variables must be declared before use. The syntax is as follows:

int|string|double|boolean|object <varname>[=<expression>][,<varname>*];

For example

int counter,b,n;
string name,address;
double value;
boolean valid;

It is also possible to assign a value to a variable during declaration, for example:

int counter=0, b, c;
string link, path="/home/"+username+"/mydir", tmpstring;
boolean valid = true, testing = 0, exists, ready = false;

Variable names must start with A-Z or a-z, but numbers, "." and "_" can also be used in variable names, e.g.

int valid9_variable.name1;


Strings are enclosed in double quotes, and may contain the special tokens \n, \r, \t, \\ and \" to include a new line, carriage return, tab, backslash and double quote character respectively.
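For example:

    string msg = "First line\nSecond line with a \"quoted\" word and a tab:\there";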

Conditional Execution

The sole construct for conditional execution is the 'if' statement.

There are two forms of 'if' statements: multi-line and single-line. Multi-line if statements may optionally contain 'else if' and 'else' clauses. The syntax is:

if (<expression>){
    <statements>
}else if (<expression>){
    <statements>
}else{
    <statements>
}

For example:

if (a > 300) {
    a = a - 145;
    b = b + 3;
} else if (a > 500) {
    a = a - 245;
    b = b + 4;
} else {
    a = a + 10;
}


There are 2 loop constructs that are supported by EncoScript.


while (<expression>){
    <statements>
}

For example:

while (count < 500) {
    value = value + calcvalue(count);
}

Foreach can be used to iterate over arrays:

foreach(<key> in <array object>){
    <statements>
}

For example:

foreach (key in array) {
    string value = array[key];
}

To break out of a loop, you can use the break keyword. Once this keyword is reached, the loop stops, regardless of the number of remaining iterations.

For example:

#this loop will end when count reaches 250
while (count < 500) {
    if (count == 250) {
        break;
    }
    count++;
}


EncoScript supports the declaration of functions that may return a value and accept a predefined set of parameters. Functions may be recursive. It is not legal to nest functions (i.e. declare functions within functions). Local variables may be declared within the function scope. Global variables are available within a function, and in the event of a naming conflict between the global and local scope, the local scope wins. The 'return' keyword returns a value to the calling code.

The syntax of a function declaration is:

func <function name>([<param type> <param name>[, <param type> <param name>]*]){
    <statements>
}

For example

func test(int a, string b, double c){
    int count;
    while (count < 10) {
        count++;
    }
    return count;
}

In CloudEngine flows, you will need to define a run function with the following signature, in order for the flow to be valid:

function run(object data, object tags, string asset){
    # code
}


CloudEngine will call this function and fill in the parameters:

  • data will contain an array with input values. The name of these values depends on the type of input. More info can be found in the inbound documentation.
  • tags will contain an array that contains the tags that matched and caused the script to invoke
  • asset will contain the name of the asset the inbound event originated from


EncoScript arrays are associative and, like the rest of the language, dynamically typed.

This means that either the key, or the value can be of any type. Arrays will be implicitly declared when first accessed.


object array[0] = "Hi";
array["one"] = 1;

Arrays can also be declared inline. In this case the index will be ordinal, starting from 0


object array = ["values", "in", "array", 1];
if (array[3] == 1) {
    #this code will be executed
}

This also works in function calls.
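A sketch of an inline array passed directly to a function call (the function itself is hypothetical):

    import Debug;

    func logAll(object items){
        foreach (key in items) {
            Debug.log(items[key]);
        }
    }

    logAll(["values", "in", "array", 1]);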

Method chaining

EncoScript has support for calling methods on objects, and chaining them.

For example, a chained call such as myJsonModule.parse(data["BODY"]).get("value").getInteger() (using the Json module methods described below) is a valid construct, and will be evaluated from left to right.


The usual selection of math and logic operators is supported:

+ - * / % == != >= <= && || ! ( ) ++ --

Operator priority is recognized within EncoScript so constructs like 3+4*7 will work as expected.

Not all operators are valid for all types

The mathematical operators (+ - * / % ++ --) are valid for all numeric types. The operator + is also valid for string operands (which concatenates the strings).

The ++ and -- operators only work as a suffix, so expressions like '--i;' will NOT work. 'i++' means increment after: EncoScript will always use the current value of i first, and only then update the value of i.
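A short sketch of this increment-after behavior:

    int i = 5;
    int j = i++;
    # j now holds the old value 5, and i has become 6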

The equality & inequality operators (== !=) are valid for all types. These operators yield a boolean as a result.

The comparison operators (> < >= <=) can be used to compare any type, again as long as both operands are the same type. The result is (again) a boolean.

The boolean logic operators (&& ||) work only with integer or boolean operands. All logical operators use the 'C' convention of false being zero and true being any non-zero value.

It is, however, worth noting that EncoScript will try to convert types to match, so it can execute the expression.


Modules can be created in one of two ways. Static modules (modules that do not require additional information to function, such as the Json parse module) can be imported using either the 'import' statement, or the create function, whereas other modules can only be loaded using the create function.

Modules imported using the 'import' statement will be accessible in the code as usual. Their name will be identical to the import name: import Json; will add an object called 'Json'.

Json module example:

import Json; #Json module can now be called directly

object json1 = create("Json");

For detailed documentation on modules, please refer to the documentation available in the EncoScript editor.

Additional notes

A statement can be split over multiple lines but all statements must be closed using ';'.

Multiple statements can be on a single line, though this is usually not recommended as it can deteriorate the readability of your code.
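A sketch of both points:

    string username = "enco";

    # one statement split over multiple lines
    string path = "/home/" +
        username +
        "/mydir";

    # multiple statements on a single line (valid, but harder to read)
    int a = 1; int b = 2; int c = a + b;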

EncoScript Examples

Process decoded data from LoRa


You have a LoRa device that emits data over LoRa4Makers or your MyThings subscription and you want to post a MQTT message to your MQTT Broker whenever the LoRa device measures a higher temperature than it has ever recorded before.


Create an EnCoScript with your LoRa device as inbound:

  1. Create new flow.

  2. Directly convert the (empty) visual flow to a script, without giving it a name nor defining any inbound endpoint.

  3. A valid flow needs at least one input, so we will add LoRa as Input.

    1. Click on the "+" and select your LoRa inbound endpoint.
    2. We will select "MyThings - Data" for this tutorial (the behavior with LoRa4Makers would be identical)
    3. Close the Input endpoint modal window

    encoscript LoRa First Steps

  4. By clicking on the icon of the newly added input endpoint, a modal window will open and allow you to define filtering tags. In the example below, by adding the DevEUI of a specific LoRa device we configure the script to only trigger on incoming LoRa data from that specific device. For more information on filtering tags for your selected input endpoint, click on the dark question mark icon next to the endpoint.

    encoscript LoRa First Steps

Start composing your script:

  1. Import the needed modules: we will receive the LoRa data in an array object, so we need the Array module to handle specific methods such as checking the existence of specific keys in the array. Because we want to know if a received value is higher than a previously received value, we need a way to keep track of the highest value received; storing such a value can be done with the KeyValue module. Finally, as we will want to debug our script while developing it, we add the Debug module. To add these modules, enter the lines below at the top of the script, or place the cursor at the top of the page and use the "+" sign in the module list on the right bar, under each desired module.

    object array = create("Array");
    object keyvalue = create("KeyValue");
    object debug = create("Debug");
  2. The first thing we want is to make sure we receive the data as expected and understand its format. The easiest way is to insert a debug.dump() line within our main function.

    object array = create("Array");
    object keyvalue = create("KeyValue");
    object debug = create("Debug");
    function run(object data, object tags, string asset){
        debug.dump();
    }

    At this point, we should give a name to our script, and save it. If you wish you can organize your scripts in Collections (similar to folders) by clicking on the up arrow next to the save button.

    encoscript LoRa First Steps

  3. We can now switch the right menu to the DEBUG view and have our device send data over LoRa (or wait for the next data update). By clicking on the Debug.dump statement in the new log, the top of the debug view gives access to the content of all global and local variables at that moment of the script execution. As the example below shows, very large variable contents may be truncated in the log view.

    encoscript LoRa First Steps

  4. Our LoRa device can return multiple sensor values, such as temperature, luminosity, presence, … but we are only interested in temperature readings. We will make sure we have received that value and output it to the debug log. Otherwise the script does not need to do anything.

    if (data["container"]=="temperature") {
        string temperature = data["value"];
        double newtemperature = temperature;
        debug.log("Temp: " + newtemperature);
    }
  5. Now that we have our new temperature value, we need to compare it with the highest temperature value previously received. For this we will use the KeyValue store to keep the highest known value. As key we can use any key that is not already in use; let's choose 'highestTemperature'. Note that any other script running on CloudEngine will be able to leverage this stored value, as long as it uses the same key name.

    object highestTemperature = keyvalue.get("highestTemperature");
    if (highestTemperature == null) {
        highestTemperature = newtemperature;
        keyvalue.put("highestTemperature", newtemperature);
    }
  6. We now have both the new temperature value and the highest known temperature. So we can compare them and send an MQTT message when we have found a higher temperature.

    if (newtemperature > highestTemperature) {
        # New record high temperature
        keyvalue.put("highestTemperature", newtemperature);
        object MQTT = create("MQTT", "tcp://", "/lora/temperature-record");
    }

The final script

The final script is represented below. Note that you can add comments (#) in your script to make it easier to read.

object array = create("Array");
object keyvalue = create("KeyValue");
object debug = create("Debug");

function run(object data, object tags, string asset){
    if (data["container"]=="temperature") {
        # Extract the new temperature value and store it in a double variable
        string temperature = data["value"];
        double newtemperature = temperature;

        # Output that new temperature to the debug log
        debug.log("Temp: " + newtemperature);

        # Check if we already have a highest temperature. If not, store it in the key store
        object highestTemperature = keyvalue.get("highestTemperature");
        if (highestTemperature == null) {
            highestTemperature = newtemperature;
            keyvalue.put("highestTemperature", newtemperature);
        }

        # Compare the new and highest recorded temperature, store in the key store if the new one is higher
        if (newtemperature > highestTemperature) {
            keyvalue.put("highestTemperature", newtemperature);

            # Create a new MQTT object and send the new highest reading to the topic /lora/temperature-record
            object MQTT = create("MQTT", "tcp://", "/lora/temperature-record");
            debug.log("New highest temperature sent over MQTT");
        }
    }
}

Process binary GPS data from LoRa


You receive binary data from a LoRa device that contains the latitude and longitude of a location. You want to know if this location is inside or outside of Brussels and how far it is from the center of Brussels. The binary data from your LoRa device is encoded to contain first three characters ('G', 'P' and 'S'), followed by two floating-point numbers that represent the latitude and the longitude respectively.


Create an EnCoScript with your LoRa device as inbound:

Creating a new flow with LoRa as inbound follows the same steps as the previous tutorial.

Start composing your script:

  1. Import the needed modules: we will receive the LoRa data in binary format, so we need the Binary module to decode it. Because we want to know the distance from the received location to the center of Brussels, we also need the Geo module. Note that to import EnCoScript modules you can use an import statement such as "import Binary" instead of "object binary = create("Binary")", and chain the imported modules on the same comma-separated line.

    import Binary, Geo;
  2. First things first, let us declare the boundary of Brussels and its center as global variables:

    object brusselsPolygonLatitudes =  [50.858609, 50.852703, 50.847230, 50.841214, 50.833147, 50.833811, 50.849280];
    object brusselsPolygonLongitudes = [ 4.346492,  4.367864,  4.369697,  4.366450,  4.348839,  4.342819,  4.337970];
    double centerOfBrusselsLat = 50.846763;
    double centerOfBrusselsLng =  4.352452;
  3. Next we will load our received payload into the Binary module and check if the first three chars are indeed GPS. If so, we will extract the latitude and longitude as well:

    Binary.setData(data["PAYLOAD"]);

    string char1 = Binary.getNextChar();
    string char2 = Binary.getNextChar();
    string char3 = Binary.getNextChar();
    if (char1 == "G" && char2 == "P" && char3 == "S") {
        # GPS data received
        double latitude = Binary.getNextFloat();
        double longitude = Binary.getNextFloat();
    }
  4. Now that we have the latitude and longitude of our received location, we can use the Geo module to calculate the distance from that location to the center of Brussels:

    double distance = Geo.distanceKilometers(centerOfBrusselsLat, centerOfBrusselsLng, latitude, longitude);
  5. To check if the received location is inside the border of Brussels, we can also use the Geo module by providing the latitudes and longitudes of the border points:

    Geo.isInsideBorder(latitude, longitude, brusselsPolygonLatitudes, brusselsPolygonLongitudes)
  6. Now we can use this check in an if-statement to decide what message we will send outwards:

    object logger = create("HTTP", "", "POST");
    if (Geo.isInsideBorder(latitude, longitude, brusselsPolygonLatitudes, brusselsPolygonLongitudes)) {
        logger.setData("Location inside Brussels, " + distance + "km from center.");
    } else {
        logger.setData("Location " + distance + "km from Brussels.");
    }
    logger.send();


GPS example result screenshot

The final script

import Binary, Geo;

object brusselsPolygonLatitudes =  [50.858609, 50.852703, 50.847230, 50.841214, 50.833147, 50.833811, 50.849280];
object brusselsPolygonLongitudes = [ 4.346492,  4.367864,  4.369697,  4.366450,  4.348839,  4.342819,  4.337970];

double centerOfBrusselsLat = 50.846763;
double centerOfBrusselsLng =  4.352452;

function run(object data, object tags, string asset){
    Binary.setData(data["PAYLOAD"]);

    string char1 = Binary.getNextChar();
    string char2 = Binary.getNextChar();
    string char3 = Binary.getNextChar();

    if (char1 == "G" && char2 == "P" && char3 == "S") {
        # GPS data received
        double latitude = Binary.getNextFloat();
        double longitude = Binary.getNextFloat();

        double distance = Geo.distanceKilometers(centerOfBrusselsLat, centerOfBrusselsLng, latitude, longitude);

        object logger = create("HTTP", "", "POST");
        logger.setContentType("text/plain");
        if (Geo.isInsideBorder(latitude, longitude, brusselsPolygonLatitudes, brusselsPolygonLongitudes)) {
            logger.setData("Location inside Brussels, " + distance + "km from center.");
        } else {
            logger.setData("Location " + distance + "km from Brussels.");
        }
        logger.send();
    }
}
More script examples

You can find several CloudEngine script examples on our GitHub.

Inbound events


CloudEngine will execute flows based on input events.

These events range from HTTP requests to SMS messages, and are filtered based on tags you can modify.

When an event (an SMS message, for example) comes in, CloudEngine will try to match it with the tags of a specific flow.


For example, an HTTP request is sent in with the tags a and b attached.

CloudEngine will then invoke all scripts for which the tags match:

  • scripts that match to HTTP
  • scripts that match to HTTP AND a
  • scripts that match to HTTP AND a AND b

Some inbound endpoints have additional tags that will be set by CloudEngine, based on the input. SMS, for example, will contain the sender's address.

For additional info on which tags are available on which asset, please see the inbound documentation in the EncoScript editor.

Event parameters

The details of the triggering event will be passed in the parameters of the main script function:

function run(object data, object tags, string asset) {
   # code
}

The detailed contents depend on the source of the event, details of which are available in the documentation of that specific inbound endpoint or configuration. Some general concepts are documented here:

string asset

This parameter contains a string that identifies the origin of the event.

object tags

This is an ArrayObject that contains all the tags for this event. It will always contain at least the basetag that identifies the inbound route of the event.

object data

This is an ArrayObject that contains the (sometimes optional) data for this event.

The keys are strings and they depend on the inbound origin and are specified in the inbound documentation.

The values are always of the type ByteObject. You can assign a value to a string variable, whereby the ByteObject will be automatically interpreted as a UTF-8 string. Although this type of conversion to integer and double is also foreseen, these conversions will most often fail, as they require the bytes to be the binary form of an integer or double (see also the Binary module for more information).
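A minimal sketch for an HTTP-triggered flow (the "BODY" key is the one used in the JSON example later on this page):

    import Debug;

    function run(object data, object tags, string asset){
        # the ByteObject value is interpreted as a UTF-8 string on assignment
        string body = data["BODY"];
        Debug.log("Received body: " + body);
    }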

Modules vs Objects

In EnCoScript all modules are objects: objects that allow specific functions to be called on them. So although both regular objects and modules have the type object, the functions that can be called on them differ greatly.

As an example, a comparison between a regular JSON object and the Json module (which as we now know, is also an object) will be provided.

JSON object vs Json Module

When expecting JSON input from HTTP in a script, the incoming data will need to be parsed. To that end, the Json module can be used. The module can be created in two different ways:

import Json;
object myJsonModule = create("Json");

Both ways are OK and both do the same thing: they create a module object that has the functions of the Json module. As the documentation of the Json module explains, the created module objects have the following functions available: parse(), validate() and toXML(). These functions can be called on those objects:
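A minimal sketch (the "BODY" key is as used in the larger snippet below):

    object myJsonModule = create("Json");

    function run(object data, object tags, string asset){
        object myJsonObject = myJsonModule.parse(data["BODY"]);
    }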


As in the previous parsing example, those functions have return values. Those return values are objects: JSON objects. They are not modules, and therefore do not have the functionality of the Json module. JSON objects do have their own functions though:

  • getFlatArrayWithPrefix(string prefix), getFlatArray()
  • get(object index), getArray(), toArray()
  • set(string index, object value), has(string name), remove(string name), size()
  • expression(string expression)
  • getString(), getDouble(), getInteger()
  • addString(object name, object string), addInteger(object name, object integer), addDouble(object name, object double)

Following code snippet should make this clear:

# expect following JSON
# {
#     "value": 5,
#     "explanation": "just so happens to be"
# }

import Debug;
object myJsonModule = create("Json");

function run(object data, object tags, string asset){

    object myJsonObject = myJsonModule.parse(data["BODY"]);

    object myJsonValueObject = myJsonObject.get("value");
    object myJsonExplanationObject = myJsonObject.get("explanation");

    if (myJsonValueObject.getInteger() > 0) {
        Debug.log("This should log the explanation: " + myJsonExplanationObject.getString());
    }
}

Using Google API from EnCoScript

In this recipe we will show you how to interact with the Google Sheets API from EnCoScript. While we specifically target the sheet API, this can also be used as inspiration/guidance for interacting with the other Google APIs. More information about the available APIs is provided by Google in the API Library.

1. Create your Google project

Before you can use the API, you will need to retrieve the necessary **OAuth2** credentials from Google. Detailed information is provided by Google here, specifically in the device section.

Before we can request the credentials, we have to go to the developers console and create a project.

  • If you do not yet have a project, the dashboard proposes to create one, click on Create
  • Enter a Project Name
  • Click CREATE

Once the project has been created you need to enable the Google Sheets API

  • The Google Sheets API can be found in the G Suite category, or you can just type sheet and use the search function.
  • Click on it and ENABLE

Once the API has been enabled and while you are still on the details page of that API, you can CREATE CREDENTIALS. (If you missed it just after creation, you can always navigate to the API and then select Credentials on the left.)

  • Within the text, there is a link to the client ID type we need.
  • You will be requested to configure the consent screen by adding an Application name. No need to add any scopes here.
  • Now we can create an OAuth client ID of the Application type: Other
  • Copy and store the client ID and client secret (there is also the option to download a JSON containing these values from the overview screen).

Note that this is also the location where you can revoke these credentials later if you would ever need to.

2. Acquire a refresh token

With the client ID and secret we created above, we can now request a refresh token that can be used in the script. From the information provided, that refresh token only expires if one of the following occurs:

  • the token was not used for more than 6 months
  • it was explicitly revoked
  • the token was discarded because the maximum number of tokens (100) was reached

The first step to acquire a refresh token is to visit the following URL from the browser used for the developers console:

<enter-client-id-here>&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob

Note that the spreadsheets scope has been specified and that you still need to add the client ID from earlier (it might be necessary to URL encode it for it to work as part of the URL).

After allowing access, you will receive a code that you have to include in the following cURL:

curl -d "client_secret=<client-secret>&client_id=<client-id>&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&grant_type=authorization_code&code=<code>"

Here you need to fill in three parameters (client ID, client secret and the code), and you might also have to URL-encode them for the request to work.

The request is provided as a cURL command. cURL is a popular command-line tool that can be used to issue HTTP calls to REST APIs. The tool can be obtained from
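As a sketch, the request body from the cURL above can also be assembled in Python, with the three values URL-encoded automatically (the values below are illustrative placeholders, not real credentials):

```python
from urllib.parse import urlencode

# Form-encoded body for the authorization-code token exchange.
body = urlencode({
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
    "redirect_uri": "urn:ietf:wg:oauth:2.0:oob",
    "grant_type": "authorization_code",
    "code": "my-authorization-code",
})
```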

When everything worked properly, you should now have received a JSON response that contains the necessary refresh token:

 "access_token": "<temporary-token>",
 "expires_in": 3600,
 "refresh_token": "<refresh-token>",
 "scope": "",
 "token_type": "Bearer"

3. Use the API from EnCoScript

Now that all the preparation work is over, it is possible to use the following script to create a Google Sheet:

# Basic imports
object datetime = create("DateTime", "yyyyMMddHHmmss");
object debug = create("Debug");
object json = create("Json");
object keyvalue = create("KeyValue");

# Entrypoint
function run(object data, object tags, string asset) {
  object http = create("HTTP", "", "POST");
  object jsonTitle = json.createNewObjectNode();
  jsonTitle.set("title","EnCo example sheet created at " +;
  object jsonRequest = json.createNewObjectNode();
  # Nest the title node as the spreadsheet properties, matching the
  # Sheets API request body { "properties": { "title": ... } }
  jsonRequest.set("properties", jsonTitle);
  # NOTE: the serialization call below and the Authorization header
  # (a Google access token obtained via the refresh token from step 2)
  # are assumed here; the original excerpt omitted them.
  http.setData(jsonRequest.toString());
  object result = http.send();
  int code = result["STATUSCODE"];
  if (code == 200) {
    string response = result["BODY"];
    object rData = json.parse(response);
    string id = rData.get("spreadsheetId");
    string url = rData.get("spreadsheetUrl");
    # Store the sheet details so other scripts can retrieve them later
    keyvalue.set("spreadsheetId", id);
    keyvalue.set("spreadsheetUrl", url);
    string body = "Spreadsheet created with ID: " + id + "\r\nCan be found at: " + url;
    object mail = create("Mail");
    mail.setSubject("Sheet created");
  } else {
    # Creation failed; inspect the status code and response body to diagnose
  }
}

Try out your script using the Test Inputs button at the bottom left of the script editor.

When the sheet creation succeeds, the ID and URL are stored in the KeyValue store. The script is also set up to send this information by e-mail to the address you specified.

Once these values are available in the KeyValue store, the following script can be used to append data to the sheet:

# Basic imports
object debug = create("Debug");
object text = create("Text");
object keyvalue = create("KeyValue");

# Entrypoint
function run(object data, object tags, string asset) {
  string id = keyvalue.get("spreadsheetId");
  string url = "" + text.urlEncode(id) + "/values/A1:C1:append";
  object http = create("HTTP", url, "POST");
  # NOTE: as above, an Authorization header carrying a valid Google
  # access token must also be added here; the original excerpt omitted it.
  http.addQueryParameter("valueInputOption", "RAW");
  # Strict JSON uses double quotes (escaped here)
  http.setData("{ \"values\" : [ [ \"Key\", \"Value\", \"Time\" ] ] }");
  object result = http.send();
  int code = result["STATUSCODE"];
  if (code == 200) {
    string response = result["BODY"];
  } else {
    # Append failed; inspect the status code and response body to diagnose
  }
}

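The append request body must be valid JSON; a JSON library produces the strict, double-quoted form the API expects. A quick Python sketch:

```python
import json

# Values for one appended row: a header triple, as in the script above.
append_body = json.dumps({"values": [["Key", "Value", "Time"]]})
```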
API authentication

API Solutions uses OAuth2 to protect API resources. The OAuth2 specification has become the predominant standard for securing RESTful APIs and identity delegation.

In order to gain access to the resources exposed by your API assets, you will need to pass along a valid OAuth2 bearer access token. Here is how you can generate such a token.

Obtaining your OAuth2 Application Keys

Each project you create through the API Solutions Market will result in the creation of a unique pair of OAuth2 application keys. To retrieve your keys, proceed as follows:

  • First, log in and visit your dashboard on the portal. Click the "TOKEN" menu.
  • This will cause your application keys to be shown. These keys can be used to access API resources exposed by any API asset to which you subscribed, within the scope of the selected project.

OAuth 2 Keys

Your application keys are sensitive information; keep them private at all times!

Make sure to store your application keys in a safe way, preventing them from leaking or being stolen by an unintended audience. They are quite sensitive: anyone obtaining them will be able to gain access to your assets. In case your application keys become compromised, contact us through support and we'll revoke them to block access and provide you with a new pair. This is also the reason why all API traffic must pass through secured HTTP, and why we do not support plain HTTP.

Generating a fresh bearer access token

A bearer access token is a self-contained unique identifier through which you can gain access to API resources. Before elaborating on how you can obtain such type of token, please note that these tokens are valid for a limited time span of 1 hour (or less, if explicitly revoked).

You can obtain a new token by issuing the following request against the server:

/token POST

Issue a request to, adding the following headers and body:


curl -X POST \
  '' \
  -H 'Content-Type: application/x-www-form-urlencoded' 

Response payload

  "scope": "openid", 
  "token_type": "Bearer", 
  "expires_in": {number}, 
  "access_token": {string}

The field access_token in the response contains your token. The field expires_in shows the remaining validity of your token, in seconds.

Single active access token
Within the context of a single application, you will only have a single active access token at any time. If you would request a new token using the above request message while the previously issued token has not yet expired, you will get back the previous token. To obtain a new one, revoke the old token; then generate a new one.

Code Samples
To help you get started on implementing this authentication in your code, you can find some samples on our GitHub.

Revoking an access token

This procedure should be used when your token has been compromised (leaked, stolen, etc.), or if you want to generate another token, but your old token is still valid.

/revoke POST

Issue a request to, adding the following headers and payload:

Header        Value                              Required
Content-Type  application/x-www-form-urlencoded  Yes

Request payload


Creating permanent token

You can create permanent tokens from the "TOKEN" menu of your profile's dashboard. Just click on "Add a new token" to create a new permanent token. Several permanent tokens can be created for the same account.

Permanent token

Your permanent token is sensitive information; keep it private at all times!

Make sure to store your permanent tokens in a safe way, preventing them from leaking or being stolen by an unintended audience. They are quite sensitive: anyone obtaining them will be able to gain access to your assets. If a permanent token becomes compromised or is no longer needed, delete it from your "TOKEN" dashboard by clicking on the red cross next to it.

All set to call your API!

Once a valid access token has been generated, it should be added to every request. Pass the token in an HTTP header named Authorization, whose value is the word Bearer, followed by a single space, immediately followed by the access token.

curl -i -X GET -H "Accept:application/json" -H "Authorization: Bearer <active-access-token>"
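The same headers can be built in code; a minimal Python sketch (the token value is a placeholder):

```python
# Construct the Authorization header exactly as described above:
# "Bearer", one space, then the access token.
access_token = "active-access-token"  # hypothetical placeholder
headers = {
    "Accept": "application/json",
    "Authorization": "Bearer " + access_token,
}
```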

EnCo AR Viewer

The EnCo AR Viewer is the official augmented reality demonstrator for users.

The application uses your EnCo credentials and our logo as target to display information coming from your IoT device in augmented reality.

You can download the App here:

The AR Viewer works by recognizing a known image. Once recognized, it will create an overlay showing icons with associated data.

Once installed and launched, the EnCo AR App will ask you to provide:

  • HTTP tags: these will be the tags for an HTTP inbound endpoint in your CloudEngine script. Two tags must be provided; one of them must be "HTTP"
  • a bearer token: a permanent bearer token which can be generated in the Token Management section of your EnCo account

The AR Viewer's operation flow is shown in the following picture.
  1. After entering your desired HTTP tags and a permanent bearer token, the mobile application sends the device’s unique Google Firebase token to your CloudEngine script.

  2. Your CloE script must have an inbound HTTP endpoint configured with the same tags as defined in the mobile app. Upon reception of the device token, your CloudEngine script stores it using the KeyValue module.

  3. Your CloudEngine script receives data from IoT devices connected via LoRaWAN, HTTP, MQTT, or any other inbound endpoint. Your script extracts the data to be displayed in augmented reality and prepares it as a simple data json structure. Example: {"temperature":28,"humidity":65}

  4. This data json and the device token stored in the KeyValue are given as input to the AR Viewer module, which uses Google Firebase to push the data to the AR Viewer app on your mobile phone.

  5. Upon first reception of data, the AR Viewer lets you associate icons to the incoming data, then switches to augmented reality view.

  6. As device data continues to flow into the script, data updates are pushed to the mobile app and the augmented reality view shows updated sensor information.
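The flat key/value structure from step 3 can be produced with any JSON library; a minimal Python sketch using the example values above:

```python
import json

# Serialize the sensor readings into the simple key/value json
# structure the AR Viewer module expects.
ar_data = json.dumps({"temperature": 28, "humidity": 65})
```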


Which logo can I use as target? The target used to display sensor values in augmented reality is the light bulb from the EnCo logo. You can find a few different formats on this page (see images below).

Can I use EnCo AR Viewer with LoRa devices? You can use the EnCo AR Viewer with any data input supported by the CloudEngine inbound endpoints (LoRa via MyThings or SEaaS, MQTT, HTTP, SMS, …). The only requirement is to have the data sent via the AR Viewer module in a simple key/value pair json structure.

Does the AR Viewer support seeing data from multiple IoT devices? The AR Viewer is made to recognize only one single logo image. Your CloudEngine script could support multiple device inputs, but the AR Viewer will not associate logo images with specific devices.

Can the device data change during usage of the AR Viewer? The data values can change during usage, and the changes will be shown in the augmented reality view (example: from {"temperature":25,"humidity":65} to {"temperature":27.5,"humidity":66}). However, a change of data structure during use of the application is not supported (example: from {"temperature":25,"humidity":65} to {"noiselevel":89}).

Where can I find a sample CloudEngine script to help me get started? You can find a sample CloE script for the AR Viewer on our github:

EnCo logo (image, available in several formats)

Blog articles

Find all articles including tutorials and customer cases related to CloudEngine on our blog.


All rights reserved. © 2020 Proximus | Cookie policy

This site was created and is managed in accordance with Belgian law.

Proximus API Solutions - powered by ClearMedia NV. Merksemsesteenweg 148, B-2100 Deurne.

BE 0831.425.897