Apps - Table Element: SaveToMicroservice Example


Overview

In this article, we will have a closer look at the "saveToMicroservice" option of the Table Element. To do so, we will create an example App and the corresponding Workflow for it.

In this App we want to display data about various books in a table. The information should be editable, and on save the changes should be written back to the original data table.


Building the App

First of all, we'll build a simple App that contains an HTML header and a Table Element for the book data.

This article does not cover how to integrate the Datasource or how to do the basic layout in Apps, but the full configuration is attached at the bottom of the article.

Below is the configuration of the most important part - the table:


    {
      "id": "bookTable",
      "type": "table",
      "source": "bookData",
      "config": {
        "columns": [
          {
            "name": "bookID",
            "label": "ID",
            "edit": false
          },
          {
            "name": "title",
            "label": "Title",
            "edit": true,
            "width": "30%"
          },
          {
            "name": "authors",
            "label": "Authors",
            "edit": true,
            "width": "20%"
          },
          {
            "name": "__num_pages",
            "label": "No. of Pages",
            "edit": true
          },
          {
            "name": "ratings_count",
            "label": "No. of Ratings",
            "edit": true
          },
          {
            "name": "average_rating",
            "label": "Average Rating",
            "edit": true
          }
        ],
        "editing": {
          "edit": true,
          "add": true,
          "delete": true,
          "saveToMicroservice": {
            "fullDeletedRows": true,
            "id": "79995686-6b0c-4ae5-9211-d0812087a8f2"
          }
        }
      }
    }


First of all, we define which columns of the book dataset we want to display in our table and which ones should be editable. In the example we want to show the book id, title, the authors, the number of pages, the number of ratings and the average rating.

Every column except "bookID" should be editable, so we set the "edit" configuration of the first column to "false" and the others to "true". Strictly speaking, setting "edit" to "true" is not necessary: if you don't specify "edit", a column is editable by default.


The important part here is the "editing" configuration block and its options:

  • edit: This option specifies that we want to make our table editable, so we set it to true.
  • add: This option enables adding new rows to the table. In edit mode, the table will have an empty row at the bottom, where the user can specify the values of the row to be added.
  • delete: This option enables deleting rows from the table via a context menu that appears when right-clicking a row.
  • saveToMicroservice:
      • id: The id of the Workflow that will update our table. We'll create it in the next section.
      • version: The version of the Workflow that should be executed. If this option is not defined, the latest version of the Workflow is used, so we skip it in our configuration.
      • fullDeletedRows: This configuration option is highly recommended because it simplifies working with deleted rows. When it is not set, your Workflow only receives the row ids of the removed rows, and you need to match these with the original data set yourself, which requires additional configuration.


Building the Workflow

To use the "saveToMicroservice" configuration, the respective Workflow needs to fulfill some requirements, which we will discuss in the following sections.


Workflow Variables

It is possible to retrieve some additional information from the App and save it into Workflow Variables. It is not mandatory to create these, and you only need to specify them if you want to use the information somewhere in your Workflow.

Note that all of these variables are of type string.


  • row_id_col: This variable saves the name of the column which contains the "row_id" in the input Processors (this will be explained in detail later on).
  • data_id: The id of the data table that is operated on.
  • user_jwt: The JWT (JSON Web Token) of the currently invoking user.
  • frt_id: The id of the Filterable Result Table that is operated on. This variable is not necessary if "data_id" is set. It is deprecated since the removal of FRTs and should no longer be used, but it remains available for backwards compatibility.


In our example we will only use "data_id" to show the user exactly which table was updated (more on this in the Output Processor section).

So we create the variable in the Variable Manager:


Required Processors

For the App to work, we need to include certain Processors to enable the communication between the App itself and the Workflow.


Input Processors

First, we will add the mandatory Microservice Input Processors. In total we need three of them, each with a specified identifier.

  • Input_added_rows: This Processor will receive a table with the rows that were newly added to the dataset.
  • Input_updated_rows: This Processor will receive a table with already existing rows that were updated by the user.
  • Input_deleted_rows: This Processor will receive a list with the row ids of the deleted rows. The name of the id column is equal to the value of the "row_id_col" Workflow Variable described in the previous section.


Here is the Processor configuration for the first one. For the other two, you just need to insert the identifier accordingly.


Be aware of the following behavior: if only one or two of the three possible actions are performed (e.g. only a row is edited), one would expect the other Microservice Input Processors to contain an empty dataframe, since no data is sent to them. However, when looking at a Result Table right after these Input Processors, you will see that the predefined test input is used in the Workflow instead. To avoid unwanted behavior, we suggest filtering out these default values.
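
What the default test input looks like depends on what you entered when configuring the Input Processors, so the filter has to match your own test data. Below is a minimal sketch of such a filter inside a Python Processor, assuming the data arrives as a pandas DataFrame called "added_rows" and that the test input is marked with the placeholder id -1 (the variable name and the placeholder value are assumptions for illustration only):

    import pandas as pd

    # Stand-in for the DataFrame delivered by the "Input_added_rows" Processor;
    # in a real Workflow it would be provided by the platform's Python API.
    added_rows = pd.DataFrame({"bookID": [-1], "title": ["test"], "authors": ["test"]})

    # Assumption: the predefined test input marks its rows with the placeholder id -1.
    # Drop these rows so only data actually sent from the App is processed further.
    added_rows = added_rows[added_rows["bookID"] != -1]

    # If nothing was added in the App, the frame is now empty and the downstream
    # insert logic can simply be skipped.
    if added_rows.empty:
        print("No rows were added in the App.")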


Output Processor

To send feedback to the App user, the Workflow also needs to contain a Microservice Output Processor with the identifier "Output_messages". It should output a table with two columns:


  • status: The given status defines the visual appearance of the popup within the App. It has three possible values: SUCCESS, ERROR, and WARNING.
  • message: This column contains the message that should be displayed in the popup.


Normally, you need to determine the status of your output messages manually according to your use case and then pass it to the Output Processor (for example via SQL, an Assert Processor, or an If Else Processor). Every message passed to the Processor will be displayed in the App, so it is also possible to send multiple notifications to the user.
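
As an illustration only, a Python Processor could build such a message table as sketched below, with a made-up validation rule and made-up DataFrame names (none of this comes from the attached example):

    import pandas as pd

    # Placeholder input; in a real Workflow this would come from an upstream Processor.
    added_rows = pd.DataFrame({"bookID": [4, 5], "average_rating": [4.2, 7.9]})

    messages = []

    # Made-up check: average ratings are expected to lie between 0 and 5.
    invalid = added_rows[(added_rows["average_rating"] < 0) | (added_rows["average_rating"] > 5)]
    if invalid.empty:
        messages.append({"status": "SUCCESS", "message": "All added rows were saved."})
    else:
        messages.append({
            "status": "WARNING",
            "message": f"{len(invalid)} added row(s) have an average rating outside 0-5.",
        })

    # The resulting table with the columns "status" and "message" is what gets
    # passed to the "Output_messages" Processor.
    output_messages = pd.DataFrame(messages, columns=["status", "message"])
    print(output_messages)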

If an error occurs in the Workflow, however, the error message of the affected Processor is automatically sent to the App. This makes debugging the Workflow and the App easier.


For simplicity, we will only pass a success notification to the output via a Custom Input Table.


In the message, we include the id of the data table that was updated. For this, we use the "data_id" Workflow Variable defined in a previous section; its value is set automatically when data is sent from the App. In the text, we refer to it by placing the technical variable name between two '@' characters.
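
The content of the Custom Input Table could then look like this (the exact wording of the message is of course up to you):

    status   | message
    SUCCESS  | The data table @data_id@ was updated successfully.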


Completing the Example Workflow

For better organization, we renamed the Microservice Processors, but that is of course not mandatory. For testing purposes, we also connected a Result Table to each output node of the Processors, so we can see exactly what data the Input Processors received.

Finally, we need to complete the Workflow so that it updates the original data table. Only then do the changes also affect the data within the App's table. Below is a picture of the complete example Workflow:




Here, Python Processors are used to update, insert, and delete the data. We won't look at the individual scripts in detail, because that would exceed the scope of this article and the approach may vary for your use case. The Workflow is attached at the bottom of the article, though, so you can have a closer look at each configuration.
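
As a rough illustration of the general pattern (not the actual scripts from the attached Workflow), such an update step in a Python Processor could look like the sketch below, assuming pandas DataFrames named "original", "updated_rows", "added_rows" and "deleted_rows" (all names and values are placeholders):

    import pandas as pd

    # Placeholder data; in the real Workflow, "original" is the current book data
    # table and the other frames come from the three Microservice Input Processors.
    original = pd.DataFrame({"bookID": [1, 2, 3], "title": ["A", "B", "C"]})
    updated_rows = pd.DataFrame({"bookID": [2], "title": ["B (revised)"]})
    added_rows = pd.DataFrame({"bookID": [4], "title": ["D"]})
    deleted_rows = pd.DataFrame({"bookID": [3], "title": ["C"]})  # full rows, since "fullDeletedRows" is set

    # 1. Apply updates: replace existing rows with their edited version.
    original = original[~original["bookID"].isin(updated_rows["bookID"])]
    original = pd.concat([original, updated_rows], ignore_index=True)

    # 2. Apply inserts: append the newly added rows.
    original = pd.concat([original, added_rows], ignore_index=True)

    # 3. Apply deletes: drop the rows that were removed in the App.
    original = original[~original["bookID"].isin(deleted_rows["bookID"])]

    print(original.sort_values("bookID"))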

When the changes are applied to the data, the table is saved using a Data Table Save Processor. The update of the data table will trigger an update in the App automatically.


