Release 3.10.0 (44) - Oracle Connection Update (46.28.0 // 1.170.0)

Modified on Thu, 30 Jul 2020 at 09:58 AM

Features

The following lists all features that are (partly) included in this release. A feature is considered finished if no open parts (stories) remain; otherwise it is marked as ongoing.


Finished Features


Supporting additional write configurations 
Goal

Customers can read from and write to mounted filesystems using the following file formats: .csv, .txt, .zip/.gz (containing a single .csv file)

Finished parts in the release

Append to CSV files inside Data Table Save
  • Inside the Data Table Save Processor, the following additional save modes are available (based on the existence of the target file; a Spark sketch follows after this list):
    • CREATE NEW or APPEND
    • CREATE NEW or ERROR (WF fails)

  • It is possible to configure a:
    • delimiter token
    • string escape token
    • escape token
  • The Schema Mismatch Behaviour setting for append is based on the header names of the existing CSV file (data type mismatches are ignored)
  • Problematic characters in filenames are handled correctly; writing must work on native Linux mounts, Samba mounts, and native Windows shares, where filenames may well contain multiple dots, for example.
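
These processor options correspond conceptually to Spark's CSV writer. The following PySpark snippet is only an illustrative sketch under that assumption; the path, data, and token values are made up:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("csv-append-sketch").getOrCreate()
    df = spark.createDataFrame([(1, "a;b")], ["id", "text"])

    # CREATE NEW or APPEND: append if the target exists, create it otherwise
    (df.write
       .mode("append")            # mode("error") would model CREATE NEW or ERROR
       .option("sep", ";")        # delimiter token
       .option("quote", "\"")     # string escape token
       .option("escape", "\\")    # escape token
       .option("header", True)    # header names drive the append schema check
       .csv("/mnt/share/result"))

Note that plain Spark writes a directory of part files rather than a single .csv file; appending to one existing CSV file is handled by the processor itself.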




Ongoing Features


Supporting full SPARK TIMESTAMPS inside Workflow variables with Overwriting Modifiers

Goal

We rework the existing TimeStamp variable into a DateTime variable type in ONE DATA workflows:

  • It is possible to configure how a DateTime variable should be parsed → into a Spark timestamp, Oracle DATE, Oracle TIMESTAMP, or String (with a detailed parsing schema)
  • This variable can be compared natively (without manual conversion) to any timestamp columns in a workflow. Therefore, the default variable parsing type needs to be SPARK_TIMESTAMP.
  • When this variable is included via '@variableName@', it is replaced with something like 'yyyy/mm/dd hh:mm:ss' (a human-readable timestamp rather than a 'long' number) before execution; a minimal sketch of the parsing targets follows below
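
How ONE DATA wires this up internally is not part of these notes; the following PySpark sketch only illustrates the two most relevant parsing targets, with an assumed variable value and format pattern:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, date_format, lit, to_timestamp

    spark = SparkSession.builder.appName("datetime-variable-sketch").getOrCreate()
    df = spark.createDataFrame([("2020-07-30 09:58:00",)], ["created_at"]) \
              .withColumn("created_at", to_timestamp("created_at"))

    # SPARK_TIMESTAMP (the default): the variable becomes a timestamp literal
    # and can be compared natively against timestamp columns
    variable = to_timestamp(lit("2020-07-30 09:58:00"))
    df.filter(col("created_at") <= variable).show()

    # String with a detailed parsing schema: the same instant rendered
    # human-readable before execution
    df.select(date_format("created_at", "yyyy/MM/dd HH:mm:ss")).show()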
Finished parts in the release

Adjust SPARK_TIMESTAMP format to simple datetime format

Technical groundwork for feature.



Execute Select on Oracle Connections from ONE DATA

Goal

Why: If data analysts are to use ONE DATA instead of their familiar SQL editor, we need to support executing their existing scripts.

What: Arbitrary SELECT scripts can be executed with the new editor.

How: The previously added SQL editor custom component in apps is enhanced to support the execution of arbitrary SELECT statements.

Finished parts in the release

Provide dedicated endpoint for PL/SQL execution

Technical groundwork for feature.

[Server] Execution of multiple SELECT queries

As a user, I want to execute multiple statements at once. If multiple SELECT queries are executed, only the result of the last one is shown (a minimal sketch follows below).
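
The server-side implementation is not shown in these notes; the following Python sketch merely illustrates the "last result wins" behavior, assuming a DB-API connection and a naive split on ';' (real SQL splitting must respect string literals and comments):

    def execute_script(connection, script: str):
        """Run each ';'-separated statement; return the rows of the last SELECT."""
        last_result = None
        cursor = connection.cursor()
        try:
            for statement in (s.strip() for s in script.split(";")):
                if not statement:
                    continue
                cursor.execute(statement)
                if cursor.description is not None:  # statement produced a result set
                    last_result = cursor.fetchall()
        finally:
            cursor.close()
        return last_result

    # execute_script(conn, "SELECT 1 FROM dual; SELECT 2 FROM dual")
    # -> rows of the second SELECT only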



Bugs and TR

Goal

Collection of bugs and technical refinements.

Finished parts in the release
[ONE DATA backend] Optionally override the schema configured in an Oracle database connection to use all available schemata, filtered by a blacklist

A new optional ONE DATA backend configuration accepts a list of schema names with wildcards (%) to blacklist schemata that should not be suggested by the autocompletion of the new SQL editor. If this option is present, ONE DATA's /generic/{id}/database-schema endpoint no longer limits the returned schemata to the schema configured in a given Oracle connection, but returns all schemata except those matching any entry of the blacklist (a small sketch of the wildcard matching follows below).
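
The exact matching rules used by the backend are not documented here; this small Python sketch assumes '%' behaves like the SQL LIKE wildcard when filtering schema names:

    import re

    def filter_schemata(schemata, blacklist):
        """Drop every schema matching any blacklist entry ('%' = any characters)."""
        patterns = [
            re.compile("^" + ".*".join(map(re.escape, entry.split("%"))) + "$",
                       re.IGNORECASE)
            for entry in blacklist
        ]
        return [name for name in schemata
                if not any(p.match(name) for p in patterns)]

    # filter_schemata(["HR", "SYS", "APEX_050000"], ["SYS", "APEX_%"])
    # -> ["HR"]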
Raise performance of SQL execution with the editor on Oracle and PostgreSQL databases

Performance improvements for connections of the types Oracle and PostgreSQL.
Supporting error messages

When a query is executed but the database returns an error, the customer prefers to receive just the Oracle error message, or at least to see it at the beginning of the reported error (a minimal sketch follows below).
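
How the backend trims its error output is not specified here; this hypothetical Python sketch simply pulls the leading Oracle error out of a longer driver message:

    import re

    def leading_oracle_error(raw_message: str) -> str:
        """Return the 'ORA-NNNNN: ...' part of a driver error, if present."""
        match = re.search(r"ORA-\d{5}:[^\n]*", raw_message)
        return match.group(0) if match else raw_message

    # leading_oracle_error("java.sql.SQLException: ORA-00942: table or view does not exist")
    # -> "ORA-00942: table or view does not exist"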
[ONE DATA backend] Enable autocompletion of Oracle database table/view names from the active user's schema without needing the schema prefix

Bug fix.




