What is it about?
This is a Destination Element in a pipeline: it consumes a single input (a table in our platform) and uploads that table's data to a customer-provided Azure Blob Storage account. Each time this Destination is executed, the whole contents of the table are dumped (uploaded) into the customer-provided blob container.
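To make the destination concrete, the sketch below builds the canonical URL at which an uploaded dump would be reachable in Azure Blob Storage. The account, container, and file names are hypothetical examples, not values from any real configuration.

```python
# Illustrative sketch: the public URL format of a blob in Azure Blob Storage.
# The account, container, and blob names used below are hypothetical.

def blob_url(account: str, container: str, blob_name: str) -> str:
    """Build the standard Azure Blob Storage URL for an uploaded object."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}"

print(blob_url("customeraccount", "exports", "sales_table.csv"))
# https://customeraccount.blob.core.windows.net/exports/sales_table.csv
```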
What are the pre-requisites to use this action?
The pre-requisites to use this pipeline action are:
- The customer must own, or be authorized to use, an already existing Azure Blob Storage account.
- A target container, which must already exist in that account.
- An account with permission to write into the target container. The customer will provide this account's credentials to our platform. This account must have write permissions but, as a security recommendation, it should only have permissions on the target container (and/or other containers meant for similar purposes).
- A workspace containing a project, and a pipeline in that project. The pipeline must have a preceding integration node (or, more generally, any node that produces an SQL table in our platform). The Azure Blob Storage action must take one of those nodes as its only input. This will be described in the next section.
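The prerequisites above can be checked up front before the pipeline runs. The sketch below is a minimal pre-flight validation; the field names (`account_name`, `container`, `credential`) are hypothetical, not the platform's actual configuration schema.

```python
# Hedged sketch: a pre-flight check of the prerequisites listed above.
# Field names are illustrative assumptions, not the real schema.

def check_prerequisites(config: dict) -> list:
    """Return a list of human-readable problems; empty means the
    configuration looks complete enough to attempt an upload."""
    problems = []
    for field in ("account_name", "container", "credential"):
        if not config.get(field):
            problems.append(f"missing required field: {field}")
    return problems

# A complete configuration produces no problems.
ok = check_prerequisites(
    {"account_name": "customeraccount", "container": "exports", "credential": "<secret>"}
)
print(ok)  # []
```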
Example layout and configuration
A minimal sample pipeline layout would look like this:
A simple pipeline, consisting of only three elements:
- A source integration, to draw data from.
- A middle node, which sub-selects the input data or converts it to a new format or set of columns.
- The Azure Blob Storage action node, which uploads the data, in whatever format it arrives from the middle node, to the blob container.
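The three-node layout above can be sketched end to end. This is a minimal simulation, assuming the table is a list of dicts and the destination dumps it as CSV; the function names are illustrative and the in-memory `container` dict stands in for the customer's real blob container.

```python
import csv
import io

def source_integration():
    """Source node: rows drawn from an upstream system (sample data)."""
    return [
        {"id": 1, "name": "Alice", "country": "AR"},
        {"id": 2, "name": "Bob", "country": "UY"},
    ]

def middle_node(rows):
    """Middle node: sub-selects a subset of columns."""
    return [{"id": r["id"], "name": r["name"]} for r in rows]

def azure_destination(rows, upload):
    """Destination node: serializes incoming rows to CSV and uploads them."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    upload(buf.getvalue())

# In-memory stand-in for the customer-provided blob container.
container = {}
azure_destination(
    middle_node(source_integration()),
    upload=lambda data: container.update({"dump.csv": data}),
)
```

The destination only sees the columns the middle node kept, which mirrors how the real action uploads whatever its single input node produces.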
An example of how this operator is configured is shown in the following image:
Depicting, in order:
- The name of the file, without extension.