Overview
A REST API integration is a custom integration built to fit the customer's needs. Unlike other integration types, this one lets the customer choose how and when to populate the data, with the following life cycle:
- The customer creates the integration.
- They configure the details of the streams to populate. This differs from regular integrations, which either know the streams beforehand (and only let the user choose from a list) or list them in a predefined way through datasource introspection.
- When done, they are given a special entry point they can use to populate the integration data, instead of having our services interact with a source on a schedule or on command. In other words, the data is now pushed by the user rather than pulled on a schedule (see the sketch after this list).
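As an illustration, pushing rows to such an entry point could look like the minimal sketch below. The endpoint URL, payload shape, and bearer-token header are hypothetical; the actual values and authentication scheme are provided when the integration is created.

import requests

# Hypothetical entry point and credentials: the real values are shown
# in Datagran once the integration has been created.
ENTRY_POINT = "https://api.example.com/integrations/<integration-id>/streams/events"
API_KEY = "your-api-key"

# Rows to push; the field names must match the columns declared for the stream.
rows = [
    {"foo": "a", "bar": 1, "baz": True},
    {"foo": "b", "bar": 2, "baz": False},
]

response = requests.post(
    ENTRY_POINT,
    json=rows,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()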
What is needed to configure an integration?
The only thing needed to configure an integration of this type, and only at integration creation, is the set of streams to use. These are declared together with the columns they will expect (e.g. a stream with three fields: foo, bar, baz; and another stream with more fields), as in the sketch below.
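For illustration only, a declaration of two such streams could take roughly the following shape (the exact format Datagran expects may differ; the names and types here are examples):

streams = [
    {
        "name": "events",
        "columns": [
            {"name": "foo", "type": "STRING"},
            {"name": "bar", "type": "INTEGER"},
            {"name": "baz", "type": "BOOLEAN"},
        ],
    },
    {
        "name": "measurements",
        "columns": [
            {"name": "sensor_id", "type": "STRING"},
            {"name": "reading", "type": "FLOAT"},
            {"name": "recorded_at", "type": "TIMESTAMP"},
        ],
    },
]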
Is there any limitation on databases and data types?
On stream configuration, the following field types are supported (a sample JSON record combining them follows the list):
- Numeric and BigNumeric (arbitrary-precision / fixed-point rational numbers). When using JSON, ensure the numbers do not have more than 9 fractional digits.
- Float (64-bit) and Integer (64-bit). When using JSON, use numeric values.
- Strings and byte arrays. When using JSON, use string values.
- Boolean fields.
- Geography fields (WKT, WKB, or GeoJSON formats, input as strings).
- Date fields and time fields. When using JSON, they are input as "YYYY-MM-DD" and "HH:MM:SS" strings.
- Datetime and Timestamp fields. When using JSON, they are input as "YYYY-MM-DD HH:MM:SS" strings.
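Putting the JSON rules above together, a single record covering these types could look like the sketch below (the field names are illustrative, and the base64 encoding for the byte array is an assumption):

import json

# Illustrative record showing how each supported type is encoded in JSON.
record = {
    "quantity": 42,                       # Integer: numeric value
    "price": 19.99,                       # Float: numeric value
    "amount": 123.456789012,              # Numeric: at most 9 fractional digits
    "name": "example",                    # String
    "blob": "aGVsbG8=",                   # Byte array: input as a string
    "active": True,                       # Boolean
    "location": "POINT(-74.0 40.7)",      # Geography: WKT as a string
    "day": "2024-01-31",                  # Date: "YYYY-MM-DD"
    "moment": "13:45:00",                 # Time: "HH:MM:SS"
    "created_at": "2024-01-31 13:45:00",  # Datetime/Timestamp: "YYYY-MM-DD HH:MM:SS"
}

print(json.dumps(record))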
To ease the setup of streams with a large number of columns, you can copy and paste the column names and types from a text file.
Below is a code example showing how to create a CSV file that contains your table schema, so you can copy the text from a text editor and paste it into the Column Schema field in Datagran.
import pandas as pd

# Read the comma-delimited CSV file that holds the schema
df = pd.read_csv('test.csv', sep=',')

# Write a new CSV without the header row or the row index, so the plain
# name,type lines can be copied and pasted into the Column Schema field
df.to_csv('data_new.csv', sep=',', header=False, index=False)
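For example, assuming test.csv contains a header row followed by one column-name/type pair per line (an illustrative layout):

column_name,column_type
foo,STRING
bar,INTEGER
baz,BOOLEAN

the resulting data_new.csv keeps only the plain pairs, ready to paste into the Column Schema field:

foo,STRING
bar,INTEGER
baz,BOOLEAN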