I am facing an issue where a system-generated CSV file I receive comes with a linebreak after the last row, so it effectively identifies as having 41 rows when in reality it only has 40.
When parsing my JSON I split each record on a linebreak, which now breaks the flow: the last entry is null for every column, and in terms of schema I don't want / cannot allow null values for required fields.
In short, either when selecting my data or when splitting it, I need a way to dynamically split off that last empty row before proceeding with the JSON parsing. How do I achieve this?
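To illustrate the behaviour (a minimal Python stand-in for the split step, not my actual flow):

```python
# A string that ends with a linebreak produces a trailing empty element when split:
"1,Alice\n2,Bob\n".split("\n")   # -> ['1,Alice', '2,Bob', '']  <- the phantom extra row
```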
Flow sequence:
- get CSV file content
- convert the CSV from Base64 to a string and split it into records on the linebreak
- split off the first record, which contains the column headers
- skip the headers and select all remaining rows, mapping the various field names and items
- Parse JSON based on a schema where all values are strings
- Error: the last row is null, which doesn't match the schema (see the sketch after this list)
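Put differently, this is the logic I'm effectively after, with the row-filtering step being the part I'm missing in my flow (a Python sketch of the intent, not my actual implementation; the sample content and variable names are made up):

```python
import base64
import json

# Simulated "get file content" output: a CSV that ends with a trailing linebreak.
raw_b64 = base64.b64encode(b"id,name\n1,Alice\n2,Bob\n").decode()

# Convert from Base64 to a string and split into records on the linebreak.
rows = base64.b64decode(raw_b64).decode("utf-8").split("\n")
# -> ['id,name', '1,Alice', '2,Bob', '']  <- trailing empty record

# The step I need: drop empty/whitespace-only records before any mapping.
rows = [r for r in rows if r.strip()]

# Split off the headers, map the remaining rows, then parse as JSON.
headers = rows[0].split(",")
records = [dict(zip(headers, r.split(","))) for r in rows[1:]]
print(json.dumps(records))  # no all-null last record, so the schema check passes
```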

