Hi, while converting Excel to .csv, two extra columns ("@odata.etag", "ItemInternalId") are added to the .csv file. How can I skip the creation of these two columns in my .csv file? They are not required. Also, I want the flow to read multiple Excel files and create a .csv for each, so I can't select specific columns or hard-code them, as they're different in every table. Any suggestions on how to resolve this issue?
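Conceptually, this is all I'm trying to do. A rough Python sketch (outside Power Automate), assuming each row comes back as a dictionary that includes the two metadata keys; the sample row is made up just to show the shape:

```python
import csv
import io

# The two connector metadata columns to drop, whatever the table schema is
METADATA_KEYS = {"@odata.etag", "ItemInternalId"}

def rows_to_csv(rows):
    """Build CSV text from the rows, keeping whichever columns the table
    happens to have and skipping only the metadata keys."""
    cleaned = [{k: v for k, v in row.items() if k not in METADATA_KEYS}
               for row in rows]
    if not cleaned:
        return ""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(cleaned[0].keys()))
    writer.writeheader()
    writer.writerows(cleaned)
    return out.getvalue()

# Hypothetical example row, just to illustrate
sample = [{"@odata.etag": 'W/"1"', "ItemInternalId": "abc-123",
           "Name": "Widget", "Qty": 4}]
print(rows_to_csv(sample))
```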
Hi,
I am facing the same issue as well. Can you please share which workaround worked for you?
Thanks
Priyank
Seems like a big miss. These data values are not present in the tables we are parsing, they are not expected or wanted, and we shouldn't have to jump through hoops to make them disappear. Please fix this. This is a big problem.
Hi @Anonymous ,
You and I are in the same boat. I also want to avoid those two extra columns, i.e. @odata.etag and ItemInternalId, while converting my Excel file into a CSV file.
Did you find any solution or workaround to overcome this challenge?
Please reply.
Thanks in advance.
Hi @CFernandes ,
It's 40+ Excel files/tables... if it were just one, I know the manual mapping option. The blog doesn't answer my question and has the same issue I want to resolve. I have thought of a workaround for this requirement. Anyway, thanks for your reply 🙂
@Anonymous I understand you have 40+ tables, but it is a one-time activity. You can also try @abm's suggestion, and Josh Cook's article looks promising.
If this reply has answered your question or solved your issue, please mark this question as answered. Answered questions help users in the future who may have the same issue or question quickly find a resolution via search. If you liked my response, please consider giving it a thumbs up. THANKS!
It's a dynamic schema, so this might not work.
Hi @Anonymous
You could try parsing the output with Parse JSON, iterating over the values, and constructing the CSV files using an Append to string variable action. That might be a possibility.
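As a rough illustration of that loop, here is a minimal Python sketch of the same idea (the flow equivalent would be Parse JSON, Apply to each, and Append to string variable); the payload shape is an assumption based on the connector's output:

```python
import json

# Assumed sample of the connector's JSON output, trimmed to one row
payload = '{"value": [{"@odata.etag": "W/\\"1\\"", "ItemInternalId": "abc-123", "Name": "Widget", "Qty": 4}]}'

rows = json.loads(payload)["value"]
skip = {"@odata.etag", "ItemInternalId"}   # columns to leave out of the CSV

csv_text = ""                              # plays the role of the string variable
for i, row in enumerate(rows):
    keep = {k: v for k, v in row.items() if k not in skip}
    if i == 0:
        # Header built from whatever columns this particular table has
        csv_text += ",".join(keep.keys()) + "\n"
    # Append one CSV line per row
    csv_text += ",".join(str(v) for v in keep.values()) + "\n"

print(csv_text)
```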
Thanks