Dear all,
I am trying to build a new and improved workflow for a table that I have. Right now it runs once a day: it deletes everything from the table, then an HTTP API GET request captures the new data, which goes through 'Parse JSON' and 'Add a new row'. Easy, and it works a treat.
However, as part of ongoing improvements I need to enhance it, as it's proving impractical: the same table is used for lookup values in another table, so whenever its rows get deleted, the other table's values disappear, which is of course expected.
So I currently have a Dataverse table, let's call it DatasetA, from which I pull the data using the 'List rows' action. Here is a sample:
Worth noting I have normalized the data objects to keep things straightforward.
Currently has 50 rows
| id (unique identifier) | name | description | rowid |
| --- | --- | --- | --- |
| ff9efa709974441b9e1282ab408dca3a | Dark Matter | Dark Matter - Cluster | 4da7b19c-7fbc-ef11-a72f-002248c764d2 |
Sample JSON Data
{
"id": "085d4d1df3a744e3bfa0580f9c37c30b",
"name": "Dark Matter",
"description": "Dark Matter - Cluster",
"rowid": "4da7b19c-7fbc-ef11-a72f-002248c764d2"
}
And an HTTP output, let's call it DatasetB, from which I pull the data using the 'Parse JSON' action. Here is a sample:
| id (unique identifier) | name | description |
| --- | --- | --- |
| ff9efa709974441b9e1282ab408dca3a | Dark Matter 2 | Dark Matter - Cluster 2 |
Sample JSON Data
{
"id": "085d4d1df3a744e3bfa0580f9c37c30b",
"name": "Dark Matter",
"description": "Dark Matter - Cluster"
}
Now you will notice that DatasetA has a rowid and DatasetB does not. In reality this rowid is needed for actions like 'Delete' or 'Update'; it is unique to the table, but my key is 'id'.
Either way, the end goal is simple enough, but I am struggling to get it to work as expected. I essentially only want to add, delete and update rows as and when they change (incremental updates):
| Action | Logic |
| --- | --- |
| ADD | Where DatasetB's 'id' is not in DatasetA, add a new row with all the values. |
| DELETE | Where DatasetA's 'id' is not in DatasetB, delete the row using 'rowid'. |
| UPDATE | Where DatasetA's 'name' or 'description' does not equal DatasetB's 'name' or 'description', update with DatasetB's values. |
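In rough Python terms (just a sketch of the logic I am after, not the flow itself; `diff` is a made-up helper name, and the field names come from the samples above), the three actions boil down to this:

```python
# Sketch of the incremental compare. DatasetA is the Dataverse 'List rows'
# output (has 'rowid'); DatasetB is the HTTP output (no 'rowid').
# Both are matched on the business key 'id'.

def diff(dataset_a, dataset_b):
    a_by_id = {row["id"]: row for row in dataset_a}
    b_by_id = {row["id"]: row for row in dataset_b}

    # ADD: ids present in B but not in A -> add a new row with all values
    to_add = [b for i, b in b_by_id.items() if i not in a_by_id]

    # DELETE: ids present in A but not in B -> delete by 'rowid'
    to_delete = [a["rowid"] for i, a in a_by_id.items() if i not in b_by_id]

    # UPDATE: ids in both, but 'name' or 'description' differ ->
    # update with DatasetB's values, carrying over A's 'rowid'
    to_update = []
    for i, b in b_by_id.items():
        a = a_by_id.get(i)
        if a and (a["name"] != b["name"] or a["description"] != b["description"]):
            to_update.append({**b, "rowid": a["rowid"]})

    return to_add, to_delete, to_update
```

Because each dataset is indexed by 'id' once up front, every membership check is a single lookup rather than a scan of the other array.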
So I have been trying to do this for the last week, and I struggle with arrays, period. Everything I try keeps adding 'For each' this or 'For each' that, and I end up with a bunch of nested loops that don't actually do anything. I have watched several YouTube videos and whatnot, but the one I struggle with the most is the UPDATE; for the life of me I just cannot get it to work.
Can anyone suggest the easiest and best approach for comparing datasets and outputs like this, and which steps are the most efficient? As I will be using this with tables of anywhere from 50 to 100 to 1,000 rows, I want something that can quickly identify what does not match ('not contains' or 'not equal') and then complete the necessary action, rather than running a 'For each' over every row in the table and checking everything one by one.
Hope that makes sense :)
Currently I export my arrays and objects using the 'Select' action, but I don't like it, because unless I have 'rowid' in both datasets it does not return the correct values.
I also don't like that the datasets and the number of objects have to be like for like, otherwise it will not compare the data. Is there anything else I can use just to filter my 'List rows' output on 'id' and, where it matches an 'id' in the HTTP output, perform the action, rather than storing everything in arrays and comparing it that way?
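To show what I mean by "filter rather than compare whole arrays", here is a sketch in Python (again just illustrating the logic; `sync_row` is a made-up name, and field names come from the samples above): for each row from the HTTP output, filter DatasetA on 'id' and branch on the result, the way I imagine a 'Filter array' on 'id' would work.

```python
# One B row at a time: filter DatasetA on 'id', then decide the action.
# This avoids holding both full datasets in parallel arrays.

def sync_row(dataset_a, b_row):
    # 'Filter array'-style lookup: DatasetA rows whose id matches this B row
    matches = [a for a in dataset_a if a["id"] == b_row["id"]]
    if not matches:
        return ("add", b_row)              # no match -> add a new row
    a_row = matches[0]
    if (a_row["name"] != b_row["name"]
            or a_row["description"] != b_row["description"]):
        return ("update", a_row["rowid"])  # values differ -> update by rowid
    return ("skip", None)                  # identical -> nothing to do
```

The deletes would still need one pass the other way (DatasetA rows whose 'id' never appears in the HTTP output), but the add/update decision becomes a single filter-and-branch per row instead of nested loops.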
Currently I do not have a working UPDATE flow (hence it's disabled at the end of my flow), but I would appreciate it if someone could help me re-design this, as it's not working as efficiently as it could and I am not even sure what I have makes sense.