Hi,
New member here. I have so many questions on linking SharePoint, Excel Online, and Power Automate. Here's the most urgent one:
Scenario:
1.) The subsidiary uses Excel Online for its P&L calculation and has a P&L Summary worksheet with calculated values and reference cells.
2.) The workbook was uploaded to SharePoint for easier access and updating.
3.) Values of specific cells in the P&L template in Excel Online need to be collated into another sheet that serves as a data dump.
4.) The data dump is the collation of all daily P&L entries, which will eventually roll up to weekly / monthly views.
5.) The condition we need to set in Power Automate: if selected (or all) values of the P&L Summary sheet change during the day, record the latest data for that day; when a new entry is created tomorrow, insert a new row in the data dump workbook.
6.) We set up 3 tables for this: Table 1 holds the WBS dropdown and date, Table 2 holds the P&L Summary calculated values, and Table 3 is the data dump. In total, 51 workbooks (one per unique WBS) were set up to feed one data dump (client request).
Problem:
Table 1 for WBS and Date has 2 separate columns.
Table 2 covers the P&L Summary computation, upper part.
Table 2 covers the P&L Summary computation, lower part.
Table 3 is the data dump:
The specific cells to get are only the values of:
a.) WBS
b.) Date
c.) Revenue
d.) Cost of Contract
e.) GM
f.) GM %
Power Automate only reads the values of the Cost column in Table 2.
Upon running, it duplicates entries even when there are no significant changes.
We can't fetch the cell values horizontally since they sit under the Cost column.
Any suggestions on how to make this better, or even make it work? Are there holes in our implementation, e.g. not saving in OneDrive for Business?
Can we get specific cell values without using premium connectors?
If the flow is defined to read a row, how do we get a specific value from the summary?
Do we need to realign the summary values into columns?
How exactly can we get a cell value regardless of the table column? Any workarounds?
I will screencap the flow later or tomorrow for further reference.
Apologies for the long post.
Thanks.
JPA
Hi,
Rather than using data append in Power Query, we just assigned a data fetch table within the individual sheets that links to the cell values that need to be collated. From there, Flow picks up those values, adds time stamps, runs through all 66 sheets, and dumps everything into a single Excel Online file.
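For anyone curious, the pickup step could also be done with a short Office Script instead of the Excel connector. A rough sketch only, where the table name "FetchTable" and the column order are placeholders:

// Rough sketch of the pickup step as an Office Script; the names are placeholders.
// Assumes each sheet holds a one-row table, here called "FetchTable", whose
// cells are formulas linking to the P&L Summary cells to be collated.
function main(workbook: ExcelScript.Workbook): (string | number | boolean)[] {
  const fetch = workbook.getTable("FetchTable");
  // A single row of already-linked values, e.g. WBS, Date, Revenue, Cost of Contract, GM.
  // Flow adds the time stamp and appends the row to the data dump file.
  return fetch.getRangeBetweenHeaderAndTotal().getValues()[0];
}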
Thanks.
JPA
Thanks for the response @jparroyo. What was the workaround you found in Flow?
Hi jedunn,
Thanks for your suggestion. I will close this thread since we did a workaround in Flow. Maybe in the future your suggestion can help me with my other implementations.
JPA
@jparroyo I pulled together a solution with Office Scripts to extract and ingest data. I'm sure I'm making a few assumptions about your requirements.
Here's a picture of the flow. A simple recurrence trigger starts off the flow, but you could use whatever trigger works best for you. The first script runs to extract the data from a source workbook. The second script ingests it into a destination (data dump) workbook.
Here is the first extraction script:
// This script returns specific values from specified tables in the workbook.
function main(workbook: ExcelScript.Workbook): ExtractedData {
  // Give the column/row indices names matching their expected content.
  // You might make this solution more resilient to worksheet changes by using
  // named ranges and the getNamedItem() method, or by searching for the content.
  const WBS_INDEX = 0;
  const DATE_INDEX = 1;
  const REVENUE_ROW_INDEX = 0;
  const COC_ROW_INDEX = 18;
  const GM_ROW_INDEX = 19;

  // Get the WBS and date from Table 1.
  let idValues = workbook.getTable("Table1").getRangeBetweenHeaderAndTotal().getValues();
  // Get the computed values from Table 2.
  let numberValues = workbook.getTable("Table2").getRangeBetweenHeaderAndTotal().getValues();

  const result: ExtractedData = {
    wbs: idValues[0][WBS_INDEX] as string,
    date: idValues[0][DATE_INDEX] as number,
    revenue: numberValues[REVENUE_ROW_INDEX][1] as number,
    coc: numberValues[COC_ROW_INDEX][1] as number,
    gm: numberValues[GM_ROW_INDEX][1] as number
  };
  console.log(result); // these values should be viewable in the flow logs
  return result;
}

// An interface to pass the specific values from the tables through to a flow.
interface ExtractedData {
  wbs: string;
  date: number;
  revenue: number;
  coc: number;
  gm: number;
}
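As flagged in the comments, you could make this extraction more resilient by reading named ranges instead of hard-coded indices. Here's a rough sketch of that variant; the names WBS, ReportDate, Revenue, CostOfContract, and GrossMargin are placeholders you would first define in the source workbook, and it reuses the ExtractedData interface above:

// Sketch: read the source cells via named ranges instead of hard-coded indices.
// The range names below are placeholders; define them in the source workbook first.
function main(workbook: ExcelScript.Workbook): ExtractedData {
  // getNamedItem() returns undefined when a name doesn't exist, so a missing
  // name fails loudly here instead of silently reading the wrong cells.
  const read = (name: string): string | number | boolean => {
    const item = workbook.getNamedItem(name);
    if (!item) {
      throw new Error(`Named range "${name}" was not found.`);
    }
    return item.getRange().getValue();
  };

  const result: ExtractedData = {
    wbs: read("WBS") as string,
    date: read("ReportDate") as number,
    revenue: read("Revenue") as number,
    coc: read("CostOfContract") as number,
    gm: read("GrossMargin") as number
  };
  console.log(result);
  return result;
}

This way the script keeps working when rows are inserted above or below the summary block.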
Here's the second ingestion script:
/*
  Office Scripts will run only one main function.
  You can test this script in Excel for the web by commenting out this function
  and uncommenting the next one.
*/
function main(workbook: ExcelScript.Workbook, extract: ExtractedData) {
  ingest(workbook, extract);
}

/*
// Office Scripts will run only one main function.
// For testing in Excel for the web, invert which main function is commented out.
function main(workbook: ExcelScript.Workbook) {
  const extract: ExtractedData = { wbs: "21-0015", date: 44355, revenue: 63048, coc: 36358.99, gm: 26689.01 };
  ingest(workbook, extract);
}
*/
// This script ingests the extracted data. If it finds a row with a matching WBS
// number and date, that row is updated with the new data. If no matching WBS
// number and date are found, a new row is added.
function ingest(workbook: ExcelScript.Workbook, extract: ExtractedData) {
  // Define the output (records) table where the WBS no. resides.
  const records = workbook.getTable("Table1");

  // Define the values for the new row to be added or updated.
  const newRow = [
    extract.date,
    extract.wbs,
    extract.revenue,
    extract.coc,
    extract.gm
  ];

  // Get the values from the records table.
  let recordsData = records.getRangeBetweenHeaderAndTotal().getValues();

  // Start with -1, meaning no row is found.
  let rowFound = -1;

  // In each row of recordsData, look for a row where the WBS and date match.
  for (let row = 0; row < recordsData.length; row++) {
    if ((recordsData[row][1] == extract.wbs) && (recordsData[row][0] == extract.date)) {
      // Remember the first row found and break.
      rowFound = row;
      break;
    }
  }

  let message = "";
  if (rowFound >= 0) {
    // When a row is found, update its values.
    // This assumes that as long as the date and WBS number match you want to
    // update the entire row; if the data hasn't changed, the row is simply
    // rewritten with the same values.
    records.getRangeBetweenHeaderAndTotal().getRow(rowFound).setValues([newRow]);
    message = "A row was updated";
  } else {
    // If there is no match, simply add a new row.
    records.addRow(-1 /* add to end of table */, newRow);
    message = "A row was added";
  }

  // When run through Power Automate, this message appears in the outputs of the Run Script action.
  console.log(message);
}
// An interface to pass the extracted data through a flow.
interface ExtractedData {
  wbs: string;
  date: number;
  revenue: number;
  coc: number;
  gm: number;
}
Source workbook looks like this:
Destination workbook looks like this:
If I understand your goals, this pattern should get you what you want. Of course, to make the solution more resilient to workbook changes, you might avoid hard-coding the range locations of the source data. Performance might be another consideration, especially if the tables get large; using Office Scripts' filter methods to search/evaluate tables may help. I'm not a developer, so I'm sure there are multiple ways to improve this that I'm missing.
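For instance, here's a minimal sketch of one simple optimization: scanning only the two key columns rather than reading the whole table when looking for a match. It assumes the destination table keeps Date in its first column and WBS in its second, as in the ingest script above; the helper name findMatchingRow is my own:

// Sketch: scan only the two key columns instead of reading the whole table.
// Assumes the destination table keeps Date in column 0 and WBS in column 1.
function findMatchingRow(records: ExcelScript.Table, wbs: string, date: number): number {
  const body = records.getRangeBetweenHeaderAndTotal();
  // getColumn(n) returns a single-column range, so these reads stay small
  // even when the table has many columns.
  const dates = body.getColumn(0).getValues();
  const ids = body.getColumn(1).getValues();
  for (let row = 0; row < dates.length; row++) {
    if (ids[row][0] == wbs && dates[row][0] == date) {
      return row; // first matching row
    }
  }
  return -1; // no match found
}

The ingest function could call this in place of its inline loop. Hope this helps.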
@jparroyo I would write this as an Office Script to get values from the source workbook, pass those into a flow, then use a second Office Script to ingest the values, compare them to the existing data, and add a new row if the conditions are met. Have you used Office Scripts and the Run Script action? I'm happy to point you to a few resources if this sounds feasible to you.