This is how I'd write the flow to get what you're after. This assumes you only have a single column that could contain commas within the text (in your example, the PartName column).
Below is the CSV data that I'm importing into my flow.
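In plain text it looks like this (the column header names are my assumption, based on the mapping later in the flow; note that the part name containing commas is wrapped in quotes):

ID,PartName,Value,Month,Date
E4,ZZ_PC_Watmar,318.07,1,10/03/2023
D4,ZE_PC_Watmar,132.96,2,10/03/2023
V5,ZN_PC_Stone,370.38,3,10/03/2023
K1,"ZV_PC_Water(Prod,Dev,UAT)",13.61,4,10/05/2023
D1,ZZ_PC_XVH,58.41,4,10/03/2023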

The flow is made up of the following actions. I'll go through each of them below.

Get file content using path retrieves my CSV file.

Compose converts the CSV data to text using the following expression.
base64ToString(outputs('Get_file_content_using_path')?['body']?['$content'])
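For reference, Get file content using path returns the file as a base64 payload inside the body, which is why the decode is needed. The raw action output looks roughly like this (values are placeholders):
{
  "$content-type": "text/csv",
  "$content": "<base64-encoded CSV text>"
}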

Select splits the data on carriage return and line feed, then skips the first row (the headers). Within the Map, it replaces the commas inside the quoted text with ||; note that it doesn't replace the comma delimiters.
//From
skip(split(outputs('Compose'), decodeUriComponent('%0D%0A')), 1)
//Map
if(contains(item(), '"'), concat(slice(item(), 0, indexOf(item(), '"')), replace(slice(item(), add(indexOf(item(), '"'), 1), lastIndexOf(item(), '"')), ',', '||'), slice(item(), add(lastIndexOf(item(), '"'), 1), length(item()))), item())
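For example, the row with the quoted part name gets transformed like this: the surrounding quotes are stripped and the inner commas become ||, while the delimiter commas are left alone.
//Before
K1,"ZV_PC_Water(Prod,Dev,UAT)",13.61,4,10/05/2023
//After
K1,ZV_PC_Water(Prod||Dev||UAT),13.61,4,10/05/2023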

Filter array takes the output from Select and removes any empty rows; you normally get an empty entry at the end of the array when splitting the data. The expression below is used on the left side of the condition, with the check being that it is not equal to empty.
item()
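Assuming the file ends with a line break, the Select output ends with an empty string, which is what gets removed here (arrays shortened for readability):
//Before Filter array
["E4,ZZ_PC_Watmar,318.07,1,10/03/2023", ..., "D1,ZZ_PC_XVH,58.41,4,10/03/2023", ""]
//After Filter array
["E4,ZZ_PC_Watmar,318.07,1,10/03/2023", ..., "D1,ZZ_PC_XVH,58.41,4,10/03/2023"]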

Select Final takes the output from Filter array and maps each row using the following expressions.
//ID
split(item(), ',')[0]
//Part Name
replace(split(item(), ',')[1], '||', ',')
//Value
split(item(), ',')[2]
//Month
split(item(), ',')[3]
//Date
split(item(), ',')[4]
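As a worked example, the row that had the || placeholders is split by index into its columns, and the replace on the Part Name turns the || back into commas:
//Input row
K1,ZV_PC_Water(Prod||Dev||UAT),13.61,4,10/05/2023
//Resulting object
{ "ID": "K1", "Part Name": "ZV_PC_Water(Prod,Dev,UAT)", "Value": "13.61", "Month": "4", "Date": "10/05/2023" }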

After running the flow, we should get the following output.

[
  {
    "ID": "E4",
    "Part Name": "ZZ_PC_Watmar",
    "Value": "318.07",
    "Month": "1",
    "Date": "10/03/2023"
  },
  {
    "ID": "D4",
    "Part Name": "ZE_PC_Watmar",
    "Value": "132.96",
    "Month": "2",
    "Date": "10/03/2023"
  },
  {
    "ID": "V5",
    "Part Name": "ZN_PC_Stone",
    "Value": "370.38",
    "Month": "3",
    "Date": "10/03/2023"
  },
  {
    "ID": "K1",
    "Part Name": "ZV_PC_Water(Prod,Dev,UAT)",
    "Value": "13.61",
    "Month": "4",
    "Date": "10/05/2023"
  },
  {
    "ID": "D1",
    "Part Name": "ZZ_PC_XVH",
    "Value": "58.41",
    "Month": "4",
    "Date": "10/03/2023"
  }
]