Power Platform Community / Forums / Power Apps / Integrating Python for...
Power Apps
Answered

Integrating Python for Advanced Data Manipulation in Microsoft Dataverse


Hello everyone,

I'm currently facing a challenge and need some guidance.

My data is stored in Dataverse and SharePoint, and it feeds a Power BI report. I need to perform some complex operations on this data, including loops and other processes that aren't suitable for DAX and M language.

 

I'm considering Python for this task. Does anyone know if it's possible to run a Python script within a Dataverse database to create custom columns that I can later utilize in Power BI?

My goal is to keep everything cloud-based. I prefer not to download data into Excel, process it with local Python or VBA code, and then re-upload it to Power BI.

 

For instance, I want to perform a simple calculation: take an original column and, in a loop, decrement its value by 1 while incrementing the values in three additional columns by 1, until the original column's value reaches 0.
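In plain Python, the logic I have in mind looks something like this (the function and column names are just placeholders):

```python
# Placeholder sketch of the per-row calculation described above.
def redistribute(original, col_a, col_b, col_c):
    """Decrement `original` by 1 per iteration while incrementing three
    other columns by 1, until `original` reaches 0."""
    while original > 0:
        original -= 1
        col_a += 1
        col_b += 1
        col_c += 1
    return original, col_a, col_b, col_c

print(redistribute(3, 0, 0, 0))  # -> (0, 3, 3, 3)
```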

 

While I'm leaning towards Python, I'm open to any solution that integrates with Microsoft's cloud services and doesn't rely on third-party platforms.

Sincerely,
Ricardo

  • BDallas, Microsoft Employee

    @FarmaVIP It is possible to run a Python script within a Dataverse database to create custom columns that you can later utilize in Power BI. One way to achieve this is by using the SQLAlchemy library in Python.

     

    Start by adding the SQLAlchemy library to your Python environment. Then, use it to connect to your Dataverse database and get the required data. Once you have the data, you can do various things with it in Python. For instance, you can use a loop to decrease the value of one column and increase the values in three other columns until the original column reaches 0. Finally, use SQLAlchemy to save the modified data, including the new columns, back to your Dataverse database.

     

    Here’s an example of how you can use SQLAlchemy to create a new table with custom columns in your Dataverse database: 

     

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import declarative_base, sessionmaker

    # Connect to the database (SQL Server connection string via pyodbc)
    engine = create_engine('mssql+pyodbc://user:password@server/database?driver=ODBC+Driver+17+for+SQL+Server')

    # Create a session
    Session = sessionmaker(bind=engine)
    session = Session()

    # Define a new table with custom columns
    Base = declarative_base()

    class MyTable(Base):
        __tablename__ = 'my_table'
        id = Column(Integer, primary_key=True)
        original_column = Column(String)
        new_column_1 = Column(Integer)
        new_column_2 = Column(Integer)
        new_column_3 = Column(Integer)

    # Create the table in the database
    Base.metadata.create_all(engine)

    # Insert data into the table
    my_data = MyTable(original_column='some_value', new_column_1=0, new_column_2=0, new_column_3=0)
    session.add(my_data)
    session.commit()
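The snippet above only creates and populates the table; the decrement/increment loop from the question would then run over its rows. Below is a self-contained sketch of that loop. It uses an in-memory SQLite database as a stand-in, because Dataverse's TDS (SQL) endpoint is read-only, so in practice the modified rows would need to be written back through the Dataverse Web API rather than directly via SQLAlchemy.

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    original_column = Column(Integer)  # numeric so it can be decremented
    new_column_1 = Column(Integer)
    new_column_2 = Column(Integer)
    new_column_3 = Column(Integer)

# In-memory SQLite stands in for the real database in this sketch.
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(MyTable(original_column=3, new_column_1=0, new_column_2=0, new_column_3=0))
session.commit()

# Apply the decrement/increment loop to every row, then persist the result.
for row in session.query(MyTable):
    while row.original_column > 0:
        row.original_column -= 1
        row.new_column_1 += 1
        row.new_column_2 += 1
        row.new_column_3 += 1
session.commit()

row = session.query(MyTable).one()
print(row.original_column, row.new_column_1, row.new_column_2, row.new_column_3)  # -> 0 3 3 3
```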

     

     

     Let me know if this works for you. @ me in replies so I don't lose your thread!
    Note: If this post is helpful, please mark it as the solution to help others find it easily. Also, if my answers contribute to a solution, show your appreciation by giving it a thumbs up.
  • Verified answer
    ChrisPiasecki, Most Valuable Professional

    Hi @FarmaVIP,

     

    I would recommend Azure Synapse Link for Dataverse, or the newer Microsoft Dataverse direct link with Microsoft Fabric (which requires less upfront Azure infrastructure setup, but still has some limitations since it is new).

     

    Both of the above will allow you to perform large-scale analytical workloads against a near-real-time copy of Dataverse data inside a data lake, without impacting the operational database. You can run any additional transformations or mashups of the data inside the lake via pipelines, Spark queries, Python scripts, etc.
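As a concrete illustration of the Python-script option: once the data is in the lake, the loop from the original post collapses into a vectorized calculation, because (for non-negative integer values) each counter column simply gains the original column's starting value. A minimal pandas sketch, where an inline DataFrame stands in for a Dataverse table read from the lake and all column names are placeholders:

```python
import pandas as pd

# Inline frame standing in for a Dataverse table read from the lake.
df = pd.DataFrame({
    'original': [3, 5],
    'col_a': [0, 1],
    'col_b': [0, 0],
    'col_c': [2, 0],
})

# "Decrement original / increment col_a, col_b, col_c until original hits 0"
# is equivalent to adding the starting value of `original` to each column once.
for col in ['col_a', 'col_b', 'col_c']:
    df[col] += df['original']
df['original'] = 0

print(df)
```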

     

    ---
    Please click Accept as Solution if my post answered your question. This will help others find solutions to similar questions. If you like my post and/or find it helpful, please consider giving it a Thumbs Up.

