Now, my own two cents on the topic.
First, one has to realise that onboarding a new technology is a cost-benefit analysis. If we don't do it, it's because the benefits aren't justified by the costs.
Benefits
I briefly touched on the fact that, overall, CDS currently lacks clear benefits for me. I'm well-versed in SQL and my clients don't use Dynamics. Proper modeling is so core to a good project that I won't go for something generic without a compelling reason. And creating 5-10 tables (an average for a PowerApps project) just doesn't take that long. So the "one-size-fits-all" modeling argument doesn't hold for me.
Costs
The list here is a bit long, so let me break it down.
Learning
Any new technology takes time to learn and master. We need to develop best practices and good patterns for said technology. There was nothing broken with SQL to start with, so even if the two platforms had feature parity, the learning costs alone would keep me on SQL. It's not that major in the big picture, but it's worth keeping in mind.
Licensing fees
Lots of ink has been spilled over this, but there's a HUGE price difference between the two technologies. Most of my PowerApps projects work just fine on an Azure SQL DB S1 (30$/month) with no extra storage, since we already get 250 GB at that price. By comparison, a CDS-based project is 10$/month/user plus a whopping 40$/GB/month (vs 0.20$/GB/month for extra SQL storage).
Consider an average project of 50 users with a 3-year lifecycle (let's say no extra storage is needed): 50 users × 10$/month × 36 months = 18 000$ for CDS, versus 30$ × 36 months = 1 080$ for SQL. That's a price difference of about 17K$! This alone kills my sales argument to clients that low-code is great because it's faster/cheaper. Right now, full-blown custom dev by a senior consultant is cheaper... And that's a problem! ^^
But enough has been said on this already.
Physical layer
Part of our job is building fast apps, and a big part of that is optimizing the physical layer of the data. There's a big difference between a TINYINT and a BIGINT, between a VARCHAR and an NVARCHAR. And if you join tables on GUIDs, we're just not going to be friends. I need to place an index here and there, sometimes add extra columns to cover a query, and occasionally materialize an entire view when it makes sense for the user experience. To the best of my knowledge, we don't have any control over that in CDS.
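To make that concrete, here's a minimal sketch of the kind of physical-layer tuning I mean. The table and column names are hypothetical, but the techniques (narrow types, no GUID keys, a covering index) are standard T-SQL:

```sql
CREATE TABLE dbo.Orders
(
    OrderId    INT IDENTITY PRIMARY KEY,  -- compact sequential key, not a GUID
    CustomerId INT NOT NULL,
    Status     TINYINT NOT NULL,          -- 1 byte instead of BIGINT's 8
    Label      VARCHAR(50) NOT NULL,      -- no NVARCHAR when ASCII suffices
    Total      DECIMAL(10, 2) NOT NULL
);

-- Covering index: a frequent lookup by CustomerId that only reads
-- Status and Total is served entirely from the index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (Status, Total);
```

None of these knobs are exposed in CDS, where the storage layout is decided for you.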
Complex querying
CDS views are nice, but I often need much more complex logic to represent common business problems. I need the expressiveness of the T-SQL language for things like windowing clauses, pivoting data, CTEs and paging with OFFSET/FETCH, just to name a few. And I often need to upsert batches of rows at once with MERGE statements from a parsed JSON payload.
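As a sketch of that last pattern (table and payload shape are hypothetical), OPENJSON parses the batch and MERGE does the upsert in one statement:

```sql
DECLARE @payload NVARCHAR(MAX) = N'[
  {"Sku":"A-1","Price":9.99},
  {"Sku":"B-2","Price":4.50}
]';

-- Parse the JSON into a rowset, then upsert it in a single pass.
MERGE dbo.Products AS tgt
USING (
    SELECT Sku, Price
    FROM OPENJSON(@payload)
         WITH (Sku VARCHAR(20) '$.Sku', Price DECIMAL(10, 2) '$.Price')
) AS src
ON tgt.Sku = src.Sku
WHEN MATCHED THEN
    UPDATE SET tgt.Price = src.Price
WHEN NOT MATCHED THEN
    INSERT (Sku, Price) VALUES (src.Sku, src.Price);
```

One round-trip from the app, one set-based statement on the server. There's no CDS equivalent of this that I know of.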
SQL programmatic objects
I need strong T-SQL procedure support with transactions, elegant error handling (beyond just bubbling up a system error) and dynamic SQL. On rare occasions, I need to temporarily elevate privileges during a procedure in a controlled manner. Sometimes I need to materialize intermediary results in temporary tables, and the few times I've had to use CROSS APPLY on a table-valued function really saved my bum.
I understand we can add C# plug-ins to CDS, but my language is SQL...
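For illustration, here's a minimal procedure combining three of those needs: an explicit transaction, real error handling with our own error messages, and controlled privilege elevation via EXECUTE AS. The tables and the 50001 error number are hypothetical:

```sql
CREATE OR ALTER PROCEDURE dbo.ArchiveOrder
    @OrderId INT
WITH EXECUTE AS OWNER   -- controlled elevation: callers don't need DELETE rights
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.OrdersArchive (OrderId, Status, Total)
        SELECT OrderId, Status, Total
        FROM dbo.Orders
        WHERE OrderId = @OrderId;

        IF @@ROWCOUNT = 0
            THROW 50001, 'Order not found.', 1;  -- our own error, not a system one

        DELETE FROM dbo.Orders WHERE OrderId = @OrderId;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;  -- rethrow with the original context
    END CATCH
END;
```

Expressing this as a C# plug-in is possible, but it moves the logic out of the language where I'm fluent.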
Data Ops
Bringing CI/CD to a database project is quite the challenge! Not only do you have to deal with schema comparisons, but you also have to deal with transitional changes to the data itself (it's a stateful system after all!) with pre/post-deployment scripts.
CDS is improving in this area with solutions and CLI commands, but we're still worlds away from git-based SSDT projects with peer-reviewed pull requests running through an Az DevOps pipeline with automated testing of SQL programmatic objects.
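As a small sketch of what those transitional scripts look like (table and values hypothetical): a post-deployment script in an SSDT project runs on every deploy, so data motion has to be written idempotently:

```sql
-- Post-deployment script: seed reference data.
-- Runs on every deployment, so guard against duplicates.
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusId = 1)
    INSERT INTO dbo.OrderStatus (StatusId, Label) VALUES (1, 'Open');

IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusId = 2)
    INSERT INTO dbo.OrderStatus (StatusId, Label) VALUES (2, 'Closed');
```

The schema compare handles the structure; scripts like this handle the state. Both are versioned, reviewed and deployed together.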
Connectivity
My projects don't live in isolation. We do BI on them, exchange information with other systems and perform various other ETL work. Azure SQL DB inherits the hundreds of native SQL Server connectors to the outside world, whereas CDS is still quite poor in this area. Power Query is nice enough to import data and perform light ETL work (more like data prep), and it's got a great UX, but it doesn't cut it for serious industrial data pipelines. And yes, my PowerApps projects sometimes import gigabytes of data into their databases (that tends to be the case when we use PowerApps as a light MDM/DQM tool for data stewards).
Those are the major points for me. And you? How do you feel?