You're right. 100% right -- I agree with you. Let me break it down into two separate points and describe them better.
The problems with project scaling are primarily down to the infrastructure: MSBuild and the C# compiler are damn slow, as is the whole rip-up-and-replace assembly system in .NET. When you have something at the bottom of a dependency chain that is quite deep (as with any enterprisey project), you have to recompile all consumers, and therefore everything that depends on them, and so forth. A single line change means you end up compiling the entire system on top of it. All 200 MB of DLLs need building again. That's a long time. Not only that, when it comes to testing and runtime dev (i.e. using the web front end), reloading assemblies and performing JIT is really expensive and time-consuming.
Every time you do something, Mount Everest is destroyed and recompiled twice: once to IL and once to native. This is expensive and seriously screws productivity. It doesn't scale development-wise. Simple as that. This is not specific to my current project - I've consulted at various companies since 2002 and that's exactly how it has ended up for everyone, every time.
Going back to LLVM/Xcode. I'm not particularly experienced with the specifics of the abstraction Apple provides, but I've spent 20 years building monstrous bits of C on top of Unix/Linux with GCC and the Sun compiler suite. That's what we're still dealing with, but now we have LLVM, which slings code out much faster than anything we've had before. We also have incremental build support (individual .o files per source file), faster linking and a runtime system that doesn't require recompilation.
I'm not saying it'll realistically turn into anything better at the end of the day, but on some points:
1. The Xcode tooling is like lightning, even with a 200,000-line C project imported from a previous project.
2. Startup time is 487 ms, compared with 2.2 s to first output and 10.5 s to actually doing something for the equivalent in C#.
Think I've explained myself better now. Sorry for the initial confusion.
> When you have something at the bottom of a dependency chain that is quite deep (with any enterprisey project), you have to recompile all consumers and therefore everything that depends on them and so forth. A single line change means you end up compiling the entire system on top of it.
This just isn't true at all. Even in COM days it wasn't true. With COM, if you preserved binary compatibility, you didn't need to recompile a client when you compiled a new version of a DLL it calls. In .NET it's even more forgiving. Sure, if you change something major, like deleting a bunch of properties the client uses, then you have to recompile the client. But in many cases you don't.
Beyond this, if your architecture has 200 DLLs, it might be completely appropriate, but it's a suspect architecture. I manage a .NET project that trades $4 billion of fixed income instruments per day and interfaces to three trading platforms, 13 custodial systems and 7 different data providers. It uses about 25 DLLs and is quite manageable.
Finally, how often do you recompile a DLL that is "quite deep" in the dependency chain? Generally, such low-level DLLs should do very little. For example, maybe it's appropriate to change them if you change from Oracle to SQL Server but how often would that happen?
COM is different: the interfaces between components and processes are well defined.
In .NET, any contract that breaks between layers in your dependency chain forces a rebuild of every layer above it. This is not unusual in .NET projects, as the components are bound at runtime.
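A minimal sketch of that failure mode, again by C analogy (hypothetical names): remove or rename a symbol a consumer binds to, and the old binary fails the moment it loads, so every layer above must be rebuilt.

```shell
cd "$(mktemp -d)"
cat > api.c <<'EOF'
int get_price(void) { return 100; }
EOF
cc -shared -fPIC api.c -o libapi.so

cat > consumer.c <<'EOF'
#include <stdio.h>
int get_price(void);
int main(void) { printf("%d\n", get_price()); return 0; }
EOF
cc consumer.c -L. -lapi -Wl,-rpath,"$PWD" -o consumer
./consumer                              # 100

# Breaking change: the old symbol is gone. The consumer binary still
# references it by name, so it now fails when the loader binds it.
cat > api.c <<'EOF'
int get_price_v2(int tier) { return 100 * tier; }
EOF
cc -shared -fPIC api.c -o libapi.so
./consumer 2>/dev/null || echo "contract broken: rebuild required"
```

Binary compatibility only saves you while the contract is additive; rename or retype anything in the surface and the whole chain above must follow.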
There are not 200 DLLs - there are 200 MB of DLLs. The total count is 122.
This particular beast handles integrations with over 100 systems using vastly different non-COTS integration methods, and has over 2,500 tables, 950 domain objects, 400 controllers, 45 databases, 200 API endpoints, async background processing and piles of MSMQ queues. Data volume is in the terabytes. It's a behemoth and it's been around for 33 years in various forms.
If, as we do, you change architectural components and APIs, dependencies break. The main case for this is versioning our API. We keep up to 3 API versions in production; when someone introduces a new version, the oldest is deprecated and all the historic versions are ported to the newest internal API. This is where it becomes dependency hell.
And since you attribute those problems to .NET, your solution is a small project (which incidentally uses another technology).
Not dismissing your argument as a whole, but you have to admit you made at least one giant logical fallacy right there.