I was once partially responsible for maintaining an old .NET remoting system that did absolutely nothing to isolate its transactional boundaries.
The services were baked into the model, which made the code easy to write in the first place but very hard to refactor. The code would load a "hollow" entity, and as the entity was interacted with the data would be lazy-loaded, automagically, over the wire. If you applied a SQL trace you might see over 300 separate SQL commands executed just to load a single page that visually represented the entity.
This resulted in some dire performance outcomes: a simple for loop over a collection on the entity could result in N round trips, and sometimes even N*N, depending on how the back end modelled the data within a given collection or association.
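To make that concrete, here's roughly the shape of the thing. This is only an illustrative sketch; every type and member name below is invented for the example, none of it comes from the actual system:

```csharp
using System.Collections.Generic;

// Illustrative only: hypothetical entity and service names, not the real system.
public interface IRemoteOrderService          // stands in for the .NET remoting proxy
{
    IList<Order> LoadOrders(int customerId);  // every call here is a network round trip
    IList<OrderLine> LoadLines(int orderId);
}

public class Customer
{
    private readonly IRemoteOrderService _service;
    private IList<Order> _orders;

    public Customer(int id, IRemoteOrderService service) { Id = id; _service = service; }

    public int Id { get; }

    // The "hollow" entity: the collection is fetched the first time it is touched.
    public IList<Order> Orders => _orders ??= _service.LoadOrders(Id);
}

public class Order
{
    private readonly IRemoteOrderService _service;
    private IList<OrderLine> _lines;

    public Order(int id, IRemoteOrderService service) { Id = id; _service = service; }

    public int Id { get; }

    public IList<OrderLine> Lines => _lines ??= _service.LoadLines(Id);  // another hidden round trip
}

public class OrderLine
{
    public decimal Price { get; set; }
}

public static class OrderTotals
{
    // Reads like an in-memory loop, but costs 1 + N remote calls,
    // and worse if each line lazily pulls yet more data behind it.
    public static decimal Total(Customer customer)
    {
        decimal total = 0;
        foreach (var order in customer.Orders)   // 1 call to materialise the collection
            foreach (var line in order.Lines)    // +1 call per order
                total += line.Price;
        return total;
    }
}
```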
A primary issue with this system was that a lot of client code had been written that didn't express its intent. There were about eight relatively complex clients of this software, and nowhere in that code base did a client ask upfront for the data it wanted in a way that would let someone refactor the back end without rewriting every single one of those eight clients.
Now yes, you could refactor this "architecture-less" sub-system by replacing it with another "architecture-less" sub-system that was "better", but sadly you're left with the same problem: the clients never tell you what data they need for a use-case; they just assume it will appear when it's accessed. To add those "data demands" in after the fact means making huge shotgun changes across the entire code-base, which are always too expensive to justify.
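For contrast, the kind of change that was impossible to retrofit looked something like this: the client states its data demand for a use-case in one place, upfront, so the server can answer it in a single round trip. Again, every name here is hypothetical:

```csharp
using System.Collections.Generic;

// Hypothetical request/response pair: the client's data demand for one
// screen is declared in a single call instead of leaking out lazily.
public class CustomerSummaryRequest
{
    public int CustomerId { get; set; }
    public bool IncludeOrders { get; set; }
    public bool IncludeOrderLines { get; set; }
}

public class OrderSummary
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }   // computed server-side, no lazy hops
}

public class CustomerSummary
{
    public string Name { get; set; }
    public IList<OrderSummary> Orders { get; set; }
}

public interface ICustomerQueryService
{
    // One round trip; the server is free to satisfy it with a single
    // joined query rather than hundreds of lazy ones.
    CustomerSummary GetSummary(CustomerSummaryRequest request);
}
```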
One particularly egregious example was a WinForm loading an entity and stuffing it into the .Tag property of a control, where an event handler would later cast it back out and access it in the same lazy-loading way.
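It looked roughly like this (again an invented sketch, reusing the hypothetical Customer entity from the earlier snippet):

```csharp
using System;
using System.Windows.Forms;

public class CustomerForm : Form
{
    private readonly Button _detailsButton = new Button { Text = "Details" };

    public CustomerForm(Customer customer)
    {
        // The entity is stuffed into the control's Tag when the form loads...
        _detailsButton.Tag = customer;
        _detailsButton.Click += OnDetailsClick;
        Controls.Add(_detailsButton);
    }

    private void OnDetailsClick(object sender, EventArgs e)
    {
        // ...and cast back out much later, at which point any property access
        // still lazily loads data over the wire.
        var customer = (Customer)((Control)sender).Tag;
        MessageBox.Show($"{customer.Orders.Count} orders");  // hidden round trips
    }
}
```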
Your only hope in that case is some sort of static code analysis to try to reverse engineer the clients' data demands, but that's probably approaching the same cost as making the changes themselves.