In the time it takes you to read this sentence, about 6.02×10²³ dollars will have been wasted in lost productivity due to old, rickety, hard-to-use, non-performant software systems.
Well, not so many dollars, but the loss is real -- whether it's the energy and cost-of-operation for more servers, when fewer could do the same job; whether it's hours spent waiting for CSRs to tab through tens of screens in legacy line-of-business apps to find the one field they still use; or worse, when a police dispatcher makes a mistake because software makes it hard to click the right thing fast and dangerously easy to click the wrong one.
Strategies for making "bad old systems" better are not the most popular topic for the tech elite. This makes sense for a bunch of reasons. The mythologies we live by are generally startup-oriented. But, more importantly, real roll-up-your-sleeves work on old systems doesn't scale. The hard problems are business-specific and not ones that licensable products can solve ... so anyone on a mission to save the world from these systems ends up with a services approach -- i.e., a process approach -- that is bottlenecked by human quality-and-scaling problems.
That's a bummer, because there is a lot that can be done with old systems once a really good team starts thinking about them. Forget the big rewrites and the magic bullets. Forget the awful HTML front ends that barely hide COBOL.
In between are some great possibilities. A lot of legacy architectures lend themselves to SOA without even knowing it. Inexpensive hardware and storage mean that, in some cases, legacy client bits can be moved back to the datacenter and integrated into a middle tier. Other times, the clients can be done away with altogether if someone has the chutzpah to figure out the "old" protocols and implement them with modern tech.
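To make that concrete, here's a minimal sketch of what "moving legacy client bits into a middle tier" can look like in practice: translating the fixed-width records a legacy screen-oriented app traffics in into a JSON-friendly structure that modern services can consume. The field names, column positions, and sample record below are invented for illustration, not taken from any of the systems described here.

```python
import json

# Hypothetical fixed-width record layout from a legacy system.
# Each entry is (field_name, start_column, end_column).
FIELDS = [
    ("account_id", 0, 10),     # cols 0-9:   zero-padded account number
    ("name", 10, 30),          # cols 10-29: space-padded customer name
    ("balance_cents", 30, 40), # cols 30-39: balance in cents, zero-padded
]

def parse_legacy_record(line: str) -> dict:
    """Slice one fixed-width legacy record into a dict, trimming padding."""
    record = {name: line[start:end].strip() for name, start, end in FIELDS}
    record["balance_cents"] = int(record["balance_cents"])
    return record

raw = "0000123456John Q Customer     0000004200"
print(json.dumps(parse_legacy_record(raw)))
```

The unglamorous part -- figuring out what the columns actually mean in an undocumented system -- is exactly the business-specific work that no licensable product can do for you.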
While I mostly work with startups, I've been involved with at least three projects in the last five years that included some re-engineering of legacy systems. One of these systems was almost 40 years old and used emulation of custom hardware (and a custom physical network) just to communicate. Two of the three were barely documented, or not documented at all. And at least one was a total clusterfudge, in the sense that it hadn't aged and "crufted" naturally, but was the result of crackpot work that was known to be broken from the relatively recent start.
In all of these cases, it was possible to improve the architecture, performance, and functionality of the system without the mega-rewrite. And since the existing systems had tons of data and transactions running, the new software needed to run smoothly side-by-side with existing software, allowing a limited and careful transition.
Here's the kicker: in two out of the three cases, although the "new" software was a vast improvement, it was never allowed to take on a larger role in the enterprise outside of a niche for which it had been developed. A bunch of startup hotshots writing cutting-edge stuff was simply not in the Enterprise IT Legacy Technology Lifecycle Planning Regime and so, like the dismissal of Heron's steam engine in the first century, the new technology was regarded as a mere novelty.
There are some reasons for conservatism and stodginess around legacy systems. But there is also a ton of value waiting to be liberated as soon as some of that stodginess can be shed.