I realized I've been putting programs together on a regular basis for 25 years now. I distinctly remember some of the earliest programs I worked on, around October 1983, and the fourth-grade teacher who let me get away with a lot more access to my school's computers than was the norm. When I somehow convinced my parents later that year to get me a machine ... things got even more out of control.
I worked with a bunch of the classic 80s "home computers": Tandy Color Computers (1, 2, and 3) and Apple II (and II+, IIe, IIc, IIgs, ...), and some not-so-classics like a CP/M machine that ran via an add-in board (with its own Z80) inside an Apple.
The languages and tools were primitive, performance was always a concern, and most serious programs required at least some assembly-language component for speed and hardware access. Even when they didn't, compatibility across computer types was nearly impossible.
What I find interesting is the question of why these things seem similar, and I have a hypothesis.
I have noticed that many members of my peer group, who started programming at a young age on these early microcomputers, have an affinity for tools, structured languages, and, to a lesser extent, models and processes. I wonder whether this affinity is a kind of reaction against the difficulties of programming those early microcomputers in what can only be called a forced-agile approach, where debugging and testing took an inordinate proportion of the time, "releases" were frequent, and the only evidence of working software was ... working software.
I will be the first to admit that my experiences in the years before the C/DOS/Windows/C++/Mac era have made me, as a kind of reaction, appreciative of -- and, depending upon your perspective, perhaps overly tolerant of -- process, tools, models, definitions, infrastructure, and the like.
Let's stretch the hypothesis a little further: Gen-Y, who missed this era in computing (I would say it really ended between 1987 and 1989), typically had their first experience with coding in a structured, well-documented, "standardized" ecosystem -- whether that was C on DOS, or Pascal on a Mac, or, for those still younger, perhaps C++ on Windows or even Java on Linux.
Depending on their age today, and the age at which they first took up coding, this generation had compilation, linking, structured programming, OS APIs, and perhaps even OO and/or some process from the beginning. For them, it is possible to imagine, the overhead of all that structure and process was something to rebel against, or at least a headache worth questioning.
Hence their enthusiasm for interpreted languages and agile approaches, and their general disdain for processes with formal artifacts or extensive before-the-fact specs.
That's the hypothesis.
A simple study could test at least the correlation by gathering data on a person's current age, the age at which they started coding, the language/machine/environment they were using, the benefits or disadvantages they perceived in those years, and their interests today. Even the bare correlation data would be fascinating.
Considering that these approaches are often "best fits" for rather different kinds of development projects, knowing what sort of prejudices ("everything looks like a nail") individuals bring to the debate might add a lot of context to decisions about how to make software projects more successful, and how to compose the right teams and resources to execute them.