When I was making the switch from C and assembly to C++ I did quite a bit of reading on object oriented programming. It's hard to find material on OOP that doesn't praise the classical pillars of encapsulation, inheritance, and polymorphism. In the early years these three pillars were advertised as the solution to all the ills of software development. Object oriented programming was my first "silver bullet" experience in the software industry.
OOP has been successful. The majority of today's most popular programming languages support classical OOP techniques as first-class language features. The techniques have even pushed their way into languages like Perl and PHP, and developers, myself included, have used OOP to build frameworks, applications, and libraries for desktops, devices, and the web.
So then, why did I recently debate an old friend about the decline of OOP?
My friend is a die-hard OOP proponent. He recognizes some of the flaws in the classical OOP principle of inheritance, but for him OOP is the hammer for every nail. He argued that the most successful UI frameworks he's ever used are all built on classic OOP principles, and that objects therefore afford the most modular, reusable, and extensible software possible.
I had some time to think about this argument on a recent flight to Norway, and I think holding up frameworks like Swing and Silverlight (or, to go back in time, MFC and OWL) isn't a fair comparison. You can always find places where a certain paradigm is the best solution, and for application frameworks in general, and UI frameworks in particular, OOP might have found a sweet spot. A sweet spot, because UI frameworks represent an alternate reality: the architects and developers get to make the rules of the reality, then implement the code. Where else but in a UI framework can you find a workable inheritance hierarchy nine layers deep?
It's not that UI frameworks don't face constraints. There are performance constraints, memory constraints, video constraints. Still, UI frameworks are far away from the messy (and sometimes absurd) realities of the physical world. It is application code where IS-A relationships are difficult to find and break down quickly. It is application code where roles and responsibilities are difficult to classify and harden into code, because domain knowledge can span decades and domain experts will evolve the rules with every passing fiscal quarter.
It's in application code where I've found, over the last few years, that modularization and reuse come about more easily by going very big, or going very small. Big in the form of web services, where you can hide an entire platform behind well defined interfaces and standards. Small in the form of functions with well defined meanings, clean inputs and outputs, and nothing but a black box in-between.
Classes? They get stuck in the middle. There is no generally accepted standard, even within a single programming language like C#, on how to approach class design, or how to consume a class. Do I inherit from it? Create it with the constructor? Use a factory method? Use a factory object? Call into the base class method override before I do my work? Or after? Too many choices make simple chores complicated.
Look again at the list of the most popular programming languages and see how many languages support more than a single paradigm. It's not that OOP has failed, or is failing. It's just easy to see the other choices now that the silver has worn off the bullet.