Found in 23 comments on Hacker News
natch · 2025-09-05 · Original thread
Someone has to mention Working Effectively With Legacy Code, by Michael Feathers, still a fantastic book.

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

I'll open by saying I've only ever had bad experiences with complete rewrites, and those experiences have shaped my aversion to them.

"[Working Effectively with Legacy Code]" by Michael Feathers really helped me get through a situation like this.

My recommendation is not to try to understand the code per se, but to understand the business the code serves.

From there, over time, just start writing really high level end-to-end tests to represent what the business expects the codebase to do (i.e. starting at the top of the [test pyramid]). This ends up acting as your safety net (your "test harness").

Then it's less a matter of trying to understand what the code does, and becomes a question of what the code should do. You can iterate level by level into the test pyramid, documenting the code with tests and refactoring/improving the code as you go.
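
To make that concrete, a top-of-the-pyramid test can be little more than an HTTP call asserting a business outcome against a running instance of the system. This is only a sketch; the endpoint, payload, and expected status are made-up placeholders, not anything from a real codebase:

    // A business-level expectation ("an order can be placed") checked end-to-end
    // over HTTP, with no knowledge of the legacy internals. URL and payload are
    // placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import static org.junit.Assert.assertEquals;

    public class PlaceOrderEndToEndTest {
        @org.junit.Test
        public void placingAnOrderIsConfirmed() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://staging.example.com/api/orders"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"ABC-1\",\"qty\":1}"))
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            assertEquals(201, response.statusCode()); // the business expectation, not an implementation detail
        }
    }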

It's a long process (I'm about 4.5 years into it and still going strong), but it has allowed us to move fast on new features, with the by-product of continually improving the code base as we go.

[test pyramid]: https://martinfowler.com/bliki/TestPyramid.html
[Working Effectively with Legacy Code]: https://www.amazon.com/FEATHERS-WORK-EFFECT-LEG-CODE/dp/0131...

kat · 2018-08-23 · Original thread
FYI, the legacy code book is Working Effectively with Legacy Code by Michael Feathers. It's useful; I also strongly recommend it when you're feeling overwhelmed by a large, sprawling code base.

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

PaulHoule · 2017-11-22 · Original thread
You're definitely right that unit tests are a part of the solution.

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

can be read in a few different registers (as a case for what unit tests should be in a greenfield system, and as a guide to why and how to retrofit unit tests into a legacy one), but it makes that case pretty strongly. It can seem overwhelming to get unit tests into a legacy system, but the reward is large.

I remember working on a system that was absolutely awful but was salvageable because it had unit tests!

Also, generally, getting control of the build procedure is key to the scheduling issue -- I have seen many new projects where a team of people works on something and thinks all of the parts are good to go, only to find there is another six months of integration work, installer engineering, and other things you need to do to ship a product. Automation, documentation, and simplification are all bits of the puzzle, but if you want agility, you need to know how to go from source code to a product, and not every team does.

PaulHoule · 2017-10-04 · Original thread
If you have to write mocks by hand in the language itself, they will probably drive you insane.

Tools like mockito can make a big difference.
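
As a rough illustration of the difference (everything here is invented for the sketch -- PaymentGateway isn't from any real system), stubbing and verifying a collaborator takes a few lines with Mockito, where a hand-written mock would be a whole class:

    // Stub a collaborator and verify the interaction instead of hand-rolling a fake.
    import static org.mockito.Mockito.*;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class MockitoSketchTest {
        interface PaymentGateway { boolean charge(String account, double amount); }

        @Test
        public void stubAndVerifyACollaborator() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge(eq("acct-42"), anyDouble())).thenReturn(true);

            // ...exercise the code under test that depends on the gateway...
            assertTrue(gateway.charge("acct-42", 9.99));

            verify(gateway).charge("acct-42", 9.99);   // prove the interaction happened
        }
    }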

I worked on a project which was terribly conceived, specified, and implemented. My boss said that they shouldn't even have started it and shouldn't have hired the guy who wrote it! Because it had tests, however, it was salvageable, and I was able to get it into production.

This book

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

makes the case that unit tests should always run quickly, avoid external dependencies, and so on.

I do think a fast test suite is important, but there are some kinds of slower tests that can have a transformative impact on development:

* I wrote a "super hammer" test that smokes out a concurrent system for race conditions. It took a minute to run, but after that, I always knew that a critical part of the system did not have races (or if they did, they were hard to find)

* I wrote a test suite for a lightweight ORM system in PHP that would do real database queries. When the app was broken by an upgrade to MySQL, I had it working again in 20 minutes. When I wanted to use the same framework with MS SQL Server, it took about as long to port it.

* For deployment it helps to have an automated "smoke test" that will make sure that the most common failure modes didn't happen.
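
Here's roughly what that "super hammer" shape looks like. It's a sketch only: ConcurrentCounter stands in for whatever shared component you actually need to hammer, and the thread and task counts are arbitrary.

    // Many writers hammer one shared component; a lost update shows up as a wrong
    // final count. ConcurrentCounter is a stand-in for the real class under test.
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicLong;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class SuperHammerTest {
        static class ConcurrentCounter {               // stand-in for the class under test
            private final AtomicLong value = new AtomicLong();
            void increment() { value.incrementAndGet(); }
            long value() { return value.get(); }
        }

        @Test
        public void noLostUpdatesUnderHeavyConcurrency() throws Exception {
            ConcurrentCounter counter = new ConcurrentCounter();
            ExecutorService pool = Executors.newFixedThreadPool(32);
            int tasks = 100_000;
            CountDownLatch done = new CountDownLatch(tasks);
            for (int i = 0; i < tasks; i++) {
                pool.submit(() -> { counter.increment(); done.countDown(); });
            }
            assertTrue(done.await(1, TimeUnit.MINUTES));
            pool.shutdown();
            assertEquals(tasks, counter.value());      // any race shows up as a lost increment
        }
    }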

That said, TDD is most successful when you are in control of the system. When writing GUI code, the main uncertainty I've seen is mistrust of the underlying platform (today that could be, "Does it work in Safari?").

When it comes to servers and stuff, there is the issue of "can you make a test reproducible". For instance you might be able to make a "database" or "schema" inside a database with a random name and do all your stuff there. Or maybe you can spin one up in the cloud, or use Docker or something like that. It doesn't matter exactly how you do it, but you don't want to be the guy who nukes the production database (or another developer's or tester's database) because the build process has integration tests that use the same connection info they do.
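
One way to do the random-name trick, sketched with plain JDBC. The connection details, table, and DDL are placeholders, and DROP ... CASCADE is PostgreSQL syntax; adjust for your database:

    // Each run gets its own throwaway schema, so it can never collide with
    // production data or another developer's database. All names are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.UUID;

    public class ThrowawaySchema {
        public static void main(String[] args) throws Exception {
            String schema = "it_" + UUID.randomUUID().toString().replace("-", "");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/testdb", "test", "test");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE SCHEMA " + schema);
                st.execute("CREATE TABLE " + schema + ".orders (id INT PRIMARY KEY, total NUMERIC)");
                // ...point the integration tests at this schema and run them...
                st.execute("DROP SCHEMA " + schema + " CASCADE");
            }
        }
    }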


agentultra · 2017-06-29 · Original thread
> I've also seen people mangle well-factored but untestable code in the process of writing tests, which can be a tragedy when dealing with a legacy codebase that was written with insufficient testing but is otherwise well-designed.

Have you read Michael Feathers' Working Effectively with Legacy Code? [0]

His definition of legacy code is any code that has no test coverage. It's a black box. There are errors in it somewhere. It works for some inputs. But you cannot quantify either of those things just by "inhabiting the mind of the original developers." The only way to work effectively with such a code base in order to extend, maintain, or modify it is to bring it under test.

This is far more difficult with legacy code than with greenfield TDD, for the aforementioned reasons: there are unquantified errors and underspecified behaviours. You can't possibly do it in one sweeping effort, so the strategy is to accept that tests are useful and to add them with each change: write the test first, before making the change, and use it to prove the change is correct.

Slowly, over time, your legacy code base surfaces little islands of well tested code.
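
In practice those islands are usually characterization tests: you pin down what the code does today, whether or not that matches any spec. A rough sketch of the shape (LegacyPriceCalculator and the expected values are invented, captured by running the existing code):

    // A characterization test asserts current behaviour, not specified behaviour.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class LegacyPriceCalculatorCharacterizationTest {
        @Test
        public void discountBehaviourIsPinnedDown() {
            LegacyPriceCalculator calc = new LegacyPriceCalculator();   // invented class under test
            // Values below were captured from the running system, then frozen.
            assertEquals(90.0, calc.priceWithDiscount(100.0, "GOLD"), 0.001);
            assertEquals(100.0, calc.priceWithDiscount(100.0, "UNKNOWN"), 0.001);
        }
    }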

You have to be deliberate and careful. You have to think about what you're doing.

This is a much different experience from writing greenfield code, where TDD is effortless and drives you towards the answer.

[0] https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

yowlingcat · 2017-05-14 · Original thread
So as to be constructive, I'm going to reference a classic: Working Effectively With Legacy Code [0]. Here's a nice clip from an SO answer [1] paraphrasing it:

"To me, the most important concept brought in by Feathers is seams. A seam is a place in the code where you can change the behaviour of your program without modifying the code itself. Building seams into your code enables separating the piece of code under test, but it also enables you to sense the behaviour of the code under test even when it is difficult or impossible to do directly (e.g. because the call makes changes in another object or subsystem, whose state is not possible to query directly from within the test method).

This knowledge allows you to notice the seeds of testability in the nastiest heap of code, and find the minimal, least disruptive, safest changes to get there. In other words, to avoid making "obvious" refactorings which have a risk of breaking the code without you noticing - because you don't yet have the unit tests to detect that."
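
To make "seam" concrete, here's a minimal invented example of the subclass-and-override variety (InvoiceSender isn't from any real codebase): the protected method is the seam, and a test subclass uses it to sense the outcome without touching the network.

    // dispatch() is the seam: behaviour we can change from a test without editing send().
    class InvoiceSender {
        public void send(String customer, double amount) {
            String body = "Invoice for " + customer + ": " + amount;
            dispatch(body);                        // the seam
        }
        protected void dispatch(String body) {
            // imagine a real SMTP call here in production
            System.out.println("sending: " + body);
        }
    }

    // The test subclass overrides the seam to sense what would have been sent.
    class SensingInvoiceSender extends InvoiceSender {
        String lastBody;
        @Override protected void dispatch(String body) { lastBody = body; }
    }

    // In a test:
    //   SensingInvoiceSender sender = new SensingInvoiceSender();
    //   sender.send("ACME", 99.0);
    //   assertEquals("Invoice for ACME: 99.0", sender.lastBody);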

As you get more experience under your belt, you'll see this situation again and again: code that has become large, difficult to reason about or test, and that offers little direct business benefit to justify refactoring. But crucially, learning how to refactor as you go is a huge part of working effectively with legacy code, and by virtue of that, of maturing into a senior engineer -- to strain a leaky analogy, you don't accrue tech debt all at once, so why would it make sense to pay it off all at once? That only happens when there isn't a strong culture of periodically paying off tech debt as you go.

I'm not going to insinuate that it was necessarily wrong to solve the problem the way you did, and the desire to be proactive is certainly not something to criticize. But it wasn't necessarily right, either. Your leadership should have prevented something like this from occurring, because in all likelihood you wasted those extra hours on the naive assumption that extra hours equal extra productivity. They don't. You ought to aim for maximal results from minimal hours of work, so that the time you do spend goes into delivering results. And, unless you're paid by the hour rather than salaried, you're effectively getting paid less per hour.

So to recap: you're getting less pay, you're giving the company subpar results (by definition, because you're using more hours to achieve what a competent engineer could do in 40-hour workweeks, so you're 44% as efficient), and everyone loses a little bit. Thankfully, you still managed to get the job done, and because you gained authorship and ownership over the new part of the codebase, you were able to politically argue for better compensation. Good for you; you should always bargain for what you deserve. But just because you got a positive outcome doesn't mean you went about it in the most efficient way.

The best engineers (and I would argue workers in general) are efficient. They approach every engineering problem they can with solutions so simple and effective that they seem boring, reaching for the impressive stuff only when it's really needed, and with chagrin. If you can combine that with self-advocacy, you'll really be cooking with gas as far as your career is concerned. And it'll get you a lot further than the silly, childish delusion that more hours equal more results, or more pay. Solid work, solid negotiation skills, solid marketing skills, and solid communication skills earn you better pay. The rest is fluff.

[0] https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
[1] https://softwareengineering.stackexchange.com/questions/1220...

greenyoda · 2017-03-27 · Original thread
There are even books about dealing with legacy code. I've found this one to be useful:

Working Effectively with Legacy Code, by Michael Feathers

https://www.amazon.com/dp/0131177052

jestar_jokin · 2016-08-04 · Original thread
Check out the book "Working Effectively with Legacy Code", by Michael Feathers[0].

I believe the basic approach is to write tests to capture the current behaviour at the system boundaries - for a web application, this might take the form of automated end-to-end tests (Selenium WebDriver) - then, progressively refactor and unit test components and code paths. By the end of the process, you'll end up with a comprehensive regression suite, giving developers the confidence to make changes with impunity - whether that's refactoring to eliminate more technical debt and speed up development, or adding features to fulfill business needs.
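
As a rough illustration of what such a boundary-level test can look like with Selenium WebDriver (the URL, element ids, and page text are placeholders, not from any real app):

    // Pin down one user-visible behaviour at the system boundary before refactoring.
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class CheckoutBoundaryTest {
        @Test
        public void existingCheckoutFlowStillWorks() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://staging.example.com/checkout");
                driver.findElement(By.id("quantity")).sendKeys("2");
                driver.findElement(By.id("submit-order")).click();
                assertTrue(driver.getPageSource().contains("Order confirmed"));
            } finally {
                driver.quit();
            }
        }
    }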

This way, you can take a gradual, iterative approach to cleaning up the system, which should boost morale (a little bit of progress made every iteration), and minimises risk (you're not replacing an entire system at once).

I've used this approach to rewrite a Node.js API that was tightly coupled to MongoDB and migrate it to PostgreSQL.

[0] https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

agentultra · 2016-01-19 · Original thread
I've had to do this once. They don't teach you how to manage code like this! A friend gave me a copy of Working Effectively With Legacy Code[0], which helped me.

The gist of it: a strong suite of integration and unit tests. Isolate small code paths into logical units and test for equivalency.
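
"Test for equivalency" can be as blunt as running the old code path and the extracted unit over the same inputs and asserting they agree. A sketch with invented names (LegacyBilling and TaxCalculator aren't from any real project):

    // Golden-master style equivalence check: the extracted unit must agree with
    // the original code path for every input we try.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class TaxExtractionEquivalenceTest {
        @Test
        public void extractedTaxLogicMatchesLegacyPath() {
            TaxCalculator extracted = new TaxCalculator();             // new, isolated unit
            for (double amount = 0.0; amount <= 10_000.0; amount += 0.5) {
                assertEquals("amount=" + amount,
                             LegacyBilling.taxOn(amount),              // old path
                             extracted.taxOn(amount),
                             1e-9);
            }
        }
    }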

[0] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

lutorm · 2016-01-19 · Original thread
I like this book; it has a lot of tips for situations like these:

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

josteink · 2015-07-22 · Original thread
> Perhaps you should try writing unit tests after you write your code and then you can come back and add value to the conversation.

Nice snark there. Feel free to keep it to yourself.

To add actual value to the conversation (as opposed to your contribution), I can very much recommend the book "Working Effectively with Legacy Code"[1] for how to handle unit-testing in the scenario of existing "legacy" code-bases.

It's full of useful tips and methods for getting testing in place "anywhere", and it takes a pragmatic (as opposed to religious) approach to getting it done.

To spark some interest: The book defines "legacy code" as any code not covered by unit-tests.

It may seem dated (it's from 2004 and all), but it's been the most useful book I've read on unit testing by far.

[1] http://www.amazon.com/gp/product/0131177052/ref=as_li_tl?ie=...

valbaca · 2015-04-09 · Original thread
I just finished Pragmatic Thinking and Learning: Refactor Your Wetware (http://www.amazon.com/gp/product/B00A32NYYE)

Next I'm picking up Working Effectively with Legacy Code (http://www.amazon.com/dp/0131177052). It's been in my reading list for years and I can finally get to it!

wyclif · 2015-03-21 · Original thread
Working Effectively With Legacy Code by Michael Feathers http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

Debugging with GDB: The GNU Source-Level Debugger by Stallman, Pesch, and Shebs http://www.amazon.com/Debugging-GDB-GNU-Source-Level-Debugge...

The Art of Debugging with GDB, DDD, and Eclipse by Matloff & Salzman http://www.amazon.com/gp/product/1593271743

igorgue · 2015-02-17 · Original thread
Also, read this book: http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

It helps a lot and teaches you how to use grep and other tools (many more that I no longer remember) to search and find your way through legacy code.

greenyoda · 2015-02-14 · Original thread
See if you can talk to the people in your company who hired the contractor. They might at least be able to give you a high-level description of what the software is supposed to do and how it's supposed to work. They might even have specs that they prepared for the contractor or other design documentation.

If the contractor's software has no tests or is poorly written, it's going to be hard to add features to it or refactor it. You might want to read Working Effectively with Legacy Code[1] by Michael Feathers, which describes how you can get a handle on large bodies of legacy software.

[1] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

shoo · 2014-11-11 · Original thread
This reminds me of a post by Michael Feathers, titled "The carrying cost of code" [1]. Feathers wrote the book about legacy code [2]. I think he makes approximately the same point:

> If you are making cars or widgets, you make them one by one. They proceed through the manufacturing process and you can gain very real efficiencies by paying attention to how the pieces go through the process. Lean Software Development has chosen to see tasks as pieces. We carry them through a process and end up with completed products on the other side.

> It's a nice view of the world, but it is a bit of a lie. In software development, we are essentially working on the same car or widget continuously, often for years. We are in the same soup, the same codebase. We can't expect a model based on independence of pieces in manufacturing to be accurate when we are working continuously on a single thing (a codebase) that shows wear over time and needs constant attention.

[1] - http://michaelfeathers.typepad.com/michael_feathers_blog/201...
[2] - http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

loumf · 2014-08-27 · Original thread
This is the standard recommended book:

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

I write new tests in any area I'm going to be working in.

greenyoda · 2014-06-16 · Original thread
One book that might give you some useful advice is "Working Effectively with Legacy Code" by Michael Feathers:

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

josteink · 2014-04-15 · Original thread
> What about putting in effort into writing extensive test suites

Easier said than done. Writing test-suites for a codebase which never had a test-suite is a million times harder than writing a test-suite for new, fresh code.

In fact, it's probably easier to start over than to refactor the code to be testable in the first place, but some people might argue that would be a wee bit drastic. So I'm not saying it can't be done, just that it takes a very significant effort.

If anyone should still feel like doing something like this, I can very much recommend the following book for advice and morale boost:

http://www.amazon.com/gp/product/0131177052/ref=as_li_ss_tl?...

(Disclaimer: affiliate link)

amboar · 2013-10-17 · Original thread
At work I have set up a nightly Jenkins job that merges all verified, unsubmitted patches from Gerrit and processes the resulting history with SonarQube[1].

This has revealed many bad habits and allowed us to correct them through review comments before changes are submitted. It's also shown me just how "legacy"[2] our code-base is, but thanks to SonarQube (which runs PMD and FindBugs as part of its analysis) we're improving.

[1] http://www.sonarqube.org
[2] http://www.amazon.com/gp/aw/d/0131177052

hvs · 2013-09-04 · Original thread
Great article that is still relevant today (sadly, I remember reading it when it first came out). If you really, really feel the need to rewrite from scratch, I recommend instead picking up a copy of "Working Effectively with Legacy Code" by Michael Feathers [1]. It will give you ways to improve those terrible code bases while not throwing out the existing code. Plus you'll still get that "new car smell" of working on your code.

[1] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...