Found in 48 comments on Hacker News
PaulHoule · 2024-02-15 · Original thread
Note

> which is not stable and has no automated/unit tests.

as a definite sign. See

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

for a good treatment of the "whole picture" of applying tests to legacy (and new) systems including the politics.

But yeah, you are facing the classic problem of maintenance programming: you have to make changes to a system that you don't completely trust or understand. It seems like slow going no matter what you do. You've got the choice of (1) developing a partial understanding of bits and pieces of the system so you can make surgical changes to fix problems vs. (2) rebuilding large parts of the system. If you do enough of (1) you will have some sense of what (2) entails; if you haven't done enough of (1), you'll never believe how many things a system that's been around for 5+ years is doing that you don't know about.

I would also point to

https://www.redhat.com/architect/pros-and-cons-strangler-arc...

as advanced thinking on this issue. If you can do anything, get your team writing tests. It's not a given that people tasked with writing tests will write good tests, but writing good tests for a legacy system that you've struggled with can be an almost religious revelation.

It also seems like a bad smell that you're voting on things; juniors shouldn't have the same level of input on this as you do if you're the lead.

PaulHoule · 2023-04-20 · Original thread
See "Ashby's Law of Requisite Variety"

http://pespmc1.vub.ac.be/REQVAR.html

which, practically, means you need to bring different tools to bear depending on the nature of the problem.

At some phases of some projects, almost all the uncertainty involved is around the behavior of the framework you're working inside, and you can unit test until you are blue in the face and it won't do you any good.

Some code is straightforward and barely needs tests; other code (say, string parsing) involves well-defined functions with inputs and outputs that are tricky to implement, and starting with test cases is really the way to go. (Get an IDE with a good visual debugger if you don't already have one; tests together with the debugger are awesome.)
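For the string-parsing case, a minimal test-first sketch might look like this (JUnit 5 assumed; VersionParser is a made-up example, not anything from the thread):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Pin down the tricky inputs and outputs first, then implement until green.
    class VersionParserTest {

        @Test
        void parsesMajorMinorPatch() {
            assertArrayEquals(new int[]{1, 4, 2}, VersionParser.parse("1.4.2"));
        }

        @Test
        void toleratesLeadingVPrefix() {
            assertArrayEquals(new int[]{2, 0, 0}, VersionParser.parse("v2.0.0"));
        }

        @Test
        void rejectsGarbage() {
            assertThrows(IllegalArgumentException.class,
                         () -> VersionParser.parse("not-a-version"));
        }
    }

    // Minimal implementation written to satisfy the tests above.
    class VersionParser {
        static int[] parse(String s) {
            String trimmed = s.startsWith("v") ? s.substring(1) : s;
            String[] parts = trimmed.split("\\.");
            if (parts.length != 3) {
                throw new IllegalArgumentException("expected major.minor.patch: " + s);
            }
            int[] result = new int[3];
            for (int i = 0; i < 3; i++) {
                try {
                    result[i] = Integer.parseInt(parts[i]);
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("non-numeric component: " + s, e);
                }
            }
            return result;
        }
    }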

If an algorithm is tricky at all, I will look it up in an algorithms book. I have definitely made formal proofs that an algorithm worked when I wasn't sure.

This book, I think, makes the best case for unit tests I've ever seen:

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

and particularly emphasizes that unit tests need to be really fast because you're going to run them hundreds or thousands of times.

However, there are needs for testing that don't fit that model, and you have to fit them into your process. I wrote a multithreaded "service" and also a "super hammer" test that spawned 1000 threads and tried to provoke a race condition for 40 seconds, which is way too long to be part of your ordinary build process. You can spend anywhere from 2 minutes to 2 months training a neural network, and you are never 100% sure that the model you built is good (able to be put in front of customers) unless you test it. The pros frequently build several models and pick the best. It's not something you can afford to do every time you "mvn install", however, so you have to develop a process that addresses the real problems in front of you and does not get bogged down trying things inappropriate for your problem.
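A minimal sketch of that kind of "hammer" test, kept out of the ordinary build (the Counter class and the 100-thread/10,000-iteration numbers are illustrative, not the original 1000-thread/40-second setup; JUnit 5 assumed):

    import org.junit.jupiter.api.Test;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import static org.junit.jupiter.api.Assertions.*;

    // Many threads pound on a shared component; the assertion at the end
    // checks an invariant that a race condition would break.
    class CounterHammerTest {

        static class Counter {
            private long value;
            synchronized void increment() { value++; }
            synchronized long get() { return value; }
        }

        @Test
        void concurrentIncrementsAreNotLost() throws Exception {
            final int threads = 100;
            final int perThread = 10_000;
            Counter counter = new Counter();
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            CountDownLatch start = new CountDownLatch(1);

            for (int i = 0; i < threads; i++) {
                pool.submit(() -> {
                    start.await();               // release all threads at once
                    for (int j = 0; j < perThread; j++) {
                        counter.increment();
                    }
                    return null;                 // Callable, so await() may throw
                });
            }
            start.countDown();
            pool.shutdown();
            assertTrue(pool.awaitTermination(60, TimeUnit.SECONDS));
            assertEquals((long) threads * perThread, counter.get());
        }
    }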

ISL · 2022-09-18 · Original thread
I'm no expert, but the advice in Working Effectively with Legacy Code has been helpful on occasion:

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe....

Fully apprising management of the situation in a way they can understand may also reap long-term dividends.

PaulHoule · 2022-08-13 · Original thread
This book explains the theory and practice of TDD better than anything else:

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

In particular, that book considers the problem of retrofitting an existing system to have good tests, so it covers all the bases for a system that runs in production and is in front of customers. Feathers is a particular advocate for fast tests; he thinks a 1 ms test is a little slow.

A counter to that is that some subsystems are prone to race conditions, and a necessary test could be to stress the subsystem with 200 threads. Such an automated test is of great value but takes 30 seconds to run, which is quick to do once but adds up to slower development cycles when you make any change.
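One way to reconcile the two (a common convention, not something Feathers prescribes) is to tag the slow test and exclude that tag from the default run, for example with JUnit 5:

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    // Tagged so the everyday fast test run can exclude it; the stress run
    // is executed on demand or in a scheduled CI job instead.
    class SubsystemStressTest {

        @Tag("stress")
        @Test
        void hammerWith200Threads() {
            // the ~30-second, 200-thread stress run would live here
        }
    }

The build tool can then be configured to skip the "stress" tag on everyday runs and include it in a nightly or on-demand job.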

kevinmhickey · 2022-01-19 · Original thread
Very carefully. But seriously, check out the book "Working Effectively With Legacy Code" by Feathers. It has helped me tremendously in multiple refactorings.

His thesis is that Legacy Code is code without tests. So to make it not Legacy, you need to add tests, then refactor safely. Then he explains the very complex patterns that can arise and how to deal with them.
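In practice the first tests added are usually characterization tests: they pin down what the code currently does rather than what a spec says it should do. A minimal sketch with a made-up legacy pricing routine (JUnit 5 assumed):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Stand-in for some existing legacy code; at this stage you test it,
    // you don't touch it.
    class LegacyPricing {
        static double totalWithDiscount(double total, int items) {
            if (items > 10) return total - total * 0.065;   // quirky rule nobody remembers
            return total;
        }
    }

    class LegacyPricingCharacterizationTest {

        @Test
        void discountForLargeOrdersStaysTheSame() {
            // The expected value 93.5 was captured by running the old code
            // once and writing down its answer, not taken from a spec.
            assertEquals(93.5, LegacyPricing.totalWithDiscount(100.0, 11), 0.001);
        }
    }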

This book is one of a few that changed my career.

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

rmb177 · 2021-03-26 · Original thread
Working Effectively with Legacy Code by Michael Feathers is a good resource for how to introduce testing code into an existing system:

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

bdavisx · 2020-09-21 · Original thread
If you're looking for help fixing the mess you are dealing with, find the book Working Effectively with Legacy Code - https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe....

You might also want to read Refactoring to Patterns, but the legacy book is more important to start with.

xadoc · 2020-07-11 · Original thread
If the code has tests, I would start by looking at those tests.

If it has no tests, then I would slowly try to build tests to document the functionality that I need. In your case, being Angular, that might mean having simple HTML pages with the smallest module that you need.

How to find things? If you're on Windows, try AstroGrep (http://astrogrep.sourceforge.net/) to quickly search and jump around in the code; on any system, I use VS Code for similar functionality. Also learn to use command-line find/grep.

The book "Working Effectively with Legacy Code" also helped me be more comfortable navigating and changing large code bases, in a long term view I recommend this book to every developer https://www.amazon.co.uk/Working-Effectively-Legacy-Michael-...

Lastly, I would raise this, because the company might not be aware they are buying a low-quality framework that maybe ticks all the boxes in the contract but is in effect impossible for their current developers (you) to use; there might be other people with more experience in that niche who would be able to help. In the private community, maybe some people would be able to accept a short contract to help train you.

davidjnelson · 2020-03-27 · Original thread
You can also skim chapters in this book. Best book I've read on the topic, highly recommended: https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

kolinko · 2019-05-07 · Original thread
Yes. By the way - "Working Effectively with Legacy Code".

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

If you start from scratch you may bump into the same edge cases that the original writers bumped into, and end up with code that is not much better than the original - even if the original is 2 years out of date.

I’m sure there were cases when writing from scratch was a good call, but I don’t remember hearing about it.

kbouck · 2019-03-16 · Original thread
I've heard the book Working Effectively with Legacy Code [0] recommended for strategies to bring order to these kinds of projects. Haven't read it myself yet...

[0]: https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

edem · 2019-02-28 · Original thread
I'd suggest the book [Working Effectively with Legacy Code](https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...).

I'll open by saying I've only ever had bad experiences with complete rewrites, and those experiences have shaped my aversion to them.

"[Working Effectively with Legacy Code]" by Michael Feathers really helped me get through a situation like this.

My recommendation is not to try to understand the code per se, but to understand the business that the code was being used in/by.

From there, over time, just start writing really high level end-to-end tests to represent what the business expects the codebase to do (i.e. starting at the top of the [test pyramid]). This ends up acting as your safety net (your "test harness").

Then it's less a matter of trying to understand what the code does, and becomes a question of what the code should do. You can iterate level by level into the test pyramid, documenting the code with tests and refactoring/improving the code as you go.

It's a long process (I'm about 4.5 years into it and still going strong), but it allowed us to move fast while developing new features with a by-product of continually improving the code base as we went.

[test pyramid]: https://martinfowler.com/bliki/TestPyramid.html
[Working Effectively with Legacy Code]: https://www.amazon.com/FEATHERS-WORK-EFFECT-LEG-CODE/dp/0131...

kat · 2018-08-23 · Original thread
FYI, the legacy code book is Working Effectively with Legacy Code by Michael Feathers. It's useful; I also strongly recommend it when you're feeling overwhelmed by a large, sprawling code base.

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

arthurjj · 2018-03-27 · Original thread
The first edition suffers from being a victim of its own success. Much of it has become common wisdom, and many of the refactoring techniques now have IDE support.

"Working Effectively with Legacy Code"^1 is usually what I recommend instead. It felt like a more up to date approach

More generally "The Pragmatic Programmer"^2 is a classic for a reason. But from you're comment you've probably already read it.

1. https://amzn.to/2I4rIWP
2. https://amzn.to/2GfxVTa

skittleson · 2018-02-11 · Original thread
Working Effectively with Legacy Code (http://amzn.to/2CazEm5) and The Design of Everyday Things (http://amzn.to/2H23a0R)

Both have had a huge impact on how I work with and design code. Trying to explain these concepts is hard without context. Sometimes I just copy/paste the sections I think they could benefit from.

bloat · 2018-01-25 · Original thread
I would try and clean up the bits I was working on.

This is a good book on the topic of refactoring a large code base with no tests:

https://www.amazon.co.uk/Working-Effectively-Legacy-Michael-...

PaulHoule · 2017-11-22 · Original thread
You're definitely right that unit tests are a part of the solution.

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

can be read in a few different registers (making a case for what unit tests should be in a greenfield system, or why and how to backfit unit tests into a legacy system), but it makes that case pretty strongly. It can seem overwhelming to get unit tests into a legacy system, but the reward is large.

I remember working on a system that was absolutely awful but was salvageable because it had unit tests!

Also, generally, getting control of the build procedure is key to the scheduling issue -- I have seen many new projects where a team of people work on something and think all of the parts are good to go, but you find there is another six months of integration work, installer engineering, and other things you need to do to ship a product. Automation, documentation, and simplification are all bits of the puzzle, but if you want agility, you need to know how to go from source code to a product, and not every team does.

PaulHoule · 2017-10-04 · Original thread
If you have to write mocks in the native language, mocks will probably drive you insane.

Tools like Mockito can make a big difference.
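For instance, with a made-up PaymentGateway collaborator, stubbing with Mockito boils down to a couple of lines (JUnit 5 and Mockito assumed):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    // Made-up collaborator that would be tedious to stub by hand.
    interface PaymentGateway {
        boolean charge(String customerId, long cents);
    }

    class Checkout {
        private final PaymentGateway gateway;
        Checkout(PaymentGateway gateway) { this.gateway = gateway; }
        String placeOrder(String customerId, long cents) {
            return gateway.charge(customerId, cents) ? "CONFIRMED" : "DECLINED";
        }
    }

    class CheckoutTest {
        @Test
        void declinedChargeProducesDeclinedOrder() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge("cust-1", 500L)).thenReturn(false);

            assertEquals("DECLINED", new Checkout(gateway).placeOrder("cust-1", 500L));
            verify(gateway).charge("cust-1", 500L);
        }
    }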

I worked on a project which was terribly conceived, specified, and implemented. My boss said that they shouldn't even have started it and shouldn't have hired the guy who wrote it! Because it had tests, however, it was salvageable, and I was able to get it into production.

This book

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

makes the case that unit tests should always run quickly, not depend on external dependencies, etc.

I do think a fast test suite is important, but there are some kinds of slower tests that can have a transformative impact on development:

* I wrote a "super hammer" test that smokes out a concurrent system for race conditions. It took a minute to run, but after that, I always knew that a critical part of the system did not have races (or if they did, they were hard to find)

* I wrote a test suite for a lightweight ORM system in PHP that would do real database queries. When the app was broken by an upgrade to MySQL, I had it working again in 20 minutes. When I wanted to use the same framework with MS SQL Server, it took about as long to port it.

* For deployment it helps to have an automated "smoke test" that will make sure that the most common failure modes didn't happen.

That said, TDD is most successful when you are in control of the system. In writing GUI code, often the main uncertainty I've seen is mistrust of the underlying platform (today that could be, "Does it work in Safari?").

When it comes to servers and stuff, there is the issue of "can you make a test reproducible?" For instance, you might be able to make a "database" or "schema" inside a database with a random name and do all your stuff there. Or maybe you can spin one up in the cloud, or use Docker or something like that. It doesn't matter exactly how you do it, but you don't want to be the guy who nukes the production database (or another developer's or tester's database) because the build process has integration tests that use the same connection info as them.
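A rough sketch of the random-schema idea over plain JDBC (the PostgreSQL connection string, credentials, and table are placeholders; point it at a dedicated test server, never production):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.UUID;

    // Each test run gets its own throwaway schema, so it can never collide
    // with production data or with another developer's test run.
    public class ThrowawaySchemaExample {
        public static void main(String[] args) throws Exception {
            String schema = "test_" + UUID.randomUUID().toString().replace("-", "");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/testdb", "test_user", "test_pass")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE SCHEMA " + schema);
                    st.execute("CREATE TABLE " + schema + ".orders (id serial PRIMARY KEY, total numeric)");
                    // ... run the integration tests against this schema ...
                } finally {
                    try (Statement st = conn.createStatement()) {
                        st.execute("DROP SCHEMA IF EXISTS " + schema + " CASCADE");
                    }
                }
            }
        }
    }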


W0lf · 2017-06-05 · Original thread
I've gathered all the book titles in this thread and created Amazon affiliate links (if you don't mind; otherwise you still have all the titles together :-) )

A Pattern Language, Alexander and Ishikawa and Silverstein http://amzn.to/2s9aSSc

Advanced Programming in the Unix Environment , Stevens http://amzn.to/2qPOMjN

Algorithmics: the Spirit of Computing, Harel http://amzn.to/2rW5FNS

Applied Cryptography, Schneier http://amzn.to/2rsULxS

Clean Code, Martin http://amzn.to/2sIOWtQ

Clean Coder, Martin http://amzn.to/2rWgbEP

Code Complete, McConnell http://amzn.to/2qSUIwE

Code: The Hidden Language of Computer Hardware and Software, Petzold http://amzn.to/2rWfR9d

Coders at Work, Seibel http://amzn.to/2qPCasZ

Compilers: Principles, Techniques, & Tools, Aho http://amzn.to/2rCSUVA

Computer Systems: A Programmer's Perspective, O'Hallaron and Bryant http://amzn.to/2qPY5jH

Data Flow Analysis: Theory and Practice, Khedker http://amzn.to/2qTnSvr

Dependency Injection in .NET, Seemann http://amzn.to/2rCz0tV

Domain Driven Design, Evans http://amzn.to/2sIGM4N

Fundamentals of Wireless Communication, Tse and Viswanath http://amzn.to/2rCTmTM

Genetic Programming: An Introduction, Banzhaf http://amzn.to/2s9sdut

Head First Design Patterns, Freeman and Robson http://amzn.to/2rCISUB

Implementing Domain-Driven Design, Vernon http://amzn.to/2qQ2G5u

Introduction to Algorithms, CLRS http://amzn.to/2qXmSBU

Introduction to General Systems Thinking, Weinberg http://amzn.to/2qTuGJw

Joy of Clojure, Fogus and Houser http://amzn.to/2qPL4qr

Let over Lambda, Hoyte http://amzn.to/2rWljcp

Operating Systems: Design and Implementation, Tanenbaum http://amzn.to/2rKudsw

Parsing Techniques, Grune and Jacobs http://amzn.to/2rKNXfn

Peopleware: Productive Projects and Teams, DeMarco and Lister http://amzn.to/2qTu86F

Programming Pearls, Bentley http://amzn.to/2sIRPe9

Software Process Design: Out of the Tar Pit, McGraw-Hill http://amzn.to/2rVX0v0

Software Runaways, Glass http://amzn.to/2qT2mHn

Sorting and Searching, Knuth http://amzn.to/2qQ4NWQ

Structure and Interpretation of Computer Programs, Abelson and Sussman http://amzn.to/2qTflsk

The Art of Unit Testing, Osherove http://amzn.to/2rsERDu

The Art of Unix Programming, ESR http://amzn.to/2sIAXUZ

The Design of Design: Essays from a Computer Scientist, Brooks http://amzn.to/2rsPjev

The Effective Engineer, Lau http://amzn.to/2s9fY0X

The Elements of Style, Strunk and White http://amzn.to/2svB3Qz

The Healthy Programmer, Kutner http://amzn.to/2qQ2MtQ

The Linux Programming Interface, Kerrisk http://amzn.to/2rsF8Xi

The Mythical Man-Month, Brooks http://amzn.to/2rt0dAR

The Practice of Programming, Kernighan and Pike http://amzn.to/2qTje0C

The Pragmatic Programmer, Hunt and Thomas http://amzn.to/2s9dlvS

The Psychology of Computer Programming, Weinberg http://amzn.to/2rsPypy

Transaction Processing: Concepts and Techniques, Gray and Reuter http://amzn.to/

Types and Programming Languages, Pierce http://amzn.to/2qT2d6G

Understanding MySQL Internals, Pachev http://amzn.to/2svXuFo

Working Effectively with Legacy Code, Feathers http://amzn.to/2sIr09R

Zen of Graphics Programming, Abrash http://amzn.to/2rKIW6Q

taude · 2017-05-30 · Original thread
This is a good high-level overview of the process. I highly recommend that engineers working in the weeds read "Working Effectively with Legacy Code" [1], as it has a ton of patterns in it that you can implement, and more detailed strategies for how to do some of the code changes hinted at in this article.

[1] https://www.safaribooksonline.com/library/view/working-effec...

yowlingcat · 2017-05-14 · Original thread
So as to be constructive, I'm going to reference a classic: Working Effectively with Legacy Code [0]. Here's a nice clip from an SO answer [1] paraphrasing it:

"To me, the most important concept brought in by Feathers is seams. A seam is a place in the code where you can change the behaviour of your program without modifying the code itself. Building seams into your code enables separating the piece of code under test, but it also enables you to sense the behaviour of the code under test even when it is difficult or impossible to do directly (e.g. because the call makes changes in another object or subsystem, whose state is not possible to query directly from within the test method).

This knowledge allows you to notice the seeds of testability in the nastiest heap of code, and find the minimal, least disruptive, safest changes to get there. In other words, to avoid making "obvious" refactorings which have a risk of breaking the code without you noticing - because you don't yet have the unit tests to detect that.".
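A tiny illustration of one kind of seam (subclass-and-override, with made-up class names; JUnit 5 assumed): the test changes behaviour at the seam without editing the production method itself.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Made-up legacy class: the awkward dependency (sending mail) sits behind
    // an overridable method, and that method is the seam.
    class InvoiceProcessor {
        int processedCount = 0;

        void process(String invoice) {
            processedCount++;
            sendConfirmationEmail(invoice);     // the seam
        }

        protected void sendConfirmationEmail(String invoice) {
            // In the real system this would talk to an SMTP server.
            throw new IllegalStateException("no mail server available in tests");
        }
    }

    class InvoiceProcessorTest {
        @Test
        void countsInvoicesWithoutSendingMail() {
            // Change behaviour at the seam instead of modifying process().
            InvoiceProcessor processor = new InvoiceProcessor() {
                @Override
                protected void sendConfirmationEmail(String invoice) { /* no-op */ }
            };
            processor.process("INV-42");
            assertEquals(1, processor.processedCount);
        }
    }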

As you get more experience under your belt, you'll begin to see these situations again and again: code becoming large, difficult to reason about or test, and similarly having low direct business benefit for refactoring. But crucially, learning how to refactor as you go is a huge part of working effectively with legacy code and, by virtue of that, of maturing into a senior engineer -- to strain a leaky analogy, you don't accrue tech debt all at once, so why would it make sense to pay it off all at once? The only reason that would occur is if you didn't have a strong culture of periodically paying off tech debt as you went along.

I'm not going to insinuate that it was necessarily wrong that you decided to solve the problem as you did, and the desire to be proactive about it is certainly not something to be criticized. But it wasn't necessarily right, either. Your leadership should have prevented something like this from occurring, because in all likelihood you wasted those extra hours and naively thought that extra hours equal extra productivity. They don't. You ought to aim for maximal results for minimal hours of work, so that you can spend as much time as you can delivering results. And, unless you're getting paid by the hour instead of salaried, you're actually getting less pay. So to recap: you're getting less pay, you're giving the company subpar results (by definition, because you're using more hours to achieve what a competent engineer could do with only 40-hour workweeks, so you're 44% as efficient), and everyone's losing a little bit. Thankfully, you still managed to get the job done, and because you were able to gain authorship and ownership over the new part of the codebase, you were able to politically argue for better compensation. Good for you; you should always bargain for what you deserve. But just because you got a more positive outcome doesn't mean you went about it in the most efficient way.

The best engineers (and I would argue workers in general) are efficient. They approach every engineering problem they can with solutions so simple and effective that they seem boring, only reaching for the impressive stuff when it's really needed, and with chagrin. If you can combine that with self-advocacy, you'll really be cooking with gas as far as your career is concerned. And it'll get you a lot further than this silly, childish delusion that more hours equals more results, or more pay. Solid work, solid negotiation skills, solid marketing skills and solid communication skills earn you better pay. The rest is fluff.

[0] https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
[1] https://softwareengineering.stackexchange.com/questions/1220...

greenyoda · 2017-03-27 · Original thread
There are even books about dealing with legacy code. I've found this one to be useful:

Working Effectively with Legacy Code, by Michael Feathers

https://www.amazon.com/dp/0131177052

jestar_jokin · 2016-08-04 · Original thread
Check out the book "Working Effectively with Legacy Code", by Michael Feathers[0].

I believe the basic approach is to write tests to capture the current behaviour at the system boundaries - for a web application, this might take the form of automated end-to-end tests (Selenium WebDriver) - then progressively refactor and unit test components and code paths. By the end of the process, you'll end up with a comprehensive regression suite, giving developers the confidence to make changes with impunity - whether that's refactoring to eliminate more technical debt and speed up development, or adding features to fulfill business needs.
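A minimal sketch of such a boundary-level test, assuming Selenium WebDriver with ChromeDriver and a made-up login page (the URL, element ids, and expected text are placeholders):

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Pinned at the system boundary: the test only cares about what the user
    // sees, so it keeps passing while the internals are refactored underneath.
    class LoginFlowTest {

        private final WebDriver driver = new ChromeDriver();

        @Test
        void existingUserCanLogIn() {
            driver.get("https://staging.example.com/login");        // placeholder URL
            driver.findElement(By.id("username")).sendKeys("test-user");
            driver.findElement(By.id("password")).sendKeys("test-password");
            driver.findElement(By.id("submit")).click();
            assertTrue(driver.getPageSource().contains("Welcome"));
        }

        @AfterEach
        void tearDown() {
            driver.quit();
        }
    }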

This way, you can take a gradual, iterative approach to cleaning up the system, which should boost morale (a little bit of progress made every iteration), and minimises risk (you're not replacing an entire system at once).

I've used this approach to rewrite a Node.js API that was tightly coupled to MongoDB, and migrated it to PostgreSQL.

[0] https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

bigethan · 2016-03-28 · Original thread
This article is great.

Similarly, Working Effectively with Legacy Code by Michael Feathers (http://amzn.to/1UxwVdL) is a great programming book. I appreciate it because it's really nothing but patterns for dealing with bad code (mostly Java, but most of it translates to other languages). Very little why (which I already know), lots of "how to fix X", aka a great signal-to-noise ratio.

lutorm · 2016-01-19 · Original thread
I like this book; it has a lot of tips for situations like these:

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

HerpDerpLerp · 2015-11-02 · Original thread
I am not sure the above idea is mentioned by Michael Feathers in his amazing book "Working Effectively with Legacy Code", but it is a great idea, and combined with the things that Michael does cover it will do you a lot of good!

http://www.amazon.co.uk/Working-Effectively-Legacy-Michael-F...

shoo · 2015-07-29 · Original thread
> > My own preference for the answer is Uncle Bob's description, which is this: technical debt is any production code that does not have (good) tests.

> That's certainly an example of technical debt.

Agreed, it is not the only example, but perhaps it is a good one, as that is a particularly important form of debt that makes the code harder to safely change. I.e. it is a form of technical debt that makes it more expensive to pay off other kinds of technical debt.

Curiously, Michael Feathers has a similar definition of legacy code [1]:

> To me, legacy code is simply code without tests.

[1] http://www.amazon.com/dp/0131177052

valbaca · 2015-04-09 · Original thread
I just finished Pragmatic Thinking and Learning: Refactor Your Wetware (http://www.amazon.com/gp/product/B00A32NYYE)

Next I'm picking up Working Effectively with Legacy Code (http://www.amazon.com/dp/0131177052). It's been in my reading list for years and I can finally get to it!

wyclif · 2015-03-21 · Original thread
Working Effectively With Legacy Code by Michael Feathers http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

Debugging with GDB: The GNU Source-Level Debugger by Stallman, Pesch, and Shebs http://www.amazon.com/Debugging-GDB-GNU-Source-Level-Debugge...

The Art of Debugging with GDB, DDD, and Eclipse by Matloff & Salzman http://www.amazon.com/gp/product/1593271743

igorgue · 2015-02-17 · Original thread
Also, read this book: http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

It helps a lot and teaches you how to use grep and other tools (many more that I no longer remember) to search and find your way through legacy code.

greenyoda · 2015-02-14 · Original thread
See if you can talk to the people in your company who hired the contractor. They might at least be able to give you a high-level description of what the software is supposed to do and how it's supposed to work. They might even have specs that they prepared for the contractor or other design documentation.

If the contractor's software has no tests or is poorly written, it's going to be hard to add features to it or refactor it. You might want to read Working Effectively with Legacy Code[1] by Michael Feathers, which describes how you can get a handle on large bodies of legacy software.

[1] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

shoo · 2014-11-11 · Original thread
This reminds me of a post by Michael Feathers, titled "The carrying cost of code" [1]. Feathers wrote the book about legacy code [2]. I think he makes approximately the same point:

> If you are making cars or widgets, you make them one by one. They proceed through the manufacturing process and you can gain very real efficiencies by paying attention to how the pieces go through the process. Lean Software Development has chosen to see tasks as pieces. We carry them through a process and end up with completed products on the other side.

> It's a nice view of the world, but it is a bit of a lie. In software development, we are essentially working on the same car or widget continuously, often for years. We are in the same soup, the same codebase. We can't expect a model based on independence of pieces in manufacturing to be accurate when we are working continuously on a single thing (a codebase) that shows wear over time and needs constant attention.

[1] - http://michaelfeathers.typepad.com/michael_feathers_blog/201...
[2] - http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

loumf · 2014-08-27 · Original thread
This is the standard recommended book:

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

I write new tests in any area I'm going to be working in.

greenyoda · 2014-06-16 · Original thread
One book that might give you some useful advice is "Working Effectively with Legacy Code" by Michael Feathers:

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

hvs · 2013-09-04 · Original thread
Great article that is still relevant today (sadly, I remember reading it when it first came out). If you really, really feel the need to rewrite from scratch, I recommend instead picking up a copy of "Working Effectively with Legacy Code" by Michael Feathers [1]. It will give you ways to improve those terrible code bases while not throwing out the existing code. Plus you'll still get that "new car smell" of working on your code.

[1] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

wcoenen · 2012-12-30 · Original thread
If you do decide to add tests to an existing code base, I found "Working Effectively with Legacy Code"[1] to be a good guide. Check out the table of contents.

[1] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

mattmcknight · 2012-09-22 · Original thread
"Working Effectively with Legacy Code" is by Michael Feathers. http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
koide · 2012-08-30 · Original thread
Appropriately, it's a book: http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

Some automatic tools could help (although I doubt they'll work on dBase III): static analysis to see what's there, and version control to start at the top, log your way through, and be able to roll back to a previous working version.

But it's at the very least weeks of pain.

Confusion · 2012-08-26 · Original thread
As the link by lttlrck also advocates: throwing shit out can easily be a mistake. More usually, http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea... + http://www.amazon.com/Refactoring-Improving-Design-Existing-... can get you further, faster. Stuff keeps working while you incrementally improve it.
agentultra · 2012-08-13 · Original thread
http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

Working with legacy systems is a black art that I didn't learn about until I took a job supporting and extending one such system. The book I link to above was critical in helping me to understand the approach taken by the team I was working with. It takes a keen, detail-focused mind to do this kind of work.

The approach we took was to create a legacy interface layer. We did this by first wrapping the legacy code within an FFI. We built a test suite that exercised the legacy application through this interface. Then we built an API on top of the interface and built integration tests that checked all the code paths into the legacy system. Once we had that, we were able to build new features onto the system and replace each code path one by one.
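A rough sketch of that "legacy interface layer" shape, with made-up names (the FFI binding into the old system is reduced to a placeholder here; JUnit 5 assumed):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Narrow interface that both the legacy wrapper and any replacement implement.
    interface TariffCalculator {
        long quoteCents(String productCode, int quantity);
    }

    // Stand-in for the FFI binding into the legacy code base.
    class LegacyBinding {
        static long quote(String productCode, int quantity) {
            return 199L * quantity;     // pretend this calls into the old system
        }
    }

    // Adapter that routes through the old code path.
    class LegacyTariffAdapter implements TariffCalculator {
        public long quoteCents(String productCode, int quantity) {
            return LegacyBinding.quote(productCode, quantity);
        }
    }

    // New implementation that can replace the legacy path once the same tests pass.
    class RewrittenTariffCalculator implements TariffCalculator {
        public long quoteCents(String productCode, int quantity) {
            return 199L * quantity;     // toy logic standing in for the rewrite
        }
    }

    // The same contract test runs against both implementations, so each code
    // path can be swapped out one by one with some confidence.
    class TariffContractTest {
        private void behavesLikeTheLegacySystem(TariffCalculator calc) {
            assertEquals(597L, calc.quoteCents("WIDGET", 3));
        }

        @Test void legacyAdapterMatches() { behavesLikeTheLegacySystem(new LegacyTariffAdapter()); }
        @Test void rewriteMatches()       { behavesLikeTheLegacySystem(new RewrittenTariffCalculator()); }
    }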

Unsurprisingly we actually discovered bugs in the old system this way and were able to correct them. It didn't take long for the stakeholders to stop worrying and trust the team. However there was a lot of debate and argument along the way.

The problem isn't technical. You can simultaneously maintain and extend legacy applications and avoid all of the risks stakeholders are worried about. One could actually improve these systems by doing so. The real problem is political, and convincing these stakeholders that you can minimize the risk is a difficult task. It was the hardest part of working on that team -- even when we were demonstrating our results!

The hardest part about working with legacy systems is the huge bureaucracies that sit on top of them.

calpaterson · 2012-08-01 · Original thread
Essentially, my understanding of best practice is to write high-level functional tests for the features that appear to work and then use them to ensure there are no regressions as a result of your changes. Some people even define legacy code as "code without tests".

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

toumhi · 2011-01-25 · Original thread
This is what Michael Feathers calls 'seams' in his book, Working Effectively with Legacy Code. Often you have to do exploratory testing; that is, you don't really know the requirements, but you write tests that the current code passes. Then you can refactor it. That way, the current behavior of the code won't be changed.

Very good read, if you need to deal with legacy code and you don't know where to start.

http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

gte910h · 2010-10-27 · Original thread
This is actually the type of system (especially if the code quality is very rough in many places) where I think regression tests are very useful (tests to make sure the system's function doesn't change).

A book called "Working effectively with legacy code" by Feathers is great for instrumenting and regression testing old code bases then changing them without breaking them.

Non-aff link http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

I have the book "Working Effectively with Legacy Code"[0], and it's pretty much just "Put things under test, then change them.". Still a useful read, though, if you find yourself working in that sort of thing often (I do).

[0]: http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
