http://pespmc1.vub.ac.be/REQVAR.html
which, practically, means you need to bring different tools to bear depending on the nature of the problem.
At some phases of some projects, almost all the uncertainty involved is around the behavior of the framework you're working inside, and you can unit test until you are blue in the face and it won't do you any good.
Some code is straightforward and barely needs tests; other code (say, string parsing) involves well-defined functions with inputs and outputs that are tricky to implement, and starting with test cases is really the way to go. (Get an IDE with a good visual debugger if you don't already have one; tests together with the debugger are awesome.)
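For illustration, here's what that test-first style can look like for a string parser (a minimal sketch; the parser and its cases are made up, JUnit 5 assumed):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class VersionParserTest {
        // Hypothetical parser under test: "1.2.3" -> {1, 2, 3}
        static int[] parseVersion(String s) {
            String[] parts = s.trim().split("\\.");
            int[] out = new int[parts.length];
            for (int i = 0; i < parts.length; i++) {
                out[i] = Integer.parseInt(parts[i]);
            }
            return out;
        }

        @Test
        void parsesSimpleVersion() {
            assertArrayEquals(new int[]{1, 2, 3}, parseVersion("1.2.3"));
        }

        @Test
        void toleratesSurroundingWhitespace() {
            assertArrayEquals(new int[]{10, 0}, parseVersion(" 10.0 "));
        }

        @Test
        void rejectsGarbage() {
            // Writing this case first forces you to decide the failure behavior.
            assertThrows(NumberFormatException.class, () -> parseVersion("1.x.3"));
        }
    }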
If an algorithm is tricky at all, I will look it up in an algorithms book. I have definitely written formal proofs that an algorithm worked when I wasn't sure.
This book, I think, makes the best case for unit tests I've ever seen:
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
and particularly emphasizes that unit tests need to be really fast because you're going to run them hundreds or thousands of times.
However, there are testing needs that don't fit that model, and you have to fit them into that process. I wrote a multithreaded "service" and also a "super hammer" test that spawned 1000 threads and tried to provoke a race condition for 40 seconds, which is way too long to be part of your ordinary build process. You can spend anywhere from 2 minutes to 2 months training a neural network, and you are never 100% sure that the model you built is good (able to be put in front of customers) unless you test it. The pros frequently build several models and pick the best. That's not something you can afford to do every time you "mvn install", however, so you have to develop a process that addresses the real problems in front of you and doesn't get bogged down trying things inappropriate for your problem.
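In rough outline, that "super hammer" looked something like this (a sketch, not the original code; the deliberately unsynchronized counter stands in for the real service):

    import java.util.concurrent.*;

    // Hammer a shared counter with many threads; any lost update is
    // evidence of a race. The real test hammered the service instead.
    public class SuperHammer {
        static long value = 0;  // no synchronization on purpose

        public static void main(String[] args) throws Exception {
            final int threads = 1000, perThread = 10_000;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            CountDownLatch done = new CountDownLatch(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < perThread; i++) value++;
                    done.countDown();
                });
            }
            done.await();
            pool.shutdown();
            long expected = (long) threads * perThread;
            System.out.println(value == expected
                ? "no lost updates detected"
                : "RACE: expected " + expected + ", got " + value);
        }
    }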
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
Fully apprising management of the situation in a way they can understand may also reap long-term dividends.
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
In particular, that book considers the problem of retrofitting an existing system to have good tests, so it covers all the bases for a system that runs in production and is in front of customers. Feathers is a particular advocate for fast tests; he thinks a 1ms test is a little slow.
A counter to that is that some subsystems are prone to race conditions, and a necessary test could be to stress the subsystem with 200 threads. Such an automated test is of great value but takes 30 seconds to run, which is quick to do once but adds up to slower development cycles when it runs on every change.
His thesis is that legacy code is code without tests. So to make it not legacy, you add tests, then refactor safely. He then explains the very complex patterns that can arise and how to deal with them.
This book is one of a few that changed my career.
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
You might also want to read Refactoring to Patterns, but the legacy book is more important to start with.
If it has no tests, then I would slowly try to build tests that document the functionality I need. In your case, being Angular, that might mean simple HTML pages hosting the smallest module you need.
How to find things? If you're on Windows, try AstroGrep (http://astrogrep.sourceforge.net/) to quickly search and jump around in the code; on any system, I use VS Code for similar functionality. Also learn to use command-line find/grep.
The book "Working Effectively with Legacy Code" also helped me be more comfortable navigating and changing large code bases, in a long term view I recommend this book to every developer https://www.amazon.co.uk/Working-Effectively-Legacy-Michael-...
Lastly, I would raise this, because the company might not be aware they are buying a low-quality framework that may tick all the boxes in the contract but is in effect impossible for their current developers (you) to use. There may be other people with more experience in that niche who could help; in the private community, some might even accept a short contract to help train you.
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
If you start from scratch you may bump into the same edge cases that the original writers bumped into, and end up with code that is not much better than the original, even if the original is 2 years out of date.
I'm sure there were cases when writing from scratch was a good call, but I don't remember hearing about one.
[0]: https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
"[Working Effectively with Legacy Code]" by Michael Feathers really helped me get through a situation like this.
My recommendation is not to try to understand the code per se, but understand the business that the code was being used in/by.
From there, over time, just start writing really high-level end-to-end tests to represent what the business expects the codebase to do (i.e. starting at the top of the [test pyramid]). This ends up acting as your safety net (your "test harness").
Then it's less a matter of trying to understand what the code does, and becomes a question of what the code should do. You can iterate level by level into the test pyramid, documenting the code with tests and refactoring/improving the code as you go.
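For a web system, the first such test can be as blunt as hitting an endpoint and pinning down what it returns today (a hedged sketch; the URL and expected text are hypothetical, using Java's built-in HttpClient with JUnit 5):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class CheckoutEndToEndTest {
        // Characterization test: record what the business observes today,
        // not what we think the internals "should" do.
        @Test
        void checkoutPageIsServed() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:8080/checkout"))  // hypothetical endpoint
                .GET()
                .build();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("Order summary"));  // observed today
        }
    }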
It's a long process (I'm about 4.5 years into it and still going strong), but it allowed us to move fast while developing new features with a by-product of continually improving the code base as we went.
[test pyramid]: https://martinfowler.com/bliki/TestPyramid.html
[Working Effectively with Legacy Code]: https://www.amazon.com/FEATHERS-WORK-EFFECT-LEG-CODE/dp/0131...
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
"Working Effectively with Legacy Code"^1 is usually what I recommend instead. It felt like a more up to date approach
More generally, "The Pragmatic Programmer"^2 is a classic for a reason. But from your comment, you've probably already read it.
1. https://amzn.to/2I4rIWP
2. https://amzn.to/2GfxVTa
Both have had a huge impact on how I work with code and how I design it. Trying to explain these concepts is hard without context; sometimes I just copy/paste the sections I think they could benefit from.
This is a good book on the topic of refactoring a large code base with no tests.
https://www.amazon.co.uk/Working-Effectively-Legacy-Michael-...
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
can be read in a few different registers (making the case for what unit tests should be in a greenfield system, or why and how to retrofit unit tests into a legacy system), but it makes that case pretty strongly. It can seem overwhelming to get unit tests into a legacy system, but the reward is large.
I remember working on a system that was absolutely awful but was salvageable because it had unit tests!
Also, generally, getting control of the build procedure is key to the scheduling issue -- I have seen many new projects where a team of people work on something and think all of the parts are good to go, but then you find there is another six months of integration work, installer engineering, and other things you need to do to ship a product. Automation, documentation, and simplification are all bits of the puzzle, but if you want agility, you need to know how to go from source code to a product, and not every team does.
Tools like Mockito can make a big difference.
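For example, Mockito lets you stand in for a collaborator you can't easily instantiate (a sketch with hypothetical names; Mockito and JUnit 5 assumed):

    import static org.mockito.Mockito.*;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical seam interface and class under test.
    interface RateSource {
        double rateFor(String currency);
    }

    class InvoiceCalculator {
        private final RateSource rates;
        InvoiceCalculator(RateSource rates) { this.rates = rates; }
        double totalInUsd(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    class InvoiceCalculatorTest {
        @Test
        void convertsUsingTheRateSource() {
            RateSource rates = mock(RateSource.class);  // no real rate service needed
            when(rates.rateFor("EUR")).thenReturn(1.10);
            InvoiceCalculator calc = new InvoiceCalculator(rates);
            assertEquals(110.0, calc.totalInUsd(100.0, "EUR"), 1e-9);
            verify(rates).rateFor("EUR");  // the interaction we care about
        }
    }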
I worked on a project which was terribly conceived, specified, and implemented. My boss said that they shouldn't even have started it and shouldn't have hired the guy who wrote it! Because it had tests, however, it was salvageable, and I was able to get it into production.
This book
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
makes the case that unit tests should always run quickly, not depend on external dependencies, etc.
I do think a fast test suite is important, but there are some kinds of slower tests that can have a transformative impact on development:
* I wrote a "super hammer" test that smokes out a concurrent system for race conditions. It took a minute to run, but after that, I always knew that a critical part of the system did not have races (or if they did, they were hard to find)
* I wrote a test suite for a lightweight ORM system in PHP that would do real database queries. When the app was broken by an upgrade to MySQL, I had it working again in 20 minutes. When I wanted to use the same framework with MS SQL Server, it took about as long to port it.
* For deployment it helps to have an automated "smoke test" that will make sure that the most common failure modes didn't happen.
That said, TDD is most successful when you are in control of the system. In writing GUI code, often the main uncertainty I've seen is mistrust of the underlying platform (today that could be: "Does it work in Safari?").
When it comes to servers and stuff, there is the issue of "can you make a test reproducible?". For instance, you might be able to make a "database" or "schema" inside a database with a random name and do all your stuff there. Or maybe you can spin one up in the cloud, or use Docker, or something like that. It doesn't matter exactly how you do it, but you don't want to be the guy who nukes the production database (or another developer's or tester's database) because the build process has integration tests that use the same connection info as them.
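The random-name trick might look like this (a JDBC sketch; CREATE SCHEMA and search_path here are PostgreSQL-style, and all names are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.UUID;

    // Each test run gets its own throwaway schema, so it can never trample
    // production or another developer's data.
    public class ScratchSchema implements AutoCloseable {
        private final Connection conn;
        private final String name;

        public ScratchSchema(String jdbcUrl, String user, String pass) throws SQLException {
            conn = DriverManager.getConnection(jdbcUrl, user, pass);
            name = "test_" + UUID.randomUUID().toString().replace("-", "");
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE SCHEMA " + name);
                st.execute("SET search_path TO " + name);  // PostgreSQL-style
            }
        }

        public Connection connection() { return conn; }

        @Override
        public void close() throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.execute("DROP SCHEMA " + name + " CASCADE");  // leave nothing behind
            }
            conn.close();
        }
    }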
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
A Pattern Language, Alexander and Ishikawa and Silverstein http://amzn.to/2s9aSSc
Advanced Programming in the Unix Environment , Stevens http://amzn.to/2qPOMjN
Algorithmics: the Spirit of Computing, Harel http://amzn.to/2rW5FNS
Applied Cryptography, Schneier http://amzn.to/2rsULxS
Clean Code, Martin http://amzn.to/2sIOWtQ
Clean Coder, Martin http://amzn.to/2rWgbEP
Code Complete, McConnell http://amzn.to/2qSUIwE
Code: The Hidden Language of Computer Hardware and Software, Petzold http://amzn.to/2rWfR9d
Coders at Work, Seibel http://amzn.to/2qPCasZ
Compilers: Principles, Techniques, & Tools, Aho http://amzn.to/2rCSUVA
Computer Systems: A Programmer's Perspective, O'Hallaron and Bryant http://amzn.to/2qPY5jH
Data Flow Analysis: Theory and Practice, Khedker http://amzn.to/2qTnSvr
Dependency Injection in .NET, Seemann http://amzn.to/2rCz0tV
Domain Driven Design, Evans http://amzn.to/2sIGM4N
Fundamentals of Wireless Communication, Tse and Viswanath http://amzn.to/2rCTmTM
Genetic Programming: An Introduction, Banzhaf http://amzn.to/2s9sdut
Head First Design Patterns, Freeman et al. http://amzn.to/2rCISUB
Implementing Domain-Driven Design, Vernon http://amzn.to/2qQ2G5u
Introduction to Algorithms, CLRS http://amzn.to/2qXmSBU
Introduction to General Systems Thinking, Weinberg http://amzn.to/2qTuGJw
Joy of Clojure, Fogus and Houser http://amzn.to/2qPL4qr
Let over Lambda, Hoyte http://amzn.to/2rWljcp
Operating Systems: Design and Implementation, Tanenbaum http://amzn.to/2rKudsw
Parsing Techniques, Grune and Jacobs http://amzn.to/2rKNXfn
Peopleware: Productive Projects and Teams, DeMarco and Lister http://amzn.to/2qTu86F
Programming Pearls, Bentley http://amzn.to/2sIRPe9
Software Process Design: Out of the Tar Pit, McGraw-Hill http://amzn.to/2rVX0v0
Software Runaways, Glass http://amzn.to/2qT2mHn
Sorting and Searching, Knuth http://amzn.to/2qQ4NWQ
Structure and Interpretation of Computer Programs, Abelson and Sussman http://amzn.to/2qTflsk
The Art of Unit Testing, Osherove http://amzn.to/2rsERDu
The Art of Unix Programming, ESR http://amzn.to/2sIAXUZ
The Design of Design: Essays from a Computer Scientist, Brooks http://amzn.to/2rsPjev
The Effective Engineer, Lau http://amzn.to/2s9fY0X
The Elements of Style, Strunk and White http://amzn.to/2svB3Qz
The Healthy Programmer, Kutner http://amzn.to/2qQ2MtQ
The Linux Programming Interface, Kerrisk http://amzn.to/2rsF8Xi
The Mythical Man-Month, Brooks http://amzn.to/2rt0dAR
The Practice of Programming, Kernighan and Pike http://amzn.to/2qTje0C
The Pragmatic Programmer, Hunt and Thomas http://amzn.to/2s9dlvS
The Psychology of Computer Programming, Weinberg http://amzn.to/2rsPypy
Transaction Processing: Concepts and Techniques, Gray and Reuter http://amzn.to/
Types and Programming Languages, Pierce http://amzn.to/2qT2d6G
Understanding MySQL Internals, Pachev http://amzn.to/2svXuFo
Working Effectively with Legacy Code, Feathers http://amzn.to/2sIr09R
Zen of Graphics Programming, Abrash http://amzn.to/2rKIW6Q
[1] https://www.safaribooksonline.com/library/view/working-effec...
"To me, the most important concept brought in by Feathers is seams. A seam is a place in the code where you can change the behaviour of your program without modifying the code itself. Building seams into your code enables separating the piece of code under test, but it also enables you to sense the behaviour of the code under test even when it is difficult or impossible to do directly (e.g. because the call makes changes in another object or subsystem, whose state is not possible to query directly from within the test method).
This knowledge allows you to notice the seeds of testability in the nastiest heap of code, and find the minimal, least disruptive, safest changes to get there. In other words, to avoid making "obvious" refactorings which have a risk of breaking the code without you noticing - because you don't yet have the unit tests to detect that."
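To make the seam idea concrete, here is a tiny sketch (hypothetical names) of one Feathers technique, "subclass and override": the protected factory method is the seam, so a test can sense what happened without touching the production logic:

    // Legacy-ish class with a hard-wired collaborator.
    class BillingService {
        void bill(String customerId, double amount) {
            PaymentGateway gw = makeGateway();  // the seam
            gw.charge(customerId, amount);
        }
        protected PaymentGateway makeGateway() {  // override point for tests
            return new PaymentGateway();  // real gateway in production
        }
    }

    class PaymentGateway {
        void charge(String customerId, double amount) { /* talks to the bank */ }
    }

    // The test subclass swaps in a recording gateway -- no bank involved.
    class RecordingBillingService extends BillingService {
        String lastCharge;
        @Override
        protected PaymentGateway makeGateway() {
            return new PaymentGateway() {
                @Override
                void charge(String customerId, double amount) {
                    lastCharge = customerId + ":" + amount;
                }
            };
        }
    }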
As you get more experience under your belt, you'll begin to see these situations again and again: code becoming large, difficult to reason about or test, and offering little direct business benefit to refactoring. But crucially, learning how to refactor as you go is a huge part of working effectively with legacy code and, by virtue of that, of maturing into a senior engineer. To strain a leaky analogy: you don't accrue tech debt all at once, so why would it make sense to pay it off all at once? The only reason that would occur is if you didn't have a strong culture of periodically paying off tech debt as you went along.
I'm not going to insinuate that it was necessarily wrong that you decided to solve the problem as you did, and the desire to be proactive about it is certainly not something to be criticized. But it wasn't necessarily right, either. Your leadership should have prevented something like this from occurring, because in all likelihood, you wasted those extra hours and naively thought that extra hours equal extra productivity. They don't. You ought to aim for maximal results for minimal hours of work, so that you can spend as much time as you can delivering results. And, unless you're getting paid by the hour instead of salaried, you're actually getting less pay. So to recap: you're getting less pay, you're giving the company subpar results (by definition, because you're using more hours to achieve what a competent engineer could do with only 40 hour workweeks so you're 44% as efficient), and everyone's losing a little bit. Thankfully, you still managed to get the job done, and because you were able to gain authorship and ownership over the new part of the codebase, you were able to politically argue for better compensation. Good for you, you should always bargain for what you deserve. But, just because you got a more positive outcome doesn't mean you went about it the most efficient way.
The best engineers (and, I would argue, workers in general) are efficient. They approach every engineering problem they can with solutions so simple and effective that they seem boring, reaching for the impressive stuff only when it's really needed, and with chagrin. If you can combine that with self-advocacy, you'll really be cooking with gas as far as your career is concerned. And it'll get you a lot further than the silly, childish delusion that more hours equal more results, or more pay. Solid work, solid negotiation skills, solid marketing skills and solid communication skills earn you better pay. The rest is fluff.
[0] https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
[1] https://softwareengineering.stackexchange.com/questions/1220...
Working Effectively with Legacy Code, by Michael Feathers
I believe the basic approach is to write tests to capture the current behaviour at the system boundaries - for a web application, this might take the form of automated end-to-end tests (Selenium WebDriver) - then, progressively refactor and unit test components and code paths. By the end of the process, you'll end up with a comprehensive regression suite, giving developers the confidence to make changes with impunity - whether that's refactoring to eliminate more technical debt and speed up development, or adding features to fulfill business needs.
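To give a flavour, a boundary-capture test with Selenium WebDriver might start as simply as this (a sketch; the URL, selectors, and expected text are hypothetical):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginBoundaryTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                // Pin down today's behaviour at the system boundary.
                driver.get("http://localhost:8080/login");  // hypothetical URL
                driver.findElement(By.name("username")).sendKeys("demo");
                driver.findElement(By.name("password")).sendKeys("demo");
                driver.findElement(By.id("submit")).click();
                String banner = driver.findElement(By.id("welcome")).getText();
                if (!banner.contains("Welcome")) {
                    throw new AssertionError("regression: got '" + banner + "'");
                }
            } finally {
                driver.quit();
            }
        }
    }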
This way, you can take a gradual, iterative approach to cleaning up the system, which should boost morale (a little bit of progress made every iteration), and minimises risk (you're not replacing an entire system at once).
I've used this approach to rewrite a Node.js API that was tightly coupled to MongoDB, and migrated it to PostgreSQL.
[0] https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
Similarly, "Working Effectively with Legacy Code" by Michael Feathers (http://amzn.to/1UxwVdL) is a great programming book. I appreciate it because it's really nothing but patterns for dealing with bad code (mostly Java, but most of it translates to other languages). Very little why (which I already know), lots of "how to fix X" -- aka a great signal-to-noise ratio.
http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
http://www.amazon.co.uk/Working-Effectively-Legacy-Michael-F...
> That's certainly an example of technical debt.
Agreed, it is not the only example, but perhaps it is a good one, as that is a particularly important form of debt that makes the code harder to safely change. I.e. it is a form of technical debt that makes it more expensive to pay off other kinds of technical debt.
Curiously, Michael Feathers has a similar definition of legacy code [1]:
> To me, legacy code is simply code without tests.
http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
Clean Code:
http://www.amazon.com/Clean-Code-Handbook-Software-Craftsman...
Next I'm picking up Working Effectively with Legacy Code (http://www.amazon.com/dp/0131177052). It's been in my reading list for years and I can finally get to it!
Debugging with GDB: The GNU Source-Level Debugger by Stallman, Pesch, and Shebs http://www.amazon.com/Debugging-GDB-GNU-Source-Level-Debugge...
The Art of Debugging with GDB, DDD, and Eclipse by Matloff & Salzman http://www.amazon.com/gp/product/1593271743
It helps a lot and teaches you how to use grep and other tools (and plenty of others that I no longer remember) to search and find your way through legacy code.
If the contractor's software has no tests or is poorly written, it's going to be hard to add features to it or refactor it. You might want to read Working Effectively with Legacy Code[1] by Michael Feathers, which describes how you can get a handle on large bodies of legacy software.
[1] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
> If you are making cars or widgets, you make them one by one. They proceed through the manufacturing process and you can gain very real efficiencies by paying attention to how the pieces go through the process. Lean Software Development has chosen to see tasks as pieces. We carry them through a process and end up with completed products on the other side.
> It's a nice view of the world, but it is a bit of a lie. In software development, we are essentially working on the same car or widget continuously, often for years. We are in the same soup, the same codebase. We can't expect a model based on independence of pieces in manufacturing to be accurate when we are working continuously on a single thing (a codebase) that shows wear over time and needs constant attention.
[1] - http://michaelfeathers.typepad.com/michael_feathers_blog/201...
[2] - http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
I write new tests in any area I'm going to be working in.
http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
[1] http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
Some automatic tools could help (although I doubt they'll work on DBase III): static analysis to see what's there, and version control so you can start at the top, log your way through, and roll back to a previous working version.
But it's at the very least weeks of pain.
Working with legacy systems is a black art that I didn't learn about until I took a job supporting and extending one such system. The book I link to above was critical in helping me to understand the approach taken by the team I was working with. It takes a keen, detail-focused mind to do this kind of work.
The approach we took was to create a legacy interface layer. We did this by first wrapping the legacy code within an FFI. We built a test suite that exercised the legacy application through this interface. Then we built an API on top of the interface and built integration tests that checked all the code paths into the legacy system. Once we had that, we were able to build new features onto the system and replace each code path one by one.
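In outline, the layering looked something like this (a sketch with names of my choosing; in the real project, the call marked hypothetical went through the FFI):

    // The interface the new API, the tests, and new features depend on.
    interface LedgerEngine {
        long postEntry(String account, long cents);
    }

    // Adapter routing every call through the legacy entry point. Integration
    // tests exercise each code path via this interface.
    class LegacyLedgerAdapter implements LedgerEngine {
        @Override
        public long postEntry(String account, long cents) {
            return LegacyLib.post(account, cents);  // hypothetical FFI-wrapped call
        }
    }

    class LegacyLib {
        static long post(String account, long cents) {
            /* stands in for the real legacy implementation behind the FFI */
            return 0L;
        }
    }

    // Because callers only see LedgerEngine, each code path can later be
    // replaced by a native implementation without touching the callers.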
Unsurprisingly, we actually discovered bugs in the old system this way and were able to correct them. It didn't take long for the stakeholders to stop worrying and trust the team. However, there was a lot of debate and argument along the way.
The problem isn't technical. You can simultaneously maintain and extend legacy applications and avoid all of the risks stakeholders are worried about. One could actually improve these systems by doing so. The real problem is political: convincing these stakeholders that you can minimize the risk is a difficult task. It was the hardest part of working on that team -- even when we were demonstrating our results!
The hardest part about working with legacy systems are the huge bureaucracies that sit on top of them.
http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
Very good read if you need to deal with legacy code and you don't know where to start.
http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
A book called "Working effectively with legacy code" by Feathers is great for instrumenting and regression testing old code bases then changing them without breaking them.
Non-affiliate link: http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
[0]: http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
for a good treatment of the "whole picture" of applying tests to legacy (and new) systems, including the politics.
But yeah, you are facing the classic problem of maintenance programming, where you have to make changes to a system that you don't completely trust or understand. It's slow going no matter what you do: you've got the choice of (1) developing partial understanding of bits and pieces of the system so you can make surgical changes to fix problems, vs. (2) rebuilding large parts of the system. If you do enough of (1), you will have some sense of what (2) entails; if you haven't done enough of (1), you'll never believe how many things a system that's been around for 5+ years is doing that you don't know about.
I would also point to
https://www.redhat.com/architect/pros-and-cons-strangler-arc...
as advanced thinking on this issue. If you can do anything, get your team writing tests. It's not a given that people tasked with writing tests will write good tests, but writing good tests for a legacy system that you've struggled with can be an almost religious revelation.
It also seems like a bad smell that you're voting on things; I mean, juniors shouldn't have the same level of input on this as you if you're the lead.