But there is something dangerous and seductive about things that almost work.
https://www.amazon.com/Friends-High-Places-W-Livingston/dp/0...
has a chapter about the most dangerous trap in product development: chasing an asymptote, where you can work hard, harder, and hardest and still only converge on something that is 97% correct, which in the end is useless.
This kind of asymptote
https://en.wikipedia.org/wiki/Asymptote
is described as a risk in great detail in
https://www.amazon.com/Friends-High-Places-W-Livingston/dp/0...
It's quite a terrible risk, because you often think "if only I double or triple the resources I apply to this, I'll get it." Really, though, you go from 90% there to 91% there to 92% there... You never get there, because there is a structural mismatch between the problem you have and how you're trying to solve it.
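To see the arithmetic of that treadmill (the numbers here are illustrative, not from the book): suppose the approach you've chosen has a structural ceiling of 93% and each doubling of resources closes half of the remaining gap. After n doublings you sit at

    a_n = 0.93 - (0.93 - 0.90) \cdot 2^{-n}

which runs 90%, 91.5%, 92.25%, 92.6%, ... Every extra round of spending buys a visibly smaller improvement, and a 95% requirement is unreachable at any budget.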
My take is that people have been too credulous about the idea that you can just add more neurons and train harder and solve every problem. But if you get into the trenches and ask "why can't this network solve this particular task?" you usually do find structural mismatches.
What's been exciting just recently (the last month or so) are structurally improved models which do make progress beyond the asymptote, because they confront Ashby's law of requisite variety:
https://www.businessballs.com/strategy-innovation/ashbys-law...
https://www.amazon.com/Friends-High-Places-W-Livingston/dp/0...
tells (among other things) a harrowing tale of a common mistake in technology development that blindsides people every time: the project that reaches an asymptote instead of completion. It keeps you spending resources, and then spending more, because you think you have only 5% to go, except the approach you've chosen means you'll never get the last 4%. It's a seductive situation that tends to turn the team away from the Cassandras who have a clear view of it.
Happens a lot in machine learning projects where you don't have the right features. (Right now I am chewing on the problem of "what kind of shoes is the person in this picture wearing?" Many image classification models would not at all get that they are supposed to look at a small part of the image, and it would be easy for them to conclude "this person is on a basketball court, so they are wearing sneakers," or "this is a dude, so he isn't wearing heels," or "this lady has a fancy updo and fancy makeup, so she must be wearing fancy shoes." The trouble is that all those biases make the model perform better up to a point, but to get past that point you really need to segment out the person's feet, as sketched below.)
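A minimal sketch of that "segment out the feet first" step, assuming a recent torchvision with its pretrained COCO keypoint detector; the crop margin, score threshold, and the downstream footwear_classifier are placeholders I made up, not anything from a real pipeline:

    # Crop the region around each detected person's ankles before classifying
    # footwear, so the classifier can't lean on scene-level cues
    # ("basketball court => sneakers").
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    LEFT_ANKLE, RIGHT_ANKLE = 15, 16  # ankle indices in the COCO keypoint order

    detector = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
    detector.eval()

    def crop_feet(image, margin=60, min_score=0.8):
        """Return one crop around the ankles of each confidently detected person."""
        with torch.no_grad():
            (pred,) = detector([to_tensor(image)])
        crops = []
        for score, keypoints in zip(pred["scores"], pred["keypoints"]):
            if score < min_score:
                continue
            ankles = keypoints[[LEFT_ANKLE, RIGHT_ANKLE], :2]  # (x, y) per ankle
            x0, y0 = ankles.min(dim=0).values
            x1, y1 = ankles.max(dim=0).values
            crops.append(image.crop((max(int(x0) - margin, 0),
                                     max(int(y0) - margin, 0),
                                     min(int(x1) + margin, image.width),
                                     min(int(y1) + margin, image.height))))
        return crops

    # feet = crop_feet(Image.open("person.jpg").convert("RGB"))
    # labels = [footwear_classifier(crop) for crop in feet]  # hypothetical classifier

The particular detector doesn't matter; the point of the design is that cropping around the ankles removes the court, the face, and the hairdo from the classifier's view, so the only signal left to correlate with the label is the shoe itself.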