If you're trying to compare 100+ ideas and choose the "best" one to explore, I'd suggest looking into a simulation-based approach. Monte Carlo simulation[1] is probably a good place to start. There are dozens of textbooks that cover the topic.
Now the downside is that the model needs parameter ranges to simulate over, and you don't necessarily know the probability distribution for each variable up front, so you have to estimate or guess at them. That makes the exercise somewhat error-prone. There is, however, a way to teach yourself (or others) to do a better job of estimating: the technique I'm thinking of is "calibrated probability assessment"[2].
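To make that concrete, here's a minimal sketch (Python; the parameter names and ranges are hypothetical) of a trick Hubbard popularized: treat a calibrated 90% confidence interval as a normal distribution, since a 90% interval spans roughly 3.29 standard deviations:

    import random

    def sample_from_ci(lower, upper):
        # Treat a calibrated 90% confidence interval as a normal
        # distribution: a 90% interval spans ~3.29 standard deviations.
        mean = (lower + upper) / 2.0
        stdev = (upper - lower) / 3.29
        return random.gauss(mean, stdev)

    # Hypothetical calibrated estimates for one idea:
    #   monthly users: 90% sure it's between 1,000 and 20,000
    #   revenue per user: 90% sure it's between $0.50 and $3.00
    users = sample_from_ci(1_000, 20_000)
    revenue = users * sample_from_ci(0.50, 3.00)

Each call is one trial; run it thousands of times and you get a distribution for revenue instead of a single point estimate.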
The book How To Measure Anything[3] by Douglas Hubbard does a really nice job of laying out how to use calibrated probability assessments, mathematical models, and Monte Carlo simulation to build a probability distribution for things that look hard or impossible to measure. He also goes into detail on the value of information and how and why to run the simulations.
Anyway, if you build a model for each of your ideas and Monte Carlo simulate all of them to get a probability distribution for the return, then you at least have something reasonably objective to base a decision on.
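For example, here's a hedged sketch of what that comparison might look like (the ideas, parameters, and ranges below are all made up; a real model would have more variables):

    import random
    import statistics

    def sample_from_ci(lower, upper):
        # Treat a calibrated 90% confidence interval as a normal
        # distribution (~3.29 standard deviations wide).
        return random.gauss((lower + upper) / 2.0, (upper - lower) / 3.29)

    # Hypothetical ideas, each modeled with calibrated 90% CIs
    # for annual revenue and annual cost.
    ideas = {
        "idea_a": {"revenue": (50_000, 400_000), "cost": (30_000, 90_000)},
        "idea_b": {"revenue": (80_000, 200_000), "cost": (20_000, 50_000)},
    }

    N = 10_000  # Monte Carlo trials per idea

    for name, params in ideas.items():
        returns = sorted(
            sample_from_ci(*params["revenue"]) - sample_from_ci(*params["cost"])
            for _ in range(N)
        )
        print(
            f"{name}: median={statistics.median(returns):,.0f}  "
            f"p10={returns[int(0.10 * N)]:,.0f}  "
            f"p90={returns[int(0.90 * N)]:,.0f}  "
            f"P(loss)={sum(r < 0 for r in returns) / N:.1%}"
        )

Note that you get a whole distribution per idea, so you can compare downside risk (p10, probability of loss) and not just expected return; two ideas with similar medians can look very different once you see the tails.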
One last note though: when doing this kind of simulation, one big risk (aside from mis-estimating a parameter) is leaving a relevant parameter out completely. I don't know of any deterministic way to make sure a model includes all the relevant features. The best mitigation I know of is to "crowdsource" some help: get as many people as you can (people with relevant knowledge and experience) to evaluate and critique your model.
[1]: https://en.wikipedia.org/wiki/Monte_Carlo_method
[2]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...
[3]: https://www.amazon.com/How-Measure-Anything-Intangibles-Busi...