Oooh... this sounds like a great computer science problem.
"How to get an objective rating in the presence of adversaries"
It's probably extensible to generic reviews as well... things like the Amazon fake-review scam. But in contrast to Amazon, conference participants are actually motivated to review.
I honestly don't see why all participants can't be part of the peer review pool, with everybody voting. You'd run some risk of being scooped, but maybe a conference should consist of all submitted papers, with the top N considered worthy of publication. The rest could be considered pre-publication... everything is on arXiv anyway.
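Something like the following, as a minimal Python sketch of the "everybody votes, top N get in" idea (the score format, `trimmed_mean`, and `rank_papers` are all made up for illustration; a real system would need identity checks, score normalization, etc.). Trimming the extremes is one simple way to blunt adversarial voting:

```python
from statistics import mean

def trimmed_mean(scores, trim=1):
    # Drop the `trim` highest and lowest scores before averaging,
    # so a few adversarial up/down-votes can't swing the result much.
    s = sorted(scores)
    kept = s[trim:len(s) - trim] if len(s) > 2 * trim else s
    return mean(kept)

def rank_papers(scores_by_paper, n):
    # scores_by_paper: {paper_id: [score, score, ...]} from all participants.
    # Returns (accepted, pre_publication) split at the top-n cutoff.
    ranked = sorted(scores_by_paper,
                    key=lambda p: trimmed_mean(scores_by_paper[p]),
                    reverse=True)
    return ranked[:n], ranked[n:]

# Hypothetical example: paper B has one obvious adversarial 10.
accepted, pre_pub = rank_papers(
    {"A": [6, 7, 7, 6], "B": [5, 5, 10, 5], "C": [8, 8, 7, 9]}, n=2)
print(accepted)  # ['C', 'A'] -- B's outlier 10 gets trimmed away
```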
So instead of bids you have randomization. Kahneman's latest book, Noise, talks about this, and it's been making the rounds on NPR, the NY Times, etc...
"How to get an objective rating in the presence of adversaries"
It is probably extensible to generic reviews as well... so things like the Amazon scam. But in contrast to Amazon, conference participants are motivated to review.
I honestly don't see why all participants can't be considered as part of the peer review pool and everybody votes. I'd guess you run a risk of being scooped but maybe a conference should consist of all papers with the top N being considered worthy of publication. Maybe the remaining could be considered pre-publication... I mean everything is on ArviX anyways.
So instead of bids you have randomization. Kahneman's latest book talks about this and it's been making the rounds on NPR, NyTimes etc...
https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/...
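Back to the randomization-instead-of-bids idea, here's a minimal sketch of what that might look like (all names and the `authors_of` structure are hypothetical, and this ignores real-world constraints like expertise matching and reviewer load balancing). The point is just that random assignment removes the lever adversaries use when they bid for each other's papers:

```python
import random

def assign_reviewers(papers, participants, authors_of, k=3, seed=None):
    # Pure random assignment instead of bidding: each paper gets k
    # reviewers drawn uniformly from everyone who didn't write it,
    # so adversaries can't steer papers toward friendly reviewers.
    rng = random.Random(seed)
    assignment = {}
    for paper in papers:
        pool = [p for p in participants if p not in authors_of[paper]]
        assignment[paper] = rng.sample(pool, k)
    return assignment

# Hypothetical example:
print(assign_reviewers(
    papers=["paper1", "paper2"],
    participants=["alice", "bob", "carol", "dan", "erin"],
    authors_of={"paper1": {"alice"}, "paper2": {"bob", "carol"}},
    k=2, seed=0))
```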