The standard way to avoid underflow is to use logarithms for all the probabilities, since you only ever have to multiply probabilities, but that might be overkill for a project like this.
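For illustration, a minimal sketch of the log trick (the function name is mine, not the tool's):

```javascript
// Multiplying many probabilities directly can underflow to zero.
// Summing their logarithms keeps everything finite; exponentiate
// only after normalising, or just compare values in log space.
function logLikelihood(probs) {
  return probs.reduce((sum, p) => sum + Math.log(p), 0);
}

// Example: 500 trials, each with probability 0.01.
const direct = Math.pow(0.01, 500);   // underflows to 0
const viaLogs = 500 * Math.log(0.01); // finite, about -2302.6
```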
top1214 wrote:9 10 -100 gives a probability of 8.25% for a 100% drop rate, which we know is incorrect.
Negative drops are not malformed data!
-9 3 -4000 has interesting predictions.
Fred Nefler wrote:
top1214 wrote:9 10 -100 gives a probability of 8.25% for a 100% drop rate, which we know is incorrect.
Negative drops are not malformed data!
-9 3 -4000 has interesting predictions.
9 10 isn't the pattern of a 100% drop though. It'd be 10 10. 9 10 -100 is just madness.
top1214 wrote:
Fred Nefler wrote:
top1214 wrote:9 10 -100 gives a probability of 8.25% for a 100% drop rate, which we know is incorrect.
Negative drops are not malformed data!
-9 3 -4000 has interesting predictions.
9 10 isn't the pattern of a 100% drop though. It'd be 10 10. 9 10 -100 is just madness.
Well, yes. But in an effort to break the analysis, don't you throw in things that are madness? And at any item drop bonus of -100 or lower, nothing should drop, unless it's a 100% drop.
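For reference, the drop formula this reasoning assumes (a sketch of my understanding, not the tool's actual code): effective chance = base rate × (1 + bonus/100), with the multiplier clamped at zero and the result capped at 100%.

```javascript
// Effective drop chance given a base rate (in percent) and an item
// drop bonus (in percent). At a bonus of -100 or lower the multiplier
// clamps to 0, so nothing can drop. (Conditional 100% drops are
// assumed to ignore this formula entirely.)
function effectiveDropChance(baseRatePct, itemBonusPct) {
  const multiplier = Math.max(0, 1 + itemBonusPct / 100);
  return Math.min(1, (baseRatePct / 100) * multiplier);
}
```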
Fred Nefler wrote:So by "1 pixel dropped" does the tool want "exactly 1 pixel dropped" or "1 or more pixels dropped"? Yiab used the latter (and that's what his 8-bit data uses), and I use the former (when listing data on wiki talk pages).
YiabData = [
  {counts: [6, 51, 337, 541], boost: 20},
  {counts: [0, 2, 38, 179], boost: 40},
  {counts: [0, 0, 6, 105], boost: 50},
  {counts: [1, 48, 332, 606], boost: 20},
  {counts: [0, 0, 35, 160], boost: 40},
  {counts: [0, 0, 7, 115], boost: 50},
  {counts: [2, 45, 354, 536], boost: 20},
  {counts: [0, 0, 27, 192], boost: 40},
  {counts: [0, 0, 12, 105], boost: 50},
  {counts: [0, 41, 355, 567], boost: 20},
  {counts: [0, 1, 31, 184], boost: 40},
  {counts: [0, 0, 8, 107], boost: 50}
];
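The "exactly k" versus "k or more" distinction matters for the likelihood. A minimal sketch of the difference, assuming n independent drop chances per fight at rate p (the slot count n is purely illustrative, not a claim about the real mechanics):

```javascript
// P(exactly k successes) in n independent attempts at rate p.
function binomialPmf(n, k, p) {
  let coeff = 1; // n choose k, built up incrementally
  for (let i = 1; i <= k; i++) coeff *= (n - k + i) / i;
  return coeff * Math.pow(p, k) * Math.pow(1 - p, n - k);
}

// P(at least k successes): sum the exact-count probabilities.
function atLeast(n, k, p) {
  let total = 0;
  for (let j = k; j <= n; j++) total += binomialPmf(n, j, p);
  return total;
}
```

With n = 3 and p = 0.5, "exactly 1" gives 0.375 while "1 or more" gives 0.875, so mixing the two conventions would badly skew an analysis.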
starwed wrote:After a conversation in AFHk clan, I finally got around to doing something I meant to do long ago -- coding up a tool to find the base drop rates of items, given a diverse set of observations.
The idea is that, for non-conditional items, we know that the drop rate is an integer. So we could just, by brute force, calculate the chance that a particular set of observations would occur given a particular base rate. Normalised, that translates into a % belief that the particular drop rate is the real one. (Since in the end everything is normalised, that means we just need to calculate something proportional to the probability, which is what the code actually does.)
>>Here is the tool.<<
>(Multi-drop version)<
You enter the data with one row for each set of observations at a particular +item find, in the format Drops Trials +Item. There's an example at the bottom of the page to hopefully make usage clear.
I've tried to check it to make sure the code is correct, but it would be awesome if people tried to find data that breaks the analysis. If it passes, it might be helpful in spading the base rates of items.
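The brute-force idea described above could be sketched like this (my own illustration, not starwed's actual code): for each candidate integer base rate, compute something proportional to the binomial likelihood of all observations, then normalise into beliefs.

```javascript
// observations: [{drops, trials, itemBonus}, ...]
// Returns an array where index i is the belief in base rate i+1 (%).
function posterior(observations) {
  const weights = [];
  for (let r = 1; r <= 99; r++) {
    let logL = 0;
    for (const { drops, trials, itemBonus } of observations) {
      // Assumed mechanic: effective rate = base × (1 + bonus/100), clamped.
      const p = Math.min(1, Math.max(0, (r / 100) * (1 + itemBonus / 100)));
      if ((p === 0 && drops > 0) || (p === 1 && drops < trials)) {
        logL = -Infinity; // observations impossible under this base rate
        break;
      }
      // Binomial likelihood up to a constant: the nCk factor doesn't
      // depend on r, so it cancels in the normalisation.
      logL += drops * Math.log(p === 0 ? 1 : p)
            + (trials - drops) * Math.log(p === 1 ? 1 : 1 - p);
    }
    weights.push(logL);
  }
  // Normalise in log space to avoid underflow.
  const max = Math.max(...weights);
  const exp = weights.map(w => Math.exp(w - max));
  const total = exp.reduce((a, b) => a + b, 0);
  return exp.map(e => e / total);
}
```

For example, 5 drops in 10 trials at +0% item find peaks the belief at a 50% base rate, and 0 drops in 100 trials peaks it at 1%.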
top1214 wrote:It looks like these have recently gone away