
Say I have two algorithms. I don't know how they work internally, but I know what they are meant to achieve. I tried algorithm A once and it succeeded. I tried algorithm B 100 times and it succeeded 99 times. So A succeeded 100% of the time and B succeeded 99% of the time, yet my common sense tells me B is still more "reliable" than A. How can I express this mathematically? In other words, how can I calculate which algorithm to choose, when all I know about each one is how many times it has been tested and how many times it succeeded (anything short of success counts as a failure for my purposes)?

Please keep it as simple as possible; I'm a newbie at math.

  • Wouldn't `(SuccessfulAttempts - 1) / Tries` be a good approximation? (2010-12-13)
  • ${\rm SuccessfulAttempts}/{\rm Tries}$ is a good approximation provided that ${\rm Tries}$ is sufficiently large. (2010-12-14)
  • I do the `-1` so that cases with a small `Tries` don't get a very high number; that way, for example, `1 / 1` doesn't come out to 100%. See the sketch below. (2010-12-14)
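
A quick way to see what the `-1` adjustment does is to compute it alongside the plain ratio and Laplace's rule of succession, $(s+1)/(n+2)$, a classic smoothed estimate, for both A and B. A minimal sketch, assuming Python 3 (the variable names are mine):

```python
# Compare three point estimates of the success rate for
# A (1 success in 1 try) and B (99 successes in 100 tries):
#   plain ratio:        s / n
#   the "-1" heuristic: (s - 1) / n
#   Laplace smoothing:  (s + 1) / (n + 2)
for name, s, n in [("A", 1, 1), ("B", 99, 100)]:
    print(name, s / n, (s - 1) / n, (s + 1) / (n + 2))
# A 1.0   0.0   0.667  -> both adjustments pull A's score down sharply
# B 0.99  0.98  0.980  -> B barely changes
```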

1 Answer


Perhaps what you are looking for is a "confidence interval for a proportion". There are calculators for that online; see e.g. this or this. Here is also a calculator that determines the minimum sample size you would need for your sample to meet the desired precision requirements of a study.
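
For a concrete starting point, here is a minimal sketch (in Python, which the question does not specify) of the lower bound of the Wilson score interval, one standard confidence interval for a proportion; the function name is mine. Comparing lower bounds is a common way to rank items that have different numbers of trials:

```python
import math

def wilson_lower_bound(successes, trials, z=1.96):
    """Lower bound of the Wilson score interval for a binomial
    proportion; z = 1.96 corresponds to roughly 95% confidence."""
    if trials == 0:
        return 0.0
    phat = successes / trials
    centre = phat + z * z / (2 * trials)
    margin = z * math.sqrt(phat * (1 - phat) / trials
                           + z * z / (4 * trials * trials))
    return (centre - margin) / (1 + z * z / trials)

# Algorithm A: 1 success in 1 try; algorithm B: 99 successes in 100 tries.
print(wilson_lower_bound(1, 1))     # ~0.21
print(wilson_lower_bound(99, 100))  # ~0.95  -> choose B
```

Ranking by the interval's lower bound favours B here because B's estimate is backed by far more trials, which matches the intuition in the question.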