The likely outcome of applying software to the reading of ordinary proofs would be to discover that most proofs contain steps that are not formally correct. For example, in classical geometry many steps in the arguments are "generically true" but false for special degenerate configurations. Sometimes these configurations, such as a triangle with all three vertices at the same point, are implicitly excluded from the cases considered in the theorem, or can be handled in the proof using additional steps that might be considered trivial and are not articulated in the written argument. In such cases the theorem would be correct in substance, yet some steps could not be filled in by a computerized proof assistant, because the steps, as literally stated, are false.
Less often, there are more difficult cases, where the non-generic configurations and components can't be ignored, or where the intuitive idea of what constitutes generic behavior is incorrect. In these cases a theorem may still be correct, but a proof that doesn't deal with the special cases errs not only in the details but in its overall argument.
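As a concrete illustration (not drawn from the text above), here is a minimal Python sketch of how a "generically true" construction breaks down. The standard formula for the circumcenter of a triangle divides by twice the triangle's signed area, and that divisor vanishes exactly for the degenerate configurations: collinear or coincident vertices. A proof step that invokes "the circumcenter" without excluding these cases is generically fine but formally incomplete.

```python
def circumcenter(a, b, c):
    """Center of the circle through three points, or None when degenerate."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    # d is twice the signed area of the triangle; it is zero exactly
    # when the points are collinear or coincident -- the degenerate cases.
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        return None  # the generic construction has no answer here
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

print(circumcenter((0, 0), (1, 0), (0, 1)))   # (0.5, 0.5)
print(circumcenter((0, 0), (1, 1), (2, 2)))   # None: a collinear "triangle"
```

A written proof would typically not bother to mention the `d == 0` branch; a proof assistant would refuse to proceed without it.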
A similar thing happens in computational linguistics. When natural language is processed with formal grammars, it turns out that even simple sentences are multiply ambiguous, with a large number of unintended but grammatically valid interpretations. People constantly use additional knowledge to disambiguate the language they hear, but software does not yet have that capability.
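The phenomenon can be made concrete with a toy example; the grammar and sentence below are illustrative assumptions, not taken from any particular system. A CYK-style chart over a small context-free grammar counts the distinct parse trees of a sentence, and the classic "I saw the man with the telescope" already admits two: the prepositional phrase can attach to the verb phrase (seeing through a telescope) or to the noun phrase (a man who has a telescope).

```python
from collections import defaultdict

# Toy grammar in Chomsky normal form: binary rules A -> B C
# plus lexical rules mapping each word to its categories.
binary = [("S", "NP", "VP"), ("VP", "V", "NP"), ("VP", "VP", "PP"),
          ("NP", "NP", "PP"), ("NP", "Det", "N"), ("PP", "P", "NP")]
lexical = {"I": ["NP"], "saw": ["V"], "the": ["Det"],
           "man": ["N"], "telescope": ["N"], "with": ["P"]}

def count_parses(words, start="S"):
    """CYK chart that counts derivations instead of merely recognizing."""
    n = len(words)
    # chart[i][j] maps a nonterminal to its number of derivations of words[i:j]
    chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for cat in lexical.get(w, []):
            chart[i][i + 1][cat] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for a, b, c in binary:
                    chart[i][j][a] += chart[i][k][b] * chart[k][j][c]
    return chart[0][n][start]

print(count_parses("I saw the man".split()))                    # 1
print(count_parses("I saw the man with the telescope".split())) # 2
```

With a realistic grammar and a longer sentence, the count grows rapidly, which is exactly the ambiguity explosion described above.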