# Close to truth

I have been thinking about induction and deduction. I was taught that I could prove something was true with deduction but not with induction: a logical argument gives truth with a capital T. But for years I have not accepted this way of thinking. All a logical argument gives is a relationship. If the axioms are True then the conclusion is True, and if the conclusion is False then one or more of the axioms is False. But how do you get your first couple of True axioms, the axioms needed for the first True conclusion? Not with logic, obviously. Axioms have historically been identified by induction: they are statements we find trustworthy because we have never found them suspect. It does seem a bit ironic that deduction is held to be more rigorous than induction when at the bottom of every deduction are axioms arrived at by induction. So I just assume there are no truths with a capital T.

But induction is much stronger than it is usually portrayed. Popper seemed to think that a strong case for inductive arguments could not be made, and that the best that could be done was to falsify those that could be falsified and temporarily assume that the rest were OK (but certainly not true, even without a capital). This is somewhat counter-intuitive, because we do trust inductions more if they are ‘confirmed’. Confirmation is somehow more valued than falsification – probably because we are more interested in what has a good chance of being true than in what is almost certain to be false.

Adherents of Bayesian probability argue that as confirmations pile up, each making it more probable that a statement is true, the statement can become so close to true that it makes no-never-mind. Many believe that our minds use a Bayesian approach to understanding the world. Of course, nothing statistical is going to merit an actual True with a capital T. So I have to again accept that there is no true with a capital T – even if many confirmations and no falsifications come close.
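The piling-up of confirmations can be sketched with Bayes' theorem. Here is a minimal illustration (the prior and the likelihoods are made-up numbers, chosen only for the sketch): each confirming observation multiplies the odds in favour of the statement, so after a handful of confirmations the probability sits very close to 1 – though it never actually reaches it.

```python
def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """One Bayesian update: the probability the statement is true
    after seeing a single confirming observation."""
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

# Made-up numbers for illustration: start undecided (prior 0.5), and
# suppose a confirmation is more likely if the statement is true (0.8)
# than if it is false (0.3).
belief = 0.5
for _ in range(10):                 # ten confirmations pile up
    belief = bayes_update(belief, 0.8, 0.3)

print(belief)                       # very close to 1, never exactly 1
```

The design point is that no finite run of confirmations gets the probability to exactly 1, which is the formal version of "close to true, but no capital T".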

But there is a deeper problem than even induction under Bayesian rules of probability. Our knowledge is not little bits and pieces that can be confirmed or found false; that is a simplification that can confuse. What we have is a huge web of knowledge, not independent bits. This does not lend itself to actual Bayesian calculations, but the general idea is still valid. New (and therefore suspect) ideas are confirmed or falsified by being set in that web of knowledge – they eventually fit or don’t fit. Each confirmation strengthens the web as well as the new idea; and each falsification can be interpreted as a fault in the web as well as a failure of the new idea, but it is almost always the web that stays and the new idea that is thrown away. This has been going on for a few centuries and the web is very strong. It takes an upheaval every once in a while, but it is as close to true as we have. It is in essence a product of induction, not deduction.