Lately I've been reading about the lambda calculus and Church encoding, and although I've formed a rough understanding of what they entail, I'm having trouble seeing the purpose of using higher-order functions to represent numeric values or lists instead of using numeric values or lists directly.
In practice, reducing Church-encoded expressions is extremely taxing on the machine's resources and seems like a considerably less efficient way of computing anything. Most programmers seem to consider it bad practice, except for Scheme or Haskell programmers, for reasons I don't understand.
Is there any actual practical reason to use Church encoding? Or is it only useful in theoretical study?
Analysis of a formal language becomes technically simpler the smaller the language is. If you can represent, for example, data types as functions (which is what Church encoding gives you), then you don't need to add data types to the language just to study how data can be manipulated in it. A Church numeral, for instance, represents the number n as the function that applies its argument n times, so arithmetic needs no machinery beyond what the calculus already provides for functions.
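Here's a minimal sketch of that idea in Haskell (it needs GHC's RankNTypes extension; the names zero, succ', add, mul, and toInt are just illustrative, not a standard API):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral: the number n, represented as the function
-- that applies its first argument n times to its second.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ x = x              -- apply f zero times

succ' :: Church -> Church
succ' n f x = f (n f x)   -- one more application of f

add :: Church -> Church -> Church
add m n f x = m f (n f x) -- apply f n times, then m more times

mul :: Church -> Church -> Church
mul m n f = m (n f)       -- iterate "apply f n times" m times

one, two :: Church
one = succ' zero
two = succ' one

-- Convert back to an ordinary Int just to observe the result.
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add one (mul two two)))  -- prints 5
```

Note that nothing here is special syntax: addition and multiplication are ordinary function application and composition, which is exactly why the core calculus being analyzed stays so small.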
For a practical programming language, yes, you will most often add native data type support; but you can still define those types' semantics by translating them into functions and then appealing only to the semantics of the smaller, data-type-free core language.
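The same trick works for control flow, not just data. As a sketch of how a built-in construct can be given its semantics by translation to functions, here are Church booleans in Haskell, where "if b then x else y" desugars to plain application (ifThenElse is a hypothetical name that just makes the desugaring explicit):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church boolean is a two-way selection function:
-- given two alternatives, it returns one of them.
type CBool = forall a. a -> a -> a

true, false :: CBool
true  t _ = t   -- select the first alternative
false _ f = f   -- select the second alternative

-- "if b then t else f" becomes ordinary application: b t f.
ifThenElse :: CBool -> a -> a -> a
ifThenElse b t f = b t f

main :: IO ()
main = putStrLn (ifThenElse true "then-branch" "else-branch")
-- prints "then-branch"
```

A language designer can specify a surface-level if/then/else as exactly this translation and then reason only about the function-only core.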