Is it the operating system that chooses the encoding and decoding schemes (like ASCII or UTF-8) for keyboard input and screen output?
Also, I want to know how the compiler decides on the encoding scheme for character strings.
Typically, the operating system receives numeric IDs (scan codes) of pressed keys from the keyboard and then converts them into whatever is appropriate for the active keyboard layout. I, for example, use the same OS and keyboard for writing text in 4+ languages: I just switch the "keyboard layout" in the OS and type, and the OS interprets the same keys differently.

Whatever the OS uses to represent textual data internally is up to the OS. In the old days, in systems like MS-DOS, you could usually use only one language at a time (that is, one language in addition to English, or whatever other language ASCII is enough for; is there any such language, by the way?), because each character had only 8 bits of storage. With some hacks and workarounds those limiting 8 bits could be overcome. Modern operating systems use Unicode to represent text internally (typically UTF-16 on Windows, UTF-8 on Linux).
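To make the 8-bit limitation concrete, here is a minimal sketch (plain C, nothing OS-specific, no validation of surrogates or out-of-range values) of how a Unicode code point gets turned into UTF-8 bytes. Anything above U+007F simply does not fit into one byte, which is exactly why the old one-code-page-at-a-time approach was so limiting:

```c
#include <stdio.h>
#include <stdint.h>

/* Hand-rolled UTF-8 encoder for a single code point (illustrative only).
 * Returns the number of bytes written to out[]. */
static int encode_utf8(uint32_t cp, unsigned char out[4]) {
    if (cp < 0x80) {                /* fits in 7 bits: 1 byte, same as ASCII */
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp < 0x800) {        /* 2 bytes */
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp < 0x10000) {      /* 3 bytes */
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    } else {                        /* 4 bytes, up to U+10FFFF */
        out[0] = (unsigned char)(0xF0 | (cp >> 18));
        out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[3] = (unsigned char)(0x80 | (cp & 0x3F));
        return 4;
    }
}

int main(void) {
    /* 'A', 'é', a CJK character, and an emoji */
    uint32_t samples[] = { 0x41, 0x00E9, 0x4E2D, 0x1F600 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; ++i) {
        unsigned char buf[4];
        int n = encode_utf8(samples[i], buf);
        printf("U+%04X ->", (unsigned)samples[i]);
        for (int j = 0; j < n; ++j)
            printf(" %02X", buf[j]);
        printf("\n");
    }
    return 0;
}
```

Only the first sample fits in a single byte; everything else needs two to four bytes, so any fixed 8-bit scheme has to pick which 128 "extra" characters it can represent (the code page).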
How characters and character strings are encoded is up to the compiler, within the restrictions of the programming language standard to which the compiler claims conformance. Some compilers allow and use only 8-bit characters (ASCII plus 128 extra characters chosen by code pages/locales), others support Unicode (UTF-8, UTF-16, or the almost-Unicode UCS-2), and yet others support other multibyte encoding schemes (for languages like Chinese, for example).
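For a concrete example, here is a small sketch assuming a C11-or-later compiler. The standard defines several string literal types, and the compiler picks the encoding for each; for the plain `""` literal the "execution character set" is implementation-defined, which is exactly where the code-page era still shows through:

```c
#include <stdio.h>
#include <uchar.h>   /* char16_t (C11) */
#include <wchar.h>   /* wchar_t */

/* Print the raw bytes of a buffer (byte order of multi-byte units
 * therefore depends on the platform's endianness). */
static void dump_bytes(const char *label, const unsigned char *p, size_t n) {
    printf("%-12s", label);
    for (size_t i = 0; i < n; ++i)
        printf("%02X ", p[i]);
    printf("\n");
}

int main(void) {
    /* The same character, U+00E9 ('é'), in different literal types.
     * How each is encoded is the compiler's choice within the C standard. */
    const char     narrow[] = "\u00e9";   /* execution charset: implementation-defined */
    const char     utf8[]   = u8"\u00e9"; /* UTF-8: C3 A9 */
    const char16_t utf16[]  = u"\u00e9";  /* UTF-16 code unit 0x00E9 */
    const wchar_t  wide[]   = L"\u00e9";  /* wchar_t: commonly 16-bit on Windows, 32-bit on Linux */

    dump_bytes("\"\":",   (const unsigned char *)narrow, sizeof narrow - 1);
    dump_bytes("u8\"\":", (const unsigned char *)utf8,   sizeof utf8 - 1);
    dump_bytes("u\"\":",  (const unsigned char *)utf16,  sizeof utf16 - sizeof(char16_t));
    dump_bytes("L\"\":",  (const unsigned char *)wide,   sizeof wide - sizeof(wchar_t));
    return 0;
}
```

On a typical Linux system with GCC you will see UTF-8 bytes even for the plain literal, while a Windows compiler configured for an 8-bit ANSI code page may emit a single byte for it; the `u8`, `u`, and `L` forms are the ones whose encodings the standard actually pins down (UTF-8, UTF-16, and the wide execution charset, respectively).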