
How to prevent WM_CHAR for keypresses handled by WM_KEYDOWN?


I've written a custom Windows control that processes WM_CHAR messages for text input and WM_KEYDOWN messages for "navigation" keystrokes from keys like the arrows, Delete, Home, etc.

I would like my control to process the '+' and '-' keys on the numeric keypad as navigation keys. They are distinguishable from the regular '=+' and '-_' keys on the main portion of the keyboard because they have unique virtual key codes (VK_ADD and VK_SUBTRACT), so processing them in my WM_KEYDOWN handler works fine.
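
For concreteness, the WM_KEYDOWN side looks roughly like this (MoveCaretForward and MoveCaretBackward stand in for my actual navigation handlers):

    case WM_KEYDOWN:
        switch (wParam)
        {
        case VK_ADD:       // keypad '+' only; the main-row '+' arrives as VK_OEM_PLUS
            MoveCaretForward();    // stand-in for my real navigation action
            return 0;
        case VK_SUBTRACT:  // keypad '-' only; the main-row '-' arrives as VK_OEM_MINUS
            MoveCaretBackward();   // stand-in for my real navigation action
            return 0;
        }
        break;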

The problem, of course, is that the control also receives WM_CHAR messages for those keystrokes, because the TranslateMessage call in the message loop treats them like any other character-generating keys. Since my control lives in a library that may be used by various applications, the message loop is outside my control.
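
For reference, a typical host message loop; the TranslateMessage call is what synthesizes the WM_CHAR from the preceding WM_KEYDOWN:

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);   // posts WM_CHAR for character-generating keydowns
        DispatchMessage(&msg);
    }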

Therefore, my WM_CHAR handler needs to distinguish between '+' and '-' characters generated from the main keyboard and those from the numeric keypad, so that it can ignore the latter.

But the wParam of a WM_CHAR is indeed a character code, no longer the virtual key code. I tried checking the "extended key" bit in the lParam, but apparently these are not extended keys.
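
For reference, that flag is bit 24 of lParam, testable as KF_EXTENDED against HIWORD(lParam); a check like this comes back false for the keypad '+' and '-':

    case WM_CHAR:
        if (HIWORD(lParam) & KF_EXTENDED)
        {
            // set for keys such as keypad '/' and keypad Enter,
            // but not for keypad '+' and '-', so it doesn't help here
        }
        break;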

That leaves the scan code. The WM_CHAR documentation says the scan codes vary from OEM to OEM. But the Keyboard Input Overview documentation suggests that the scan codes were standardized in the HID specification.

Can I trust that I'll always get a HID scan code?

The codes generated from my Microsoft keyboard do match the HID values, but the conflict in the documentation makes me wonder whether I can actually rely on that for other machines.
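
One way to avoid relying on hardcoded values at all would be to ask the active keyboard layout for the scan codes at runtime via MapVirtualKey:

    // Query the active layout for the keypad scan codes instead of
    // hardcoding values taken from the HID or OEM tables.
    UINT scAdd      = MapVirtualKeyW(VK_ADD,      MAPVK_VK_TO_VSC);
    UINT scSubtract = MapVirtualKeyW(VK_SUBTRACT, MAPVK_VK_TO_VSC);
    // A result of 0 means the layout defines no key for that virtual key.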

My fallback solution would be to have my control note when it handles a VK_ADD or VK_SUBTRACT in WM_KEYDOWN so that it knows to ignore the next WM_CHAR carrying a corresponding '+' or '-'.
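
Sketched out, that fallback might look like this (m_swallowNextPlusMinus is a hypothetical member flag on my control):

    case WM_KEYDOWN:
        if (wParam == VK_ADD || wParam == VK_SUBTRACT)
        {
            m_swallowNextPlusMinus = true;   // hypothetical member flag
            // handle as navigation...
            return 0;
        }
        break;

    case WM_CHAR:
        if (m_swallowNextPlusMinus && (wParam == '+' || wParam == '-'))
        {
            m_swallowNextPlusMinus = false;
            return 0;   // swallow the WM_CHAR produced by the keypad key
        }
        m_swallowNextPlusMinus = false;
        break;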


Solution

  • Both WM_KEYDOWN and WM_CHAR give you the scan code of the key that generated the message, so it should not be hard to ignore WM_CHAR messages for scan codes that have already sent you WM_KEYDOWN messages for VK_ADD etc, regardless of whether the scan codes are standardized or not. Scan codes don't change while a keyboard is attached and active.

    For example, try something like this:

    bool ignoredScanCodes[256] = {};  // scan codes whose WM_CHAR should be skipped
    
    // ...
    
    switch (uMsg)
    {
        case WM_KEYDOWN: {
            if (wParam == VK_ADD || wParam == VK_SUBTRACT) {
                // remember which physical key sent this, so the matching
                // WM_CHAR can be recognized and discarded below
                BYTE bScanCode = (lParam >> 16) & 0xFF;
                ignoredScanCodes[bScanCode] = true;
            }
            // ... handle navigation keys ...
            break;
        }
    
        case WM_CHAR: {
            BYTE bScanCode = (lParam >> 16) & 0xFF;
            if (!ignoredScanCodes[bScanCode]) {
                // ... normal text input ...
            }
            break;
        }
    }
    

    However, there could be multiple keyboards attached to the machine (rare, but possible), and there is no way to distinguish between them in the WM_KEY... and WM_CHAR messages. So it is possible that one keyboard uses a given scan code for VK_ADD etc, while another keyboard uses the same scan code for some other purpose. If you need to make that distinction, you will have to use the Raw Input API instead, which provides a per-device identifier in the WM_INPUT message.
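
    If you do need that, a minimal sketch of the registration and handling (usage page 0x01, usage 0x06 identifies HID keyboards):

    // Register for WM_INPUT from all keyboards; RAWINPUT.header.hDevice
    // then identifies which physical keyboard sent each keystroke.
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;   // generic desktop controls
    rid.usUsage     = 0x06;   // keyboard
    rid.dwFlags     = 0;      // default: deliver input while the window has focus
    rid.hwndTarget  = hWnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
    
    // ... then, in the window procedure:
    case WM_INPUT: {
        RAWINPUT raw;
        UINT cbSize = sizeof(raw);
        if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &cbSize,
                            sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
            raw.header.dwType == RIM_TYPEKEYBOARD)
        {
            // raw.header.hDevice         = per-device identifier
            // raw.data.keyboard.MakeCode = hardware scan code
        }
        break;
    }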