intellij-idea · artificial-intelligence · github-copilot

Is there a “How do you like Github Copilot to respond?” in GitHub Copilot?


With ChatGPT we have the option “How would you like ChatGPT to respond?”, where we can give instructions on how we expect it to respond.

Is the same feature available in GitHub Copilot (GC), or are there workarounds to get it? For example, having a local file that GC reads to pick up those user preferences before generating a response.

The goal is to be able to share those settings within the same development team, avoiding too many divergences in the code and tests generated by GC. Thank you in advance for your help.


Solution

  • You may have to mix VS Code and IntelliJ, or upgrade to GitHub Copilot Enterprise. There are two ways to approach this: Local and Knowledge Base.

    Local

    You need at least GitHub Copilot for Business, or for Individual (I haven't tested Individual yet, but it likely works).

    The feature is called Instructions; you can supply them either by loading a file or as inline text. For now, it is only supported in VS Code.

    In VS Code, there are several settings you can configure: press Ctrl + , to open Settings, then type "github copilot instruction".

    The instructions are good for base context such as coding conventions (coding standards), language scope, a prerequisite API list, and so on. Bear in mind to keep them short and meaningful, because long instructions slow the response and consume more tokens.
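    As a sketch of the file-based and text-based variants (the setting name reflects recent VS Code Copilot releases; the file path and instruction text are illustrative, so verify against your extension version), team-shareable instructions can be committed to the workspace's `.vscode/settings.json`:

    ```jsonc
    // .vscode/settings.json — checked into the repo so the whole team shares it
    {
      "github.copilot.chat.codeGeneration.instructions": [
        // Inline text applied to code-generation requests
        { "text": "Use 4-space indentation and write JUnit 5 tests." },
        // Or point at a shared file in the repository (path is an example)
        { "file": "docs/coding-conventions.md" }
      ]
    }
    ```

    Because this lives in the repository rather than in user settings, it addresses the question's goal of keeping the whole team's Copilot output consistent.
    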

    You may need to update the GitHub Copilot extension to a recent version to have this feature.

    Knowledge Base

    A GitHub Copilot Enterprise license is required. The idea is that you attach a group of central repositories (coding-convention markdown, API documentation, etc.) to a Knowledge Base. Then, for every question to Copilot, whether on the website or in any IDE/editor, it first performs RAG* over that knowledge and responds to you afterwards.

    Read more here: https://docs.github.com/en/enterprise-cloud@latest/copilot/customizing-copilot/managing-copilot-knowledge-bases
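    As an illustration of what such a repository might contain (the file name, headings, and rules below are invented examples, not a required format), a coding-convention document indexed by the Knowledge Base could look like:

    ```markdown
    <!-- docs/coding-conventions.md — example file indexed by the Knowledge Base -->
    # Team Coding Conventions

    ## Java
    - Use 4-space indentation; no tabs.
    - Prefer constructor injection over field injection.

    ## Tests
    - Write unit tests with JUnit 5.
    - Name tests `methodName_condition_expectedResult`.
    ```

    Plain, well-structured markdown like this retrieves well, since RAG returns chunks of these documents as context for each answer.
    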

    *RAG stands for Retrieval-Augmented Generation, a common LLM/SLM technique in which the model pulls from an external knowledge base to enhance its generated responses, instead of fine-tuning (retraining), which is costly and time-consuming. Typically, the knowledge base is a vector database or a search engine that leverages ML algorithms.