python, pytorch, nlp, huggingface-transformers, text-classification

What is the classification head of a Hugging Face AutoModelForTokenClassification model?


I am a beginner with Hugging Face and Transformers and have been trying to figure out what the classification head of AutoModelForTokenClassification is. Is it just a BiLSTM-CRF layer, or is it something else?

In general, where do I find details about the heads of these AutoModels?

I have tried looking into the docs but couldn't find anything.


Solution

  • AutoModelForTokenClassification (and the other AutoModel* classes) is not a PyTorch model implementation itself; it implements the factory pattern. That means from_pretrained returns an instance of a different, model-specific class depending on the checkpoint you pass. For example:

    from transformers import AutoModelForTokenClassification
    
    m = AutoModelForTokenClassification.from_pretrained("roberta-base")
    print(type(m))
    

    Output:

    <class 'transformers.models.roberta.modeling_roberta.RobertaForTokenClassification'>
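
    Passing a checkpoint of a different architecture makes the factory return a different class. A minimal check, assuming the standard "bert-base-uncased" checkpoint (used purely for illustration, it is not part of the question):

    from transformers import AutoModelForTokenClassification

    # The same factory call on a BERT checkpoint returns the BERT-specific class
    m_bert = AutoModelForTokenClassification.from_pretrained("bert-base-uncased")
    print(type(m_bert))
    # <class 'transformers.models.bert.modeling_bert.BertForTokenClassification'>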
    

    You can check the head either in the official documentation of the returned class or by inspecting its parameters/modules:

    m.parameters
    

    Output:

    <bound method Module.parameters of RobertaForTokenClassification(
      (roberta): RobertaModel(
        (embeddings): RobertaEmbeddings(
          (word_embeddings): Embedding(50265, 768, padding_idx=1)
          (position_embeddings): Embedding(514, 768, padding_idx=1)
          (token_type_embeddings): Embedding(1, 768)
          (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
        (encoder): RobertaEncoder(
          (layer): ModuleList(
            (0): RobertaLayer(
              (attention): RobertaAttention(
                (self): RobertaSelfAttention(
                  (query): Linear(in_features=768, out_features=768, bias=True)
                  (key): Linear(in_features=768, out_features=768, bias=True)
                  (value): Linear(in_features=768, out_features=768, bias=True)
                  (dropout): Dropout(p=0.1, inplace=False)
                )
    <... truncated ...>
            (11): RobertaLayer(
              (attention): RobertaAttention(
                (self): RobertaSelfAttention(
                  (query): Linear(in_features=768, out_features=768, bias=True)
                  (key): Linear(in_features=768, out_features=768, bias=True)
                  (value): Linear(in_features=768, out_features=768, bias=True)
                  (dropout): Dropout(p=0.1, inplace=False)
                )
                (output): RobertaSelfOutput(
                  (dense): Linear(in_features=768, out_features=768, bias=True)
                  (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                  (dropout): Dropout(p=0.1, inplace=False)
                )
              )
              (intermediate): RobertaIntermediate(
                (dense): Linear(in_features=768, out_features=3072, bias=True)
                (intermediate_act_fn): GELUActivation()
              )
              (output): RobertaOutput(
                (dense): Linear(in_features=3072, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
          )
        )
      )
      (dropout): Dropout(p=0.1, inplace=False)
      (classifier): Linear(in_features=768, out_features=2, bias=True)
    )>
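
    As the printed structure shows, the token-classification head here is just a Dropout followed by a single Linear layer that maps the 768-dimensional hidden states to num_labels logits; there is no BiLSTM or CRF on top. The out_features=2 above is simply the default label count when no num_labels is given. A minimal sketch of inspecting the head directly (the dropout/classifier attribute names come from the printed structure above; num_labels=5 is an arbitrary illustrative value):

    from transformers import AutoModelForTokenClassification

    # num_labels is forwarded to the config and sizes the classification head
    m = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=5)

    print(m.dropout)            # Dropout(p=0.1, inplace=False)
    print(m.classifier)         # Linear(in_features=768, out_features=5, bias=True)
    print(m.config.num_labels)  # 5

    In general, each *ForTokenClassification class documents its head on its own API page (for RoBERTa the head is described as a linear layer on top of the hidden-states output), which is the place to look for head details of other architectures as well.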