Model Card for biobert-large-cased-v1.1-squad

Model Details

Model Description

BioBERT is a BERT-based language representation model pre-trained on large-scale biomedical corpora (PubMed abstracts and PMC full-text articles). This checkpoint is the large, cased v1.1 variant of BioBERT, fine-tuned on the SQuAD dataset for extractive question answering.
  • Developed by: DMIS-lab (Data Mining and Information Systems Lab, Korea University)
  • Shared by [Optional]: DMIS-lab (Data Mining and Information Systems Lab, Korea University)


  • Model type: Question Answering
  • Language(s) (NLP): More information needed
  • License: More information needed
  • Parent Model: BERT-large (cased)
  • Resources for more information:
    • GitHub Repo
    • Associated Paper


Uses

    Direct Use

    This model can be used for the task of question answering.
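    As a minimal sketch of direct use, the checkpoint can be loaded through the Hugging Face question-answering pipeline; the question and context below are hypothetical placeholders, not examples from any evaluation set.

    python
    from transformers import pipeline

    # Load the checkpoint into an extractive-QA pipeline
    qa = pipeline("question-answering", model="dmis-lab/biobert-large-cased-v1.1-squad")

    # Hypothetical biomedical question and context, for illustration only
    result = qa(
        question="What does BRCA1 regulate?",
        context="BRCA1 is a tumor suppressor gene involved in DNA repair and cell cycle regulation.",
    )
    print(result["answer"], result["score"])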

    Downstream Use [Optional]

    More information needed.

    Out-of-Scope Use

    The model should not be used to intentionally create hostile or alienating environments for people.

    Bias, Risks, and Limitations

    Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.



    Recommendations

    Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

    Training Details

    Training Data

    The model creators note in the associated paper: > We used the BERT-Base model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (+ PubMed + PMC) is the version of BioBERT (+ PubMed + PMC) trained for 470K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (+ PubMed)) and PMC for 270K steps (BioBERT v1.0 (+ PMC)).

    Training Procedure



    Preprocessing

    The model creators note in the associated paper: > We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs.

    Speeds, Sizes, Times

    The model creators note in the associated paper: > The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98,304 words per iteration.
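
    As a quick sanity check of the stated figure: 512 tokens per sequence × 192 sequences per mini-batch = 98,304 words per iteration.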

    Evaluation

    Testing Data, Factors & Metrics

    Testing Data

    More information needed

    Factors

    More information needed

    Metrics

    More information needed

    Results

    More information needed

    Model Examination

    More information needed

    Environmental Impact

    Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
  • Hardware Type:
    • Training: eight NVIDIA V100 (32GB) GPUs
    • Fine-tuning: a single NVIDIA Titan Xp (12GB) GPU per task
  • Hours used: More information needed
  • Cloud Provider: More information needed
  • Compute Region: More information needed
  • Carbon Emitted: More information needed

    Technical Specifications [optional]

    Model Architecture and Objective

    More information needed

    Compute Infrastructure

    More information needed

    Hardware

    More information needed

    Software

    More information needed.

    Citation



    BibTeX:
    bibtex
    @article{lee2019biobert,
      title={BioBERT: a pre-trained biomedical language representation model for biomedical text mining},
      author={Lee, Jinhyuk and Yoon, Wonjin and Kim, Sungdong and Kim, Donghyeon and Kim, Sunkyu and So, Chan Ho and Kang, Jaewoo},
      journal={arXiv preprint arXiv:1901.08746},
      year={2019}
    }
    

    Glossary [optional]

    More information needed

    More Information [optional]

    For help or issues using BioBERT, please submit a GitHub issue. For communication related to BioBERT, please contact Jinhyuk Lee (lee.jnhk (at) gmail.com) or Wonjin Yoon (wonjin.info (at) gmail.com).

    Model Card Authors [optional]

    DMIS-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team

    Model Card Contact

    More information needed

    How to Get Started with the Model

    Use the code below to get started with the model.

    python
    from transformers import AutoTokenizer, AutoModelForQuestionAnswering

    # Load the BioBERT tokenizer and the SQuAD-fine-tuned question-answering head
    tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-large-cased-v1.1-squad")
    model = AutoModelForQuestionAnswering.from_pretrained("dmis-lab/biobert-large-cased-v1.1-squad")
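
    The snippet below sketches a complete extractive-QA inference pass with this checkpoint; the question and context strings are hypothetical placeholders, for illustration only.

    python
    import torch
    from transformers import AutoTokenizer, AutoModelForQuestionAnswering

    tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-large-cased-v1.1-squad")
    model = AutoModelForQuestionAnswering.from_pretrained("dmis-lab/biobert-large-cased-v1.1-squad")

    # Hypothetical question and context, for illustration only
    question = "What enzyme does aspirin inhibit?"
    context = "Aspirin exerts its effect by irreversibly inhibiting cyclooxygenase, thereby reducing prostaglandin synthesis."

    inputs = tokenizer(question, context, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # The model scores every token as a candidate answer-span start and end;
    # take the argmax of each and decode the tokens between them.
    start = int(outputs.start_logits.argmax())
    end = int(outputs.end_logits.argmax())
    print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))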


    Files & Weights

    Filename            Size
    flax_model.msgpack  1.35 GB
    pytorch_model.bin   1.35 GB