GPT-Neo

  • 📙Paper: GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow
  • 📚Publisher: other
  • 🏠Author Affiliation: EleutherAI
  • 🔑Public: ✅
  • 🌐Architecture
    • Decoder-Only
  • 📏Model Size
    • 125M; 1.3B; 2.7B (all three checkpoints are publicly released; see the loading sketch after this section)
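
All three checkpoints are published on the Hugging Face Hub under the EleutherAI organization. Below is a minimal loading sketch with the `transformers` library; the Hub IDs (`EleutherAI/gpt-neo-125M`, `EleutherAI/gpt-neo-1.3B`, `EleutherAI/gpt-neo-2.7B`) are assumed from current Hub naming.

```python
# Minimal sketch: load a released GPT-Neo checkpoint and generate a few tokens.
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

inputs = tokenizer("EleutherAI released GPT-Neo in", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```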
  • 🗂️Data pre-processing
    • Data Resource
      • The Pile
    • De-duplication: /
    • Filter Strategies
      • /
  • 🍉Tokenizer
    • Technology
      • Byte-level Byte-Pair-Encoding (BBPE)
    • Details
      • The released checkpoints reuse the GPT-2 tokenizer; the codebase also supports training a new tokenizer on your own dataset (see the sketch after this section)
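
Because the released models reuse GPT-2's byte-level BPE vocabulary, the standard `GPT2Tokenizer` class loads it directly. A minimal sketch, with the Hub ID assumed from current naming:

```python
# Minimal sketch: GPT-Neo reuses GPT-2's byte-level BPE tokenizer,
# so GPT2Tokenizer loads it straight from the Hub.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

ids = tokenizer("Byte-level BPE never hits an out-of-vocabulary token.")["input_ids"]
print(ids)                                    # token IDs
print(tokenizer.convert_ids_to_tokens(ids))   # byte-level BPE pieces
print(tokenizer.vocab_size)                   # 50257, same as GPT-2
```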
  • 🧪Hyperparameters (GPT-Neo 2.7B)
    • optimizer: Adam
      • betas: 0.9, 0.95
      • eps: 1e-8
    • batch size: /
    • context window: 2,048
    • gradient accumulation steps: /
    • warmup steps: 3,000
    • learning rate: /
    • weight decay: 0.1
    • decay schedule: Cosine (see the configuration sketch after this section)
    • floating-point precision: /
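
A sketch of wiring the settings listed above into PyTorch. The learning rate and total step count are marked `/` on this card, so the values below are placeholders, and reading "Adam plus weight decay 0.1" as decoupled weight decay (AdamW) is an assumption:

```python
# Sketch: Adam settings from the card (betas 0.9/0.95, eps 1e-8,
# weight decay 0.1) with a 3,000-step warmup into a cosine decay.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in module; substitute the GPT-Neo model
TOTAL_STEPS = 100_000          # placeholder; the card lists trained steps as "/"

optimizer = torch.optim.AdamW(      # decoupled weight decay is an assumption
    model.parameters(),
    lr=1e-4,                        # placeholder; the card lists the learning rate as "/"
    betas=(0.9, 0.95),
    eps=1e-8,
    weight_decay=0.1,
)
# Linear warmup for 3,000 steps, then cosine decay over the remaining steps.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=3_000, num_training_steps=TOTAL_STEPS
)
```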
  • 🏃‍♀️Training
    • model initialization: from scratch
    • training strategies
      • left-to-right (standard causal language modeling; see the sketch after this section)
    • trained tokens/steps: /
    • hardware: Training and inference are officially supported on TPUs and should also work on GPUs.
    • training time: /
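
Left-to-right training is the standard causal language modeling objective. Below is a minimal sketch of one training step with `transformers`, where passing `labels=input_ids` makes the library shift the targets internally so each token predicts the next one; the 125M Hub ID is assumed from current naming.

```python
# Minimal sketch of one left-to-right (causal LM) training step.
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

batch = tokenizer("The Pile is an 825 GB text corpus.", return_tensors="pt")
# labels=input_ids: transformers shifts targets by one position internally.
outputs = model(**batch, labels=batch["input_ids"])

outputs.loss.backward()  # an optimizer step would follow here
print(float(outputs.loss))
```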
This post is licensed under CC BY 4.0 by the author.
