
MHPP

  • 📙Paper: [MHPP: Exploring the Capabilities and Limitations of Language Models Beyond Basic Code Generation](https://arxiv.org/pdf/2405.11430)
  • 📚Publisher: arXiv
  • 🏠Author Affiliation: University of Edinburgh, University of Hong Kong, Harbin Institute of Technology, South China University of Technology, City University of Hong Kong, University of Cambridge
  • Dataset: https://github.com/SparksofAGI/MHPP
  • Abstract: Recent advancements in large language models (LLMs) have greatly improved code generation, specifically at the function level. For instance, GPT-4 has achieved an 88.4% pass rate on HumanEval. However, this calls into question the adequacy of existing benchmarks for thoroughly assessing function-level code generation capabilities. Our study analyzed two common benchmarks, HumanEval and MBPP, and found that these might not thoroughly evaluate LLMs’ code generation capacities due to limitations in quality, difficulty, and granularity. To resolve this, we introduce the Mostly Hard Python Problems (MHPP) dataset, consisting of 140 unique human-curated problems. By focusing on the combination of natural language and code reasoning, MHPP gauges LLMs’ abilities to comprehend specifications and restrictions, engage in multi-step reasoning, and apply coding knowledge effectively. Initial evaluations of 22 LLMs using MHPP showed that many models with high performance on HumanEval failed to achieve similar success on MHPP. Moreover, MHPP highlighted various previously undiscovered limitations within LLMs, leading us to believe that it could pave the way for a better understanding of LLMs’ capabilities and limitations.
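
The pass rates quoted in the abstract come from functional-correctness testing: a model's completion counts as a pass only if it satisfies the problem's unit tests. Below is a minimal pass@1 sketch, assuming MHPP ships as a JSONL file with `prompt` and `test` fields (the actual schema and official evaluation harness are in the GitHub repository above); `generate` stands in for an LLM call.

```python
import json

def evaluate_pass_at_1(problems, generate):
    """Fraction of problems whose single generated solution passes its tests."""
    passed = 0
    for problem in problems:
        # "prompt" and "test" are assumed field names; check the
        # dataset repository for the real schema.
        solution = generate(problem["prompt"])
        namespace = {}
        try:
            exec(solution, namespace)         # define the candidate function
            exec(problem["test"], namespace)  # run the problem's assertions
            passed += 1
        except Exception:
            pass  # syntax errors, failed assertions, and crashes all count as fails
    return passed / len(problems)

if __name__ == "__main__":
    with open("mhpp.jsonl") as f:  # assumed filename
        problems = [json.loads(line) for line in f]
    # A real run would plug in an LLM; this stub fails every test.
    print(f"pass@1: {evaluate_pass_at_1(problems, lambda p: 'def f(): pass'):.1%}")
```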
This post is licensed under CC BY 4.0 by the author.
