- 🗂️Benchmark Name: MultiPL-HumanEval
- 📚Publisher: Arxiv
- 🔗URL: https://github.com/nuprl/MultiPL-E/tree/main/datasets
- Number of Instances: 164 per programming language
- Problem Description’s Natural Language: English
- Code Solution’s Programming Language: Python, Bash, C++, C#, D, Go, Java, JavaScript, Julia, Lua, Perl, PHP, R, Racket, Ruby, Rust, Scala, Swift, TypeScript
- Data Statistics
- Test Case: ✅ (problems are scored by executing their unit tests; see the pass@k sketch after this list)
- Average Number of Test Cases: 7.8
- Average Number of Characters in Problem Description: 453.9
- Average Number of Lines in Problem Description: 13.0
- Average Number of Characters in Code Solution: /
- Average Number of Lines in Code Solution: /
- Scenario: Multilingual
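
Because every MultiPL-HumanEval problem ships with executable test cases, results on this benchmark are typically reported with the execution-based pass@k metric introduced with the original HumanEval. Below is a minimal sketch of the standard unbiased pass@k estimator; the function name `pass_at_k` and the sample numbers in the usage lines are illustrative, not taken from the benchmark itself.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: completions sampled per problem
    c: completions that pass all of the problem's test cases
    k: evaluation budget (k <= n)
    """
    if n - c < k:
        # Fewer failing samples than the budget: every k-subset
        # contains at least one passing sample, so the estimate is 1.
        return 1.0
    # pass@k = 1 - C(n - c, k) / C(n, k), expanded as a product
    # to avoid computing large binomial coefficients directly.
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# Illustrative numbers only: 20 samples per problem, 5 of which pass.
print(pass_at_k(20, 5, 1))   # ~0.25
print(pass_at_k(20, 5, 10))  # ~0.98
```

The estimate is averaged over all 164 problems (per language) to produce the reported score.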