- 🗂️Benchmark Name: MBXP-HumanEval
- 📚Publisher: arXiv
- 🏠Author Affiliation: AWS AI Labs
- 🔗URL: https://github.com/amazon-research/mbxp-exec-eval
- Number of Instances: 164 per programming language
- Problem Description’s Natural Language: English
- Code Solution’s Programming Language: Python, Java, JavaScript, Kotlin, Perl, PHP, Ruby, Scala, Swift
- Data Statistics
  - Test Case: ✅ (see the loading sketch after this list)
  - Average Number of Test Cases: 7.8
  - Average Number of Characters in Problem Description: 825.6
  - Average Number of Lines in Problem Description: 30.0
  - Average Number of Characters in Code Solution: /
  - Average Number of Lines in Code Solution: /
- Scenario: Multilingual
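
The per-language statistics above can be sanity-checked against the data files released with mbxp-exec-eval. The sketch below is a minimal example, assuming HumanEval-style JSONL problem files with `prompt` and `test` fields; the directory name, file layout, and the assert-counting heuristic are assumptions for illustration, not the repository's documented interface.

```python
# Hedged sketch: tally problems and approximate test-case counts per language.
# Assumes one JSONL (optionally gzipped) file per language, each line a problem
# dict with a "test" field holding the unit-test code (HumanEval-style schema).
import gzip
import json
from pathlib import Path


def load_problems(path: Path) -> list[dict]:
    """Read a (possibly gzipped) JSONL problem file into a list of dicts."""
    opener = gzip.open if path.suffix == ".gz" else open
    with opener(path, "rt", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


def summarize(data_dir: Path) -> None:
    """Print problem counts and a rough asserts-per-problem figure per file."""
    for path in sorted(data_dir.glob("*.jsonl*")):
        problems = load_problems(path)
        if not problems:
            continue
        # Counting "assert" occurrences in the test code is only a rough proxy
        # for the number of test cases reported in the statistics above.
        avg_asserts = sum(p.get("test", "").count("assert") for p in problems) / len(problems)
        print(f"{path.name}: {len(problems)} problems, ~{avg_asserts:.1f} asserts/problem")


if __name__ == "__main__":
    summarize(Path("data/multilingual_humaneval"))  # hypothetical local path
```

With 164 problems per language, each language file should report the same problem count, while the assert tally gives a rough cross-check of the ~7.8 average test cases per problem.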