
Releases: MartinXPN/LambdaJudge

v1.4 - [email protected] update with Amazon Linux 2023, SQL tasks, C23 & C++23 support, improved security

31 Dec 09:37
505b8cb
  • Improved reliability of callback requests with a retry mechanism
  • Run containers with Python 3.12
  • Close process.stdin after sending the input, so that submissions can read EOF (see the sketch below)
  • Added text submission support
  • Upgraded to Amazon Linux 2023
  • Added SQL tasks
  • C++20 and C++23 support
  • C20 and C23 support
  • Use environment variables instead of a secrets manager
  • Improved security by restricting API Gateway access to requests with a specific access key

Full Changelog: v1.3...v1.4
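
A minimal sketch of the stdin behavior above, assuming a submission launched through Python's subprocess module (the command and input here are illustrative, not LambdaJudge's actual internals):

import subprocess

# Run a submission, feed it the test input, then close stdin so the
# program's input loop sees EOF instead of blocking forever.
proc = subprocess.Popen(
    ['python3', 'main.py'],          # hypothetical submission entry point
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
proc.stdin.write('1 2 3\n')          # hypothetical test input
proc.stdin.close()                   # the submission now reads EOF
output = proc.stdout.read()
proc.wait()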

v1.3 - Improved reliability and security

09 Aug 07:24
  • Asset files are handled as bytes
  • Upgraded to the Python 3.11 runtime
  • Run programs even when there are no test cases
  • Fixed rounding errors in score calculation by using Decimal (illustrated after this list)
  • Provide a random prefix that custom checkers use when reporting status messages (for security)
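
A minimal illustration of the rounding issue the Decimal fix addresses (not the project's actual scoring code):

from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so sums drift:
print(0.1 + 0.2)                          # 0.30000000000000004
# Decimal performs exact decimal arithmetic, so scores add up cleanly:
print(Decimal('0.1') + Decimal('0.2'))    # 0.3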

v1.2 - New ML runtime, improved type hinting, linting, and address sanitizers

13 Mar 09:52
bcbdacc
  • Added AddressSanitizer for C++ code (see the sketch after this list)
  • Improved type hinting (developer experience)
  • Added a new Python ML runtime
  • Allow code linting
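
A minimal sketch of what enabling AddressSanitizer for a C++ submission can look like at compile time; -fsanitize=address is a standard GCC/Clang flag, while the file names are illustrative:

import subprocess

# Compile the submission with AddressSanitizer so that out-of-bounds
# accesses and use-after-free bugs crash with a diagnostic report
# instead of silently corrupting memory.
subprocess.run(
    ['g++', '-std=c++17', '-fsanitize=address', '-g', 'main.cpp', '-o', 'main'],
    check=True,
)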

v1.1 - Improved stability and work on huge test cases

19 Sep 19:49

This release includes stability improvements for big test cases where the input and output of the program might contain several MB of text.
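One way to stay stable with outputs of that size, sketched purely for illustration (this is an assumption, not LambdaJudge's actual implementation), is to read the program's output in fixed-size chunks and enforce a cap instead of buffering everything at once:

import subprocess

OUTPUT_LIMIT = 32 * 1024 * 1024          # hypothetical 32 MB cap

proc = subprocess.Popen(['python3', 'main.py'], stdout=subprocess.PIPE)
chunks, total = [], 0
# Reading in chunks keeps memory bounded and keeps the pipe drained,
# so a multi-MB output can neither stall the child nor exhaust RAM.
while chunk := proc.stdout.read(64 * 1024):
    total += len(chunk)
    if total > OUTPUT_LIMIT:
        proc.kill()                      # would map to Output limit exceeded
        break
    chunks.append(chunk)
output = b''.join(chunks)
proc.wait()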

v1.0 - Initial stable version

06 Sep 08:19
  • Supports C++, Python, Java, JavaScript (Node.js), and C# (a minimal request is sketched after this list)
  • Verdicts: Solved, Wrong answer, Time limit exceeded, Memory limit exceeded, Output limit exceeded, Runtime error, Compilation error, Skipped
  • Supports files (input_files and target_files)
  • Supports projects (multi-file submissions)
  • Linting through pre-commit
  • Test coverage
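
A minimal single-submission request, using the same models as the batch script below (the problem id is a placeholder):

from models import SubmissionRequest

request = SubmissionRequest(
    language='python',
    code={'main.py': 'print(sum(map(int, input().split())))'},
    problem='A',                         # placeholder problem id
)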

Evaluate many submissions:

import glob
import json
import random
import time
from multiprocessing import Pool
from pathlib import Path

import boto3

from models import SubmissionRequest, SubmissionResult, RunResult, Status


def evaluate(submission: SubmissionRequest) -> SubmissionResult:
    """Invoke the Bouncer lambda synchronously and parse its response."""
    try:
        lambdas = boto3.client('lambda')
        response = lambdas.invoke(
            FunctionName='Bouncer',
            InvocationType='RequestResponse',
            Payload=json.dumps({'body': submission.to_json()}),
        )

        res = json.loads(response['Payload'].read().decode('utf-8'))
        return SubmissionResult.from_json(res['body'])
    except Exception as e:
        # Treat any failed invocation as a Wrong Answer so the batch keeps going
        print('Something went wrong:', submission.problem, e)
        return SubmissionResult(overall=RunResult(status=Status.WA, memory=0, time=0, return_code=0),
                                compile_result=RunResult(status=Status.WA, memory=0, time=0, return_code=0))


def evaluate_all():
    all_submissions = glob.glob('../submissions/*.cpp') + glob.glob('../submissions/*.py')
    random.shuffle(all_submissions)
    print(f'Evaluating {len(all_submissions)} submissions...')

    submissions = [SubmissionRequest(
        language='python' if s.endswith('.py') else 'C++11',
        code={'main.py' if s.endswith('.py') else 'main.cpp': Path(s).read_text()},
        problem=s.split('-')[-1].split('.')[0],     # 001122-J.cpp => J
    ) for s in all_submissions]

    # 100 worker processes => up to 100 concurrent Bouncer invocations
    with Pool(100) as p:
        results = p.map(evaluate, submissions)

    return {submission.split('/')[-1].split('-')[0]: 'OK' if res.overall.status == Status.OK else 'FAIL'
            for submission, res in zip(all_submissions, results)}


if __name__ == '__main__':
    start_time = time.time()
    all_results = evaluate_all()
    print('--------------------')
    with open('results-new.txt', 'w') as f:
        for k, v in sorted(all_results.items(), key=lambda kv: kv[0]):
            print(f'{k}: {v}')
            f.write(f'{k}: {v}\n')

    print('Took:', time.time() - start_time, 'seconds')
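
The Pool(100) above fans the boto3 calls out across 100 worker processes, so up to 100 Lambda invocations run concurrently; each RequestResponse call blocks until its judge execution finishes, so total wall time is roughly the serial time divided by the pool size.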

M2 passed

15 Jul 16:52
v0.0.1

Ignore stderr if stdout is present when running the submission.
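
Illustratively, assuming captured stdout and stderr strings (a sketch, not the project's exact code):

# Prefer the program's stdout; fall back to stderr only when stdout is empty.
output = stdout if stdout else stderr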