
Releases: aloth/JudgeGPT

v1.0.3 - WWW '26 Paper, Data Tools & Documentation

14 Feb 17:18


This release accompanies the acceptance of our paper "Industrialized Deception" at ACM TheWebConf '26 (WWW '26) and adds data analysis tooling, improved documentation, and project branding.

New Features:

  • Data Analysis Tools: MongoDB export functionality with data_analysis/export_data.py for extracting and analyzing survey responses (see the illustrative sketch after this list)
  • Announcement Box Control: URL parameter to toggle the in-app announcement box (?announce=off)
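
For orientation, here is a minimal sketch of what a MongoDB-to-CSV export in the spirit of data_analysis/export_data.py might look like. The connection string, database name, collection name, and fields are assumptions for illustration, not the script's actual interface (the real schema is documented in DATA_DICTIONARY.md).

```python
# Hypothetical sketch of a MongoDB-to-CSV export; the database, collection,
# and environment variable names are assumptions, not JudgeGPT's actual config.
import os

import pandas as pd
from pymongo import MongoClient


def export_collection(uri: str, db_name: str, collection_name: str, out_csv: str) -> int:
    """Dump a MongoDB collection to a CSV file for offline analysis."""
    client = MongoClient(uri)
    try:
        docs = list(client[db_name][collection_name].find({}))
        # Drop the internal ObjectId so the CSV contains only plain values.
        for doc in docs:
            doc.pop("_id", None)
        df = pd.DataFrame(docs)
        df.to_csv(out_csv, index=False)
        return len(df)
    finally:
        client.close()


if __name__ == "__main__":
    uri = os.environ.get("MONGO_URI", "mongodb://localhost:27017")
    count = export_collection(uri, "judgegpt", "responses", "responses.csv")
    print(f"Exported {count} documents")
```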

Documentation:

  • CITATION.cff added for standardized citation metadata
  • Data Dictionary (DATA_DICTIONARY.md) documenting the complete survey data schema
  • WWW '26 citation and DOI (10.1145/3774905.3795471) added to README

Assets:

  • Hero images for README and social sharing
  • WWW '26 paper title pages (300 DPI, print quality)
  • Mastodon badge added to README

Full Changelog: v1.0.2...v1.0.3

v1.0.2 - Enhanced Stability and Database Error Handling

01 Sep 14:07
5755915


This patch release, v1.0.2, introduces important backend improvements to enhance the stability and robustness of the JudgeGPT survey application.

Key Enhancement:

  • Robust Database Error Handling: We have implemented comprehensive try...except blocks around all MongoDB write operations within the save_participant and save_response functions. Previously, a database connection issue (e.g., a timeout or network disruption) could cause the application to crash or fail silently. Now, the application will gracefully handle these database errors.
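
The following sketch illustrates the guarded-write pattern described above, assuming a pymongo client. The function signature, collection handling, and logging are illustrative only and are not the actual save_participant/save_response implementation.

```python
# Illustrative sketch of a guarded MongoDB write; names and error-handling
# details are assumptions, not JudgeGPT's actual code.
import logging

from pymongo.errors import PyMongoError

logger = logging.getLogger(__name__)


def save_response(collection, response: dict) -> bool:
    """Insert a survey response; return False instead of crashing on DB errors."""
    try:
        collection.insert_one(response)
        return True
    except PyMongoError as exc:  # timeouts, network drops, auth failures, etc.
        logger.error("Failed to save response: %s", exc)
        return False
```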

This change significantly improves the application's resilience and provides a better user experience in the event of backend service interruptions.

There are no changes to the core survey questions, UI layout, or data collection schema in this version.

Full Changelog: v1.0.1...v1.0.2

v1.0.1 - Performance Enhancement for Result Aggregation

13 May 13:44
fca0201


This patch release, v1.0.1, focuses on internal performance enhancements following our initial public survey launch.

Key Improvement:

  • Optimized Result Aggregation: The aggregate_results function, responsible for calculating summary statistics and accuracy metrics, has been significantly optimized. Specifically, the calculation of HM_Accuracy (Human/Machine Accuracy) and LF_Accuracy (Legitimacy/Fake Accuracy) has been refactored to use vectorized Pandas operations instead of less performant row-wise df.apply() calls. This leads to a notable speed-up in data processing, particularly as the dataset of participant responses grows.
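
As an illustration of this kind of refactor, the toy example below compares a row-wise df.apply() accuracy calculation with its vectorized equivalent. The column names are hypothetical; the actual survey schema is documented in DATA_DICTIONARY.md.

```python
# Toy comparison of row-wise vs. vectorized accuracy computation in pandas.
# Column names are hypothetical placeholders, not the real response schema.
import pandas as pd

df = pd.DataFrame({
    "true_origin":    ["human", "machine", "machine"],
    "guessed_origin": ["human", "human",   "machine"],
    "true_legit":     [True,  False, True],
    "guessed_legit":  [True,  False, False],
})

# Row-wise approach (slower): one Python-level function call per row.
hm_accuracy_apply = df.apply(
    lambda r: r["true_origin"] == r["guessed_origin"], axis=1
).mean()

# Vectorized approach (faster): single column-level comparisons.
hm_accuracy = (df["true_origin"] == df["guessed_origin"]).mean()
lf_accuracy = (df["true_legit"] == df["guessed_legit"]).mean()

print(f"HM_Accuracy: {hm_accuracy:.2f}, LF_Accuracy: {lf_accuracy:.2f}")
```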

The user-facing survey, data collection structure, and overall functionality introduced in v1.0.0 are unchanged. This release ensures the backend processing remains robust and scalable as we gather more valuable data for the JudgeGPT project.

Full Changelog: v1.0.0...v1.0.1

v1.0.0 - Public Survey Launch

25 Feb 18:09
1108dae


This release marks the official launch of JudgeGPT’s public survey! Data collection has begun and the data structure is stable. Participants can now assess AI-generated news fragments and contribute to research on misinformation detection.