About the Documentation
=======================
These benchmarks and other documentation were produced by the Metadata Quality
Benchmarks Sub-Group, which was formed in 2018 within the Metadata Working Group
of the DLF-AIG (Digital Library Federation Assessment Interest Group).
Work in this space included a survey of digital libraries regarding metadata
field usage and metadata quality activities and needs. The survey and its data
have been publicly released, along with an initial white paper summarizing the
general data and a paper documenting certain findings in more depth. Information
is available `on the main Metadata Working Group website `_.
Review of Benchmarks Documentation
----------------------------------
These benchmarks are maintained in a GitHub repository. Suggestions, corrections,
questions, and other feedback specifically about the content of the benchmarks may
be submitted as issues in the repository (this requires a GitHub account): https://github.com/DLFMetadataAssessment/MetadataQualityBenchmarks/
There is currently no plan for a formal revision schedule; however, any submitted
issues will be reviewed by Metadata Working Group members and addressed by
consensus. Additional documentation may be added as time permits, based on needs
expressed in comments or by group members.
Acknowledgements
----------------
A number of people were involved with the development of this documentation from 2018
through the public release in 2025.
Special thanks to Rachel Wittmann and Andrea Pyant who spearheaded initial efforts and
set up the survey through the University of Utah IRB.
We also appreciate the participation of the survey respondents and the peer
reviewers who provided specific feedback on earlier drafts of this work. It
could not have been completed without input from the community.
Contact
-------
- More information about the Metadata Working Group may be found on `this wiki page `_.
- Current leadership information for the Group is listed on the DLF `group page `_.