Generic cURL command:

```shell
curl -H "Accept: application/json" https://tools.ostrails.eu/champion/tests
```
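The listing endpoint above returns JSON descriptions of each test; an individual test is run against a resource via its Execution Endpoint (listed in the tables below). As a minimal sketch, the helper below only builds the request; the payload field name `resource_identifier` is an assumption made for illustration, not a documented part of the Champion API.

```python
import json

BASE = "https://tests.ostrails.eu/assess/test"

def build_assessment_call(test_id: str, guid: str) -> tuple[str, str]:
    """Build the URL and JSON body for one test execution.

    NOTE: the payload field name 'resource_identifier' is a guess made
    for illustration; consult the Champion API docs for the real schema.
    """
    url = f"{BASE}/{test_id}"
    body = json.dumps({"resource_identifier": guid})
    return url, body

# Example: prepare a call to the metadata-protocol test for a DOI.
url, body = build_assessment_call("fc_metadata_protocol",
                                  "https://doi.org/10.25504/FAIRsharing.DfMGZW")
print(url)
```

The resulting pair could then be sent with `curl -X POST -H 'Content-Type: application/json' -d "$BODY" "$URL"`, again assuming the endpoint accepts POSTed JSON.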
Test a discovered data GUID for the ability to implement authentication and authorization in its resolution protocol. Currently passes InChI Keys, DOIs, Handles, and URLs. It also searches the metadata for the Dublin Core 'accessRights' property, which may point to a document describing the data access process. Recognition of other identifiers will be added upon request by the community.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_data_authorization |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_data_authorization |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.VrP6sm |
| Error messages will appear here: | |
Test that the identifier of the data is an unambiguous element of the metadata. Tested predicates include schema:distribution, schema:contentUrl, schema:mainEntity, schema:codeRepository, dcat:distribution, dcat:dataset, dcat:downloadURL, dcat:accessURL, ldp:contains (http://www.w3.org/ns/ldp#contains), foaf:primaryTopic, iao:IAO_0000136 (also written IAO:0000136 or obo:IAO_0000136), sio:SIO_000332, and sio:is-about.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_data_identifier_in_metadata |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_data_identifier_in_metadata |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.5Xy1dJ |
| Error messages will appear here: | |
Test if the data uses a formal language broadly applicable for knowledge representation. This particular test takes a broad view of what defines a 'knowledge representation language'; in this evaluation, a knowledge representation language is interpreted as one in which terms are semantically-grounded in ontologies. Any form of ontologically-grounded linked data will pass this test.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_data_kr_language_strong |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_data_kr_language_strong |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.l8fVBn |
| Error messages will appear here: | |
Test if the data uses a formal language broadly applicable for knowledge representation. This particular test takes a broad view of what defines a 'knowledge representation language'; in this evaluation, a knowledge representation language is interpreted as one in which terms are semantically-grounded in ontologies. Any form of structured data will pass this test.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_data_kr_language_weak |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_data_kr_language_weak |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.qUroF6 |
| Error messages will appear here: | |
Data may be retrieved by an open and free protocol. Tests data GUID for its resolution protocol. Currently passes InChI Keys, DOIs, Handles, and URLs. Recognition of other identifiers will be added upon request by the community.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_data_protocol |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_data_protocol |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.DfMGZW |
| Error messages will appear here: | |
Tests whether a machine is able to find 'grounded' metadata, i.e., metadata terms that are in a resolvable namespace, where resolution leads to a definition of the meaning of the term. Examples include JSON-LD, embedded schema, or any form of RDF. This test currently excludes XML, even when terms are namespaced. Future versions of this test may be more flexible.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_grounded_metadata |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_grounded_metadata |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.lXAOu2 |
| Error messages will appear here: | |
Tests metadata GUID for the ability to implement authentication and authorization in its resolution protocol. Currently passes InChI Keys, DOIs, Handles, and URLs. Recognition of other identifiers will be added upon request by the community.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_authorization |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_authorization |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.VrP6sm |
| Error messages will appear here: | |
Metric to test if the metadata contains the unique identifier of the metadata itself. This is done using a variety of 'scraping' tools, including DOI metadata resolution, the use of the 'extruct' Python tool, and others. The test is executed by searching for the predicates 'http[s]://purl.org/dc/terms/identifier' and 'http[s]://schema.org/identifier'.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_identifier_in_metadata |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_identifier_in_metadata |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.5Xy1dJ |
| Error messages will appear here: | |
Metric to test if the unique identifier of the metadata resource is likely to be persistent. Known schemas are registered in FAIRsharing (https://fairsharing.org/standards/?q=&selected_facets=type_exact:identifier%20schema). For URLs that don't follow a schema registered there, we test known URL-persistence schemas (purl, oclc, fdlp, purlz, w3id, ark).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_identifier_persistence |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_identifier_persistence |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.VRo9Dl |
| Error messages will appear here: | |
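A minimal sketch of the fallback check described above (URL-persistence schemas), assuming host-based matching for purl/oclc/fdlp/purlz/w3id and path-based matching for ARK; the production test's logic is more thorough.

```python
from urllib.parse import urlparse

# Tokens for the URL-persistence services named in the description.
HOST_TOKENS = ("purl", "oclc", "fdlp", "purlz", "w3id")

def looks_persistent(url: str) -> bool:
    """Heuristic: does this URL use a known persistence service?"""
    parts = urlparse(url)
    host = parts.netloc.lower()
    if any(token in host for token in HOST_TOKENS):
        return True
    # ARK identifiers carry their scheme in the path, e.g. /ark:/12345/x
    return parts.path.lower().startswith("/ark:")
```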
Maturity Indicator to test if the linked data metadata contains an explicit pointer to the license. Tests: xhtml, dvia, dcterms, cc, data.gov.au, and Schema license predicates in linked data, and validates the value of those properties.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_includes_license |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_includes_license |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.VrP6sm |
| Error messages will appear here: | |
Maturity Indicator to test if the metadata contains an explicit pointer to the license. This 'weak' test will use a case-insensitive regular expression, and scan both key/value style metadata, as well as linked data metadata. Tests: xhtml, dvia, dcterms, cc, data.gov.au, and Schema license predicates in linked data, and validates the value of those properties.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_includes_license_weak |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_includes_license_weak |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.YAdSwh |
| Error messages will appear here: | |
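The 'weak' scan above can be pictured as a case-insensitive search over harvested key/value metadata. The sketch below treats metadata as a flat predicate-to-value mapping; walking the actual linked-data predicates (xhtml, dvia, dcterms, cc, data.gov.au, Schema) is left out for brevity.

```python
import re

LICENSE_RE = re.compile(r"license", re.IGNORECASE)

def find_license_keys(metadata: dict) -> list[str]:
    """Return every metadata key that looks like a license declaration."""
    return [key for key in metadata if LICENSE_RE.search(key)]

example = {
    "http://purl.org/dc/terms/license": "https://creativecommons.org/licenses/by/4.0/",
    "http://purl.org/dc/terms/title": "Example dataset",
    "License": "CC-BY-4.0",
}
hits = find_license_keys(example)
```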
Maturity Indicator to test if the metadata uses a formal language broadly applicable for knowledge representation. This particular test takes a broad view of what defines a 'knowledge representation language'; in this evaluation, anything that can be represented as structured data will be accepted.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_kr_language_weak |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_kr_language_weak |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.qUroF6 |
| Error messages will appear here: | |
Maturity Indicator to test if the metadata links outward to third-party resources. It only tests metadata that can be represented as Linked Data.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_outward_links |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_outward_links |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.LLXjWx |
| Error messages will appear here: | |
Metric to test if the metadata contains a persistence policy, explicitly identified by a persistencePolicy key (in hashed data) or a http://www.w3.org/2000/10/swap/pim/doc#persistencePolicy predicate in Linked Data. DOIs are assumed to have metadata persistence.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_persistence |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_persistence |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.lEZbPK |
| Error messages will appear here: | |
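The rule above is simple enough to sketch directly: DOIs pass outright; otherwise the harvested metadata must expose a persistencePolicy key (hashed data) or the pim/doc#persistencePolicy predicate (Linked Data). Metadata is modelled here as a flat dict, which is a simplification.

```python
PP_PREDICATE = "http://www.w3.org/2000/10/swap/pim/doc#persistencePolicy"

def has_persistence_policy(guid: str, metadata: dict) -> bool:
    """Sketch of the persistence-policy check described above."""
    g = guid.lower()
    # DOIs are assumed to have metadata persistence.
    if g.startswith("10.") or "doi.org/" in g:
        return True
    return "persistencePolicy" in metadata or PP_PREDICATE in metadata
```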
Metadata may be retrieved by an open and free protocol. Tests metadata GUID for its resolution protocol. Currently passes InChI Keys, DOIs, Handles, and URLs. Recognition of other identifiers will be added upon request by the community.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_protocol |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_protocol |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.DfMGZW |
| Error messages will appear here: | |
Maturity Indicator to test if the linked data metadata uses terms that resolve to linked (FAIR) data.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_uses_fair_vocabularies |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_uses_fair_vocabularies |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.RCUuvt |
| Error messages will appear here: | |
Tests whether a machine is able to discover the resource by search, using Microsoft Bing.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_searchable |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_searchable |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.x1f1l4 |
| Error messages will appear here: | |
Tests whether a machine is able to find structured metadata. This could be (for example) RDFa, embedded JSON, JSON-LD, or content-negotiated structured metadata such as RDF Turtle.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_structured_metadata |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_structured_metadata |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.2dUpZs |
| Error messages will appear here: | |
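A small helper can illustrate the 'structured metadata' decision for the content-negotiation case: compare the returned Content-Type against known RDF serializations. The media-type list here is illustrative, drawn from the formats the description names.

```python
# Media types accepted as structured metadata in this sketch.
STRUCTURED_MEDIA_TYPES = {
    "application/ld+json",   # JSON-LD
    "text/turtle",           # RDF Turtle
    "application/rdf+xml",   # RDF/XML
    "text/n3",               # Notation3
}

def is_structured(content_type: str) -> bool:
    """True when a Content-Type header denotes a structured (RDF) format."""
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in STRUCTURED_MEDIA_TYPES
```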
Metric to test if the metadata resource has a unique identifier. This is done by comparing the GUID to the patterns (by regexp) of known GUID schemas, such as URLs and DOIs. Known schemas are registered in FAIRsharing (https://fairsharing.org/standards/?q=&selected_facets=type_exact:identifier%20schema).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_unique_identifier |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_unique_identifier |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.NHCOKK |
| Error messages will appear here: | |
This check verifies if the following minimum metadata are present in the ontology metadata:

* title: declared using [dc:title](http://purl.org/dc/elements/1.1/title), [dcterms:title](http://purl.org/dc/terms/title) or [schema:name](http://schema.org/name)
* description: declared using [dc:abstract](http://purl.org/dc/elements/1.1/abstract), [dcterms:abstract](http://purl.org/dc/terms/abstract), [dc:description](http://purl.org/dc/elements/1.1/description), [dcterms:description](http://purl.org/dc/terms/description), [schema:description](http://schema.org/description), [rdfs:comment](http://www.w3.org/2000/01/rdf-schema#comment), [doap:description](http://usefulinc.com/ns/doap#description), [doap:shortdesc](http://usefulinc.com/ns/doap#shortdesc) or [skos:note](http://www.w3.org/2004/02/skos/core#note)
* license: declared using [dcterms:license](http://purl.org/dc/terms/license), [schema:license](http://schema.org/license), [doap:license](http://usefulinc.com/ns/doap#license) or [cc:license](http://creativecommons.org/ns#license)
* version IRI: declared using [owl:versionIRI](http://www.w3.org/2002/07/owl#versionIRI)
* creator: declared using [dc:creator](http://purl.org/dc/elements/1.1/creator), [dcterms:creator](http://purl.org/dc/terms/creator), [pav:createdBy](http://purl.org/pav/createdBy), [pav:authoredBy](http://purl.org/pav/authoredBy), [schema:creator](http://schema.org/creator), [prov:wasAttributedTo](http://www.w3.org/ns/prov#wasAttributedTo) or [doap:developer](http://usefulinc.com/ns/doap#developer)
* namespace URI: declared using [vann:preferredNamespaceUri](https://purl.org/vocab/vann/)

The test will pass if all ontology metadata are present. The test will fail otherwise, indicating the properties that are missing.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/OM1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/OM1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/OM1 |
| Error messages will appear here: | |
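The OM1 pass/fail rule can be sketched as a completeness check over harvested annotations. Prefixed names stand in for the full property URIs listed above, and the metadata is modelled as a simple predicate-to-value mapping; the predicate sets are abbreviated for the example.

```python
# Accepted predicates per required field (abbreviated from the list above).
MINIMUM_METADATA = {
    "title": {"dc:title", "dcterms:title", "schema:name"},
    "description": {"dc:description", "dcterms:description",
                    "schema:description", "rdfs:comment", "skos:note"},
    "license": {"dcterms:license", "schema:license",
                "doap:license", "cc:license"},
    "version iri": {"owl:versionIRI"},
    "creator": {"dc:creator", "dcterms:creator", "pav:createdBy",
                "schema:creator", "prov:wasAttributedTo"},
    "namespace URI": {"vann:preferredNamespaceUri"},
}

def missing_minimum_metadata(annotations: dict) -> list[str]:
    """Return required fields for which no accepted predicate is present."""
    present = set(annotations)
    return [field for field, predicates in MINIMUM_METADATA.items()
            if not predicates & present]

missing = missing_minimum_metadata({
    "dcterms:title": "Example Ontology",
    "dcterms:license": "https://creativecommons.org/licenses/by/4.0/",
})
```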
This test verifies if the ontology has a persistent URL. We do so by checking if the ontology URI follows any of the following URI schemes:

* w3id.org
* doi.org
* purl.org (or purl.something.org)
* linked.data.gov.au
* dbpedia.org
* www.w3.org
* perma.cc
* data.europa.eu
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/PURL1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/PURL1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/PURL1 |
| Error messages will appear here: | |
This check verifies if the ontology uses an open protocol (HTTP or HTTPS). The test will pass if the ontology URI starts with http or https. It will fail otherwise.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/HTTP1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/HTTP1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/HTTP1 |
| Error messages will appear here: | |
This test verifies whether there is an id for this ontology version, and whether the id is unique (i.e., different from the ontology URI). The test will pass if:

1. The ontology has a versionIRI ([owl:versionIRI](http://www.w3.org/2002/07/owl#versionIRI)), and
2. The versionIRI used is different from the ontology URI.

Otherwise the test will fail. The test will also verify whether version information is present (through [owl:versionInfo](http://www.w3.org/2002/07/owl#versionInfo)), but this is optional.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/VER1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/VER1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/VER1 |
| Error messages will appear here: | |
This test verifies if the ontology URI that was found within the ontology document is resolvable. Note that the ontology URI found in the ontology may be different from the URI used in the assessment. The test will pass if the vocabulary is resolvable in any of the following RDF serializations: RDF/XML, TTL, N-Triples, JSON-LD. The test will fail if no known RDF serialization is returned, or if the serialization returned is not among those listed.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/URI1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/URI1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/URI1 |
| Error messages will appear here: | |
This test verifies if a license (or rights statement) is associated with the ontology. The test will pass if a license is declared using any of the following properties: [dcterms:license](http://purl.org/dc/terms/license), [schema:license](https://schema.org/license), [doap:license](http://usefulinc.com/ns/doap#license) or [cc:license](http://creativecommons.org/ns#license). If a license is not found, but rights are declared (using [dc:rights](http://purl.org/dc/elements/1.1/rights), [dcterms:rights](http://purl.org/dc/terms/rights) or [dcterms:accessRights](http://purl.org/dc/terms/accessRights)), the test will pass as well. Otherwise, the test will fail (i.e., no license or rights are declared).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/OM4.1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/OM4.1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/OM4.1 |
| Error messages will appear here: | |
This test verifies if the ontology reuses other vocabularies for declaring metadata terms (see tests OM1... OM5). The test will pass if metadata annotations using properties from any of the following vocabularies are used:

* [Dublin Core](http://purl.org/dc/elements/1.1/) (dc, dcterms)
* [Schema.org](https://schema.org/) (schema)
* [Vocabulary for annotating vocabulary descriptions](http://purl.org/vocab/vann/) (vann)
* [W3C Provenance standard](http://www.w3.org/ns/prov#) (prov)
* [Bibliographic Ontology](http://purl.org/ontology/bibo/) (bibo)
* [Provenance, Authoring and Versioning](http://purl.org/pav/) (pav)
* [Friend of a Friend](http://xmlns.com/foaf/0.1/) (foaf)
* [Description of a Project](http://usefulinc.com/ns/doap#) (doap)
* [Metadata vocabulary for ontology descriptions](https://w3id.org/mod#) (mod)
* [Web Ontology Language](http://www.w3.org/2002/07/owl#) (owl)
* [RDF Schema](http://www.w3.org/2000/01/rdf-schema#) (rdfs)

The test will also return the vocabularies that were found. If none are found, the test will fail.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/VOC1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/VOC1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/VOC1 |
| Error messages will appear here: | |
This test verifies whether the ontology prefix is available in the [prefix.cc](https://prefix.cc/) or [Linked Open Vocabularies (LOV)](https://lov.linkeddata.es/) registries. The test will pass if:

1. there is a prefix declared in the assessed ontology,
2. the prefix is found in [LOV](https://lov.linkeddata.es/) or [prefix.cc](https://prefix.cc/), and
3. when found, the namespace URI associated with the prefix is the same as the assessed ontology URI (or preferred namespace URI).

Otherwise, the test will fail.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/FIND2 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/FIND2 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/FIND2 |
| Error messages will appear here: | |
This test verifies if the ontology can be found in a public registry such as Linked Open Vocabularies (LOV). The test will pass if the assessed ontology URI is found in the list of vocabularies returned by LOV. Alternatively, if there is a [schema:includedInDataCatalog](https://schema.org/includedInDataCatalog) annotation, the test will pass. The test will fail otherwise.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/FIND3 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/FIND3 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/FIND3 |
| Error messages will appear here: | |
This test verifies if the ontology has a valid RDF serialization (TTL, N3, RDF/XML or JSON-LD are supported). The test will fail if no RDF serialization could be loaded for analysis (e.g., the ontology has typos that prevent its parsing). The test uses the OWLAPI to load ontologies or vocabularies.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/RDF1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/RDF1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/RDF1 |
| Error messages will appear here: | |
This check verifies if detailed provenance information is available for the ontology:

* issued date: declared using [dcterms:issued](http://purl.org/dc/terms/issued), [dcterms:submitted](http://purl.org/dc/terms/submitted) or [schema:datePublished](https://schema.org/datePublished)
* publisher: declared using [dc:publisher](http://purl.org/dc/elements/1.1/publisher), [dcterms:publisher](http://purl.org/dc/terms/publisher) or [schema:publisher](https://schema.org/publisher)

The test will pass if both metadata fields are present. The test will fail otherwise, indicating the properties that are missing.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/OM5.2 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/OM5.2 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/OM5.2 |
| Error messages will appear here: | |
This test verifies if the ontology has HTML documentation. The test will attempt to download an HTML representation using the ontology URI, with content negotiation.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/DOC1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/DOC1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/DOC1 |
| Error messages will appear here: | |
This test verifies if the following recommended metadata are present in the ontology metadata:

* Namespace prefix: declared using [vann:preferredNamespacePrefix](https://purl.org/vocab/vann/)
* Version info: declared using [owl:versionInfo](http://www.w3.org/2002/07/owl#versionInfo) or [schema:schemaVersion](http://schema.org/schemaVersion)
* Creation date: declared using [dcterms:created](http://purl.org/dc/terms/created), [schema:dateCreated](http://schema.org/dateCreated), [doap:created](http://usefulinc.com/ns/doap#created), [prov:generatedAtTime](http://www.w3.org/ns/prov#generatedAtTime) or [pav:createdOn](http://purl.org/pav/)
* Citation: declared using [dcterms:bibliographicCitation](http://purl.org/dc/terms/bibliographicCitation)
* Contributor (optional): declared using [dc:contributor](http://purl.org/dc/elements/1.1/contributor), [dcterms:contributor](http://purl.org/dc/terms/contributor), schema:contributor, [doap:documenter](http://usefulinc.com/ns/doap#documenter), [doap:maintainer](http://usefulinc.com/ns/doap#maintainer), [doap:helper](http://usefulinc.com/ns/doap#helper), [doap:translator](http://usefulinc.com/ns/doap#translator) or [pav:contributedBy](http://purl.org/pav/)

The test will pass if all the recommended metadata properties are available in the ontology metadata (using any of the vocabularies listed above). The test will also check if contributor is present, but with no penalty (as not all ontologies have a contributor).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/OM2 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/OM2 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/OM2 |
| Error messages will appear here: | |
This test verifies whether an HTML and an RDF representation are available for the target vocabulary by doing content negotiation on the ontology URI. The test will pass if the vocabulary is available in HTML and in any of the following RDF serializations:

* RDF/XML (application/rdf+xml)
* TTL (text/turtle)
* N-Triples (text/n3)
* JSON-LD (application/ld+json)

The test will fail if no HTML is returned, if no known RDF serialization is returned, or if the serialization returned is not among those listed.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/CN1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/CN1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/CN1 |
| Error messages will appear here: | |
This test verifies if the ontology includes the following detailed metadata:

* Digital Object Identifier (DOI): declared using [bibo:doi](http://purl.org/ontology/bibo/doi), [schema:identifier](https://schema.org/identifier) (if a DOI is provided) or [dcterms:identifier](http://purl.org/dc/terms/identifier) (if a DOI is provided)
* Publisher: declared using [dc:publisher](http://purl.org/dc/elements/1.1/publisher), [dcterms:publisher](http://purl.org/dc/terms/publisher) or [schema:publisher](https://schema.org/publisher)
* Logo: declared using [foaf:logo](http://xmlns.com/foaf/0.1/logo) or [schema:logo](https://schema.org/logo)
* Status: declared using [bibo:status](http://purl.org/ontology/bibo/status) or [mod:status](https://w3id.org/mod#status)
* Source: declared using [dcterms:source](http://purl.org/dc/terms/source) or [prov:hadOriginalSource](http://www.w3.org/ns/prov#hadOriginalSource)
* Issued date: declared using [dcterms:issued](http://purl.org/dc/terms/issued)
* Previous version (optional): declared using [dc:replaces](http://purl.org/dc/elements/1.1/replaces), [dcterms:replaces](http://purl.org/dc/terms/replaces), [prov:wasRevisionOf](http://www.w3.org/ns/prov#wasRevisionOf), [owl:priorVersion](http://www.w3.org/2002/07/owl#priorVersion) or [pav:previousVersion](http://purl.org/pav/previousVersion)
* Backward compatibility (optional): declared using [owl:backwardCompatibleWith](http://www.w3.org/2002/07/owl#backwardCompatibleWith)
* Modified date (optional): declared using [dcterms:modified](http://purl.org/dc/terms/modified) or [schema:dateModified](https://schema.org/dateModified)

The test will pass if all the detailed metadata properties are available in the ontology metadata (using any of the vocabularies listed above). The test will also check if previous version, backward compatibility and modified date are present, but with no penalty (as not all ontologies have a previous version).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/OM3 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/OM3 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/OM3 |
| Error messages will appear here: | |
This test verifies if the version IRI resolves. The test will pass if there is a version IRI for the ontology/vocabulary (detected using [owl:versionIRI](http://www.w3.org/2002/07/owl#versionIRI) in the ontology metadata) and a request to that IRI returns a resource. The test will fail if the resource is not found (404 response) or an error is returned.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/VER2 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/VER2 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/VER2 |
| Error messages will appear here: | |
|---|---|
This test verifies the extent to which all ontology terms have labels. The test will pass if all classes, properties and data properties have either an [rdfs:label](http://www.w3.org/2000/01/rdf-schema#label) or a [skos:prefLabel](http://www.w3.org/2004/02/skos/core#prefLabel). For SKOS vocabularies, only the skos:Concepts are assessed. Otherwise, the test will fail, indicating the level of completeness found (i.e., the percentage of documented terms).
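The completeness computation can be sketched as follows; modelling each term as a set of its annotation properties is an assumption for illustration:

```python
def label_completeness(terms: dict[str, set[str]]) -> tuple[bool, float]:
    """Pass only when every term carries rdfs:label or skos:prefLabel;
    also report the percentage of labelled terms."""
    if not terms:
        return True, 100.0
    labelled = sum(1 for props in terms.values()
                   if props & {"rdfs:label", "skos:prefLabel"})
    pct = 100.0 * labelled / len(terms)
    return labelled == len(terms), pct
```

The description-completeness check (VOC4, below) would be identical with `rdfs:comment`, `skos:definition` and `obo:IAO_0000118` as the accepted properties.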
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/VOC3 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/VOC3 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/VOC3 |
| Error messages will appear here: | |
|---|---|
This check verifies whether all ontology terms have descriptions. The test will pass if all classes, properties and data properties have at least one [rdfs:comment](http://www.w3.org/2000/01/rdf-schema#comment), [skos:definition](http://www.w3.org/2004/02/skos/core#definition) or [obo:IAO_0000118](http://purl.obolibrary.org/obo/IAO_0000118) annotation. For SKOS vocabularies, only the skos:Concepts are assessed. The test will fail otherwise, showing the level of completeness obtained (i.e., the percentage of documented terms).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/VOC4 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/VOC4 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/VOC4 |
| Error messages will appear here: | |
|---|---|
This test verifies if the ontology license is resolvable. The test will pass if the license available in the ontology metadata resolves to a resource. The test will fail if no license is declared (OM4.1), if the license is not a URI/URL, or if requesting it returns a 404 or another error.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/OM4.2 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/OM4.2 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/OM4.2 |
| Error messages will appear here: | |
|---|---|
This check verifies if the ontology URI is equal to the ontology ID. The test passes if the ontology URI used to load the ontology document is the same as the ontology id found in the document itself. Otherwise the test will fail.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/URI2 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/URI2 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/URI2 |
| Error messages will appear here: | |
|---|---|
This test verifies if the ontology imports/extends other vocabularies (besides RDF, OWL and RDFS). The test will pass if other vocabularies are imported ([owl:imports](http://www.w3.org/2002/07/owl#imports)), or if classes, properties or data properties outside the ontology URI or namespace URI are used. The test will fail if no terms are reused.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/VOC2 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/VOC2 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/VOC2 |
| Error messages will appear here: | |
|---|---|
Metadata are accessible even when the ontology is no longer available. Since the metadata is usually included in the ontology, this test verifies if the ontology can be found in the [Linked Open Vocabularies (LOV) public registry](https://lov.linkeddata.es). The test will pass if the assessed ontology/vocabulary URI is found in the LOV list of vocabularies. Alternatively, if there is a [schema:includedInDataCatalog](https://schema.org/includedInDataCatalog) annotation, the test will pass. The test will fail otherwise.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/FIND_3_BIS |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/FIND_3_BIS |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/FIND_3_BIS |
| Error messages will appear here: | |
|---|---|
This check verifies if an ontology prefix is declared in the ontology metadata. The test will pass if a [vann:preferredNamespacePrefix](http://purl.org/vocab/vann/preferredNamespacePrefix) is declared. Otherwise, the test will fail.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/FIND1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/FIND1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/FIND1 |
| Error messages will appear here: | |
|---|---|
This check verifies if basic provenance metadata is available for the ontology:

* creator: declared using [dc:creator](http://purl.org/dc/elements/1.1/creator), [dcterms:creator](http://purl.org/dc/terms/creator), [pav:createdBy](http://purl.org/pav/createdBy), [pav:authoredBy](http://purl.org/pav/authoredBy), [schema:creator](https://schema.org/creator) or [doap:developer](http://usefulinc.com/ns/doap#developer)
* creation date: declared using [dcterms:created](http://purl.org/dc/terms/created), [schema:dateCreated](https://schema.org/dateCreated), [doap:created](http://usefulinc.com/ns/doap#created), [prov:generatedAtTime](http://www.w3.org/ns/prov#generatedAtTime) or [pav:createdOn](http://purl.org/pav/createdOn)
* contributor (optional): declared using [dc:contributor](http://purl.org/dc/elements/1.1/contributor), [dcterms:contributor](http://purl.org/dc/terms/contributor), [schema:contributor](https://schema.org/contributor), [doap:documenter](http://usefulinc.com/ns/doap#documenter), [doap:maintainer](http://usefulinc.com/ns/doap#maintainer), [doap:helper](http://usefulinc.com/ns/doap#helper), [doap:translator](http://usefulinc.com/ns/doap#translator) or [pav:contributedBy](http://purl.org/pav/contributedBy)
* previous version (optional): declared using [dc:replaces](http://purl.org/dc/elements/1.1/replaces), [dcterms:replaces](http://purl.org/dc/terms/replaces), [prov:wasRevisionOf](http://www.w3.org/ns/prov#wasRevisionOf), [owl:priorVersion](http://www.w3.org/2002/07/owl#priorVersion) or [pav:previousVersion](http://purl.org/pav/previousVersion)

The test will pass if creator and creation date are present. The test will fail otherwise, indicating the properties that are missing.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/foops/test/OM5.1 |
|---|---|
| Execution Endpoint | https://foops.linkeddata.es/assess/test/OM5.1 |
| Applicable to Digital Object Type | https://schema.org/DefinedTermSet |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/foops/metric/OM5.1 |
| Error messages will appear here: | |
|---|---|
Maturity Indicator to test if the metadata uses a formal language broadly applicable for knowledge representation. This particular test takes a broad view of what defines a 'knowledge representation language'; in this evaluation, a knowledge representation language is interpreted as one in which terms are semantically-grounded in ontologies. Any form of RDF will pass this test (including RDF that is automatically extracted by third-party parsers such as Apache Tika).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/tests/fc_metadata_kr_language_strong |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/assess/test/fc_metadata_kr_language_strong |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://doi.org/10.25504/FAIRsharing.l8fVBn |
| Error messages will appear here: | |
|---|---|
This test checks that the identifier associated with a software component (if there is one) follows a common scheme such as URN, SWHID, DOI or URL. This is achieved by looking for identifiers in the README file and checking whether they follow one of these schemes.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-01-3 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-01-3 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/persistent_and_unique_identifier |
| Error messages will appear here: | |
|---|---|
This test checks if an identifier is found in the README or CITATION.cff files. This is done by looking for an identifier in the CITATION.cff and README files in the root of the repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-07-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-07-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/persistent_and_unique_identifier |
| Error messages will appear here: | |
|---|---|
This test checks if contact and/or support metadata is found in the repository. This is achieved by checking if there is contact or support information in the README file or any other package file (e.g., setup.py, setup.cfg, pom.xml, etc.).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-05-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-05-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks if there is a persistent identifier (e.g., a Software Heritage ID or Zenodo DOI) associated with the software repository in files like CITATION.cff, codemeta.json and README. This is achieved by looking for an identifier in the files previously mentioned.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-01-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-01-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/persistent_and_unique_identifier |
| Error messages will appear here: | |
|---|---|
This test checks whether the provided URL points to a repository hosted on a source code repository such as GitHub or GitLab. It verifies both that the URL contains a known domain (e.g., "github.com" or "gitlab.com") and that it is reachable (i.e., the request returns a valid response).
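The domain part of the check can be sketched as below; the function name and the host list are illustrative (the description only names github.com and gitlab.com as examples), and the real test additionally issues a request to confirm the URL is reachable:

```python
from urllib.parse import urlparse

# Example hosts from the test description; a real list would be longer.
KNOWN_HOSTS = {"github.com", "gitlab.com"}

def looks_like_code_repository(url: str) -> bool:
    """True when the URL's host is (a subdomain of) a known code-hosting domain."""
    host = (urlparse(url).hostname or "").lower()
    return host in KNOWN_HOSTS or any(host.endswith("." + h) for h in KNOWN_HOSTS)
```

Reachability would then be verified separately, e.g. with `urllib.request.urlopen`, treating error responses as failures.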
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-09-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-09-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/version_control_use |
| Error messages will appear here: | |
|---|---|
This test checks if the software has any releases in the source code repository. This is achieved by checking whether any releases have been created in the repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-03-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-03-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/has_releases |
| Error messages will appear here: | |
|---|---|
This test checks if documentation is found in the repository. This is achieved by looking for a README file, a ReadTheDocs URL in the README file and wikis (in case of GitHub projects).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-05-3 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-05-3 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks if there are continuous integration workflows in the repository. This is done by looking for a .github/workflows directory on GitHub and a .gitlab-ci.yml file on GitLab.
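Given a listing of repository paths, the check reduces to a path lookup; this sketch (function name assumed) mirrors the two locations named above:

```python
def has_ci_config(paths: list[str]) -> bool:
    """GitHub keeps workflow definitions under .github/workflows/;
    GitLab uses a top-level .gitlab-ci.yml file."""
    return (any(p.startswith(".github/workflows/") for p in paths)
            or ".gitlab-ci.yml" in paths)
```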
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-19-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-19-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/repository_workflows |
| Error messages will appear here: | |
|---|---|
This test checks if dependencies are declared in the repository. This is achieved by checking if there are dependencies listed in the README file or other files like requirements.txt and pom.xml.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-13-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-13-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/requirements_specified |
| Error messages will appear here: | |
|---|---|
This test checks whether the repository is active. It first looks for a repostatus badge with the value “Active” in the README file. If the badge is not present, the test verifies whether there have been recent commits to the repository, indicating ongoing development.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-17-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-17-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/version_control_use |
| Error messages will appear here: | |
|---|---|
This test checks if the repository license is SPDX-compliant. This is done by checking whether all licenses found in the repository match licenses in the SPDX license list.
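A sketch of the comparison, using a tiny illustrative subset of SPDX identifiers (the real check would presumably consult the full SPDX license list):

```python
# Small sample of SPDX license identifiers, for illustration only.
SPDX_SAMPLE = {"MIT", "Apache-2.0", "GPL-3.0-only", "BSD-3-Clause", "MPL-2.0"}

def all_spdx_compliant(found_licenses: list[str],
                       spdx_ids: set[str] = SPDX_SAMPLE) -> bool:
    """True only when at least one license was found and all of them are SPDX IDs."""
    return bool(found_licenses) and all(l in spdx_ids for l in found_licenses)
```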
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-15-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-15-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_has_license |
| Error messages will appear here: | |
|---|---|
This test checks if a codemeta.json file, a CITATION.cff file or a package file that includes metadata about the software component exists. This is done by looking for CITATION.cff, codemeta.json and package files (e.g., setup.py, setup.cfg, pom.xml, etc.) in the root of the repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-04-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-04-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/descriptive_metadata |
| Error messages will appear here: | |
|---|---|
This test checks if there are contributors found in the repository. This is done by looking for contributors in files like CONTRIBUTOR, codemeta.json and package files.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-06-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-06-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks if authors have an ORCID assigned. This is done by looking for authors in the codemeta.json or AUTHORS files and checking if all of them have an ORCID assigned.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-06-3 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-06-3 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks if there are tests present in the repository. This is done by checking if there are files or directories that mention “test” in their names in the root and one level deep in the repository tree.
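The name-based search over the root and one level of the tree can be sketched as follows (function name and path representation are assumptions):

```python
from pathlib import PurePosixPath

def mentions_tests(paths: list[str]) -> bool:
    """True if any file or directory whose name contains 'test' sits in the
    repository root or one level deep."""
    for p in paths:
        parts = PurePosixPath(p).parts
        if len(parts) <= 2 and any("test" in part.lower() for part in parts):
            return True
    return False
```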
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-14-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-14-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_has_tests |
| Error messages will appear here: | |
|---|---|
This test checks if the identifier found in the repository resolves back to the source code repository. This is done by issuing HTTP requests and checking whether the landing page the identifier resolves to links back to the source code repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-07-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-07-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/persistent_and_unique_identifier |
| Error messages will appear here: | |
|---|---|
This test checks if the repository has a repo status badge in the README file.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-05-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-05-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/version_control_use |
| Error messages will appear here: | |
|---|---|
This test checks if all release identifiers follow the same scheme. This is achieved by checking whether any releases have been created in the repository and whether their URLs follow the same pattern or scheme.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-03-4 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-03-4 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/versioning_standards_use |
| Error messages will appear here: | |
|---|---|
This test checks if all version identifiers follow either SemVer or CalVer conventions. This is achieved by checking whether any releases have been created in the repository and whether their version numbers (or tags) follow Semantic Versioning (SemVer) or Calendar Versioning (CalVer).
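The convention check lends itself to regular expressions; these patterns are simplified illustrations of SemVer and one common CalVer shape (YYYY.MM[.PATCH]), not the exact patterns the test applies:

```python
import re

# Simplified SemVer: MAJOR.MINOR.PATCH with optional pre-release/build metadata.
SEMVER = re.compile(r"^v?\d+\.\d+\.\d+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$")
# One common CalVer shape: YYYY.MM with an optional patch segment.
CALVER = re.compile(r"^v?\d{4}\.(0?[1-9]|1[0-2])(\.\d+)?$")

def follows_versioning_convention(tag: str) -> bool:
    return bool(SEMVER.match(tag) or CALVER.match(tag))
```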
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-03-3 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-03-3 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/versioning_standards_use |
| Error messages will appear here: | |
|---|---|
This test checks if there is at least one persistent identifier (i.e., Software Heritage ID, Zenodo DOI) in the README file of the source code repository and if said identifier resolves.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-01-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-01-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/persistent_and_unique_identifier |
| Error messages will appear here: | |
|---|---|
This test checks if each release has an identifier and a version number. This is achieved by checking whether any releases have been created in the repository and whether each of them has a tag and a URL.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-03-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-03-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/has_releases |
| Error messages will appear here: | |
|---|---|
This test checks if the version number of the last release tag in the source code repository (if there is one) corresponds to the version number stated in the package file. This is achieved by comparing the version of the last release created in the repository with the version stated in the package file in the root of the repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-03-5 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-03-5 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/has_releases |
| Error messages will appear here: | |
|---|---|
This test checks if the repository has installation instructions. This is achieved by checking in the README file of the repository if there is information about the installation of the software.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-13-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-13-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks whether the software declares a license. It first looks for a LICENSE (or similarly named) file in the root directory of the repository. If not found, it searches for license information in other common locations such as the README file, metadata files like codemeta.json or CITATION.cff, or configuration files such as setup.py.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-15-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-15-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_has_license |
| Error messages will appear here: | |
|---|---|
This test checks if dependencies are in a machine-readable file (not just in README). This is done by checking if dependencies are available in a machine-readable format like requirements.txt or pom.xml in the root of the repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-13-4 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-13-4 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/requirements_specified |
| Error messages will appear here: | |
|---|---|
This test checks if dependencies have version numbers. This is achieved by checking if all dependencies listed in machine-readable files and in the README file have a version number.
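For a requirements.txt-style listing, the version check can be sketched as below; the pattern and function name are illustrative, and other file formats (e.g., pom.xml) would need their own parsers:

```python
import re

# A dependency line counts as pinned when it carries a version specifier.
PIN = re.compile(r"^[A-Za-z0-9._-]+\s*(==|>=|<=|~=|!=|>|<)\s*[\w.*+!-]+")

def all_pinned(lines: list[str]) -> bool:
    """True only when every non-comment dependency line has a version number."""
    deps = [l.strip() for l in lines if l.strip() and not l.strip().startswith("#")]
    return bool(deps) and all(PIN.match(d) for d in deps)
```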
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-13-3 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-13-3 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/requirements_specified |
| Error messages will appear here: | |
|---|---|
This test checks if there is a README file in the repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-04-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-04-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks if a codemeta.json file exists in the repository. This is done by looking for a codemeta.json file in the root of the repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-04-5 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-04-5 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/descriptive_metadata |
| Error messages will appear here: | |
|---|---|
This test checks if there are authors found in the repository. This is done by looking for authors in files like AUTHORS, codemeta.json and CITATION.cff.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-06-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-06-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks if a metadata record is found on Zenodo and/or Software Heritage. This is done by searching in the README file for a Zenodo or Software Heritage badge.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-08-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-08-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/archived_in_software_heritage |
| Error messages will appear here: | |
|---|---|
This test checks if descriptive metadata within the repository includes description, programming language, date created and keywords. This is done by looking for those fields in codemeta.json, README, CITATION.cff or any package metadata file (e.g., setup.py, setup.cfg, pom.xml, etc.).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-04-4 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-04-4 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/descriptive_metadata |
| Error messages will appear here: | |
|---|---|
This test checks if the title and description of the software are available within any metadata file of the repository. This is achieved by checking if the description and title are in the README file, the codemeta.json file or any other package file (e.g., setup.py, setup.cfg, pom.xml, etc.).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-04-3 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-04-3 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/descriptive_metadata |
| Error messages will appear here: | |
|---|---|
This test checks if there is a referencePublication field or a citation file pointing to a paper. This is done by looking for an article citation in the README file, a CITATION.cff file, or a referencePublication field in the codemeta.json file.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-12-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-12-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_has_citation |
| Error messages will appear here: | |
|---|---|
This test checks if there is a CITATION.cff file or citations made in the README file. This is done by looking in the root of the repository for a CITATION.cff file and by checking if the README file contains citations in BibTeX format.
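The BibTeX part of the check can be sketched with a regular expression; the entry-type list here is a small sample and the function name is assumed:

```python
import re

# Matches the opening of a BibTeX entry such as "@article{key," (sample types only).
BIBTEX_ENTRY = re.compile(r"@(article|inproceedings|misc|software|book)\s*\{", re.I)

def readme_has_bibtex_citation(readme_text: str) -> bool:
    return bool(BIBTEX_ENTRY.search(readme_text))
```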
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-18-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-18-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_has_citation |
| Error messages will appear here: | |
|---|---|
This test checks if there are actions or workflows that automate tests. This is done by checking if there are workflows that mention “test” in their names in the .github/workflows directory for GitHub or the .gitlab-ci.yml file for GitLab.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-14-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-14-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_has_tests |
| Error messages will appear here: | |
|---|---|
This test checks if metadata files like codemeta.json, CITATION.cff and package files mention the repository license. This is done by checking whether a license field is present in those files.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-16-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-16-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_has_license |
| Error messages will appear here: | |
|---|---|
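A minimal sketch of this license lookup for two of the named files (package-file handling is omitted, and the field names are assumptions based on the CodeMeta and CFF conventions, not the RSFC implementation):

```python
import json
from pathlib import Path

def metadata_declares_license(repo_root: str) -> bool:
    """True if codemeta.json has a non-empty 'license' key, or
    CITATION.cff contains a 'license:' field."""
    root = Path(repo_root)
    codemeta = root / "codemeta.json"
    if codemeta.is_file():
        try:
            if json.loads(codemeta.read_text()).get("license"):
                return True
        except json.JSONDecodeError:
            pass  # malformed metadata is treated as no license declared
    cff = root / "CITATION.cff"
    if cff.is_file() and any(line.strip().startswith("license:")
                             for line in cff.read_text().splitlines()):
        return True
    return False
```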
The 'FAIR Test ARK T-F1 - Persistent Identifiers for Database Content' is a database-level test that requires the minting of persistent identifiers for at least some of the content within the database being evaluated. FM ARK F1 expects a FAIRsharing URL or DOI as its input and evaluates that FAIRsharing record for the guaranteed persistence of the identifier schema, as defined by the identifier-schema record type in FAIRsharing (see our GitBook documentation, linked as part of this record, for more information about identifier schemas for databases and how to curate them in a FAIRsharing record). An important facet of FAIR for databases is that they mint persistent identifiers. As described by the GFF, global uniqueness guarantees unambiguous referencing of the described resource; it is insufficient for an identifier to be unique only locally. Persistence refers to the requirement that this globally unique identifier is never reused elsewhere and continues to identify the same resource over time. These features are key to the Findability of the database being evaluated. The test returns as follows. Pass: the URL/DOI points to an existing FAIRsharing database record; the record has an 'implements' relationship to an id_schema record; and that id_schema record has its persistent field set to 'true'. Indeterminate: the URL/DOI is non-existent or malformed. Fail: any of the 'pass' conditions is not met; specifically, the URL/DOI is not a database record, OR there is no 'implements' relationship to an id_schema, OR the persistent field is set to 'false'.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7163 |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_ark_f1 |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://fairsharing.org/7163 |
| Error messages will appear here: | |
|---|---|
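The pass/indeterminate/fail logic described above can be sketched as a plain function over a simplified record structure. The key names (`record_type`, `implements_id_schema`, `persistent`) are hypothetical stand-ins; the real FAIRsharing API response differs:

```python
def evaluate_ft_ark_f1(record):
    """Decision logic of FAIR Test ARK T-F1 over an assumed record shape.

    record is None when the input URL/DOI is non-existent or malformed;
    otherwise it is a dict with a 'record_type' and, optionally, an
    'implements_id_schema' dict carrying a boolean 'persistent' field.
    """
    if record is None:                       # non-existent or malformed input
        return "indeterminate"
    if record.get("record_type") != "database":
        return "fail"
    schema = record.get("implements_id_schema")
    if not schema:                           # no 'implements' relationship
        return "fail"
    return "pass" if schema.get("persistent") is True else "fail"
```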
The 'FAIR Test - ARK T-F1 - Globally unique, persistent and resolvable identifiers for database content' is a database-level test that requires the minting of community-relevant, publicly available GUPRIs for at least some of the content within the database being evaluated. FM ARK F1-GUPRI expects a FAIRsharing URL or DOI as its input and evaluates the database described by the FAIRsharing record for the guaranteed global uniqueness, persistence and resolvability of the identifier schema, as defined by the identifier-schema record type in FAIRsharing. As described by the GFF, global uniqueness guarantees unambiguous referencing of the described resource; it is insufficient for an identifier to be unique only locally. Persistence refers to the requirement that this globally unique identifier is never reused elsewhere and continues to identify the same resource over time. Resolvability of an identifier makes it practically useful, with predictable resolution behavior being key. These features are key to the Findability of the database being evaluated. The test returns as follows. Pass: the URL/DOI points to an existing FAIRsharing database record; the record has an 'implements' relationship to an id_schema record; and that id_schema record has the globally unique, persistent and resolvable fields all set to 'true'. Indeterminate: the URL/DOI is non-existent or malformed. Fail: any of the 'pass' conditions is not met; specifically, the URL/DOI is not a database record, OR there is no 'implements' relationship to an id_schema, OR any of the globally unique, persistent and resolvable fields is set to 'false'.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7455 |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_ark_f1gupri |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://fairsharing.gitbook.io/fairsharing/about-our-records/fair-assistance#fair-principles-f1-guid-meta-data-are-assigned-globally-unique-and-persistent-identifiers |
| Error messages will appear here: | |
|---|---|
The 'FAIR Test - ARK T-A2 - Database has a preservation policy' is a database-level test that requires the presence of a data preservation policy for the database being evaluated. FM ARK A2-Preservation expects a FAIRsharing URL or DOI as its input and evaluates the database described by the FAIRsharing record: the record must have a URL in the data preservation policy field and may optionally also have a non-empty value in the name field. As described by the GFF, there should at least be access to high-quality and machine-actionable metadata that describes those resources sufficiently to minimally understand their nature and their provenance, even when the relevant data are no longer available. There are a number of ways to implement this; however, this metric deals with preservation of information relating to an identifier by determining whether a preservation policy is in place for the database being evaluated. The test returns as follows. Pass: the URL/DOI points to an existing FAIRsharing database record, and the record has a value in the URL section of the data preservation policy field that resolves (i.e. does not return an error code). Indeterminate: the URL/DOI is non-existent, returns an error code of some kind, or is malformed. Fail: any of the 'pass' conditions is not met; specifically, the URL/DOI is not a database record, OR the URL value for the data preservation policy field is empty.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.gitbook.io/fairsharing/about-our-records/fair-assistance#fairsharing-developed-tests |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_ark_a2preservation |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_A2.md |
| Error messages will appear here: | |
|---|---|
This test checks whether a version is stated for the software in metadata files. This is done by looking for a version number in metadata files such as package files, codemeta.json, and CITATION.cff.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-03-6 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-03-6 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/descriptive_metadata |
| Error messages will appear here: | |
|---|---|
This test checks whether authors have their roles stated in codemeta.json. This is achieved by looking for authors in the codemeta.json file and checking that all of them have a role assigned, following the CodeMeta v2 and v3 standards.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-06-4 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-06-4 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/software_documentation |
| Error messages will appear here: | |
|---|---|
This test checks if the repository has an issue tracker. This is done by checking if the URL for the issues resolves.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-20-1 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-20-1 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
This test checks if the repository has commits linked to open and closed issues. This is done by cross-checking whether issue identifiers from the issue list appear in the commit history’s messages (e.g. “closes #123”).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-17-3 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-17-3 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/version_control_use |
| Error messages will appear here: | |
|---|---|
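The cross-check above can be sketched as a set intersection between issue numbers referenced in commit messages and the repository's known issues. This is an illustrative simplification, not the RSFC implementation (which may only count closing keywords such as "closes"):

```python
import re

# Assumed pattern: any "#123"-style reference in a commit message.
ISSUE_REF = re.compile(r"#(\d+)")

def linked_issue_numbers(commit_messages, issue_numbers):
    """Return the subset of issue_numbers referenced somewhere in the
    commit history (e.g. 'fix parser, closes #123')."""
    referenced = {int(n) for msg in commit_messages
                  for n in ISSUE_REF.findall(msg)}
    return referenced & set(issue_numbers)
```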
This test checks if the repository has a commit history. This is done by looking for commits in the software’s repository.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://w3id.org/rsfc/test/RSFC-17-2 |
|---|---|
| Execution Endpoint | https://api.rsfc.linkeddata.es/assess/test/RSFC-17-2 |
| Applicable to Digital Object Type | https://schema.org/SoftwareSourceCode |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://w3id.org/everse/i/indicators/version_control_use |
| Error messages will appear here: | |
|---|---|
Verifies that a record contains provenance metadata elements. The test examines three elements of DDI 2.5 metadata: distrbtr (distributor/publisher) in the distribution statement (/codeBook/stdyDscr/citation/distStmt/distrbtr); AuthEnty (author/authoring entity) in the responsibility statement (/codeBook/stdyDscr/citation/rspStmt/AuthEnty); and grantNo (grant number) in the production statement (/codeBook/stdyDscr/citation/prodStmt/grantNo).
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FT_R1-2_M_CPI.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/provenance |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://purl.org/fair-metrics/FM_R1.2 |
| Error messages will appear here: | |
|---|---|
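A minimal sketch of this kind of path-based check, using Python's standard ElementTree and the DDI Codebook 2.5 namespace (`ddi:codebook:2_5`). The function name and the "non-empty text" criterion are assumptions; the CESSDA test may apply stricter rules:

```python
import xml.etree.ElementTree as ET

DDI = {"ddi": "ddi:codebook:2_5"}  # DDI Codebook 2.5 XML namespace

# The three provenance elements named in the test description.
PROVENANCE_PATHS = [
    "./ddi:stdyDscr/ddi:citation/ddi:distStmt/ddi:distrbtr",
    "./ddi:stdyDscr/ddi:citation/ddi:rspStmt/ddi:AuthEnty",
    "./ddi:stdyDscr/ddi:citation/ddi:prodStmt/ddi:grantNo",
]

def provenance_elements_present(codebook_xml: str) -> dict:
    """Map each element name to True if at least one matching element
    with non-empty text exists in the codeBook document."""
    root = ET.fromstring(codebook_xml)
    return {path.rsplit(":", 1)[-1]:
            any((el.text or "").strip() for el in root.findall(path, DDI))
            for path in PROVENANCE_PATHS}
```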
Validates that a record uses DDI Time Method controlled vocabulary. The test examines timeMeth (time method) elements in the data collection method section (/codeBook/stdyDscr/method/dataColl/timeMeth) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FT_Gen2-MI-I2A_M_DTMV.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/ddi-time-method |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_F1B.md |
| Error messages will appear here: | |
|---|---|
Confirms that a record uses DDI Mode of Collection controlled vocabulary. The test examines collMode (collection mode) elements in the data collection method section (/codeBook/stdyDscr/method/dataColl/collMode) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FT_Gen2-MI-I2A_M_DMOCV.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/ddi-collection-mode |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_F1B.md |
| Error messages will appear here: | |
|---|---|
Checks whether a record uses DDI Sampling Procedure controlled vocabulary. The test examines sampProc (sampling procedure) elements in the data collection method section (/codeBook/stdyDscr/method/dataColl/sampProc) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FM_Gen2-MI-I2A_M_DSPV.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/ddi-sampleproc |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_F1B.md |
| Error messages will appear here: | |
|---|---|
Verifies that a record uses DDI Analysis Unit controlled vocabulary. The test examines anlyUnit (analysis unit) elements in the study description's summary description section (/codeBook/stdyDscr/stdyInfo/sumDscr/anlyUnit) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FT_Gen2-MI-I2A_M_DAUV.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/ddi-analysis-unit |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_I2A.md |
| Error messages will appear here: | |
|---|---|
Checks whether a record uses CESSDA Topic Classification controlled vocabulary. The test examines topcClas (topic classification) elements in the study information subject section (/codeBook/stdyDscr/stdyInfo/subject/topcClas) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FT_Gen2-MI-I2A_M_CTV.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/topic-class |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_I2A.md |
| Error messages will appear here: | |
|---|---|
Validates that a record contains properly attributed ELSST (European Language Social Science Thesaurus) keywords. The test examines keyword elements in the study information subject section (/codeBook/stdyDscr/stdyInfo/subject/keyword) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FT_I2_M_CEK.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/elsst-keywords |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://fairsharing.org/ |
| Error messages will appear here: | |
|---|---|
Verifies that a record contains approved access rights terminology. The test examines the conditions element within the data access section (/codeBook/stdyDscr/dataAccs/useStmt/conditions) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FM_Gen2-MI-A1.2_M_CARV.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/access-rights |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000002 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_A1.2.md |
| Error messages will appear here: | |
|---|---|
Confirms that a DDI 2.5 metadata record uses an approved persistent identifier scheme. The test examines IDNo elements with agency attributes in the citation's title statement section (/codeBook/stdyDscr/citation/titlStmt/IDNo) of DDI 2.5 metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/FT-F1-PID_M_CESSDA.ttl |
|---|---|
| Execution Endpoint | https://fair-tests.cessda.eu/assess/test/pid |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_F1B.md |
| Error messages will appear here: | |
|---|---|
description
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/F1 |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/F1 |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/F1 |
| Error messages will appear here: | |
|---|---|
Test a DOI to determine if funder information is available in the DataCite or Crossref metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/community-tests/community_funding_information_registered |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/community-tests/assess/test/community_funding_information_registered |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://w3id.org/fair-metrics/esrf/FM_R1-2_M_Fund_ESRF |
| Error messages will appear here: | |
|---|---|
Test a DOI to determine if license information is available in the DataCite or Crossref metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/community-tests/community_license_information |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/community-tests/assess/test/community_license_information |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://w3id.org/fair-metrics/esrf/FM_R1-1_M_DOI_Lic_ESRF |
| Error messages will appear here: | |
|---|---|
Use the Crossref and DataCite APIs to scan a metadata record for author affiliation. Also check the landing page for the citation_author_institution meta property.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/community-tests/community_metadata_includes_author_affiliation |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/community-tests/assess/test/community_metadata_includes_author_affiliation |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://w3id.org/fair-metrics/esrf/FM_R1-2_M_Affil_ESRF |
| Error messages will appear here: | |
|---|---|
Test a DOI against OpenAlex to determine if the resource output is open access.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/community-tests/community_open_access_publication |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/community-tests/assess/test/community_open_access_publication |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://w3id.org/fair-metrics/esrf/FM_F4_M_DOI_OA_ESRF |
| Error messages will appear here: | |
|---|---|
Assessed by checking that the datacite:identifier field is present and that its identifierType attribute contains one of the following controlled values: ARK, DOI, Handle, IGSN, arXiv, PURL, URL, URN, PMID or PMCID. The presence of a valid identifier of one of these types confirms that the resource has been assigned a globally unique and persistent identifier, as required by the OpenAIRE Guidelines for Data Archives.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/F1_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/F1_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/F1_data |
| Error messages will appear here: | |
|---|---|
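The check above reduces to a membership test on the identifierType attribute. The following sketch uses the controlled list from the description over an assumed, simplified record shape (a dict, rather than the actual DataCite XML):

```python
# Controlled identifierType values from the OpenAIRE Guidelines for
# Data Archives, as listed in the test description above.
ALLOWED_ID_TYPES = {"ARK", "DOI", "Handle", "IGSN", "arXiv",
                    "PURL", "URL", "URN", "PMID", "PMCID"}

def has_pid(identifier) -> bool:
    """True if a datacite:identifier field is present and its
    identifierType attribute is one of the controlled values."""
    return bool(identifier) and \
        identifier.get("identifierType") in ALLOWED_ID_TYPES
```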
Assessed by checking the presence and completeness of key descriptive metadata fields: datacite:creator, datacite:title, datacite:publisher, datacite:date, datacite:description, and datacite:subject. The population of these fields confirms that the resource is described with sufficiently rich metadata to enable discovery and contextualization.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/F2_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/F2_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/F2_data |
| Error messages will appear here: | |
|---|---|
Assessed by checking that the datacite:relatedIdentifier field is present with a relationType attribute set to one of the following values: IsMetadataFor, Describes, Documents or IsSupplementTo. The presence of at least one of these qualified links confirms that the metadata explicitly references the identifier of the data it describes.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/F3_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/F3_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/F3_data |
| Error messages will appear here: | |
|---|---|
Assessed by verifying that the metadata are exposed and harvestable via the OAI-PMH protocol in compliance with the OpenAIRE Guidelines for Data Archives, enabling the aggregation and indexing of the resource into the OpenAIRE infrastructure. Conformance to the OpenAIRE OAI-PMH requirements ensures that the (meta)data are registered in a searchable and widely accessible resource.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/F4_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/F4_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/F4_data |
| Error messages will appear here: | |
|---|---|
Assessed by verifying that the resource's metadata are exposed and retrievable via the OAI-PMH protocol, in compliance with the OpenAIRE Guidelines for Data Archives. Compliance is determined by validating that the repository endpoint conforms to the OpenAIRE OAI-PMH requirements, ensuring metadata can be harvested through this open, standardized protocol.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/A1_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/A1_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/A1_data |
| Error messages will appear here: | |
|---|---|
Assessed by verifying that metadata are exposed via the OAI-PMH protocol, which is open, free, and universally implementable. Compliance is determined by validating that the repository endpoint conforms to the OpenAIRE Guidelines for Data Archives OAI-PMH requirements, ensuring that no restrictions exist on accessing or implementing the communications protocol used to retrieve the metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/A1.1_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/A1.1_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/A1.1_data |
| Error messages will appear here: | |
|---|---|
Assessed by verifying that the metadata are exposed via the OAI-PMH protocol using the Datacite standardized metadata schema as prescribed by the OpenAIRE Guidelines for Data Archives. Conformance to this schema confirms that the (meta)data are represented in a formal, accessible, and broadly applicable language, ensuring interoperability across systems and communities.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/I1_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/I1_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/I1_data |
| Error messages will appear here: | |
|---|---|
Assessed by checking that the datacite:resourceType and datacite:rights fields are populated with values drawn from the eu-repo namespace, as prescribed by the OpenAIRE Guidelines for Data Archives. The use of these standardized vocabularies confirms that the (meta)data employ FAIR-aligned, interoperable terms that are formally defined, openly accessible, and consistently applicable across systems.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/I2_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/I2_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/I2_data |
| Error messages will appear here: | |
|---|---|
Assessed by checking that the datacite:relatedIdentifier field is present and that both the relatedIdentifierType and relationType attributes are populated with valid controlled values. The presence of these qualified references confirms that the resource explicitly declares typed, meaningful relationships to other (meta)data.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/I3_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/I3_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/I3_data |
| Error messages will appear here: | |
|---|---|
Assessed by checking the presence of key contextual and descriptive metadata fields: datacite:resourceType, datacite:format, datacite:description, datacite:size, datacite:version. The population of these fields confirms that the resource is described with a plurality of accurate and relevant attributes, enabling users to properly evaluate and reuse the (meta)data.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/R1_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/R1_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/R1_data |
| Error messages will appear here: | |
|---|---|
Assessed by checking that the datacite:rights field is present and contains a valid license URI. The presence of this field confirms that the resource has been released with a clear and accessible usage license, allowing users to understand the conditions under which the (meta)data can be accessed and reused.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/R1.1_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/R1.1_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/R1.1_data |
| Error messages will appear here: | |
|---|---|
Assessed by checking the presence of provenance-relevant metadata fields: datacite:creator, datacite:contributor, datacite:date, and datacite:version, as well as the presence of datacite:relatedIdentifier with a relationType attribute set to one of the following values: IsSupplementTo, IsSupplementedBy, IsContinuedBy, Continues, IsDescribedBy, Describes, HasMetadata, IsMetadataFor, HasVersion, IsVersionOf, IsNewVersionOf, IsPreviousVersionOf, IsPartOf, HasPart, IsDocumentedBy, Documents, IsCompiledBy, Compiles, IsVariantFormOf, IsOriginalFormOf, IsIdenticalTo, IsDerivedFrom, or IsSourceOf. The population of these fields confirms that the (meta)data are associated with detailed provenance, enabling users to trace the origin, history, and context of the resource.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/R1.2_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/R1.2_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/R1.2_data |
| Error messages will appear here: | |
|---|---|
Assessed by validating that the metadata record conforms to the OpenAIRE Guidelines for Data Archives, which inherently incorporate domain-relevant community standards. This includes adherence to established protocols (OAI-PMH), metadata schemas (DataCite), and eu-repo namespace, ensuring interoperability and alignment with the broader open science and research community standards.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/R1.3_data |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/R1.3_data |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/R1.3_data |
| Error messages will appear here: | |
|---|---|
Assessed by checking that the datacite:identifier field is present and that its identifierType attribute contains one of the following controlled values: ARK, DOI, Handle, IGSN, arXiv, PURL, URL, URN, PMID or PMCID. The presence of a valid identifier of one of these types confirms that the resource has been assigned a globally unique and persistent identifier, as required by the OpenAIRE Guidelines v4.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://beta.services.openaire.eu/osTrails/tests/F1_v4 |
|---|---|
| Execution Endpoint | https://beta.services.openaire.eu/osTrails/tests/F1_v4 |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://beta.services.openaire.eu/osTrails/metrics/F1_v4 |
| Error messages will appear here: | |
|---|---|
Assessed by checking the presence and completeness of key descriptive metadata fields: datacite:creator, datacite:title, dc:publisher, datacite:date, dc:description, oaire:fundingReference, and datacite:subject. The population of these fields confirms that the resource is described with sufficiently rich metadata to enable discovery and contextualization.
| Test ID (for Benchmark Algorithm Spreadsheet) | http://88.197.87.183:65192/osTrails/tests/F2_v4 |
|---|---|
| Execution Endpoint | http://88.197.87.183:65192/osTrails/tests/F2_v4 |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | http://88.197.87.183:65192/osTrails/metrics/F2_v4 |
| Error messages will appear here: | |
|---|---|
F1 requires that metadata be assigned a globally unique, persistent and resolvable identifier. This metric is particularly concerned with the global uniqueness aspect of GUPRIs; global uniqueness is required to prevent identifier collisions across institutions, systems and domains. Otherwise, an identifier shared by multiple resources confounds efforts to describe any one of them, or to use the identifier to retrieve it. In this metric, the definition of global uniqueness follows the FAIRsharing guidance on Globally Unique, Persistent and Resolvable Identifier (GUPRI) schemas. The identifier being evaluated is checked for a match (using regular expressions) against the identifier schemas registered in FAIRsharing. Pass: the matched id_schema record has its globally unique field set to 'true'. Indeterminate: the URL/DOI is non-existent or malformed. Fail: the globally unique field is set to 'false'.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7820 |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_f1_m_idgloballyunique |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
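The matching step described above can be sketched as follows. The schema table here is a hypothetical two-entry stand-in for FAIRsharing's registry of identifier schemas; the real test resolves the schema and its globally-unique flag from FAIRsharing itself.

```python
import re

# Hypothetical subset of identifier-schema patterns; the real test
# matches against FAIRsharing's full registry of id schemas.
SCHEMAS = {
    "doi": {"pattern": r"^10\.\d{4,9}/\S+$", "globally_unique": True},
    "local_accession": {"pattern": r"^LOCAL-\d+$", "globally_unique": False},
}

def check_globally_unique(identifier: str) -> str:
    """Pass/fail follows the matched schema's globally-unique flag;
    no match yields an indeterminate result."""
    for schema in SCHEMAS.values():
        if re.match(schema["pattern"], identifier):
            return "pass" if schema["globally_unique"] else "fail"
    return "indeterminate"

print(check_globally_unique("10.25504/FAIRsharing.VrP6sm"))  # pass
```

The sibling resolvability and persistence metrics below follow the same structure, keying off the schema's resolvability and persistence flags instead.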
F1 requires that metadata be assigned a globally unique, persistent and resolvable identifier. This metric is particularly concerned with the resolvability aspect of GUPRIs. Resolvability is required so that identifiers are actionable for both humans and machines. In this metric, the definition of resolvability follows the FAIRsharing guidance on Globally Unique, Persistent and Resolvable Identifier (GUPRI) schemas. The identifier being evaluated is checked for a match (using regular expressions) against the identifier schemas registered in FAIRsharing. Pass: the matched id_schema record has its resolvability field set to 'true'. Indeterminate: the URL/DOI is non-existent or malformed. Fail: the resolvability field is set to 'false'.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7822 |
|---|---|
| Execution Endpoint | https://api.fairsharing.org/fair_tests/ft_f1_m_idresolvable |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
F1 requires that metadata be assigned a globally unique, persistent and resolvable identifier. This metric is particularly concerned with the persistence aspect of GUPRIs; persistence is required to guarantee long-term continuity of reference. In this metric, the definition of persistence follows the FAIRsharing guidance on Globally Unique, Persistent and Resolvable Identifier (GUPRI) schemas. The identifier being evaluated is checked for a match (using regular expressions) against the identifier schemas registered in FAIRsharing. Pass: the matched id_schema record has its persistence field set to 'true'. Indeterminate: the URL/DOI is non-existent or malformed. Fail: the persistence field is set to 'false'.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7821 |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_f1_m_idpersistent |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
The 'FAIR Test - I1 - Metadata - Database-level knowledge representation languages (semantic)' is a database-level test that checks for relationships between the FAIRsharing metadata record and any model/format records with a semantic format. If a FAIRsharing database record is found and is linked to a semantic model/format record, the test passes. If the submitted record is not a database, or is not so linked, the test fails. An indeterminate value is returned if no FAIRsharing record metadata is found.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7837 |
|---|---|
| Execution Endpoint | https://api.fairsharing.org/fair_tests/ft_i1_m_db_knowledge_semantic |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
The 'FAIR Test - I1 - Metadata - Database-level knowledge representation languages (syntactic)' is a database-level test that checks for relationships between the FAIRsharing metadata record and any model/format records with a syntactic format. If a FAIRsharing database record is found and is linked to a syntactic model/format record, the test passes. If the submitted record is not a database, or is not so linked, the test fails. An indeterminate value is returned if no FAIRsharing record metadata is found.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7836 |
|---|---|
| Execution Endpoint | https://api.fairsharing.org/fair_tests/ft_i1_m_db_knowledge_syntactic |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://fairsharing.gitbook.io/fairsharing/about-our-records/fair-assistance/fair-benchmark-institutional-repository-datasets#fair-metric-i1-metadata-database-level-knowledge-representation-languages-syntactic |
| Error messages will appear here: | |
|---|---|
DESCRIPTION
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/SARA-TEST.ttl |
|---|---|
| Execution Endpoint | |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
DESCRIPTION
| Test ID (for Benchmark Algorithm Spreadsheet) | https://ostrails.github.io/assessment-component-metadata-records/test/TESTING.ttl |
|---|---|
| Execution Endpoint | |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | https://github.com/FAIRMetrics/Metrics/blob/master/MaturityIndicators/Gen2/Gen2_MI_I2A.md |
| Error messages will appear here: | |
|---|---|
The ERDERA Project has strict requirements for minimal metadata to onboard their Virtual Platform. These include: migration away from deprecated EJP purl properties to their w3id equivalents; presence of the VPDiscoverable property; and the following properties: dcat:theme, dcat:contactPoint, dct:description, dcat:keyword, dct:language, dct:license, dct:publisher, dct:title, and dcat:landingPage. The test does not examine the substructure of dct:publisher, except that it must be a foaf:Agent with a foaf:name.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/community-tests/erdera_core_vp_metadata |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/community-tests/assess/test/erdera_core_vp_metadata |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://w3id.org/fair-metrics/erdera/FM_R1-3_M_VP_L1 |
| Error messages will appear here: | |
|---|---|
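The required-property part of the Level-1 check can be sketched as below. The record is a simplified dict standing in for a parsed RDF graph, and the VPDiscoverable and EJP-purl-migration checks are omitted; only the property list and the publisher constraint from the description are shown.

```python
# Required Level-1 properties, taken from the test description above.
REQUIRED = ["dcat:theme", "dcat:contactPoint", "dct:description",
            "dcat:keyword", "dct:language", "dct:license",
            "dct:publisher", "dct:title", "dcat:landingPage"]

def check_vp_level1(metadata: dict) -> list:
    """Return a list of problems; an empty list means this sketch passes."""
    problems = [f"missing {p}" for p in REQUIRED if not metadata.get(p)]
    # Publisher substructure is not inspected beyond this:
    # it must be a foaf:Agent carrying a foaf:name.
    pub = metadata.get("dct:publisher")
    if isinstance(pub, dict):
        if pub.get("@type") != "foaf:Agent" or not pub.get("foaf:name"):
            problems.append("dct:publisher is not a foaf:Agent with foaf:name")
    return problems

# Illustrative record; values are placeholders.
record = {
    "dcat:theme": "http://www.orpha.net/ORDO/Orphanet_98878",
    "dcat:contactPoint": "mailto:curator@example.org",
    "dct:description": "Example dataset",
    "dcat:keyword": ["rare disease"],
    "dct:language": "en",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dct:publisher": {"@type": "foaf:Agent", "foaf:name": "Example Org"},
    "dct:title": "Example",
    "dcat:landingPage": "https://example.org/dataset/1",
}
print(check_vp_level1(record))  # []
```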
The ERDERA Project has strict requirements for minimal metadata to onboard their Virtual Platform. For Level 2, you must first be Level 1 compliant (see https://w3id.org/fair-metrics/erdera/FM_R1-3_M_VP_L1). In addition, you must declare the resource as a dcat:DataService and have a landingPage at a minimum.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://tests.ostrails.eu/community-tests/erdera_vp_l2_metadata |
|---|---|
| Execution Endpoint | https://tests.ostrails.eu/community-tests/assess/test/erdera_vp_l2_metadata |
| Applicable to Digital Object Type | https://schema.org/Dataset |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | https://w3id.org/fair-metrics/erdera/FM_R1-3_M_VP_L2 |
| Error messages will appear here: | |
|---|---|
A2 requires that metadata remain accessible even if the digital object is no longer available. The purpose of this principle is to ensure that information about a resource persists over time, independent of the continued availability of the research object itself. As per FAIR Principle F3, when this metadata remains discoverable, even in the absence of the research object, it will also contain an explicit reference to the identifier of the research object. This metric evaluates whether the hosting database or repository declares a formal data preservation policy. The evaluation retrieves the FAIRsharing record for the database and checks for the presence of a value in the “Data Preservation Policy” field, as defined in the FAIRsharing database conditions documentation. The presence of a declared preservation policy in the FAIRsharing record is interpreted as evidence that the database has articulated commitments to long-term metadata availability. If a preservation policy is listed in the FAIRsharing record, the resource passes this metric; if not, it fails. This metric measures database-level persistence policies as persistence information is rarely included in record-level metadata.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/FAIRsharing.lEZbPK |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_a2_m_dbpersistencepolicy |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
FAIR Test - F2 - Metadata - Discovery-Oriented Metadata Fields evaluates whether a metadata record includes a core set of mandatory descriptive elements that are essential for basic discovery. Specifically, it checks the resolved metadata for the presence of the following four fields: title, contributor names, summary/abstract/description, and publication date (defined as the date the record was first made publicly available). To pass, all of these fields must be present and populated within a structured, common format such as schema.org JSON-LD, DataCite XML, or Dublin Core XML. If any of these fields is empty, the evaluation fails.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/7828 |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_f2_m_discoveryfields |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
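For a schema.org JSON-LD record, the four-field check above amounts to verifying a handful of keys. A minimal sketch; the field-to-key mapping below is illustrative, not the test's official mapping.

```python
import json

# The four mandatory discovery fields, mapped to common
# schema.org JSON-LD keys (mapping is illustrative).
FIELD_KEYS = {
    "title": ["name"],
    "contributors": ["creator", "author", "contributor"],
    "description": ["description"],
    "publication date": ["datePublished"],
}

def discovery_fields_present(jsonld: str) -> dict:
    """Map each discovery field to whether any of its keys is populated."""
    doc = json.loads(jsonld)
    return {field: any(doc.get(k) for k in keys)
            for field, keys in FIELD_KEYS.items()}

record = json.dumps({
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example dataset",
    "creator": {"@type": "Person", "name": "A. Researcher"},
    "description": "An example record.",
    "datePublished": "2024-01-15",
})
print(all(discovery_fields_present(record).values()))  # True
```

The tagging and publisher tests below apply the same pattern to the keyword/subject and publisher properties, respectively.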
FAIR Test - F2 - Metadata - Tagging To Aid Discovery evaluates the metadata for the presence of at least one keyword or tag of any kind (free-text or controlled). With regard to Findability (as opposed to, e.g., Interoperability), the presence of any type of tag, irrespective of whether it belongs to a controlled vocabulary, is the key feature of this metric. The metric expects the identifier to point to structured metadata (e.g., schema.org, DataCite, or DC) and verifies that the keyword/subject property is not empty. The metric passes if at least one tag is identified and fails if the keyword attribute is missing or null.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/8021 |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_f2_m_discoverytags |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
FAIR Test - F2 - Metadata - Has Publisher Information evaluates whether the metadata includes explicit information regarding the organisation responsible for publishing the metadata record. It looks for a structured “publisher” field within the record. In the context of an institutional repository, this is typically the institution itself, or an external repository (like Zenodo) if the record is registering an object hosted elsewhere. The test will fail if this value is not present.
| Test ID (for Benchmark Algorithm Spreadsheet) | https://fairsharing.org/8022 |
|---|---|
| Execution Endpoint | https://fair-tests.fairsharing.org/test/ft_f2_m_discoverypublisher |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | http://www.fairsharing.org/ontology/subject/SRAO_0000401 |
| Implementation of Metric | — |
| Error messages will appear here: | |
|---|---|
DESCRIPTION OF THE TEST
| Test ID (for Benchmark Algorithm Spreadsheet) | https://landingpage-example.com |
|---|---|
| Execution Endpoint | |
| Applicable to Digital Object Type | — |
| Applicable to Scholarly Domain | — |
| Implementation of Metric | — |