A simple standard for sharing literal mappings (SSSLM).
This repository implements a data model for literal mappings that supports encoding labels, synonyms, synonym types, internationalization, and other important information for curation, construction of lexica, and population of NER/NEN tools.
SSSLM echoes the name of SSSOM, a related standard for ontological mappings. SSSLM can be pronounced S-S-S-L-M, sess-lem, or however brings you joy.
```python
import ssslm
from curies import NamedReference
from ssslm import LiteralMapping

# Construct a mapping using Pydantic objects
m1 = LiteralMapping(
    reference=NamedReference(prefix="NCBITaxon", identifier="9606", name="Homo sapiens"),
    text="human",
)

# Get a pandas DataFrame
df = ssslm.literal_mappings_to_df([m1])

# Write mappings to TSV
ssslm.write_literal_mappings([m1], "literal_mappings.tsv")

# Read mappings from TSV
mappings = ssslm.read_literal_mappings("literal_mappings.tsv")
```
Note that references are standardized using the curies package. It's up to you to use a meaningful set of prefixes, so consider adopting the Bioregistry as a standard.
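To illustrate why a consistent prefix set matters, here is a minimal, hypothetical sketch (not part of the ssslm or curies APIs) that normalizes a few common prefix variants to their Bioregistry-preferred forms with a plain dictionary; in practice, the `curies` package does this properly with a full converter.

```python
# Hypothetical mini-normalizer for illustration only; in practice,
# use the curies package with a Bioregistry-backed converter.
PREFIX_SYNONYMS = {
    "ncbitaxon": "NCBITaxon",
    "taxonomy": "NCBITaxon",  # assumed variant, for illustration
    "chebi": "CHEBI",
}


def standardize_curie(curie: str) -> str:
    """Rewrite a CURIE's prefix to its preferred form, if known."""
    prefix, _, identifier = curie.partition(":")
    preferred = PREFIX_SYNONYMS.get(prefix.lower(), prefix)
    return f"{preferred}:{identifier}"


print(standardize_curie("ncbitaxon:9606"))  # NCBITaxon:9606
```

Mixing `ncbitaxon:9606` and `NCBITaxon:9606` in the same table would silently split one concept into two, which is exactly what a shared prefix standard prevents.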
The SSSLM data model is defined using Pydantic and corresponds to the following columns in a TSV file:

- `text` - the label/synonym text itself
- `curie` - the compact uniform resource identifier (CURIE) for a biomedical entity or concept
- `name` - the standard name for the concept
- `predicate` - the predicate which encodes the synonym scope, written as a CURIE from the OBO in OWL (`oboInOwl`) or RDFS controlled vocabularies, e.g., one of:
  - `rdfs:label`
  - `oboInOwl:hasExactSynonym`
  - `oboInOwl:hasNarrowSynonym` (i.e., the synonym represents a narrower term)
  - `oboInOwl:hasBroadSynonym` (i.e., the synonym represents a broader term)
  - `oboInOwl:hasRelatedSynonym` (use this if the scope is unknown)
- `type` - the (optional) synonym property type, written as a CURIE from the OBO Metadata Ontology (`omo`) controlled vocabulary, e.g., one of:
  - `OMO:0003000` (abbreviation)
  - `OMO:0003001` (ambiguous synonym)
  - `OMO:0003002` (dubious synonym)
  - `OMO:0003003` (layperson synonym)
  - `OMO:0003004` (plural form)
  - ...
- `provenance` - a comma-delimited list of CURIEs corresponding to publications that use the given synonym (ideally using highly actionable identifiers from semantic spaces like `pubmed`, `pmc`, or `doi`)
- `contributor` - a CURIE with the ORCID identifier of the contributor
- `date` - the (optional) date when the row was curated, in YYYY-MM-DD format
- `language` - the (optional) ISO 639-1 two-letter language code; if missing, assumed to be American English
- `comment` - an optional comment
- `source` - the source of the synonyms, usually `biosynonyms`, unless imported from elsewhere
- `taxon` - the (optional) NCBITaxon CURIE, if the term is taxon-specific, like `NCBITaxon:9606` for humans
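To make the column layout concrete, here is a minimal sketch (standard library only, independent of the ssslm API) that writes one row with these columns to TSV and reads it back; the values are drawn from the examples above.

```python
import csv
import io

# The TSV column order described above
COLUMNS = [
    "text", "curie", "name", "predicate", "type", "provenance",
    "contributor", "date", "language", "comment", "source", "taxon",
]

# A single row; optional columns are simply omitted
row = {
    "text": "human",
    "curie": "NCBITaxon:9606",
    "name": "Homo sapiens",
    "predicate": "oboInOwl:hasExactSynonym",
    "language": "en",
}

# Write the header and the row as tab-separated values
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=COLUMNS, delimiter="\t", restval="")
writer.writeheader()
writer.writerow(row)

# Read it back; omitted optional columns come through as empty strings
reader = csv.DictReader(io.StringIO(buffer.getvalue()), delimiter="\t")
records = list(reader)
print(records[0]["curie"])  # NCBITaxon:9606
```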
Here's an example of some rows in the synonyms table (with linkified CURIEs):

| text | curie | predicate | provenance | contributor | language |
|---|---|---|---|---|---|
| alsterpaullone | CHEBI:138488 | rdfs:label | pubmed:30655881 | orcid:0000-0003-4423-4370 | en |
| 9-nitropaullone | CHEBI:138488 | oboInOwl:hasExactSynonym | pubmed:11597333, pubmed:10911915 | orcid:0000-0003-4423-4370 | en |
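Tables like this feed directly into lexicon construction for NER tools. As a small sketch (standard library only, with the two example rows inlined as tuples), here is how one might collect every label and synonym string per CURIE:

```python
from collections import defaultdict

# The two example rows from the table above, as (text, curie, predicate)
rows = [
    ("alsterpaullone", "CHEBI:138488", "rdfs:label"),
    ("9-nitropaullone", "CHEBI:138488", "oboInOwl:hasExactSynonym"),
]

# Collect every label/synonym string for each entity
lexicon: dict[str, list[str]] = defaultdict(list)
for text, curie, _predicate in rows:
    lexicon[curie].append(text)

print(lexicon["CHEBI:138488"])  # ['alsterpaullone', '9-nitropaullone']
```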
Limitations

- It's hard to know which exact matches between different vocabularies could be used to deduplicate synonyms. This isn't covered yet, but some partial solutions already exist that could be adopted.
- This doesn't keep track of NER annotations, such as the start and end positions of a match within a full sentence or paragraph.
- This doesn't keep track of transformations done to make mappings; it's oriented more towards curation.
The most recent code and data can be installed directly from GitHub with uv:

```console
$ uv --preview pip install git+https://github.com/cthoyt/ssslm.git
```

or with pip:

```console
$ UV_PREVIEW=1 python3 -m pip install git+https://github.com/cthoyt/ssslm.git
```

Note that this requires enabling `UV_PREVIEW` mode until the uv build backend becomes a stable feature.
Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.
The code in this package is licensed under the MIT License.
This package was created with @audreyfeldroy's cookiecutter package using @cthoyt's cookiecutter-snekpack template.
See developer instructions
The final section of the README is for you if you want to get involved by making a code contribution.
To install in development mode, use the following:

```console
$ git clone https://github.com/cthoyt/ssslm.git
$ cd ssslm
$ uv --preview pip install -e .
```

Alternatively, install using pip:

```console
$ UV_PREVIEW=1 python3 -m pip install -e .
```

Note that this requires enabling `UV_PREVIEW` mode until the uv build backend becomes a stable feature.
This project uses cruft to keep boilerplate (i.e., configuration, contribution guidelines, documentation configuration) up-to-date with the upstream cookiecutter package. Install cruft with either `uv tool install cruft` or `python3 -m pip install cruft`, then run:

```console
$ cruft update
```

More info on Cruft's update command is available here.
After cloning the repository and installing tox with `uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, the unit tests in the `tests/` folder can be run reproducibly with:

```console
$ tox -e py
```
Additionally, these tests are automatically re-run with each commit in a GitHub Action.
The documentation can be built locally using the following:

```console
$ git clone https://github.com/cthoyt/ssslm.git
$ cd ssslm
$ tox -e docs
$ open docs/build/html/index.html
```

The documentation automatically installs the package as well as the `docs` extra specified in the `pyproject.toml`. `sphinx` plugins like `texext` can be added there. Additionally, they need to be added to the `extensions` list in `docs/source/conf.py`.
The documentation can be deployed to ReadTheDocs using this guide. The `.readthedocs.yml` YAML file contains all the configuration you'll need. You can also set up continuous integration on GitHub to check not only that Sphinx can build the documentation in an isolated environment (i.e., with `tox -e docs-test`) but also that ReadTheDocs can build it too.
- Log in to ReadTheDocs with your GitHub account to install the integration at https://readthedocs.org/accounts/login/?next=/dashboard/
- Import your project by navigating to https://readthedocs.org/dashboard/import then clicking the plus icon next to your repository
- You can rename the repository on the next screen using a more stylized name (i.e., with spaces and capital letters)
- Click next, and you're good to go!
Zenodo is a long-term archival system that assigns a DOI to each release of your package.
- Log in to Zenodo via GitHub with this link: https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page that lists all of your organizations and asks you to approve installing the Zenodo app on GitHub. Click "grant" next to any organizations you want to enable the integration for, then click the big green "approve" button. This step only needs to be done once.
- Navigate to https://zenodo.org/account/settings/github/, which lists all of your GitHub repositories (both in your username and any organizations you enabled). Click the on/off toggle for any relevant repositories. When you make a new repository, you'll have to come back to this page.
After these steps, you're ready to go! After you make "release" on GitHub (steps for this are below), you can navigate to https://zenodo.org/account/settings/github/repository/cthoyt/ssslm to see the DOI for the release and link to the Zenodo record for it.
You only have to do the following steps once.
- Register for an account on the Python Package Index (PyPI)
- Navigate to https://pypi.org/manage/account and make sure you have verified your email address. A verification email might not have been sent by default, so you might have to click the "options" dropdown next to your address to get to the "re-send verification email" button
- Two-factor authentication has been required for PyPI since the end of 2023 (see this blog post from PyPI). This means you have to first issue account recovery codes, then set up two-factor authentication
- Issue an API token from https://pypi.org/manage/account/token
You have to do the following steps once per machine.
```console
$ uv tool install keyring
$ keyring set https://upload.pypi.org/legacy/ __token__
$ keyring set https://test.pypi.org/legacy/ __token__
```

Note that this deprecates previous workflows using `.pypirc`.
After installing the package in development mode and installing tox with `uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, run the following from the console:

```console
$ tox -e finish
```
This script does the following:
- Uses bump-my-version to switch the version number in the `pyproject.toml`, `CITATION.cff`, `src/ssslm/version.py`, and `docs/source/conf.py` to not have the `-dev` suffix
- Packages the code in both a tar archive and a wheel using `uv build`
- Uploads to PyPI using `uv publish`
- Pushes to GitHub. You'll need to make a release going with the commit where the version was bumped.
- Bumps the version to the next patch. If you made big changes and want to bump the version by minor, you can use `tox -e bumpversion -- minor` after.
- Navigate to https://github.com/cthoyt/ssslm/releases/new to draft a new release
- Click the "Choose a Tag" dropdown and select the tag corresponding to the release you just made
- Click the "Generate Release Notes" button to get a quick outline of recent changes. Modify the title and description as you see fit
- Click the big green "Publish Release" button
This will trigger Zenodo to assign a DOI to your release as well.