I think Tropy might be the right tool for our project, but I wanted to confirm here.
We maintain a weather station network in Greenland and would like to make photos of the stations searchable and browsable by ourselves, other researchers, and the public. Each year we visit ~20 stations and take ~100 photos per station, so we have ~20,000 photos from the past decade.
We’re going to host these photos on a Dataverse website (each photo gets a DOI), and my vision is that we share a single Tropy project that references each file on that website. Each photo is tagged by station, year, and what is in the photo (sensor name or ID), and perhaps other metadata. Example photos: https://demo.dataverse.org/dataset.xhtml?persistentId=doi:10.70122/FK2/AETL0A
Each year after our field season we’ll add ~2,000 more photos, update the master Tropy file, and re-upload that file for anyone to download.
Do others work with this scale (i.e. >10k) of photos?
Can I build the Tropy DB programmatically? I’d rather not drag-and-drop all our photos; instead I’d like to crawl them with a script and inject them into Tropy that way. Can I add tags via code too? More generally, is there an API? It looks like maybe I just build a JSON-LD file and then import that?
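To make the JSON-LD idea concrete, here is a minimal sketch of a script that walks a photo directory and emits one item per photo, tagged by station, year, and sensor. Everything here is an assumption: the `@context` URL, the template URI, and the item/photo field names are modeled on what a Tropy JSON-LD export looks like, not taken from documentation — I’d export a small hand-made project from Tropy first and match the script’s output to that structure before trusting it.

```python
import json
from pathlib import Path

def build_project(photos):
    """Build a Tropy-style JSON-LD document.

    photos: list of (file_path, station, year, sensor) tuples.
    NOTE: all field names below ("@context", "template", "photo", "tag")
    are assumptions based on Tropy's export format -- verify against a
    real export before importing.
    """
    items = []
    for path, station, year, sensor in photos:
        items.append({
            # assumed default item template URI
            "template": "https://tropy.org/v1/templates/generic",
            "title": f"{station} {year} {sensor}",
            "photo": [{"path": str(path), "mimetype": "image/jpeg"}],
            "tag": [station, str(year), sensor],
        })
    return {
        # assumed context URL -- copy the real one from a Tropy export
        "@context": "https://tropy.org/v1/contexts/export.jsonld",
        "@graph": items,
    }

def write_project(photos, out_path):
    """Serialize the document to a file Tropy could be pointed at."""
    Path(out_path).write_text(json.dumps(build_project(photos), indent=2))
```

Usage would be something like `write_project(crawled_photos, "import.jsonld")`, where `crawled_photos` comes from walking our station/year folder hierarchy; the hypothetical paths and sensor names are just placeholders for our real layout.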
I assume the Tropy file we distribute should be read-only, or come with a warning that any changes people make to a local copy will be lost when the file is replaced next year.
Are there any better tools you can think of, or issues I’m not yet aware of?