I’m realizing that many editors of digital scholarly text editions haven’t considered that they might benefit greatly by recording highly atomized textual data in a database from which they could produce any number of editions. The more commonly practiced approach is to encode a text as a single TEI XML document. One XML document per text edition. So much more is possible if we start with granular textual data and build editions from there. #digitalhumanities #textualstudies
5.11.2022 17:30
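
To make that concrete: a minimal sketch, in Python with SQLite, of what atomized textual data in a database could look like, with reading texts generated by query rather than encoded one XML file at a time. The table layout, witness sigla, and sample readings are illustrative assumptions, not the CEDAR schema.

```python
# A minimal sketch (not the CEDAR schema) of storing a text as atomized tokens
# in a relational database and producing a reading text from a query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE witness (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL            -- e.g. a manuscript siglum
);
CREATE TABLE token (
    id       INTEGER PRIMARY KEY,
    witness  INTEGER NOT NULL REFERENCES witness(id),
    position INTEGER NOT NULL,    -- sequence within the witness
    form     TEXT NOT NULL        -- the surface reading of this token
);
""")

# Two invented witnesses of the same short line, differing in one reading.
conn.execute("INSERT INTO witness (id, name) VALUES (1, 'A'), (2, 'B')")
conn.executemany(
    "INSERT INTO token (witness, position, form) VALUES (?, ?, ?)",
    [(1, 1, "In"), (1, 2, "the"), (1, 3, "beginning"),
     (2, 1, "In"), (2, 2, "a"),   (2, 3, "beginning")],
)

def edition_text(witness_name: str) -> str:
    """Build a reading text for one witness from the atomized tokens."""
    rows = conn.execute(
        """SELECT t.form FROM token t
           JOIN witness w ON w.id = t.witness
           WHERE w.name = ? ORDER BY t.position""",
        (witness_name,),
    )
    return " ".join(form for (form,) in rows)

print(edition_text("A"))  # "In the beginning"
print(edition_text("B"))  # "In a beginning"
```

The point is only that once every token is a row, any number of reading texts, apparatuses, or other editions can be produced from the same store.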

Watch my colleague Dr. Sarah Yardney present our update on the CEDAR project (Critical Editions for Digital Analysis and Research)
#digitalhumanities
#textualstudies
https://youtu.be/yD1mbQSA5Gg

Today, work continues on creating complicated digital text editions in a database environment. The CEDAR project is comparing manuscript editions, integrating spatial data, and generally working with highly granular textual data. https://cedar.uchicago.edu/
2.11.2022 13:55
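
As a toy illustration of the kind of comparison that granular storage makes straightforward, here is a sketch that collates two invented witnesses held as token lists using only the Python standard library. The texts, sigla, and alignment method are assumptions for illustration, not CEDAR's actual workflow.

```python
# Toy collation of two witnesses held as token lists, using difflib from the
# standard library. Witness texts and sigla are invented for illustration.
from difflib import SequenceMatcher

witness_a = ["In", "the", "beginning", "God", "created"]
witness_b = ["In", "a", "beginning", "God", "made"]

matcher = SequenceMatcher(a=witness_a, b=witness_b)
for op, a1, a2, b1, b2 in matcher.get_opcodes():
    if op == "equal":
        print("agree:  ", " ".join(witness_a[a1:a2]))
    else:
        print("variant:", " ".join(witness_a[a1:a2]) or "(om.)",
              "||", " ".join(witness_b[b1:b2]) or "(om.)")
```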

Given that TEI is not a data model for texts, I'm curious to know what data models people use to represent digital texts in databases, for example. None? #textualanalysis #textualstudies
1.11.2022 12:17
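
One family of answers, offered only as a sketch and not as anyone's established model: standoff designs that keep the text as a flat sequence of tokens and record structure and editorial claims as separate annotations over token ranges, which map naturally onto database tables. All names and fields below are illustrative assumptions.

```python
# A standoff data model in miniature: the text is a token sequence, and every
# structural or editorial claim is an annotation over a token range.
# Names, kinds, and the sample document are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Token:
    index: int          # position in the document
    form: str           # the surface string

@dataclass(frozen=True)
class Annotation:
    start: int          # index of the first token covered (inclusive)
    end: int            # index of the last token covered (inclusive)
    kind: str           # e.g. "verse", "page-break", "emendation"
    value: str = ""     # optional payload, e.g. a label or corrected reading

@dataclass
class Document:
    tokens: list[Token] = field(default_factory=list)
    annotations: list[Annotation] = field(default_factory=list)

    def text(self) -> str:
        """The plain reading text, independent of any markup."""
        return " ".join(t.form for t in self.tokens)

    def spans(self, kind: str) -> list[str]:
        """Return the text covered by each annotation of the given kind."""
        return [
            " ".join(t.form for t in self.tokens[a.start : a.end + 1])
            for a in self.annotations
            if a.kind == kind
        ]

doc = Document(
    tokens=[Token(0, "In"), Token(1, "the"), Token(2, "beginning")],
    annotations=[Annotation(0, 2, "verse", "Gen 1:1")],
)
print(doc.text())          # "In the beginning"
print(doc.spans("verse"))  # ["In the beginning"]
```

Because annotations never nest inside the text itself, overlapping hierarchies that are awkward in a single XML tree are unproblematic here.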

#humanistodon There I said it.
1.11.2022 01:46

Greetings from Univ of #Chicago. Glad to be in this new space.
I'm the Associate Director for Research and Publications in Digital Studies.
I focus on digital textual studies in the broadest definition.
I take a database approach to modeling textual data, insisting on extreme atomization and integration.
#digitalhumanities #philology