Time Machine is a FET-Flagship/LSRI Coordination and Support Action (CSA) funded by the EU through the H2020 programme, in preparation for a full FET Flagship/LSRI proposal. It aims to build the Big Data of the Past to protect, unlock and valorise the vast cultural heritage of Europe. TM brings together 33 core partners from academia, the heritage sector and industry, specializing either in history and heritage or in state-of-the-art research and development in ICT and robotics. Both the University of Antwerp and Ghent University are founding partners of the project. The consortium has also secured the formal support and collaboration of over 200 entities, including many renowned heritage institutions (libraries, archives, museums), related projects and governments.
- PI: Professor Frédéric Kaplan (Ecole Polytechnique Fédérale de Lausanne)
- Ghent University spokesperson: Professor Jeroen Deploige
The Time Machine Coordination and Support Action (2019-2020) will produce a fully developed Large-Scale Research Infrastructure (LSRI) initiative to build the Big Data of the Past in the next decade. Fundamental milestones of the LSRI project proper will be a) a new and improved mass digitisation infrastructure and the coordination of ongoing and new digitisation campaigns, b) automated data interpretation and processing with deep learning artificial intelligence engines and c) multidimensional visualisations and (inferred) simulations, including 4D.
The Big Data of the Past will protect, unlock and activate one of Europe’s most important assets, its cultural heritage. In the process, fundamental progress and breakthroughs will be generated in the fields of ICT and robotics. The Time Machine infrastructure will be a game changer for research in the Social Sciences and Humanities and will offer broad opportunities for innovation in education, policy making, the GLAM sector (Galleries, Libraries, Archives, Museums), (sustainable) tourism and other parts of European society and economy.
Building the Big Data of the Past
• Development of fully automatic and integrated digitisation pipelines for cultural heritage artefacts
• Development of a secure yet open-source, cloud-based digital archive, including digital rights management for (re)born digital heritage materials
• Development of the technology to harmonise and semantically integrate a variety of heterogeneous data (with both human and machine annotation)
• Development of a global storage system: a robust, fully decentralised and versioned Internet filing system, perhaps with experimental biological data storage systems for long-term data preservation
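The harmonisation and semantic integration of heterogeneous data mentioned above can be illustrated with a minimal sketch. The schema, field names and sample records below are hypothetical, invented purely for illustration: two differently structured inputs (a human-annotated catalogue row and a machine-annotated OCR result) are mapped onto one common record type that preserves their provenance.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical unified schema for heterogeneous heritage records.
@dataclass
class HeritageRecord:
    identifier: str
    title: str
    year: Optional[int]
    place: Optional[str]
    source: str       # which system the record came from
    annotation: str   # "human" or "machine"

def from_library_row(row: dict) -> HeritageRecord:
    """Map a (hypothetical) library catalogue row onto the common schema."""
    return HeritageRecord(
        identifier=row["id"],
        title=row["title"].strip(),
        year=int(row["date"][:4]) if row.get("date") else None,
        place=row.get("city"),
        source="library-catalogue",
        annotation="human",
    )

def from_ocr_output(doc: dict) -> HeritageRecord:
    """Map a (hypothetical) machine-annotated OCR result onto the same schema."""
    return HeritageRecord(
        identifier=doc["uuid"],
        title=doc.get("detected_title", "untitled"),
        year=doc.get("estimated_year"),
        place=doc.get("estimated_place"),
        source="ocr-pipeline",
        annotation="machine",
    )

# Two heterogeneous inputs describing the same source (illustrative data).
library_row = {"id": "BE-GhentU-001", "title": " Annales Gandenses ",
               "date": "1308-01-01", "city": "Ghent"}
ocr_doc = {"uuid": "scan-42", "detected_title": "Annales Gandenses",
           "estimated_year": 1308, "estimated_place": "Ghent"}

records = [from_library_row(library_row), from_ocr_output(ocr_doc)]
```

Once both inputs share one schema, downstream linking and deduplication can compare records field by field regardless of their origin.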
Developing the computing technology for extracting information out of the Big Data of the Past
• AI methods: development of a generic architecture for information extraction from extremely diverse sources, capable of a holistic interpretation and analysis of the data
• Deep Learning: development of reasoning engines for data synthesis, linking and inference
• Spatiotemporal simulation: development of a multiscale hierarchical system capable of offering a standard way to infer and express spatiotemporal information (unambiguous indexing of spatiotemporal entities)
• High Performance Computing: development of three simulation engines (4D, Universal Representation, Large-Scale Inference), which will require exascale computers developed under the European High-Performance Computing Joint Undertaking (EuroHPC JU). The simulators will be capable of extrapolating missing information and suggesting plausible alternatives
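The "unambiguous indexing of spatiotemporal entities" mentioned above can be sketched in miniature. The class below is a hypothetical illustration, not part of the Time Machine design: it composes a coarse-to-fine place hierarchy with a year interval into a single key, which makes spatial and temporal containment between two entities trivially checkable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpatioTemporalKey:
    """Hypothetical unambiguous index for a spatiotemporal entity:
    a hierarchical place path plus a year interval."""
    place_path: tuple   # coarse-to-fine, e.g. ("Europe", "Belgium", "Ghent")
    start_year: int
    end_year: int

    def key(self) -> str:
        """Render the index as a single human-readable string."""
        return "/".join(self.place_path) + f"@{self.start_year}-{self.end_year}"

    def contains(self, other: "SpatioTemporalKey") -> bool:
        """True if `other` lies within this entity in both space and time."""
        spatial = other.place_path[: len(self.place_path)] == self.place_path
        temporal = (self.start_year <= other.start_year
                    and other.end_year <= self.end_year)
        return spatial and temporal

# Illustrative entities: medieval Ghent, and its belfry within it.
ghent_medieval = SpatioTemporalKey(("Europe", "Belgium", "Ghent"), 1000, 1500)
belfry = SpatioTemporalKey(("Europe", "Belgium", "Ghent", "Belfry"), 1313, 1380)
```

Because the place path is hierarchical, containment reduces to a prefix check on the path plus an interval check on the years, which scales naturally to multiscale queries.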
Laying the foundations of new interpretative methods for Humanities and Social Sciences
• Development of digital or artificial hermeneutics
• Development of a mature theoretical framework to deal with heterogeneous and potentially biased data, while allowing for ambiguity and multiple conflicting interpretations
- E.g. Simulation criticism
- E.g. Manipulation assessment and awareness (development of guiding values and principles)
• Scholarship: investigation of new research methods facilitated by the Big Data of the Past and development of new models with deeper levels of critical analysis (primarily within SSH)
• Education: investigation of the effect of the Big Data of the Past on the content and forms of education at all levels
• Specific areas and uses: assessment of the potential of TM for all other applications beyond scholarship and education
• Dissemination activities
• Policy (uptake), legal issues, ethics
• Knowledge transfer between researchers (SSH - STEM) and with stakeholders
• Exploitation: supporting entrepreneurship triggered by TM results (SMEs, businesses, social innovation, etc.)
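The theoretical framework described above must accommodate potentially biased data and multiple conflicting interpretations. A minimal data-model sketch, with invented classes and illustrative sources, shows one way to record competing readings of the same fact side by side with their provenance rather than forcing a single answer:

```python
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    """One reading of a fact, with its origin and an estimated confidence."""
    claim: str
    source: str        # who or what produced the reading
    confidence: float  # 0.0 .. 1.0

@dataclass
class Assertion:
    """A statement about the past that admits multiple, possibly conflicting readings."""
    subject: str
    interpretations: list = field(default_factory=list)

    def add(self, claim: str, source: str, confidence: float) -> None:
        self.interpretations.append(Interpretation(claim, source, confidence))

    def conflicting(self) -> bool:
        """True if the recorded readings disagree with each other."""
        return len({i.claim for i in self.interpretations}) > 1

# Illustrative example: two sources disagree on a date.
founding = Assertion("Construction start of the Ghent Belfry")
founding.add("1313", "city accounts (human annotation)", 0.9)
founding.add("1314", "OCR of a later chronicle (machine annotation)", 0.4)
```

Keeping all interpretations, each tagged with its source and confidence, is what enables the simulation criticism and manipulation assessment mentioned above: a later reader can always trace a claim back to the evidence behind it.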
Role of Ghent University
In the CSA project (March 2019 - March 2020), Ghent University will contribute to:
• Work Package 2: Addressing scientific and technological challenges associated with creating the Big Data of the Past (specifically challenges concerning data and theory)
• Work Package 4: Develop the strategy and implementation plan for the TM Exploitation Avenues (specifically the avenue of scholarship - Innovative, multi-scale SSH research)
The Ghent University core team combines fundamental research in the humanities with innovative technological expertise. In close alignment with the aims of Time Machine, the Pirenne Institute for Medieval Studies and colleagues from other research groups (CartoGIS, Ghent Centre for Digital Humanities, TELIN, IDLab/Imec) have a long tradition of applying state-of-the-art ICT, including digitisation and (big) data modelling, to the study of heritage, from centuries-old manuscripts to born-digital data.
In the future, Ghent University, together with societal partners, aims to develop a local Time Machine focused on the Flemish cities of Ghent and Bruges from the Middle Ages to today.