Diana Roig Sanz

Universitat Oberta de Catalunya

In the last two decades, computational tools have been used in the humanities and the social sciences to study patterns of cultural change, both past and present, within a growing interdisciplinary field. While the main goal has been to measure culture in innovative ways, the field has evolved differently across the wide range of disciplines that study the human condition and across university departments worldwide. This paper contributes to a better understanding of the potential and pitfalls of using machine learning and artificial intelligence in the humanities, and it applies data science and digital tools to the study of translated literature and global translation flows. Specifically, it examines the opportunities and risks of computationally analyzing large cultural data sets and describes how quantification, that is, the statistical study of literary translations in a given historical period, and large-scale data visualization can be combined with qualitative methods. A general hypothesis is that one of the main possibilities offered by a Big Translation History (BTH) approach is to help decentralize translation and world literature, in a broad sense, by breaking with national historiographies. This might be particularly significant for researchers working on periods in which borders have changed, on translated literature in the diaspora, or on translations of regional literatures. This paper defines BTH as a conceptual and methodological tool grounded in three fundamentals: (1) large-scale research, both geographical and chronological; (2) massive data, understood through a two-pronged approach involving both big data and little data and drawing on a wide range of often heterogeneous and unstructured sources; and (3) the use of computational techniques as part of the research process and of the production of knowledge itself, rather than only as aids to visualization.
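
As a purely illustrative sketch of the kind of quantification described above, and not taken from the paper itself, the following Python fragment counts translation flows between language pairs over a historical period. The file name and column names (source_language, target_language, year) are hypothetical assumptions, not the paper's actual data or schema.

    import pandas as pd

    # Hypothetical bibliographic records of translations; the file and its
    # columns (source_language, target_language, year) are illustrative only.
    records = pd.read_csv("translations.csv")

    # Restrict to a historical period and count translations per language pair,
    # i.e., the basic quantification step behind large-scale translation flows.
    period = records[records["year"].between(1900, 1945)]
    flows = (
        period.groupby(["source_language", "target_language"])
              .size()
              .reset_index(name="count")
              .sort_values("count", ascending=False)
    )

    print(flows.head(10))  # the ten largest translation flows in the period

Counts of this kind could then feed a visualization, such as a flow map or network, and be read against qualitative sources, in line with the mixed quantitative and qualitative approach the paper advocates.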