Publications
2025
- Characterizing Knowledge Manipulation in a Russian Wikipedia Fork
  Mykola Trokhymovych, Oleksandr Kosovan, Nathan Forrester, Pablo Aragón, Diego Saez-Trumper, and Ricardo Baeza-Yates
  arXiv preprint, 2025
Wikipedia is powered by MediaWiki, free and open-source software that also serves as the infrastructure for many other wiki-based online encyclopedias. These include the recently launched website Ruwiki, which has copied and modified the original Russian Wikipedia content to conform to Russian law. To identify practices and narratives that could be associated with different forms of knowledge manipulation, this article presents an in-depth analysis of this Russian Wikipedia fork. We propose a methodology to characterize the main changes with respect to the original version. The foundation of this study is a comprehensive comparative analysis of more than 1.9M articles from Russian Wikipedia and its fork. Using meta-information and geographical, temporal, categorical, and textual features, we explore the changes made by Ruwiki editors. Furthermore, we present a classification of the main topics of knowledge manipulation in this fork, including a numerical estimation of their scope. This research not only sheds light on significant changes within Ruwiki, but also provides a methodology that could be applied to analyze other Wikipedia forks and similar collaborative projects.
@misc{trokhymovych2025characterizingknowledgemanipulationrussian,
  author = {Trokhymovych, Mykola and Kosovan, Oleksandr and Forrester, Nathan and Aragón, Pablo and Saez-Trumper, Diego and Baeza-Yates, Ricardo},
  title = {Characterizing Knowledge Manipulation in a Russian Wikipedia Fork},
  year = {2025},
  archiveprefix = {arXiv},
  primaryclass = {cs.CL},
  url = {https://arxiv.org/abs/2504.10663},
}
- Graph-Linguistic Fusion: Using Language Models for Wikidata Vandalism Detection
  Mykola Trokhymovych, Lydia Pintscher, Ricardo Baeza-Yates, and Diego Saez-Trumper
  arXiv preprint, 2025
We introduce a next-generation vandalism detection system for Wikidata, one of the largest open-source structured knowledge bases on the Web. Wikidata is highly complex: its items incorporate an ever-expanding universe of factual triples and multilingual texts. While edits can alter both structured and textual content, our approach converts all edits into a single space using a method we call Graph2Text. This allows for evaluating all content changes for potential vandalism using a single multilingual language model. This unified approach improves coverage and simplifies maintenance. Experiments demonstrate that our solution outperforms the current production system. Additionally, we are releasing the code under an open license along with a large dataset of various human-generated knowledge alterations, enabling further research.
@misc{trokhymovych2025wikidata,
  author = {Trokhymovych, Mykola and Pintscher, Lydia and Baeza-Yates, Ricardo and Saez-Trumper, Diego},
  title = {Graph-Linguistic Fusion: Using Language Models for Wikidata Vandalism Detection},
  year = {2025},
  archiveprefix = {arXiv},
  primaryclass = {cs.CL},
  url = {https://arxiv.org/abs/2505.18136},
}
- Hidden Persuasion: Detecting Manipulative Narratives on Social Media During the 2022 Russian Invasion of Ukraine
  Kateryna Akhynko, Oleksandr Kosovan, and Mykola Trokhymovych
  arXiv preprint, 2025
@misc{akhynko2025hiddenpersuasiondetectingmanipulative,
  author = {Akhynko, Kateryna and Kosovan, Oleksandr and Trokhymovych, Mykola},
  title = {Hidden Persuasion: Detecting Manipulative Narratives on Social Media During the 2022 Russian Invasion of Ukraine},
  year = {2025},
  eprint = {2505.24028},
  archiveprefix = {arXiv},
  primaryclass = {cs.CL},
  url = {https://arxiv.org/abs/2505.24028},
}
2024
- An Open Multilingual System for Scoring Readability of Wikipedia
  Mykola Trokhymovych, Indira Sen, and Martin Gerlach
  In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Aug 2024
With over 60M articles, Wikipedia has become the largest platform for open and freely accessible knowledge. While it has more than 15B monthly visits, its content is believed to be inaccessible to many readers due to the lack of readability of its text. However, previous investigations of the readability of Wikipedia have been restricted to English only, and there are currently no systems supporting the automatic readability assessment of the 300+ languages in Wikipedia. To bridge this gap, we develop a multilingual model to score the readability of Wikipedia articles. To train and evaluate this model, we create a novel multilingual dataset spanning 14 languages, by matching articles from Wikipedia to simplified Wikipedia and online children's encyclopedias. We show that our model performs well in a zero-shot scenario, yielding a ranking accuracy of more than 80% across 14 languages and improving upon previous benchmarks. These results demonstrate the applicability of the model at scale for languages in which there is no ground-truth data available for model fine-tuning. Furthermore, we provide the first overview of the state of readability in Wikipedia beyond English.
@inproceedings{trokhymovych-etal-2024-open,
  title = {An Open Multilingual System for Scoring Readability of {W}ikipedia},
  author = {Trokhymovych, Mykola and Sen, Indira and Gerlach, Martin},
  editor = {Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek},
  booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  month = aug,
  year = {2024},
  address = {Bangkok, Thailand},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2024.acl-long.342},
  pages = {6296--6311},
}
- Wikidata Vandalism Detection with Graph-Linguistic Fusion
  Mykola Trokhymovych and Diego Saez-Trumper
  In Wiki Workshop, 2024
@inproceedings{trokhymovych2024wikidata,
  author = {Trokhymovych, Mykola and Saez-Trumper, Diego},
  title = {Wikidata Vandalism Detection with Graph-Linguistic Fusion},
  booktitle = {Wiki Workshop},
  year = {2024},
  url = {https://wikiworkshop.org/2024/paper/wikidata-vandalism-detection-with-graph-linguistic-fusion.pdf},
}
2023
- Fair Multilingual Vandalism Detection System for Wikipedia
  Mykola Trokhymovych, Muniza Aslam, Ai-Jou Chou, Ricardo Baeza-Yates, and Diego Saez-Trumper
  In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Aug 2023
This paper presents a novel system design aimed at supporting the Wikipedia community in addressing vandalism on the platform. To achieve this, we collected a massive dataset covering 47 languages and applied advanced filtering and feature engineering techniques, including multilingual masked language modeling, to build the training dataset from human-generated data. The performance of the system was evaluated through comparison with the one used in production in Wikipedia, known as ORES. Our research results in a significant increase in the number of languages covered, making Wikipedia patrolling more efficient for a wider range of communities. Furthermore, our model outperforms ORES, ensuring that the results provided are not only more accurate but also less biased against certain groups of contributors.
@inproceedings{10.1145/3580305.3599823,
  author = {Trokhymovych, Mykola and Aslam, Muniza and Chou, Ai-Jou and Baeza-Yates, Ricardo and Saez-Trumper, Diego},
  title = {Fair Multilingual Vandalism Detection System for Wikipedia},
  year = {2023},
  isbn = {9798400701030},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3580305.3599823},
  doi = {10.1145/3580305.3599823},
  booktitle = {Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages = {4981--4990},
  numpages = {10},
  location = {Long Beach, CA, USA},
  series = {KDD '23},
}
- GeoDD: End-to-End Spatial Data De-duplication System
  Mykola Trokhymovych and Oleksandr Kosovan
  In Data Science and Algorithms in Systems, 2023
People generate vast amounts of data that can be used for analytics, data-driven decision-making, and forecasting. However, to extract value from data, we need to apply specific methods for cleaning and preprocessing it. In this paper, we address the problem of geospatial data de-duplication and propose and implement an end-to-end solution for de-duplicating social-media-based data. We apply advanced geospatial, natural language processing, and classical machine learning methods in our solution. Our tool proved highly competitive in the competition considered and can process vast amounts of data with limited computational resources.
@inproceedings{10.1007/978-3-031-21438-7_60,
  author = {Trokhymovych, Mykola and Kosovan, Oleksandr},
  editor = {Silhavy, Radek and Silhavy, Petr and Prokopova, Zdenka},
  title = {GeoDD: End-to-End Spatial Data De-duplication System},
  booktitle = {Data Science and Algorithms in Systems},
  year = {2023},
  publisher = {Springer International Publishing},
  address = {Cham},
  pages = {717--727},
  isbn = {978-3-031-21438-7},
  url = {https://link.springer.com/chapter/10.1007/978-3-031-21438-7_60},
}
2022
- WikiFactFind: Semi-automated fact-checking based on Wikipedia
  Mykola Trokhymovych and Diego Saez-Trumper
  In Wiki Workshop, 2022
@inproceedings{trokhymovych2023wikifactfind,
  author = {Trokhymovych, Mykola and Saez-Trumper, Diego},
  title = {WikiFactFind: Semi-automated fact-checking based on Wikipedia},
  booktitle = {Wiki Workshop},
  year = {2022},
  url = {https://wikiworkshop.org/2022/papers/WikiWorkshop2022_paper_21.pdf},
}
2021
- WikiCheck: An End-to-End Open Source Automatic Fact-Checking API Based on Wikipedia
  Mykola Trokhymovych and Diego Saez-Trumper
  In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
With the growth of fake news and disinformation, the NLP community has been working to assist humans in fact-checking. However, most academic research has focused on model accuracy without paying attention to resource efficiency, which is crucial in real-life scenarios. In this work, we review state-of-the-art datasets and solutions for automatic fact-checking and test their applicability in production environments. We discover overfitting issues in those models, and we propose a data filtering method that improves the model's performance and generalization. Then, we design an unsupervised fine-tuning scheme for masked language models to improve their accuracy when working with Wikipedia. We also propose a novel query-enhancing method to improve evidence discovery using the Wikipedia Search API. Finally, we present a new fact-checking system, the WikiCheck API, which automatically performs a fact-validation process based on the Wikipedia knowledge base. It is comparable to SOTA solutions in terms of accuracy and can be used on low-memory CPU instances.
@inproceedings{10.1145/3459637.3481961,
  author = {Trokhymovych, Mykola and Saez-Trumper, Diego},
  title = {WikiCheck: An End-to-End Open Source Automatic Fact-Checking API Based on Wikipedia},
  year = {2021},
  isbn = {9781450384469},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3459637.3481961},
  doi = {10.1145/3459637.3481961},
  booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
  pages = {4155--4164},
  numpages = {10},
  keywords = {applied research, wikipedia, nlp, nli, fact-checking},
  location = {Virtual Event, Queensland, Australia},
  series = {CIKM '21},
}