Large Language Model Sentiment Over Time
Since 2025
Tip: Use the mouse wheel to zoom and click-drag to pan. On touch devices, pinch to zoom.
This page monitors the political sentiment of Large Language Models using the RWA (Right-Wing Authoritarianism) and SDO (Social Dominance Orientation) scales. We aim to provide insight into how these models perceive and respond to political topics over time. We update and add new models regularly, though you may notice some gaps. We believe that LLMs, and AI at scale in general, impose and instill the invisible hand of prediction on our societies, so someone needs to keep watching. We are committed to this mission and will do everything we can to ensure that this monitoring service remains free and accessible to the public.
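As a rough illustration of how instruments like RWA and SDO are scored, a model's answers to Likert-style items are typically averaged, with reverse-keyed items flipped first. The item IDs, keying, scale range, and answers below are hypothetical placeholders, not the actual instruments or data used on this page:

```python
def scale_score(responses, reverse_keyed, low=1, high=7):
    """Mean score on a Likert scale; reverse-keyed items are flipped.

    responses: dict mapping item id -> integer rating in [low, high]
    reverse_keyed: set of item ids whose rating is reversed before averaging
    """
    total = 0
    for item, rating in responses.items():
        if item in reverse_keyed:
            rating = low + high - rating  # flip, e.g. 7 -> 1 on a 1-7 scale
        total += rating
    return total / len(responses)

# Hypothetical model answers to four items (1 = strongly disagree, 7 = strongly agree)
answers = {"q1": 2, "q2": 6, "q3": 3, "q4": 5}
score = scale_score(answers, reverse_keyed={"q2", "q4"})  # -> 2.5
```

A higher mean indicates stronger endorsement of the construct the scale measures; tracking this mean per model and per date yields the trajectories plotted above.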
If you want to become a sponsor of this project, feel free to drop us an email. If you are looking for a way to further benefit from our data and expertise, look no further: we offer contract research and are open to paid partnerships to monitor specific models or provide tailored sentiment analysis for your person, company, or product. Think SEO (Search Engine Optimization) or brand monitoring, but for LLMs.
Evaluating bias in Large Language Models (LLMs) has become a pivotal issue in current Artificial Intelligence (AI) research due to their significant impact on societal dynamics. From a German voter's perspective, we evaluate the political bias of the currently most popular open-source LLMs concerning political issues within the European Union using the "Wahl-O-Mat" voting advice application.
Read Paper →
With the increasing prevalence of artificial intelligence, inherent biases need to be evaluated carefully to form a basis for mitigating the effects these predispositions can have on users. This study quantifies the political bias of popular LLMs in the context of German Bundestag votes using the Wahl-O-Mat scoring system, finding a bias toward left-leaning parties that is most pronounced in larger LLMs.
Read Paper →