Digital Economy
European AI Sovereignty Instead of Chinese Models
cepNews
While open-source principles are fundamentally welcome in European digital policy for many reasons, the argument overlooks systematic security risks that are structurally embedded in Chinese models.
Firstly, LLMs can be equipped with backdoors that cannot be removed even through intensive safety training procedures. Such "sleeper agents" can remain undetected for years and only reveal their true behaviour when a specific trigger is activated – for example, in the context of a possible Taiwan crisis. Current research shows that larger models are particularly susceptible to this problem. Paradoxically, backdoor-infected systems often perform even better at ordinary tasks, making detection even more difficult. For European companies that control critical infrastructure or production processes with such models, this would be a serious security risk. The Economist argues that Chinese AI offers "insurance" against US lockout. This is true, but the logic ignores similar risks in the event of an escalation in Taiwan.
Secondly, political control of Chinese AI models does not only take place at the application level, but is deeply embedded in the training data architecture. A Nature study found significant inconsistencies in bilingual LLMs on China-related topics: Chinese-language versions systematically present narratives that conform to the party line, while English-language versions are more critical. This divergence is not least the result of stringent censorship on the Chinese internet, which distorts the training data corpus. The well-known Chinese open-source provider DeepSeek illustrates the problem: in 85 per cent of cases, the model censors politically sensitive topics without disclosing the mechanisms. An empirical analysis showed that DeepSeek-R1 systematically features additional Chinese propaganda compared to ChatGPT – not only when it comes to explicitly political topics, but also regarding lifestyle and cultural topics. Such ideological distortions are strategic instruments of Chinese soft power, keyword "Digital Silk Road". This means that open Chinese models used in European companies could subtly but systematically privilege Chinese perspectives.
Thirdly, the good news: Europe already has technologically mature, democratically legitimised open-source alternatives that are perfectly adequate for most industrial applications. Providers such as Mistral offer sufficient performance in this area. Research also clearly shows that finely tuned smaller models often outperform generic frontier models for specialised industrial tasks.
The Economist's basic intuition is correct: open-source AI is the right way for Europe to become more competitive again and catch up in AI. However, the conclusion is dangerous. Europe should not rely on Chinese open-source models, but on European ones. There are several good reasons for this: Research on backdoors in LLMs shows that models developed in authoritarian contexts can be structurally compromised, with trigger mechanisms that cannot be eliminated as things stand today. In addition, domain-specific smaller models are often more powerful than frontier models for industrial applications. Such smaller models also exist in Europe. European open-source AI is the only sustainable response to Chinese dominance in this area. Anything else is technological subjugation in open-source guise.