The classical conception of conflict in jurisprudence presumed definable theaters of war, identifiable state actors, and temporally delimited hostilities. Today, however, the paradigm of conflict has shifted: armed violence increasingly interacts with algorithmically mediated informational ecosystems capable of shaping political discourse, public consciousness, and, ultimately, juridical outcomes.

According to the Global Peace Index 2024, more than 65 countries are currently facing active political instability, while more than 72% of global conflict-related content is consumed through algorithmically curated digital platforms rather than through traditional media institutions.1 This digitisation of conflict erodes the foundational legal assumption that facts emerge independently of their channels of communication. Instead, courts are compelled to adjudicate realities that are prefiltered by private technological architectures.

Narrative Sovereignty as an Emerging Legal Construct

Conventionally, sovereignty has been defined in terms of territorial integrity, a monopoly over legitimate violence, and constitutional supremacy. Today, however, power is exercised primarily through the structuring and dissemination of dominant narratives.

The Reuters Institute Digital News Report 2023 confirms that 58% of individuals between 18 and 35 years of age report using social media platforms as their primary source of information about politics and conflict, and 38% claim not to fact-check such content against independent sources.2 This asymmetry produces an important juridical phenomenon: factual narratives acquire operative legal and political legitimacy before formal adjudication ever occurs.

Narrative sovereignty can therefore be understood as de facto control over the discursive conditions through which legal truth is socially constructed.

Algorithmic Mediation and the Juridical Construction of Truth

Algorithms now function as epistemic gatekeepers. Recommendation systems drive visibility, search algorithms drive credibility, and content moderation systems drive legality.

Empirical research has shown that on algorithm-driven platforms, false or emotionally provocative information spreads significantly farther and faster than empirically verified information: one landmark study found that truth took roughly six times as long as falsehood to reach the same audience.3 Moreover, 64% of users cannot tell the difference between algorithmically surfaced content and organically trending content.4

The juridical effect is that “fact” is turned from an objective datum into a probabilistically ranked output of machine-learning infrastructures. Truth, it follows, is no longer found; it is computationally assembled.

Judicial Systems and Evidence through Algorithms

Digital evidence has become the centerpiece of modern criminal and civil adjudication. Courts increasingly admit data from social media platforms, GPS metadata, behavioral analytics, and predictive risk tools.

A 2023 empirical audit of criminal proceedings in three common law jurisdictions revealed that more than 41% of evidentiary materials contained algorithmically processed or platform-curated data.5 Yet, less than 9% of courts subjected such evidence to independent technical scrutiny.6

This institutional imbalance creates a constitutional danger. Private corporations, largely immune from public law scrutiny, wield quasi-adjudicatory power over state-administered justice.

Due Process and the Constitutional Crisis of Algorithmic Governance

Due process forms the normative core of liberal constitutionalism because it guarantees that state power is exercised fairly, transparently, and based on contestable evidence. Algorithmic governance, however, increasingly disrupts these guarantees.

Opacity:

High-risk algorithmic tools, particularly those used in policing, welfare filtering, and risk scoring, are typically proprietary systems protected as trade secrets. Their underlying data, variables, and decision pathways remain shielded from public and judicial scrutiny. This “black box” structure prevents individuals from understanding how conclusions about them are reached and weakens both procedural transparency and the ability to mount an effective challenge.

Disparate impact:

Predictive risk models consistently exhibit higher false positive rates for marginalised groups, with studies showing disparities ranging from 27% to 42%.7 Because these systems learn from historical data sets shaped by discriminatory patterns of policing, surveillance, and administrative practice, they often reproduce and sometimes intensify existing structural inequalities. This undermines the due process ideal of equality of arms by placing certain communities at a systemic disadvantage.

Preemptive labelling:

Many algorithmic tools shift state action from a responsive model to a predictive one. Individuals are classified as risks or likely offenders in the absence of any proven misconduct, solely based on statistical correlations. Such anticipatory categorisation dilutes core due process safeguards like the presumption of innocence and restricts rights on the basis of projected behavior rather than demonstrated wrongdoing.

Together, these features show how algorithmic governance subtly but profoundly transforms legal processes and weakens the procedural protections that sustain constitutional legitimacy.

The Indian Legal System: A Site of Structural Exposure

India offers a paradigmatic case study: rapid technological integration has far outpaced the regulatory frameworks meant to govern it.

By 2024, India was home to an estimated 850 million internet users, hosted the world’s largest biometric identity infrastructure in Aadhaar, and had deployed facial recognition systems across several urban police jurisdictions. Government audit reports confirmed that more than 65% of urban policing districts use predictive digital mapping technologies.8

However, the Indian Evidence Act, 1872 (now replaced by the Bharatiya Sakshya Adhiniyam, 2023) sets no standards for algorithmic explainability, fairness audits, or adversarial access to source code. This creates an asymmetry between constitutional guarantees and technological realities.

Cognitive Integrity and Algorithmic Psychological Warfare

An under-explored dimension of algorithmic dominance is its psychological impact on legal actors.

Neuroscience studies suggest that sustained exposure to high-arousal algorithmic content decreases analytical reasoning competence by almost 19% over the long run.9 Judges, advocates, and jurors are not epistemically insulated. The independence of judicial cognition is thereby surreptitiously undermined by curatorial architectures beyond institutional control.

This amounts to a form of psychological warfare embedded within civilian information infrastructure.

Toward the Juridical Reappropriation of Epistemic Sovereignty

As the preceding sections illustrate, the judiciary’s fact-finding function is challenged by the loss of evidentiary neutrality and a growing dependence on algorithmically determined truths. The challenge posed by algorithmic governance therefore extends beyond regulatory design: it strikes at the constitutional foundations on which legal knowledge and truth have hitherto rested. Left unaddressed, this amounts to an abdication of epistemic sovereignty by the legal system.

First, statutory transparency obligations must be introduced for all technologies deployed in law enforcement and other high-risk state domains. Public authorities should not operate through closed proprietary systems, and the internal workings of such algorithms must not remain opaque. Disclosure of this kind would establish a procedural foundation for contesting algorithmic determinations, consistent with the fairness guaranteed under Article 21.

Second, fully autonomous bodies should be established and tasked exclusively with algorithm auditing. These agencies must possess genuine technical expertise, be insulated from executive control, and be empowered to conduct audits at regular intervals and to direct remedial action where unacceptable error rates or discriminatory outcomes are found. Without such oversight, constitutional guarantees risk systematic distortion.

Third, the judiciary must be equipped to engage meaningfully with computational methods. Judges need working knowledge of algorithmic reasoning, methodology, and bias so that they can evaluate algorithmic evidence critically rather than treating it as objective and infallible. In the digital age, judicial independence increasingly demands epistemic competence.

Fourth, the legal system must recognise algorithmic manipulation and algorithmically inflicted harm as justiciable wrongs. Automated distortion of facts, misclassification, and algorithmic discrimination should attract specific legal remedies, bringing Indian law into line with emerging international norms on algorithmic due process. Without these reforms, the judiciary would face a slow erosion of its constitutional role, reduced to ratifying outcomes preordained and implemented through technological systems. Reclaiming control over epistemic processes is therefore vital to preserving the integrity of constitutional democracy in a technologically mediated society.

Conclusion 

Modern warfare transcends the idea of physical territories; it now operates through informational dominance. Algorithms have taken over as instruments of conflict, silently reconceptualising sovereignty, evidence, and justice. If courts do not assert epistemic sovereignty, juridical authority itself will be subordinated to private, unaccountable technological infrastructures. The defence of constitutional democracy in the 21st century thus depends upon the law’s capacity to regulate not merely conduct, but cognition, narrative, and truth.

 

By Maanya Arora, 4th Year B. A. LL. B. (Hons.), Faculty of Law, The Maharaja Sayajirao University of Baroda, Vadodara.

 

1 Institute for Economics and Peace, Global Peace Index 2024 (Institute for Economics and Peace, Sydney, 2024).

2 Reuters Institute for the Study of Journalism, Digital News Report 2023 (University of Oxford, Oxford, 2023).

3 Soroush Vosoughi, Deb Roy and Sinan Aral, “The Spread of True and False News Online” 359 Science 1146 (2018).

4 European Commission, “Report on Disinformation” (European Commission, Brussels, 2022).

5 International Association of Prosecutors, Comparative Criminal Evidence Audit (International Association of Prosecutors, The Hague, 2023).

6 Ibid.

7 ProPublica, “Machine Bias: Algorithmic Risk Assessments” (ProPublica, 2016–2020).

8 Comptroller and Auditor General of India, Policing Technology Reports (Government of India, New Delhi, 2023).

9 Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains (W. W. Norton & Company, New York, 2010).