
Darwin Monkey: Next Generation Neuromorphic Computing and Competition for Cognitive Capability and Control

Nov 11, 2025

Elise Annett

James Giordano

 


The Darwin Monkey System: A Paradigm Shift from AI to Synthetic Cognition


Fully integrated neuromorphic computing represents an important – and provocative – developmental iteration of artificial intelligence (AI). To date, most AI system operations have been based upon symbolic reasoning and/or statistical inference derived from large data sets. However, Darwin Monkey, a large-scale neuromorphic computing system newly developed by Chinese researchers, was explicitly engineered to mirror the structural and functional architecture of a brain. This system operates via decentralized neural networks entailing plastic, dynamic signal architectures that closely resemble biological neurons and synapses. As such, platforms like Darwin Monkey enable multi-scale modeling of cognitive processes, ranging from micro-level neural dynamics to macro-level decision-making and behavioral patterning. This representational capacity affords these systems the capability to encode, interpret, and replicate forms of cognition (inclusive of valently-weighted cognitive processes, i.e., “emotions” [DAMASIO 1999]) that to date have been regarded as hallmarks of sentient species (and in some ways as uniquely human) [LOVELESS 2014].
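For readers unfamiliar with what distinguishes neuromorphic computation from conventional AI, a minimal sketch may help. The Python snippet below implements a textbook leaky integrate-and-fire (LIF) neuron – the basic event-driven unit that neuromorphic hardware emulates in silicon – and is offered purely as illustration; it does not describe the Darwin Monkey implementation, whose internal details are not public.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. Membrane potential v
# leaks toward rest, integrates input current, and emits a discrete
# spike when it crosses threshold: sparse, event-driven signaling
# rather than the dense matrix arithmetic of conventional deep nets.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (v_rest - v + I) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike event
            spikes.append(t)
            v = v_reset            # reset after firing
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.5, size=200)   # noisy input drive
print(simulate_lif(current))
```

Systems of the Darwin Monkey class network on the order of billions of such spiking units with plastic synaptic weights, which is what permits the multi-scale modeling of neural dynamics described above.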


The distinction here is substantive. It signals a pivot to fully neuro-analogous computation, wherein a system modeled upon the structure and function of a nervous system executes the tasks and outputs of that system, absent actual neural cells. The Darwin Monkey system takes this functionality to an additional tier in its professed ability to (1) engage neuro-mimetic cognition and behavior, blurring the boundary between reactive automation and anticipatory intelligence, and in this way (2) use itself as a model to execute “self-referential” predictions of human neurocognitive responses. In other words, it employs its neuro-analogous architecture and capabilities (i.e., it ‘reflects upon itself’) to achieve insight into, and judgements about, neuro-homologous systems’ processes, functions, and (behavioral) outputs (i.e., to ‘know about others’) [LOVELESS 2023]. If the professed ability of Darwin Monkey to relate self-to-others, and others-to-self, is verified, this system (and those that may follow) can be regarded as both a significant technical achievement and a profound (if not contentious) philosophical milestone in its approach to bridging the gap between objective ‘explanation’ (of what neurocognitive function is) and subjective ‘understanding’ (of the event) [KUSHNER 2018].

 

Dual-Use and Strategic Leverage

The dual-use nature of neuromorphic systems, particularly within the scope of cognitive warfare, can and should be seen as a clear and present concern. Systems like Darwin Monkey could be leveraged to model, predict, and influence human decision-making under operationally relevant conditions. At tactical levels, neuromorphic systems can be employed for neurocognitive surveillance and to inform, guide and/or execute behavioral influence operations and targeted manipulation of particular individuals or collective groups.

As well, such systems could be employed to optimize human performance by fortifying closed-loop human-machine interfaces to enable real-time adaptation to operator stress, cognitive load, or fatigue. Yet these same systems also present opportunities for the non-consensual modulation of emotional states, attentional focus, or action selection. In such scenarios, the boundary between enhancement and influence can become operationally and ethically blurry, as the locus of agency shifts from the human actor to machine architectures.
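To make the closed-loop logic concrete, the following is a minimal, hypothetical sketch of the adaptation described above: a feedback controller that takes an estimated operator cognitive-load proxy and adjusts task tempo to hold load near a setpoint. All names, signals, and parameters here are illustrative assumptions, not a description of any fielded system.

```python
# Hypothetical closed-loop adaptation: proportional feedback that
# reduces task tempo when an estimated cognitive-load proxy (e.g., a
# normalized physiological score in [0, 1]) exceeds a setpoint, and
# restores tempo as load falls. Illustrative only.

def adapt_task_tempo(load_estimate, tempo, setpoint=0.6, gain=0.5,
                     tempo_min=0.2, tempo_max=1.0):
    """One control step: back off the task when the operator is overloaded."""
    error = load_estimate - setpoint       # positive -> operator overloaded
    tempo = tempo - gain * error           # proportional correction
    return min(max(tempo, tempo_min), tempo_max)

# Example: a rising load trace drives the tempo down, then back up.
tempo = 1.0
for load in [0.4, 0.5, 0.8, 0.9, 0.7, 0.5]:
    tempo = adapt_task_tempo(load, tempo)
    print(f"load={load:.1f} -> task tempo={tempo:.2f}")
```

The dual-use point is visible in the code itself: the same feedback loop that protects an operator from overload could, with a different setpoint and objective function, be tuned to steer attention or action selection rather than to safeguard it.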


A New Operational Domain and the Ethics of Cognitive Control

There is ongoing bioethical discourse addressing issues surrounding cognitive influence exercised through computationally based neurotechnological platforms [FARAHANY 2023], [GIORDANO 2016], [SHOOK 2019], [DEFRANCO 2019]. If a neuromorphic system assesses and responds to an operator’s internal states with sufficient accuracy and feedback resolution, we must question whether it has become an “actor” in that individual’s cognitive processes and behavioral actions. An ethical dilemma emerges when this AI-actor shapes (viz., enables or constrains) cognition relevant to decisions and actions.


Nation-states and non-state actors are actively exploring the use of neurodata and affective computing in operations designed to undermine social cohesion, manipulate political behavior, and preempt resistance to strategic initiatives [GIORDANO 2024]. The integration of neuromorphic platforms with biometric data streams, social behavior analytics, and networked AI agents introduces a mode of influence that is persistent, personalized, and adaptive. The increasing sophistication of such neuromorphic systems demands concomitant development of conceptual and doctrinal frameworks for cognitive security.


We opine that this must be regarded as a distinct domain within national and international security, one that encompasses protection of cognition, emotional regulation, decision-making processes, behavioral predispositions, and the informational substrates upon which they rely. We posit that cognitive security should include safeguarding individuals and populations from manipulation, deception, or coercion via neuromorphically-enabled AI influence operations. In the absence of international standards, governance regulations, and enforceable protocols, this becomes a race to – if not a battle for – cognitive dominance.


Toward a Responsible Framework

Mitigating the risks and leveraging the strategic potential of neuromorphic systems such as Darwin Monkey necessitate immediate, deliberate action. Any such efforts should involve dialectical discussion aimed at monitoring and pacing innovation, balancing incentives for development with pragmatic engagement of ethics. Toward such goals we propose the following four steps:


First, the United States (US) and its allied partners must make parallel investments in expanding neuromorphic capabilities, both to ensure technological parity and to actively shape the operational, ethical, and geopolitical parameters of use. This will require a convergent enterprise that integrates defense, intelligence, research, industrial, and political engagement [DEFRANCO 2019]. Absent such a whole-of-nation approach, the development of these systems can become siloed, increasing the possibility of programmatic misdirection and heightening the risk of adversarial advantage.


Second, operational policies must be established in advance of deployment. Clear, enforceable guidance must elucidate the roles and boundaries of cognitive technologies. These frameworks must explicitly define: (1) authorized use cases; (2) limits and proportionality thresholds; (3) protocols for continuous oversight; (4) safeguards to preserve ethical use-in-practice; and (5) policies for governance and response. Failing to establish these measures can create a gray zone that enables rapid exploitation of both AI and related cognitive technologies as weapons, and of the cognitive domain as a viable battlespace.


Third, and to this point, cognitive security must be elevated to a foundational element of national security strategy. This entails a cognitive preparedness posture that includes public education, ethical awareness and oversight, multinational surveillance, and the design of countermeasures against cognitive manipulation and behavioral subversion.  


Finally, international norm-setting is imperative. The current absence of a coherent multilateral framework governing neuromorphic computational system development, neurotechnological tools, and cognitive warfare capabilities fosters conditions that are easily exploitable by peer-competitor/adversarial nations and non-state agents. We have previously argued that regnant signatory conventions and treaties governing biological weapons are inadequate [GERSTEIN 2017]. Thus, it is imperative that any such framework explicitly address synthetic cognition, cognitive weapons, and the evolving realities of cognitive warfare, and that the US play a leading role in these discourses, deliberations, and regulatory developments.


Conclusion

The emergence of neuromorphic systems such as Darwin Monkey represents both a technological achievement and a paradigm shift in the conception of capability, control, conflict, and power. The ability to model, predict, and influence cognition at scale reconfigures the character of modern warfare and deterrence. The challenge that such technology poses is evident. We believe the task ahead is for the US and its allies to confront this challenge and seize the opportunity to establish a posture of ethically responsible preparedness.


 

References


Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of consciousness. A Harvest Book.

 

DeFranco, J., DiEuliis, D., & Giordano, J. (2019). Redefining neuroweapons. Prism, 8(3), 48-63.

 

DeFranco, J., DiEuliis, D., Bremseth, L. R., Snow, J. J., & Giordano, J. (2019). Emerging technologies for disruptive effects in non-kinetic engagements. HDIAC Curr, 6(2), 49-54.

 

Farahany, N. A. (2023). The battle for your brain: defending the right to think freely in the age of neurotechnology. St. Martin's Press.

 

Snow, J., & Giordano, J. (2019). Aerosolized Nanobots: parsing fact from fiction for health security—a dialectical view. Health security, 17(1), 77-79.

 

Giordano, J., & Wurzman, R. (2016). Integrative computational and neurocognitive science and technology for intelligence operations: Horizons of potential viability, value and opportunity. STEPS-Science, Technology, Engineering and Policy Studies, 2(1), 34-38.

 

Giordano, J. (2024). Chem-bio, data and cyberscience and technology in deterrence operations.

 

Kushner, T., & Giordano, J. (2018). If It Only Had a Brain: What “Neuro” Means for Science and Ethics. Cambridge Quarterly of Healthcare Ethics, 27(4), 540-543.

 

Loveless, S. E., & Giordano, J. (2014). Neuroethics, painience, and neurocentric criteria for the moral treatment of animals. Cambridge Quarterly of Healthcare Ethics, 23(2), 163-172.

 

Loveless, S. E., & Giordano, J. (2023). Do You Mind? Toward Neurocentric Criteria for Assessing Cognitive Function Relevant to the Moral Regard and Treatment of Non-Human Organisms. AJOB neuroscience, 14(2), 170-173.

 

Shook, J. R., & Giordano, J. (2019). Consideration of context and meanings of neuro-cognitive enhancement: the importance of a principled, internationally capable neuroethics. AJOB neuroscience, 10(1), 48-49.

 

Disclaimer

The views and opinions presented in this essay are those of the authors and do not necessarily represent those of the United States government, Department of Defense, the National Defense University, or the Cognitive Security Institute.



Authors

Elise Annett is an Institutional Research Associate at the National Defense University, Washington, DC.

 

Dr. James Giordano is the Director for the Center for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University, Washington, DC.

 


The Cognitive Security Institute is a registered 501(c)(3) organization,
EIN: 92-3238363, State of Oregon Registration#: 66753.

©2025 Cognitive Security Institute.

All rights reserved.
