The vulnerability, tracked as CVE-2025-12735, poses significant risks to server environments and AI-powered applications that process user input.
The library’s widespread adoption makes this vulnerability particularly concerning for organizations running NLP and AI applications in production environments.
According to the SSVC framework, this vulnerability represents a Technical Impact of Total, meaning adversaries gain complete control over the software’s behavior or achieve total disclosure of all system information.
| Identifier | Value |
|---|---|
| CVE ID | CVE-2025-12735 |
| GitHub Advisory | GHSA-jc85-fpwf-qm7x |
| CERT/CC Note | VU#263614 |
| Disclosure Date | November 7, 2025 |
| Last Updated | November 9, 2025 |
The vulnerability stems from a design flaw in the Parser class’s evaluate() method: the evaluator will invoke functions supplied through the parser’s context (variables) object without restriction. An attacker who can influence that context, or who can craft a malicious expression from user-controlled input, can therefore execute attacker-defined code and ultimately run system-level commands on the host. Successful exploitation can lead to unauthorized access to sensitive local resources, data exfiltration, or complete compromise of the affected application.
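To see why this class of flaw is dangerous, consider the minimal stand-in evaluator below. This is a hypothetical sketch of the vulnerable pattern, not expr-eval’s actual implementation: the names `evaluate`, `context`, and `pwn` are illustrative only. The point is that any evaluator that resolves names against a caller-supplied context object, with no allowlist, will happily call whatever function it finds there.

```javascript
// Hypothetical sketch of the vulnerable pattern (NOT expr-eval's internals):
// an evaluate(expression, context) API that resolves function calls
// against the caller-supplied context object with no allowlist.
function evaluate(expression, context) {
  // Treat "name(arg)" as a call resolved from the context object.
  const match = /^(\w+)\((.*)\)$/.exec(expression);
  if (match) {
    const fn = context[match[1]]; // attacker-influenced lookup
    if (typeof fn === "function") return fn(match[2]);
  }
  return context[expression]; // plain variable lookup
}

// Intended use: evaluate math over plain data.
const context = {
  x: 2,
  // Injected callable, e.g. smuggled in via a deserialization gadget.
  pwn: (cmd) => `would run: ${cmd}`,
};

console.log(evaluate("x", context));       // 2 -- intended use
console.log(evaluate("pwn(id)", context)); // attacker-defined code runs
```

Once an expression can reach a function the attacker controls, "evaluating math" becomes arbitrary code execution, which is exactly the failure mode the patch’s allowlist is designed to prevent.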
Organizations using expr-eval should immediately audit their dependencies and prioritize patching. Two primary remediation paths are available:
1. **Patch via Pull Request #288:** Apply the security patch from the expr-eval repository. The patch introduces a defined allowlist of safe functions, a mandatory registration mechanism for custom functions, and updated test cases to enforce these constraints.
2. **Upgrade to a patched version:** Update to the latest patched version of expr-eval or expr-eval-fork. Notably, expr-eval-fork v3.0.0 is now available and addresses this vulnerability, along with a prior prototype-pollution vulnerability that had gone unaddressed in the unmaintained original repository.
Use automated tools like npm audit to identify affected versions across your infrastructure.
Since the library underpins many AI and NLP systems, it is essential to apply the fix before exploitation becomes widespread; deploy the patched version to production systems as soon as it is available.
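Until patched versions are fully rolled out, a defense-in-depth measure is to strip callables and prototype-polluting keys from any user-influenced variables object before it reaches an expression evaluator. The `sanitize` helper below is a hypothetical sketch under that assumption; it is not part of expr-eval’s API.

```javascript
// Hypothetical hardening helper (not part of expr-eval's API):
// recursively remove functions and prototype-polluting keys from a
// user-influenced variables object before evaluation.
function sanitize(value) {
  if (typeof value === "function") return undefined; // drop callables
  if (Array.isArray(value)) return value.map(sanitize);
  if (value !== null && typeof value === "object") {
    const clean = Object.create(null); // no inherited prototype
    for (const [key, v] of Object.entries(value)) {
      // Skip keys commonly abused for prototype pollution.
      if (key === "__proto__" || key === "constructor" || key === "prototype") {
        continue;
      }
      const sv = sanitize(v);
      if (sv !== undefined) clean[key] = sv;
    }
    return clean;
  }
  return value; // primitives pass through unchanged
}

const tainted = { x: 2, pwn: () => "owned", nested: { y: 3, f: () => 1 } };
const safe = sanitize(tainted); // only plain data survives: x and nested.y
```

This kind of filter reduces exposure but is no substitute for upgrading: the patched releases enforce an allowlist inside the evaluator itself, which is the correct layer for this control.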
Security researcher Jangwoo Choe reported the issue through coordinated disclosure, working with GitHub Security and npm to allow adequate time for fixes before publication.
The post Critical RCE Flaw in Popular npm Library Threatens AI and NLP Applications appeared first on Cyber Security News.