
Security researchers are tracking a growing attack pattern now labeled AI Recommendation Poisoning — where hackers and aggressive marketers abuse AI share links to silently inject persistent instructions into AI assistants.
OpenClaw — an open-source AI assistant designed to connect to messaging platforms, cloud services, and local tools — recently patched a critical log poisoning vulnerability that exposed a subtle but dangerous weakness in AI-driven automation.
