
If 2024–2025 was about using AI tools, 2026 is about AI systems that act on your behalf — making decisions, executing tasks, and interacting with other systems without human prompts.
Human-in-the-Loop (HITL) safeguards are supposed to be the final safety net in AI systems: the moment when a human reviews an action before it happens. Security researchers at Checkmarx have now demonstrated how that safety net can itself be turned into an attack surface.
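To make that attack surface concrete, here is a minimal sketch of the gate pattern in question, assuming a simple console-based agent; it is a generic illustration, not the specific bypass Checkmarx demonstrated. The security of any such gate rests on one property: the description the human approves must match the action that actually runs.

```python
def hitl_gate(action_description: str) -> bool:
    """Ask a human to approve an action before the agent executes it.

    The implicit security assumption: what the reviewer sees accurately
    reflects what will run. If an attacker can influence
    action_description (for example, via prompt injection), the human
    ends up rubber-stamping something else entirely.
    """
    print(f"Agent wants to run: {action_description}")
    return input("Approve? [y/N] ").strip().lower() == "y"


def run_agent_action(description: str, command: str) -> None:
    # Note the gap this design leaves open: approval is granted on
    # `description`, but the system executes `command`. Keeping the two
    # in sync is exactly the property this class of attack targets.
    if hitl_gate(description):
        print(f"Executing: {command}")
    else:
        print("Action rejected.")
```

Anything that lets an attacker steer the displayed description independently of the executed action reduces the approval step to theater, which is why researchers treat these review prompts as part of the attack surface rather than a guaranteed backstop.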
NVIDIA has released emergency patches for Merlin, its open-source machine learning framework used to power large-scale recommender systems. Two newly discovered high-severity deserialization vulnerabilities could allow attackers to execute malicious code, steal sensitive data, or trigger denial-of-service (DoS) attacks on Linux systems.
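For context on why deserialization flaws earn high-severity ratings, the sketch below is a generic illustration using Python's standard pickle module, the mechanism most often implicated in this class of Python vulnerability; it is not Merlin's actual code. Unpickling attacker-controlled bytes runs attacker-chosen code immediately, with no further interaction:

```python
import os
import pickle


class MaliciousPayload:
    """A class whose unpickling triggers arbitrary code execution.

    pickle invokes __reduce__ during deserialization, so an attacker
    can make it return any callable plus its arguments.
    """

    def __reduce__(self):
        # Harmless stand-in for an attacker's command; a real exploit
        # could exfiltrate data or open a reverse shell instead.
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))


# The attacker crafts the payload and delivers it as an
# innocent-looking artifact (a model file, a cached object)...
blob = pickle.dumps(MaliciousPayload())

# ...and the victim merely "loads a file": the command executes
# the moment the bytes are deserialized.
pickle.loads(blob)
```

The standard mitigation is to never feed untrusted input to pickle-style loaders and to prefer schema-validated formats such as JSON for any data that crosses a trust boundary.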
It’s a hurricane of unstructured data — scattered across inboxes, shared drives, laptops, and “temporary” folders that become permanent storage black holes. The result? Missed details. Slower decisions. Lost billable hours. Staff burnout.
