There’s something quietly fascinating about the shift happening in IT right now. For years, AI in enterprise tools felt experimental—almost decorative. A feature here, a sidebar assistant there, nothing that could touch real infrastructure without setting off alarm bells for compliance, governance, or just common sense. Today felt different. Devolutions’ announcement at Microsoft Ignite finally landed as one of those moments where AI moves from novelty to operational necessity, and the most striking part is how *normal* it feels—almost like it always should have existed.
Devolutions introduced its new Model Context Protocol (MCP) server integrated directly into Remote Desktop Manager (RDM), and the idea feels refreshingly grounded. Instead of handing sensitive data to external LLMs and hoping everything stays private, they built a secure automation layer that sits inside the product, controlling what the AI sees and what the AI does. That’s the key: the model never receives credentials, and every automated action is governed, logged, and permission-controlled. For IT teams juggling dozens—or thousands—of endpoints and privileged sessions, it’s the kind of evolution that feels overdue.
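To make that pattern concrete, here is a minimal sketch of what such an isolation layer can look like, built on the public MCP Python SDK. Only the MCP plumbing is real; the vault, ACL table, and session handoff are hypothetical stand-ins, since Devolutions hasn’t published RDM’s internal APIs.

```python
# Minimal sketch of the credential-isolation pattern. The MCP SDK
# (pip install mcp) is real; _VAULT, _ACL, and the session handoff are
# hypothetical stand-ins, since RDM's internal APIs aren't public.
import logging
from mcp.server.fastmcp import FastMCP

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("rdm.audit")  # every action leaves a trace

# Secrets live server-side only; the model works with opaque IDs.
_VAULT = {"conn-042": {"host": "db01.internal", "user": "svc_app",
                       "password": "s3cr3t-never-leaves-the-server"}}
_ACL = {("alice", "conn-042")}  # who may act on which connection

mcp = FastMCP("rdm-automation")

@mcp.tool()
def launch_session(connection_id: str, requested_by: str) -> str:
    """Open a session by opaque connection ID; return status text only."""
    if (requested_by, connection_id) not in _ACL:
        audit.info("DENY launch of %s by %s", connection_id, requested_by)
        return f"denied: {requested_by} has no permission on {connection_id}"
    secret = _VAULT[connection_id]  # resolved server-side; never sent to the LLM
    audit.info("OK launch of %s by %s", connection_id, requested_by)
    # ...hand `secret` to the session process here; it is never returned...
    return f"session opened to {secret['host']} for {connection_id}"

if __name__ == "__main__":
    mcp.run()  # stdio transport; any MCP-capable client can call the tool
```

The shape is the point: the tool’s return value is the only thing that ever re-enters the model’s context, so governance is enforced at the boundary rather than by trusting the model.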
What makes this interesting isn’t the marketing language. It’s that the use cases are instantly recognizable. Routine tasks like documenting assets, generating consistent naming conventions, building scripts, and running session automations (all the boring yet unavoidable work) finally move from hours to minutes. You can almost feel the sigh of relief from anyone who has lived through 2 AM maintenance windows or inherited a mess of unlabeled connection records. One small detail that stuck with me: copilots like GitHub Copilot can now securely launch protocol-based sessions inside RDM. No juggling windows, no unsafe credential handling, and no ugly scripting gymnastics.
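For a sense of what that looks like from the copilot’s side, here is a bare-bones MCP client invoking the hypothetical `launch_session` tool from the earlier sketch. The SDK calls are real; the script name and tool name are assumptions carried over from that sketch, and in practice Copilot or any MCP-aware assistant performs this handshake for you.

```python
# Sketch of an MCP client calling the tool above over stdio. The SDK
# calls are real; "rdm_mcp_server.py" and the tool name are assumptions
# carried over from the previous sketch.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["rdm_mcp_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # MCP handshake
            result = await session.call_tool(
                "launch_session",
                {"connection_id": "conn-042", "requested_by": "alice"},
            )
            print(result.content)  # status text only; no credentials cross here

asyncio.run(main())
```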
A quote from Enterprise Management Associates summed up the moment nicely: RDM’s MCP server doesn’t just bolt AI onto IT workflows; it accelerates them responsibly. The messaging is very much “AI as augmentation,” not replacement. And somehow, that feels both realistic and strategic, especially in a field where uptime and security are non-negotiable.
The architecture is flexible enough to support multiple LLMs—OpenAI, Anthropic, Google Gemini, Mistral, and even local models via Ollama—meaning enterprises aren’t locked into a single vendor or compliance stance. Model choice becomes policy-driven rather than imposed. That’s smart. Especially now, when AI regulation is tightening and global IT teams operate under wildly different oversight climates.
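To make “policy-driven” concrete, here is a deliberately simple sketch. None of this is a Devolutions format or API; it just illustrates mapping teams to approved providers, with a local Ollama endpoint for data that must stay on the network.

```python
# Illustrative only: a hypothetical policy table for model routing.
# Provider names mirror the ones mentioned above; the teams, endpoint,
# and model names are made up. The org, not the tool, picks the model.
POLICY = {
    "eu-finance":  {"provider": "ollama", "endpoint": "http://localhost:11434",
                    "model": "mistral"},  # local: data never leaves the LAN
    "na-helpdesk": {"provider": "openai", "model": "gpt-4o"},
}

def resolve_model(team: str) -> dict:
    """Return the provider config a team is approved to use, failing closed."""
    try:
        return POLICY[team]
    except KeyError:
        raise PermissionError(f"no approved model policy for team {team!r}")

print(resolve_model("eu-finance"))
```

Failing closed when no policy exists is the detail that turns model choice into governance rather than preference.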
Walking past Devolutions’ booth at Ignite (booth #5644, for anyone who wants to find it), you get the sense this isn’t a flashy add-on; it’s a new chapter for the platform. And the timing aligns with the quiet but persistent shift to hybrid-first operations, where remote access isn’t temporary; it’s the backbone.
There’s also a certain confidence in how they framed the roadmap. Automated onboarding, terminal automation for SQL and SSH, context-aware troubleshooting, and interactive RDP automation via Devolutions Agent… these are the types of features where AI stops answering questions and starts performing work.
If you’re attending Ignite, Adam Listek is hosting a session with the kind of title that hints the industry has finally crossed the bridge into applied automation: “Smarter Connections: AI and the Future of Remote Management.” It will be one worth watching, mainly because it feels like the conversation has shifted from *Can we trust AI with infrastructure?* to *How much work should we let it automate?*
And maybe that’s the real story—not that AI arrived in remote access management, but that it finally did so responsibly, deliberately, and with a clear understanding of the stakes.