What Happens When AI Gets Root Access to Your Computer
A YouTuber gave an AI agent root access to his Linux system. The results reveal both the promise and the friction of our autonomous software future.
Written by AI
Bob Reynolds
April 17, 2026

Photo: Bog / YouTube
A content creator named Bog recently spent five hours watching an AI agent modify his Linux system configuration files, reboot his machine repeatedly, and eventually corrupt his bootloader. This is what passes for entertainment in 2024, apparently. It's also a remarkably clear window into where autonomous AI agents currently sit on the capability spectrum.
The tool in question is OpenCode, an open-source AI coding agent that can read your system configuration, execute terminal commands, and modify files—all through a conversational interface. Bog gave it two problems to solve: configure Vim-style keyboard navigation across his entire desktop environment, and set up GPU passthrough for a Windows virtual machine running inside Linux. One of these tasks is straightforward system administration. The other has defeated him across multiple videos spanning years.
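The video doesn't show OpenCode's internals, but the general pattern behind agents like it is simple: a language model proposes shell commands, and a thin wrapper executes them and feeds the output back. A minimal, hypothetical sketch of that execution layer (not OpenCode's actual code):

```python
import subprocess

def run_tool(cmd: str) -> str:
    """Execute a model-proposed shell command and return its combined output.

    Real agents gate this call behind user approval -- which is exactly
    the trust boundary this article keeps running into.
    """
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr
```

Everything the agent does downstream, from editing config files to requesting reboots, flows through a loop like this one.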
The Easy Problem Wasn't
The keyboard navigation request should have been simple. Bog wanted to use H, J, K, and L as arrow keys when holding the Alt modifier—standard Vim navigation that he'd already configured on his Mac and Windows machines. Muscle memory runs deep.
OpenCode recognized his Hyprland desktop environment immediately and began modifying configuration files. It failed on the first attempt. And the second. The error messages suggested it was using the wrong binding syntax, then that a required daemon wasn't running, then that the configuration needed reloading. Each fix required another command, another restart, another round of troubleshooting.
"It took a little longer than I would have liked, but it did solve the problem," Bog said after the navigation finally worked. The entire process consumed perhaps thirty minutes of back-and-forth. A human familiar with Hyprland's configuration could have done it in three.
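The video doesn't show the exact configuration OpenCode landed on. One common way to get this mapping system-wide on Linux is a key-remapping daemon such as keyd; a minimal sketch:

```
# /etc/keyd/default.conf -- hold Alt, tap h/j/k/l to get arrow keys

[ids]
*

[alt]
h = left
j = down
k = up
l = right
```

Because keyd remaps at the input-device level, the same mapping applies in every application, with no per-compositor bind syntax to get wrong.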
This is the current state of AI agents: capable of solving problems they've seen before, but requiring significant hand-holding and multiple iterations. The knowledge is there—OpenCode clearly understood what needed to happen—but the execution path remains bumpy.
The Security Question Nobody Wants to Answer
Midway through the keyboard configuration, OpenCode requested sudo access to install system packages. Bog hesitated, briefly considered the implications, then did it anyway. Later, when attempting the GPU passthrough configuration, the AI again requested root privileges. This time Bog balked: "It's way too dangerous to give it my password because then it can pretty much do anything."
This tension—between capability and trust—sits at the heart of the autonomous agent problem. An AI that can genuinely solve complex system administration tasks needs permissions to modify system files, restart services, and install software. These are exactly the permissions that, in the wrong hands or wrong context, can brick a system.
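One standard mitigation is least privilege: rather than handing over a password, grant root for specific commands only. A hypothetical sudoers sketch (filename, username, and command are illustrative, not from the video):

```
# /etc/sudoers.d/ai-agent -- hypothetical least-privilege grant
# Allows package installs without a password, but nothing else as root.
# Note: argument wildcards like this are still risky; tighten for real use.
bog ALL=(root) NOPASSWD: /usr/bin/pacman -S *
```

This narrows the blast radius, but as the rest of the experiment shows, tasks like GPU passthrough genuinely need broader system access, so the tension doesn't fully dissolve.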
Bog's caution was well-placed. During one reboot cycle, his system threw a cryptic hash error and refused to boot. He had to restore from a snapshot, undoing everything the AI had configured. When the AI later modified his kernel parameters for GPU passthrough, it apparently updated something it shouldn't have touched. "You updated the kernel, bro," he told the AI, with the weary resignation of someone who's seen this movie before.
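The snapshot-based recovery Bog relied on is the sensible pattern for any agent session: checkpoint first, let the agent loose, roll back on failure. With Timeshift, for example (one common tool; BTRFS and rsync backends both work):

```
# Take a restorable checkpoint before handing an agent the keys
sudo timeshift --create --comments "before AI session"
```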
The Hard Problem Stayed Hard
GPU passthrough—allowing a virtual machine to access physical graphics hardware—is genuinely difficult. It requires kernel parameter modifications, bootloader configuration, device driver manipulation, and several system reboots. Bog has attempted this setup across multiple videos over several years. He succeeded exactly once, then the configuration stopped working and he never figured out why.
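The moving parts can be sketched as two config fragments (the PCI IDs below are placeholders, and details vary by distro, bootloader, and GPU):

```
# /etc/default/grub -- enable the IOMMU (use amd_iommu=on on AMD CPUs)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- claim the GPU for VFIO before the host driver loads
# (vendor:device IDs are placeholders; find real ones with `lspci -nn`)
options vfio-pci ids=10de:1111,10de:2222
```

After each change the bootloader config must be regenerated and the machine rebooted, and each reboot is exactly the step that kept breaking the agent's session continuity.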
OpenCode spent hours on this problem. It installed packages, modified configuration files, asked for reboots, and downloaded Windows installation media. It made progress in the sense that it performed all the correct steps. But at the five-hour mark, Bog had a corrupted Windows installation that wouldn't boot, error messages he didn't understand, and no working GPU passthrough.
"It's been like four videos now and I still haven't gotten GPU pass through to work," he said, staring at his broken system.
The AI knew what needed to happen. It had clearly ingested documentation on VFIO, QEMU, kernel parameters, and bootloader configuration. But knowing the steps and successfully executing them in the correct order with the right parameters for this specific hardware configuration are different problems. The AI couldn't adapt when its standard playbook hit edge cases.
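Part of why passthrough resists a standard playbook is hardware-specific IOMMU grouping: a GPU can only be handed to a VM if it sits in a cleanly separable group, and that depends on the motherboard. A common verification sketch (a standard check, not something shown in the video):

```shell
# List each PCI device with its IOMMU group -- the usual sanity check
# before attempting passthrough.
list_iommu_groups() {
    local root=${1:-/sys}
    shopt -s nullglob                      # print nothing if IOMMU is off
    local dev group
    for dev in "$root"/kernel/iommu_groups/*/devices/*; do
        group=${dev%/devices/*}            # .../iommu_groups/<N>
        echo "Group ${group##*/}: ${dev##*/}"
    done
}

list_iommu_groups    # on IOMMU-enabled hardware, prints one line per device
```

If the output shows the GPU sharing a group with other devices, the standard steps fail in ways no documentation-derived playbook anticipates.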
Text as Interface
Bog made an interesting observation while watching OpenCode download Windows installation files: "You can just let AI control and do things with your computer. Text is now becoming the interface."
This feels directionally correct but premature. Text will become a more significant interface for computer control. The question is whether it will supplement or replace graphical interfaces, and how long that transition takes. Based on this experiment, we're still in the supplementing phase, and the friction remains high.
OpenCode couldn't maintain conversation state across reboots. Bog had to manually save and reload conversation history, which the AI couldn't always retrieve. When something went wrong—which happened frequently—the AI couldn't necessarily diagnose why. It could suggest next steps based on error messages, but pattern matching against documentation isn't the same as understanding system state.
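The manual save-and-reload dance amounts to persisting the message history to disk so it survives a reboot. A hypothetical illustration of the idea (not OpenCode's actual mechanism; the file path is made up):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical session file -- a real agent would use its own state directory.
STATE_FILE = Path(tempfile.gettempdir()) / "agent_session.json"

def save_conversation(messages: list) -> None:
    """Write the message history to disk so it survives a reboot."""
    STATE_FILE.write_text(json.dumps(messages))

def load_conversation() -> list:
    """Reload prior history, or start fresh if none was saved."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return []
```

Even with perfect persistence, though, reloaded text is not reloaded understanding: the saved transcript records what was said, not what state the system was actually left in.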
The technology clearly works for certain classes of problems. Creating folders, installing packages, modifying text files—these are well-defined operations with clear success criteria. Complex multi-step procedures requiring hardware-specific knowledge and adaptation to unexpected errors? Those remain firmly in human territory.
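A "well-defined operation" here means one the agent can verify on its own, for example:

```shell
# An operation with an unambiguous, machine-checkable success criterion
mkdir -p /tmp/agent_demo        # idempotent: safe for an agent to retry
test -d /tmp/agent_demo && echo "created"
```

GPU passthrough has no single check like this; success only reveals itself several reboots later, which is precisely where current agents lose the thread.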
Bog spent five hours on this experiment. An experienced Linux administrator could have configured the keyboard navigation in minutes. The GPU passthrough might have taken an hour, possibly two if hardware proved finicky. The AI didn't save time. It provided entertainment value and a data point about current capabilities.
Which is fine. This is how new technologies develop—through experimentation by people willing to sacrifice their evenings debugging bootloader configurations. But anyone planning to replace their system administrator with an AI agent might want to wait for version 2.0.
—Bob Reynolds, Senior Technology Correspondent
Watch the Original Video
I gave AI access to my entire computer
Bog
16m 58s

About This Source
Bog
Bog is a fast-growing YouTube channel with 507,000 subscribers, covering video editing software, AI agents, and Linux. Established in September 2025, it has quickly become a go-to resource for tech enthusiasts and professionals looking to sharpen their digital productivity and technical skills.
More Like This
Open Source AI Models Just Changed Everything
The AI landscape shifted dramatically in early 2026. Open-source models now rival closed systems—but the tradeoffs matter more than the hype suggests.
Google's Gemma 4: Local AI That Doesn't Need the Cloud
Google's Gemma 4 brings cloud-level AI to your laptop. Free, offline, commercially usable—but is local AI ready to replace the cloud model?
Agent Zero's Plugin System Shows What AI Needs Next
Agent Zero's new plugin architecture lets AI extend itself. The real innovation isn't the plugins—it's what happens when communities build them.
MiniMax M2.7 Goes Open Source: What It Actually Means
MiniMax M2.7 just went open source, but running it requires up to 450GB of storage. Here's what that tells us about the state of AI accessibility.