Microsoft has Copilot+ PCs loaded with AI, and rumor has it that Apple is all in on AI, too. But if you don't want AI in everything you do, there is another option: Linux.
You are going to allow an LLM to run commands on your system?
You could have a command that recommends commands and then lets you pick one from a drop-down list.
Alternatively, if the dataset is verified, you wouldn’t need to worry about it running dangerous commands, since it doesn’t know any. Or you could have a list of verified commands that run automatically, while any command not on that list requires confirmation.
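As a rough sketch of that allowlist-plus-confirmation flow (assuming a made-up suggest-cmd helper that turns the plain-English request into a candidate command; nothing here is a real tool):

    #!/usr/bin/env bash
    # Sketch only: "suggest-cmd" is a hypothetical helper, not a real program.
    allowlist=(ls df du free uptime)           # verified, harmless commands
    candidate="$(suggest-cmd "$*")"            # candidate command for the request
    first_word="${candidate%% *}"              # program the candidate would run
    if printf '%s\n' "${allowlist[@]}" | grep -Fqx "$first_word"; then
        eval "$candidate"                      # on the verified list: run as-is
    else
        printf 'Suggested: %s\nRun it? [y/N] ' "$candidate"
        read -r answer
        [ "$answer" = y ] && eval "$candidate" # anything else needs confirmation
    fi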
But this misses the point: most of the time I know exactly what command I want to run, so adding an LLM is quite useless. The reason so much of Linux still relies on commands is that for a lot of people (myself included) commands are quick and efficient.
Still dangerous. One character (even a space) might make a huge difference. You wouldn’t want a hallucinating probability matrix to barf out a command and then run it while only half understanding what it does. By building the command yourself, you get a better understanding of it.
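For what it’s worth, the classic example of how much one space matters (my example, not from the thread):

    rm -rf /tmp/build*     # removes the entries in /tmp whose names start with "build"
    rm -rf /tmp/build *    # the stray space also removes everything in the current directory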
100% agreed here.
Maybe.
Like, if I could type “extract the audio of this video, re-encode it as a medium-quality MP3, and break the audio up into 30 consecutive tracks” in a shell, and the next line was populated with the appropriate ffmpeg command but not yet executed, I could quickly look the command over, see that nothing looks fishy, and go ahead and run it.
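Roughly what I’d expect such a suggestion to look like (my own guess at a plausible command, assuming the file is input.mp4 and that 30 tracks works out to one-minute segments of a 30-minute video; a real tool would have to compute the segment length from the actual duration):

    # -vn drops the video, -qscale:a 5 is a "medium quality" VBR setting for libmp3lame,
    # and the segment muxer cuts the audio into consecutive 60-second tracks.
    ffmpeg -i input.mp4 -vn -codec:a libmp3lame -qscale:a 5 \
        -f segment -segment_time 60 track_%02d.mp3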
And it will be optimized for nothing looking fishy, right.
It’s no different from what the internet has been doing for us for decades. People tell us commands to run, we use our best judgement, maybe check a couple of things, and then run the commands. If the internet suggests a command or an LLM suggests a command, what’s the difference?