I tried to build a system for translating subvocal speech activity (i.e., “talking” silently to oneself) into text input for a computer. Excited by the outstanding work of MIT’s Arnav Kapur, Eric Wadkins, and others on AlterEgo, I wanted to explore whether placing electrodes inside the mouth might enable a less conspicuous form factor and a higher-fidelity input signal, and whether initializing from a pre-trained speech-to-text model might enable faster and more robust learning of a larger vocabulary. I got my hands on some gear from OpenBCI and a mouthguard, hacked together the rig at right, and found that there was far too much electrical noise inside the mouth for this to work.
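
For anyone curious how one might reach that conclusion, a rough sanity check is to stream a few seconds of data from the board and compare mains-frequency power against power in a nominal EMG band. The sketch below uses BrainFlow (the SDK OpenBCI supports) with a Cyton board; the serial port, the 60 Hz mains assumption, and the 5–50 Hz EMG band are illustrative choices, not a record of my exact setup.

```python
# Rough noise check for an OpenBCI rig: stream a few seconds of data via
# BrainFlow and compare 60 Hz mains power to power in a nominal EMG band.
# Assumptions: Cyton board, serial port path, 60 Hz mains, 5-50 Hz EMG band.
import time

from scipy.signal import welch
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"  # hypothetical port; set to your dongle
board_id = BoardIds.CYTON_BOARD.value

board = BoardShim(board_id, params)
board.prepare_session()
board.start_stream()
time.sleep(5)                         # collect ~5 s of data
data = board.get_board_data()         # 2D array: rows are channels
board.stop_stream()
board.release_session()

fs = BoardShim.get_sampling_rate(board_id)  # 250 Hz for the Cyton
for ch in BoardShim.get_exg_channels(board_id):
    freqs, psd = welch(data[ch], fs=fs, nperseg=fs)
    mains = psd[(freqs >= 58) & (freqs <= 62)].mean()  # 60 Hz line noise
    emg = psd[(freqs >= 5) & (freqs <= 50)].mean()     # rough EMG band
    print(f"channel {ch}: mains/EMG power ratio = {mains / emg:.1f}")
```

A ratio well above 1 on every channel means line noise is swamping any plausible muscle signal, which is roughly what the in-mouth electrodes showed.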