It sounds like a claim along the lines of: you can't tell "I Love Lucy" is on because you're listening to the audio rather than looking at the screen.
reply
fMRI is a step above dowsing rods. It's like plugging a multimeter into an outlet and guessing what type and brand of appliances you're running in your house.
reply
Have you heard of time-domain reflectometry? A $20,000 multimeter could have the "impossible" feature you describe all but built in.
reply
I'd say you're right about any given individual channel: the activation of a single voxel doesn't tell us much about all the fancy computation happening in that ~1 mm^3 of tissue.

But the pattern of activity of thousands of voxels across cortex does contain reliable information! And a decent amount of it too, at least in sensory cortices.
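To make that concrete, here's a toy sketch in plain numpy (entirely synthetic; the voxel counts, pattern strengths, and noise levels are made up, not real fMRI data): each "stimulus" evokes a weak spatial pattern spread across many voxels, and a simple nearest-centroid decoder on the full pattern succeeds even though any single voxel is drowned in noise.

```python
import numpy as np

# Toy multi-voxel pattern decoding sketch. Synthetic data, illustrative only.
rng = np.random.default_rng(0)
n_voxels, n_trials = 500, 100  # hypothetical sizes

# Each "stimulus" evokes a weak, fixed spatial pattern across voxels.
pattern_a = rng.normal(0, 0.2, n_voxels)
pattern_b = rng.normal(0, 0.2, n_voxels)

def simulate(pattern, n):
    # Heavy per-trial noise: each voxel alone is barely informative.
    return pattern + rng.normal(0, 1.0, (n, n_voxels))

train_a, train_b = simulate(pattern_a, n_trials), simulate(pattern_b, n_trials)
test_a, test_b = simulate(pattern_a, n_trials), simulate(pattern_b, n_trials)

# Nearest-centroid decoder on the full voxel pattern.
mu_a, mu_b = train_a.mean(0), train_b.mean(0)

def decode(trials):
    d_a = np.linalg.norm(trials - mu_a, axis=1)
    d_b = np.linalg.norm(trials - mu_b, axis=1)
    return d_a < d_b  # True -> classified as stimulus A

acc = 0.5 * (decode(test_a).mean() + (~decode(test_b)).mean())

# Same decoder restricted to a single voxel: near chance.
def decode_single(x, v=0):
    return np.abs(x - mu_a[v]) < np.abs(x - mu_b[v])

acc_single = 0.5 * (decode_single(test_a[:, 0]).mean()
                    + (~decode_single(test_b[:, 0])).mean())

print(f"full pattern: {acc:.2f}, single voxel: {acc_single:.2f}")
```

Real MVPA uses cross-validated classifiers and real hemodynamics are far messier, but the intuition is the same: weak per-voxel signals add up across thousands of voxels.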

reply
Try it with a crude task, e.g. finger tapping. It's pretty convincing.
reply
I was at a talk maybe 15 years ago in which the speaker gave pretty convincing evidence that given a time series of voltages you could learn a lot of things about what kind of appliances you've got running.
reply
There are a lot of devices that have reasonably distinct patterns to their power consumption. Motors (especially well pumps, but also large central-air fans and some others) are going to look very different from a microwave or vacuum cleaner or refrigerator, especially if you have time of day on your readings.

Constant, lower-draw devices (chargers, lights, speakers, and such) are going to be harder to distinguish, though.
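For the curious, here's a toy sketch of that idea, known as nonintrusive load monitoring (the appliance wattages, timings, and threshold below are made up for illustration): detect step changes in a single aggregate power trace, then match each step against a library of known signatures.

```python
import numpy as np

# Hedged sketch of nonintrusive load monitoring: identify appliances from one
# aggregate power reading by matching on/off step sizes. Illustrative numbers.
signatures = {"fridge": 150.0, "microwave": 1100.0, "well_pump": 750.0}

rng = np.random.default_rng(1)
t = np.arange(600)                         # one sample per second, 10 minutes
power = np.full(t.shape, 40.0)             # base load (chargers, lights, etc.)
power[60:360] += signatures["fridge"]      # fridge compressor cycle
power[120:180] += signatures["microwave"]  # microwave run
power += rng.normal(0, 5.0, t.shape)       # meter noise

# Edge detection: large jumps in the aggregate signal mark on/off events.
diffs = np.diff(power)
events = [(i, d) for i, d in enumerate(diffs) if abs(d) > 80]

def nearest_appliance(step):
    # Match the step magnitude to the closest known signature.
    return min(signatures, key=lambda k: abs(signatures[k] - abs(step)))

detected = {nearest_appliance(d) for _, d in events}
for i, d in events:
    state = "on" if d > 0 else "off"
    print(f"t={i+1}s: {nearest_appliance(d)} switched {state} (~{abs(d):.0f} W)")
```

Steady low-draw devices would all produce small, similar steps, which is exactly why they're harder to tell apart.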

reply
Could you share your thoughts about neuralink? Is there enough signal for this to really work?
reply
Caveat: brain-computer interfaces are not quite my field, but I think the consensus is (judging from some conversations with folks who know more):

Neuralink is doing interesting BCI research, with decent hardware, but it's not really a step-change above and beyond the rest of the field.

There's definitely a lot of promise in using BCIs for rehabilitation of patients with brain injuries, but their input-output capabilities are still incredibly crude: for example, we can't reliably "write" to the brain to make people perceive anything beyond very simple stimuli (e.g. a phantom touch sensation, or a visual phosphene).

This is understandable: the brain has a bajillion neurons and we only have ~1,000 electrodes that aren't particularly precise in how/where they zap the brain---and even if they were, we don't really know well enough how the brain works to "control" perception finely.

Other problems for BCIs include (i) "representational drift", where the brain's code changes over time, so you need to keep fine-tuning your interface in some sort of closed-loop fashion, and (ii) damage/scarring to neural tissue.
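As a toy illustration of the drift problem (purely synthetic; the drift model, unit counts, and noise levels are made up): a decoder calibrated once degrades toward chance as the encoding slowly rotates, while one recalibrated each session holds up.

```python
import numpy as np

# Toy representational-drift sketch (assumed drift model, made-up numbers).
rng = np.random.default_rng(2)
n_units, noise = 30, 2.0

def session_data(encoding, n=200):
    intents = rng.integers(0, 2, n)  # two intended actions
    activity = encoding[intents] + rng.normal(0, noise, (n, n_units))
    return intents, activity

def fit_centroids(intents, activity):
    return np.stack([activity[intents == k].mean(0) for k in (0, 1)])

def accuracy(centroids, intents, activity):
    d = np.linalg.norm(activity[:, None, :] - centroids[None], axis=2)
    return (d.argmin(1) == intents).mean()

encoding = rng.normal(0, 1.0, (2, n_units))
frozen, hist = None, []
for session in range(5):
    # Drift: the neural code slowly mixes in fresh random structure.
    encoding = 0.5 * encoding + np.sqrt(0.75) * rng.normal(0, 1.0, (2, n_units))
    intents, activity = session_data(encoding)
    if frozen is None:
        frozen = fit_centroids(intents, activity)  # calibrated once, then frozen
    recal = fit_centroids(intents, activity)       # recalibrated every session
    hist.append((accuracy(frozen, intents, activity),
                 accuracy(recal, intents, activity)))
    print(f"session {session}: frozen={hist[-1][0]:.2f} recal={hist[-1][1]:.2f}")
```

Real closed-loop systems are fancier (and the brain's drift is not random mixing), but this is the basic reason the interface needs continual fine-tuning.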

> Is there enough signal for this to really work?

I'm not quite sure what Neuralink's marketing claims are, so I'm not sure what you mean by "this" here. But intracranial electrodes do have a surprising amount of signal, especially relative to non-invasive methods (I'm currently collecting some iEEG data myself!)

I really want the sci-fi future where we have brain-computer interfaces that augment our cognition and perception, but we're nowhere close---though we're getting better.

reply