It's categorically not safe to assume that I've mentioned every important argument in this thread
The panpsychism -> substrate dependence argument (super abridged) goes as follows:
Scan a brain, build a neuron-level simulation.
If this simulation captures input/output behavior, it will make the same noises about consciousness as the original. If the brain was that of a philosopher who wrote papers about consciousness, the simulation will write papers about consciousness.
Since this does *the same computational steps* as the original, it seems rather absurd to assume that its conscious experience is different. If it were, that would imply that you've taken away something *with causal effect* (it's easy to forget, but consciousness actually **does things** according to panpsychism, even though these things also have a material perspective), but miraculously the input/output behavior is unchanged. ????
(I believe some version of this argument is why the target community does not respect panpsychism very much. And as mentioned, I agree with the argument for neuron-level simulations.)
Why does substrate-dependence change this? Because if the brain is substrate-dependent, a neuron-level simulation does not have the same input/output behavior. Simulating neurons assumes that they are these nice abstractable computational units, but they're not. The simulation just wouldn't work.
In practice, this probably means simulations aren't feasible. In theory, you could of course have an atom-level simulation, and that *would* preserve input/output behavior. But this is doing totally different computational steps.
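That last distinction (identical input/output behavior, entirely different computational steps) can be sketched with a toy example. The two functions below are hypothetical stand-ins for "abstract unit" vs. "rebuilt from lower-level steps"; they are not models of neurons or atoms:

```python
# Toy illustration: two procedures with identical input/output behavior
# that go through entirely different intermediate computational steps.
# Hypothetical sketch only, not a claim about real brain simulation.

def multiply_abstract(a: int, b: int) -> int:
    """'Neuron-level' analogue: treat multiplication as one abstract step."""
    return a * b

def multiply_low_level(a: int, b: int) -> int:
    """'Atom-level' analogue: rebuild the same function from many smaller
    steps (repeated addition), passing through many intermediate states."""
    total = 0
    for _ in range(abs(b)):
        total += a
    return total if b >= 0 else -total

# Identical input/output behavior on every tested input...
assert all(multiply_abstract(a, b) == multiply_low_level(a, b)
           for a in range(-5, 6) for b in range(-5, 6))
# ...yet the sequences of intermediate steps are completely different,
# which is the sense in which an atom-level simulation could preserve
# input/output while doing "totally different computational steps".
```

The point of the sketch: matching input/output is a black-box property, while "same computational steps" is a claim about the internal trajectory, and the two can come apart.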