The question is whether progress will hit a wall anytime soon. And the relevant sub-question is probably: is there something about going from whatever the current best language model is to human-level reasoning that requires consciousness? If yes, that's a strong reason to believe these people are wrong, since none of them takes into account that consciousness might matter.
In general, if a task happens consciously, it is presumably done via consciousness. Vision is the obvious example: we experience it consciously, so don't expect AI to catch up to humans there anytime soon. But reasoning is tricky, because much of reasoning happens *unconsciously*.
However, the step where we pick out the good ideas and build on them does happen consciously. When I think about a hard problem, I wait for my brain to generate ideas, and most of them are terrible; the conscious part is filtering them and keeping the good ones. So plausibly there *is* a step in reasoning that requires consciousness.
This all fits very well with what GPT-3 and other models are doing. They generate the kind of sentences our brains spit out unconsciously, but they entirely lack the supervisory step. And that supervisory step could plausibly be really hard to replicate.
This monologue was surprisingly effective at calming me down.