Yeah, the standard idea behind the Margin of Error is that it accounts for the fact that the poll is based on a sample of voters.

In theory, if every single person were polled and responded, then the margin of error would be 0. We would know the EXACT proportion who said Yes (for example).

Whenever a smaller sample is used to represent the entire population, there is a chance that things go wrong. Sometimes that is bias the pollster brings (even unintentionally): what if the pollster feels more comfortable talking to smiling middle-aged couples than to tattooed hipsters, and what if the tattooed hipsters leaned No more often while the smilers leaned Yes?

But that is the sort of bias that good pollsters try very hard to eliminate. The bigger issue is just white-noise randomness. If there were 20 voters (10 Yes, 10 No), and I asked 6 of them their opinion, it turns out that only 37% of the time will I hear 3 Yes and 3 No. In general, the bigger the sample, the more likely it is that my result either A) is exactly the true proportion or B) is close to the true proportion.
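That 37% figure comes straight from the hypergeometric distribution (sampling without replacement). A quick sketch to check it:

```python
from math import comb

# Probability of hearing exactly 3 Yes when sampling 6 voters
# (without replacement) from a group of 20: 10 Yes and 10 No.
# Hypergeometric: ways to pick 3 of the 10 Yes, times ways to
# pick 3 of the 10 No, over all ways to pick 6 of 20.
p_exact_split = comb(10, 3) * comb(10, 3) / comb(20, 6)
print(round(p_exact_split, 4))  # 0.3715
```

So even with a perfectly even population, a small sample gives you the "right" answer barely more than a third of the time.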

Specifically, the Margin of Error (MoE) is a way to represent a Confidence Interval (CI), running from Result − MoE up to Result + MoE.

Confidence Intervals come with a confidence level, expressed as a percentage: for example, "here is a 95% confidence interval."

What a 95% confidence interval means is this: if we re-ran the poll many times and built a new CI of the same kind each time, then about 95% of those intervals would contain the actual proportion of Yes in the entire group of voters. (So if the true proportion happened to equal our poll's result of 53.6%, about 95% of the re-run intervals would include 53.6%.)
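That coverage claim can be checked with a quick simulation. The 53.6% true proportion and the poll size of 1,000 below are assumptions for illustration:

```python
import random
from math import sqrt

random.seed(0)
TRUE_P = 0.536   # hypothetical true proportion of Yes voters
N = 1000         # assumed poll size
TRIALS = 2000    # number of re-run polls

hits = 0
for _ in range(TRIALS):
    # Poll N random voters from a population that is TRUE_P Yes.
    yes = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = yes / N
    # 95% margin of error via the normal approximation.
    moe = 1.96 * sqrt(p_hat * (1 - p_hat) / N)
    # Did this poll's CI capture the true proportion?
    if p_hat - moe <= TRUE_P <= p_hat + moe:
        hits += 1

print(hits / TRIALS)  # close to 0.95
```

Roughly 95% of the simulated polls produce an interval that contains the true 53.6%, which is exactly what the confidence level promises.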

Which is a weird definition. But anyway, the main point is that polls are not 100% accurate representations of reality. A smaller MoE means more people were polled (or the result was more extreme, but that usually has a smaller impact overall).