I find it helpful to think about the challenge of skepticism from the perspective of Bayesian confirmation. We begin with (i) a space of 'possible worlds' or hypothetical scenarios which represent all the different ways the world could be, and (ii) your 'priors' or initial distribution of credence over these scenarios -- i.e. which ones you believe to be more or less likely. Then, when you acquire new evidence E, you update your beliefs by ruling out those scenarios that are inconsistent with E and redistributing the eliminated scenarios' credence over the remaining options, so that your new degree of belief in each hypothesis X matches your old conditional probability P(X|E).
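The update rule described above can be sketched in a few lines of code. This is just an illustration: the scenario names and prior values are made up for the example, and the real space of possible worlds is of course vastly larger than three entries in a dictionary.

```python
def conditionalize(priors, consistent_with_e):
    """Rule out worlds inconsistent with evidence E, then renormalize
    so the surviving credences sum to 1 -- i.e. new P(X) = old P(X|E)."""
    survivors = {w: p for w, p in priors.items() if consistent_with_e(w)}
    total = sum(survivors.values())  # this is the old P(E)
    return {w: p / total for w, p in survivors.items()}

# Illustrative (made-up) priors over three toy scenarios:
priors = {"ordinary world": 0.90, "BIV": 0.05, "sun exploded in 1999": 0.05}

# Suppose our evidence rules out only the exploded-sun scenario:
evidence = lambda w: w != "sun exploded in 1999"

posterior = conditionalize(priors, evidence)
# The eliminated scenario's credence is redistributed proportionally
# over the survivors, preserving their relative odds.
```

Note that conditionalization only ever *reallocates* credence among the scenarios the evidence leaves standing; it cannot tell you what the initial distribution should have been.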
Now, the essence of the skeptic's challenge is that various 'skeptical scenarios' -- that I'm a brain in a vat (BIV), or that the sun will not rise tomorrow -- would seem as well confirmed by our empirical evidence as common-sense hypotheses are. Compare three rival hypotheses:
(H1) The Sun exploded in 1999
(H2) Actual history + Sun doesn't rise tomorrow
(H3) Actual history + Sun rises tomorrow
Actual history lets us rule out H1, but as a matter of form it is neutral between H2 and H3. So if we think H3 is more likely, this must be reflected in our prior probabilities. We must hold, a priori, that some scenarios are objectively more probable or likely to eventuate than others -- e.g. that scenarios which start with our actual history and are followed by another sunrise are a priori more probable (collectively) than the other scenarios which start with our actual history but are followed by something different.
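The point can be made numerically. In the toy calculation below the prior values are entirely hypothetical; what matters is that conditionalizing on actual history eliminates H1 but leaves the H3:H2 odds exactly as the priors set them, so any confidence in tomorrow's sunrise must already be built into those priors.

```python
# Hypothetical priors over the three rival hypotheses from the text:
priors = {"H1": 0.10, "H2": 0.09, "H3": 0.81}

# Actual history rules out H1, but H2 and H3 both entail it:
consistent = {"H2", "H3"}

p_e = sum(p for h, p in priors.items() if h in consistent)  # old P(E)
posterior = {h: priors[h] / p_e for h in consistent}

# The posterior odds H3 : H2 equal the prior odds H3 : H2 --
# the evidence itself has done nothing to favor the next sunrise.
```

A counter-inductivist who swapped the prior values for H2 and H3 would conditionalize on exactly the same evidence and end up confident that the sun won't rise; the formal machinery is neutral between the two of us.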
This is what the anti-skeptic is committed to. There's not necessarily anything wrong with affirming such claims, but it would be nice to have an explanation of why some prior probability rather than another is the rational one to have. (Compare the "counter-inductivist" who assigns the opposite prior probabilities, and thus takes Actual History to count as strong evidence that the Sun won't rise tomorrow! Or the solipsist who thinks the BIV scenario is more likely than the external-world scenario we believe in. We think they're being unreasonable, but on what grounds?)
My hope is that general rational principles -- of coherence, systematic unity, simplicity, and the like -- can provide the basis we need for privileging some priors over others. Otherwise, we may be committed to thinking that it's just a brute, inexplicable fact which priors a rational agent ought to have, which seems a pretty wild claim -- and an awfully fortunate coincidence if the One True Prior happens to be ours! Still, if it comes to that, we may think there are worse things than dogmatism.