Compatibilists respond that this can't be helped even if determinism is false. If our cognitive functioning isn't caused, then it's uncaused ("random"), and that's no better. Our initial state of being may not be determined by prior events, but that doesn't mean it's determined "by us," as ultimate responsibility would require. Rather, it's not determined by anything or anyone at all. (Some compatibilists have gone further and claimed that freedom requires determinism. That isn't quite right: in the right conditions, indeterminism is compatible with the same kind of non-ultimate responsibility that determinism allows.)
A person cannot be ultimately responsible for their initial state of being. We have control over some things, but our exercise of this capacity must be underpinned by sub-personal mechanisms that govern how we exercise control. We cannot always control the manner in which we control, or choose the bases on which we choose, on pain of infinite regress. As I once put it:
Suppose you got to choose your own personality. On what basis could you make such a choice? You must base it on some prior preferences that you have. But did you ever get to choose those preferences? If so, on what basis was that choice made? You must eventually reach some foundational standards of evaluation (preferences) that you never chose to have. So "pure" freedom is impossible.
This is a strong conclusion. I'm claiming that the kind of "ultimate responsibility" free-will libertarians are angling for is incoherent. Not even God could have it. And you know something's gone terribly wrong when you're hoping for powers that even an omnipotent being would lack! If ultimate responsibility is impossible in this way, it can't really be required for the kind of ordinary moral responsibility humans aspire to. Perhaps we can't know for sure that we really are morally responsible beings, but here's one thing we surely do know: there are at least some possible worlds containing responsible beings. So whatever our criteria for attributing free will and responsibility to agents, they had better at least be logically possible to satisfy.
Am I right that this rules out the libertarian criterion of 'pure self-creation'?