The Shapiro/Wright paper is a high point in the Absolute Generality collection, for a couple of reasons.

- First, they focus on Dummettian considerations. I’ve already urged here that considerations against the possibility of absolutely general quantification based on Skolemite worries, or on worries about “metaphysical realism”, or indeed on worries about “interpretations”, don’t seem compelling. It seems to me that the key interesting issues hereabouts do indeed arise from considerations about indefinite extensibility (pressed by Dummett, but having their roots, as Shapiro and Wright remind us, in remarks of Cantor’s and Russell’s).
- Second, unlike some of the other papers in the collection, this one is written with fairly relentless clarity and explicitness (and though it isn’t free of technicalia, the details are kept on a tight rein).

Shapiro and Wright take up a hint in Russell, and (in Sec. 2) consider the following — at least as a first characterization of the scope of the indefinitely extensible:

If the concept P is indefinitely extensible, then there is a one-to-one function from all the ordinals into the Ps.

The argument is this. Suppose that, for some ordinal α, there is a one-to-one function f from the ordinals smaller than α into the Ps. Then the collection of Ps of the form f(β), where β < α, will be (on one generous but reasonable understanding) a ‘definite’ totality of Ps. But recall that by Dummett's informal characterization, an

indefinitely extensible concept is one such that, if we can form a definite conception of a totality all of whose members fall under the concept, we can, by reference to that totality, characterize a larger totality all of whose members fall under it.

So, since by hypothesis P is indefinitely extensible, there must be a P which isn’t one of the f(β) for β < α. Choose one, and extend the function f by setting f(α) to be that value. This shows that for any ordinal α, if all the ordinals less than α can be injected into the Ps, then the ordinals less than or equal to α can be injected into the Ps. So, by transfinite induction along the ordinals, all the ordinals can be injected into the Ps.
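The shape of the induction can be set out schematically like this (the labelling, and the name Inj, are mine rather than Shapiro and Wright's):

```latex
% Inj(\alpha): there is an injection f_\alpha from \{\beta : \beta < \alpha\}
%              into the Ps.
%
% Base:      \mathrm{Inj}(0) holds trivially, witnessed by the empty function.
% Successor: given f_\alpha, its image \{f_\alpha(\beta) : \beta < \alpha\} is
%            a `definite' totality of Ps; by indefinite extensibility there is
%            a P, say p, outside it, and we set
%            f_{\alpha+1} = f_\alpha \cup \{(\alpha, p)\}.
% Limit:     for limit \lambda, put f_\lambda = \bigcup_{\beta < \lambda} f_\beta
%            (this requires the successor-stage choices to cohere, which is one
%            place where the powerful set-theoretic assumptions do their work).
%
% Transfinite induction then yields \forall\alpha\,\mathrm{Inj}(\alpha): the
% ordinals inject into the Ps.
```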

Very neat. And though the argument does rest on quite powerful set-theoretic assumptions, it indeed seems rather telling. And by a similar argument,

If there is a one-to-one function from all the ordinals into the Ps, the concept P is indefinitely extensible.

So we get, plausibly, a biconditional connection between the concept P’s being indefinitely extensible and there being an injection of ordinals into the Ps — which makes the case of the ordinals the paradigm case of an indefinitely extensible totality.

Now, as Shapiro and Wright emphasize, this connection doesn’t yet give us an elucidatory account of the notion of indefinite extensibility (for why is the concept ordinal itself indefinitely extensible?): but — if we are right so far — at least we’ve got a sharp constraint on an acceptable account. But are we right?

The trouble is that the argued connection makes all genuinely indefinitely extensible totalities big, while some Dummettian examples of indefinitely extensible totalities are small. Take, for example, Dummett’s discussion in his paper on the philosophical significance of Gödel’s theorem. He says that arithmetical truth (truth for first-order arithmetic) is shown by the theorem to be an indefinitely extensible concept. But why? After all, there’s a perfectly good and determinate Tarskian definition of the set of first-order arithmetical truths.

But suppose we think of a ‘definite’ totality — more narrowly than before — as one that can be given as recursively enumerable (which is perhaps a thought that chimes with other Dummettian ideas). Then start with some such ‘definite’ set of arithmetical truths A_{0}, e.g. the theorems of PA. Gödelize to extend the theory to A_{1}, and keep on going. Any particular theory that is still r.e. can be Gödelized. But note that this time there is evidently a limit on how far along the (full, classical) ordinals the process can be continued — for there are only countably many r.e. sets available to be Gödelized, while there are uncountably many ordinals (indeed, uncountably many ‘small’ countable ordinals).
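The counting point behind that last claim can be made explicit as follows (a sketch; the notation $W_e$ for the $e$-th recursively enumerable set is standard, not Shapiro and Wright's):

```latex
% Every r.e. set is W_e for some index e \in \mathbb{N}, so there are at most
% \aleph_0 r.e. sets altogether.
%
% A Gödelization sequence A_0 \subsetneq A_1 \subsetneq \cdots of r.e. sets is
% strictly increasing, so distinct stages are distinct r.e. sets. A sequence
% of length \omega_1 would therefore require
%     \aleph_1 = |\omega_1| \le |\{W_e : e \in \mathbb{N}\}| = \aleph_0,
% which is absurd. So the process cannot run through all the ordinals, nor
% even through all the countable ordinals. (Recursion-theoretic considerations
% about limit stages in fact stop effective versions of the process much
% earlier, but the bare counting point already suffices here.)
```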

So what are we to make of this? Well, one line would be to cleave to the Russellian alignment of the indefinitely extensible with the injectability of the ordinals into the Ps, AND simultaneously agree with Dummett that truth-of-arithmetic is indefinitely extensible, by not accepting the classical ordinals in all their glory. The more you restrict the ordinals you accept, the more indefinitely extensible concepts there will be for you. But what of those who are happy with oodles of ordinals? Then the moral seems to be this. There is a difference between saying that the concept P is such that, given any ‘definite’ totality of Ps, we can always find a P that isn’t in that totality (we can always diagonalize out of any given set of Ps), and saying that the totality is (so to speak) indefinitely indefinitely extensible. And that seems right and important.
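The contrast drawn in that last paragraph can be displayed schematically (the labels and the formalization are my gloss, not the paper's):

```latex
% (Extensible): given any `definite' totality D of Ps, we can diagonalize out,
%     \forall D\,\bigl(\mathrm{Def}(D) \wedge \forall x\,(x \in D \to Px)
%                      \to \exists y\,(Py \wedge y \notin D)\bigr).
%
% (Ord-extensible): the extension process runs along all the ordinals,
%     \exists f\,\bigl(f \colon \mathrm{Ord} \rightarrowtail P\bigr).
%
% With `definite' read as r.e., arithmetical truth satisfies (Extensible) but,
% by the counting argument, not (Ord-extensible). The Russellian
% characterization captures only the second, stronger property.
```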

But how can we develop these ideas of ‘definite’ totalities/indefinite extensibility? The story continues …
