In a sense, it is not reasonable to expect finite and well-behaved confidence intervals for parameters. The argument goes as follows:

When testing parameters in classical settings such as the *t*-test, the assumption of normality is crucial, or at least close to it. One can construct acceptable approximate confidence intervals using the Berry–Esseen theorem, but it is at least sometimes possible to show that no well-behaved approximate confidence interval exists when we cannot reliably bound the third moment.
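For reference, the Berry–Esseen bound alluded to above can be stated as follows, for i.i.d. $X_1, \dots, X_n$ with mean $\mu$, variance $\sigma^2 > 0$, and finite third absolute moment:

```latex
\sup_{x \in \mathbb{R}} \left| P\!\left( \frac{S_n - n\mu}{\sigma\sqrt{n}} \le x \right) - \Phi(x) \right|
\;\le\; \frac{C\,\rho}{\sigma^{3}\sqrt{n}},
\qquad \rho = E\,|X_1 - \mu|^{3},
```

where $S_n = X_1 + \dots + X_n$, $\Phi$ is the standard normal CDF, and $C$ is an absolute constant (known to be below $1/2$). The bound degrades as $\rho/\sigma^3$ grows, which is exactly why an unbounded third moment can ruin the approximate coverage guarantee.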

The setting of most confidence intervals is some normal approximation of the sampling distribution of the estimator $\hat{\theta}$, where the implicit hypotheses are that $\theta$ equals some $\theta_0$ and that normality holds under both the null and the alternative hypothesis.
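Concretely, the standard construction implicit here is the Wald-type interval. A sketch of the usual setup, writing $\hat{\theta}$ for the estimator and $\widehat{\mathrm{se}}(\hat{\theta})$ for its estimated standard error:

```latex
\frac{\hat{\theta} - \theta_0}{\widehat{\mathrm{se}}(\hat{\theta})} \xrightarrow{\;d\;} N(0,1)
\quad \Longrightarrow \quad
\hat{\theta} \pm z_{\alpha/2}\,\widehat{\mathrm{se}}(\hat{\theta})
\ \text{ has asymptotic coverage } 1 - \alpha .
```

Note that the implication holds pointwise in $\theta_0$; whether the underlying convergence is uniform over the parameter space is a separate question, and that gap is where the trouble starts.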

Here the problem comes knocking on the door. Not only do we not know how well normality is approximated for *one particular* $\theta$ in terms of the usual Prokhorov metric, but we also do not know whether the moments converge at all, and we have no information about the uniformity of the convergence. Uniform convergence of moments is probably needed to formulate realistic sufficient conditions for asymptotic confidence intervals to work.
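To see why convergence in distribution says nothing about convergence of moments, here is a minimal constructed example (not from the argument above, just a standard illustration): a sequence $Y_n$ taking the value $0$ with probability $1 - 1/n$ and $n^2$ with probability $1/n$ converges in distribution to the point mass at $0$, yet $E[Y_n] = n \to \infty$.

```python
# Illustration: convergence in distribution without convergence of moments.
# Y_n = 0 with probability 1 - 1/n, and n^2 with probability 1/n.
# The escaping mass P(Y_n != 0) = 1/n vanishes, so Y_n -> 0 in distribution,
# but the mean E[Y_n] = n^2 * (1/n) = n diverges.

def prob_escape(n: int) -> float:
    """P(Y_n != 0); bounds the Prokhorov distance to the limit (point mass at 0)."""
    return 1.0 / n

def mean_Yn(n: int) -> float:
    """E[Y_n] = n^2 * (1/n) = n, computed exactly via integer division."""
    return (n ** 2) / n

for n in (10, 100, 1000):
    print(f"n={n:5d}  P(Y_n != 0)={prob_escape(n):.4f}  E[Y_n]={mean_Yn(n):.1f}")
```

The distributional discrepancy shrinks like $1/n$ while the first moment grows like $n$, so no amount of distributional closeness controls the moments without extra tail conditions, such as uniform integrability.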

This is probably worth investigating further. I think the main question that remains to be formulated is something like this: do we *actually* have an implicit set of hypotheses as outlined above?