The setting of most confidence intervals is a normal approximation to the sampling distribution of an estimator $\hat\theta$ of $\theta$, where the implicit hypothesis is that $\theta$ equals some $\theta_0$ and that normality holds under both the null and the alternative hypothesis.
Here the problem comes knocking on the door. Not only do we not know how well normality is approximated for one particular $\theta$ in terms of the usual Prokhorov metric, but we also don't know whether the moments converge at all, and we have no information about the uniformity of the convergence in $\theta$. Uniform convergence of moments is probably what is needed to formulate realistic sufficient conditions under which asymptotic confidence intervals actually work.
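To make the non-uniformity concrete, here is a minimal Monte Carlo sketch (not from the original; the Wald interval for a binomial proportion is a standard illustration of this failure mode). For each fixed $p$ the interval is asymptotically valid, yet at any fixed $n$ the coverage collapses as $p$ approaches the boundary of the parameter space, which is exactly the kind of uniformity breakdown described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def wald_coverage(p, n, reps=100_000):
    """Monte Carlo coverage of the nominal-95% Wald interval
    p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    z = 1.959963984540054  # 97.5% quantile of the standard normal
    x = rng.binomial(n, p, size=reps)
    p_hat = x / n
    half = z * np.sqrt(p_hat * (1 - p_hat) / n)
    # Fraction of intervals that actually contain the true p
    return np.mean((p_hat - half <= p) & (p <= p_hat + half))

n = 100
for p in [0.5, 0.1, 0.02, 0.005]:
    print(f"p = {p:>6}: coverage ~ {wald_coverage(p, n):.3f}")
```

At $n = 100$ the coverage is close to 95% at $p = 0.5$ but degrades badly near the boundary (at $p = 0.005$ most samples yield $\hat p = 0$ and a degenerate interval), so no fixed $n$ gives the nominal level uniformly over $p$.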