Let Z_2 = \frac{1}{n}\sum_{j \in [k]} \unit(N_j = 1) be the normalized count of symbols observed exactly once among n i.i.d. samples, where N_j = \sum_{i=1}^n \unit(X_i = j) counts the number of occurrences of symbol j, for all j \in [k]. The following algorithm is an optimal uniformity tester as long as n \leq k.
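To see what Z_2 measures, it helps to compute its expectation under an arbitrary distribution p (a short calculation of my own, by linearity of expectation and N_j \sim \mathrm{Bin}(n, p_j)):
\begin{equation*}
\mathbb{E}[Z_2] = \frac{1}{n}\sum_{j \in [k]} \Pr[N_j = 1] = \frac{1}{n}\sum_{j \in [k]} n\, p_j (1 - p_j)^{n-1} = \sum_{j \in [k]} p_j (1 - p_j)^{n-1}.
\end{equation*}
In particular, when p = U_k every term equals \frac{1}{k}(1 - \frac{1}{k})^{n-1}, so \mathbb{E}[Z_2] = (1 - \frac{1}{k})^{n-1}.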
Setting \tau = (1 - \frac{1}{k})^{n-1} - \frac{n\eps^2}{8k}, define
\begin{equation*}
\mathcal{T}(X_1, \dots, X_n)=\begin{cases} 0, & Z_2 \geq \tau\\ 1, & Z_2 < \tau \end{cases}
\end{equation*}
That is, the tester outputs 0 ("uniform") when the fraction of unique symbols is large, and 1 ("far from uniform") when collisions have depressed it below \tau.
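As a concrete illustration, here is a minimal Python sketch of this tester (the function name and argument layout are my own; note the accept direction: under uniformity Z_2 concentrates above \tau, so the tester must output 0 when Z_2 \geq \tau for the guarantees below to hold):

```python
import numpy as np

def unique_elements_tester(samples, k, eps):
    """Sketch of the unique-elements uniformity tester described above.

    samples: array of n i.i.d. draws from {0, ..., k-1}
    Returns 0 ("uniform") if Z_2 >= tau, else 1 ("far from uniform").
    """
    n = len(samples)
    counts = np.bincount(samples, minlength=k)   # N_j for each symbol j in [k]
    z2 = np.count_nonzero(counts == 1) / n       # Z_2: unique symbols, normalized by n
    tau = (1 - 1 / k) ** (n - 1) - n * eps**2 / (8 * k)
    return 0 if z2 >= tau else 1
```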
Let \mathcal{T} denote the algorithm described above. As long as n = \Omega\big(\frac{\sqrt{k}}{\eps^2}\big), i.e., n \geq C\frac{\sqrt{k}}{\eps^2} for a sufficiently large absolute constant C,
\begin{equation} \Pr\Big[ \mathcal{T}(X_1, \dots, X_n) = 0 \;\Big|\; d_{TV}(p, U_k) = 0 \Big] \geq \frac{2}{3} \end{equation}
and
\begin{equation} \Pr\Big[ \mathcal{T}(X_1, \dots, X_n) = 1 \;\Big|\; d_{TV}(p, U_k) \geq \eps \Big] \geq \frac{2}{3} \end{equation}
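The two guarantees are easy to check empirically. Below is a self-contained Monte Carlo sanity check (a sketch of my own; the parameter values are illustrative, not from the source). For the far case it uses a distribution that is uniform on k/5 symbols, which has total variation distance exactly 0.8 from U_k:

```python
import numpy as np

def run_tester(samples, k, eps):
    # Z_2: normalized count of symbols appearing exactly once
    n = len(samples)
    counts = np.bincount(samples, minlength=k)
    z2 = np.count_nonzero(counts == 1) / n
    tau = (1 - 1 / k) ** (n - 1) - n * eps**2 / (8 * k)
    return 0 if z2 >= tau else 1

rng = np.random.default_rng(1)
k, n, eps, trials = 10000, 2000, 0.8, 200

# Case 1: p exactly uniform on [k] -> tester should output 0 most of the time.
accept = sum(run_tester(rng.integers(0, k, n), k, eps) == 0 for _ in range(trials))

# Case 2: p uniform on k/5 symbols, so d_TV(p, U_k) = 0.8 -> should output 1.
reject = sum(run_tester(rng.integers(0, k // 5, n), k, eps) == 1 for _ in range(trials))

print(accept / trials, reject / trials)  # both fractions should be well above 2/3
```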
Proof: I was able to re-derive it here.
The sample-complexity upper bound was originally proved by (Paninski, 2008) and later explained in great detail in Chapter 2.2 of the lecture notes of (Canonne, 2022). The matching lower bound, showing that this upper bound is tight, can also be found in (Paninski, 2008) (though I have not re-derived it yet).