Given
$$ U_1, U_2, \dots, U_{12} \overset{iid}{\sim} \mathbb{U}[-\tfrac{1}{2}, \tfrac{1}{2}], $$ set $$ Z = \sum_{i=1}^{12}{U_i}. $$
Then, $$ \mathbb{E}[Z] = \mathbb{E}\Big[\sum_{i=1}^{12}{U_i}\Big] = \sum_{i=1}^{12}{\mathbb{E}[U_i]} = \sum_{i=1}^{12}{0} = 0, $$ and $$ \mathbb{V}[Z] = \mathbb{V}\Big[\sum_{i=1}^{12}{U_i}\Big] = \sum_{i=1}^{12}{\mathbb{V}[U_i]} = \sum_{i=1}^{12}{\tfrac{1}{12}} = 1, $$ where $\mathbb{V}[U_i] = \tfrac{(b - a)^2}{12} = \tfrac{1}{12}$ for $\mathbb{U}[a, b]$, and $\mathbb{V}\big[\sum_{i=1}^{12}{U_i}\big] = \sum_{i=1}^{12}{\mathbb{V}[U_i]}$ because the $U_i$ are independent.
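A minimal R sketch of this construction (the seed and variable names are my own):

```r
set.seed(42)  # arbitrary seed, just for reproducibility
n <- 1e4
# Each row holds 12 iid U[-1/2, 1/2] draws; the row sum is one draw of Z
z_clt <- rowSums(matrix(runif(12 * n, min = -0.5, max = 0.5), ncol = 12))
```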
We sample $n = 10^4$ observations in three different ways: appealing to the CLT construction above, using the Box-Muller transform, and finally calling R's base rnorm.
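The other two samplers might look like this (the CLT sample z_clt was drawn above; this assumes the standard Box-Muller transform):

```r
# Box-Muller: a pair of iid U(0, 1) draws maps to a pair of iid N(0, 1) draws
u1 <- runif(n / 2)
u2 <- runif(n / 2)
r  <- sqrt(-2 * log(u1))
z_bm <- c(r * cos(2 * pi * u2), r * sin(2 * pi * u2))

# R's base sampler
z_rnorm <- rnorm(n)
```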
The central values are well aligned, meaning the algorithms produce similarly distributed samples near the center, but the tails of the Box-Muller observations differ from those of the CLT and rnorm samples.
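One quick numeric check of this claim, continuing with the samples above (the probability levels are my choice, with extreme ones to probe the tails):

```r
# Selected quantiles of each sample, side by side
probs <- c(0.001, 0.01, 0.25, 0.5, 0.75, 0.99, 0.999)
sapply(list(CLT = z_clt, BoxMuller = z_bm, rnorm = z_rnorm),
       quantile, probs = probs)
```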
From the KDEs, we can see two things. First, the CLT approximation is wider on $[-2, -\tfrac{1}{2}]$ and $[\tfrac{1}{2}, 2]$, yet it doesn't offer a good approximation at the peak. Second, rnorm has fatter tails than the other two algorithms.
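For reference, a KDE overlay along these lines can be produced with base graphics (the styling is my own):

```r
# Overlaid kernel density estimates of the three samples
plot(density(z_clt), main = "KDE of the three samplers", xlab = "z")
lines(density(z_bm), col = "red")
lines(density(z_rnorm), col = "blue")
legend("topright", legend = c("CLT", "Box-Muller", "rnorm"),
       col = c("black", "red", "blue"), lty = 1)
```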
Still, I think it's worth coming up with additional ways to compare the samples, especially to characterise the tails. Any ideas on how to do that?