Conversation
---
The two-sided algorithm can fail (very low acceptance probability) when the range is large but far away from the mean. A method is missing that uses the one-sided algorithm and does rejection sampling to get to the two-sided case. (I will implement this.) The problem is how to determine efficiently which of the 4 methods to use.
The paper gives useful probabilities, but they are all quite expensive to evaluate. This would not matter if a lot of samples are drawn, but could matter quite a bit if only a few are used. I think we could also use the inverse-CDF approach implemented by @Caellian with some numerical optimizations. This has the advantage that it does not use rejection sampling and would vectorize well (maybe?)
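For reference, a minimal sketch of that inverse-CDF approach, assuming the statrs crate for the normal CDF and its inverse (statrs is not used by this PR, and the function name is made up):

```rust
use rand::Rng;
use statrs::distribution::{ContinuousCDF, Normal};

fn sample_truncated_inverse_cdf<R: Rng + ?Sized>(
    rng: &mut R,
    mean: f64,
    stddev: f64,
    lower: f64,
    upper: f64,
) -> f64 {
    let normal = Normal::new(mean, stddev).unwrap();
    // Map both bounds into CDF space, draw uniformly between them, and map
    // back through the inverse CDF. No rejection loop is needed.
    let p_lo = normal.cdf(lower);
    let p_hi = normal.cdf(upper);
    // Caveat: in extreme tails p_lo and p_hi can round to the same f64,
    // which is exactly the numerical issue mentioned above.
    let u = rng.random_range(p_lo..p_hi);
    normal.inverse_cdf(u)
}
```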
---
In the paper you're referencing (Robert, Christian P. (1995)) and later in Chopin 2012, a cutoff is computed analytically as 0.47 σ (or 0.477 σ) for finite intervals and 0.5 σ for semi-finite ones; so if a ≥ 0.477 σ, switch to the exponential-tail sampler (Robert/Marsaglia/Chopin). You can compare both bounds in the constructor to figure out where they fall, and then just use whichever algorithm is most efficient for that specific range. Branching is unavoidable: it will always have to be either a vtable lookup or a jump (preferable). Rust is more likely to optimize away a simple jump if the range is constant, but CPU branch prediction should really remove this cost in most cases.

```rust
let a = (start - mean) / std;
let b = (end - mean) / std;
match () {
    // Invalid or NaN bounds: check first so the arms below can assume a < b
    _ if !(a < b) => panic!("Invalid truncation bounds or NaN"),
    // Extremely narrow interval: treat as degenerate
    _ if (b - a).abs() < 1e-6 => 0.5 * (a + b),
    // Narrow interval
    // Inverse CDF works best here, with f64
    // Use log-space for extreme a,b (e.g., > 8 sigma or < -8 sigma)
    _ if (b - a) < 1e-3 => sample_inverse_cdf(a, b),
    // Both tails positive (left cutoff above mean)
    _ if a >= 0.47 => sample_exponential_tail(a, b),
    // Both tails negative (right cutoff below mean)
    // symmetric: flip and reuse upper tail sampler
    _ if b <= -0.47 => -sample_exponential_tail(-b, -a),
    // Straddling zero (typical central case)
    // Standard rejection sampler works efficiently
    _ if a < 0.47 && b > -0.47 => sample_rejection(a, b),
    // Asymmetric truncation (one tail + near-zero cutoff)
    // mixed region: tail on one side, cutoff on the other
    // use exponential or hybrid rejection
    _ if a >= 0.0 => sample_rejection(a, b),
    _ => -sample_rejection(-b, -a),
}
```
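A minimal sketch of how that constructor-time decision could be stored, so sampling is a single jump (all names hypothetical, not from the PR):

```rust
// Resolve the branch once in the constructor and keep the result; the
// sample path then only matches on this enum.
enum Sampler {
    /// Degenerate interval: always return the midpoint.
    Degenerate(f64),
    /// Narrow interval: inverse CDF in f64 (log-space for extreme bounds).
    InverseCdf { a: f64, b: f64 },
    /// Both bounds above the cutoff: exponential-tail proposal.
    UpperTail { a: f64, b: f64 },
    /// Both bounds below the negative cutoff: mirrored upper tail.
    LowerTail { a: f64, b: f64 },
    /// Interval straddling zero: plain rejection from the normal.
    Rejection { a: f64, b: f64 },
}
```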
---
Thanks, this already looks like a good approach :)
---
Only if the range includes [-0.47 σ, 0] or [0, 0.47 σ] should it use rejection sampling; [-0.47 σ, 0.47 σ] is where the majority of the mass is. For [0.45 σ, ∞) use a tail algorithm (Robert (Lemma 2.2)/Marsaglia/Chopin); this is what I meant. Wikipedia says Marsaglia is on average faster than Robert even though it has a higher rejection rate, because it does not require the costly numerical evaluation of the exponential function.

```rust
/// Marsaglia
fn sample_exponential_tail<R: Rng + ?Sized>(rng: &mut R, a: f64, b: f64) -> f64 {
    assert!(a > 0.477 && a < b); // this is here only for example purposes, remove it
    // NOTE: caller reverses a & b if b < 0.0, so the same function is called and
    // only the returned value is negated
    // NOTE: if the range intersects 0, use the current sampler impl with rejection
    loop {
        let u1: f64 = rng.random::<f64>().max(1e-16); // uniform draw, guarded away from 0 for ln
        let x = (a * a - 2.0 * u1.ln()).sqrt();
        if x > b {
            // reject if beyond the upper bound; this check never triggers
            // when b is infinity, so assume the compiler or branch
            // prediction removes the cost
            continue;
        }
        let u2: f64 = rng.random::<f64>(); // and another one
        if u2 < a / x {
            return x;
        }
    }
}
```
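For comparison, a sketch of Robert's (1995, Lemma 2.2) one-sided tail sampler with a translated-exponential proposal; unlike the Marsaglia loop above, its acceptance step evaluates exp(). The function name is illustrative.

```rust
use rand::Rng;

fn sample_tail_robert<R: Rng + ?Sized>(rng: &mut R, a: f64) -> f64 {
    // Optimal rate for the shifted exponential proposal (Robert 1995).
    let alpha = (a + (a * a + 4.0).sqrt()) / 2.0;
    loop {
        // Proposal x ~ a + Exp(alpha), via inverse transform.
        let u: f64 = rng.random::<f64>().max(1e-16);
        let x = a - u.ln() / alpha;
        // Accept with probability exp(-(x - alpha)^2 / 2).
        let rho = (-(x - alpha) * (x - alpha) / 2.0).exp();
        if rng.random::<f64>() <= rho {
            return x;
        }
    }
}
```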
---
I have updated the code; it should now sample in all cases with reasonable performance. If we want to give value-stability guarantees, we need to decide on a strategy for when to use which method. Changing this later will probably break those guarantees.
---
It may take me some time to review this in detail, but I can give some initial comments:
```rust
pub fn new(mean: f64, stddev: f64, lower: f64, upper: f64) -> Result<Self, Error> {
    if !(stddev > 0.0) {
        return Err(Error::InvalidStdDev);
    }
    if !(lower < upper) {
        return Err(Error::InvalidBounds);
    }

    let std_lower = (lower - mean) / stddev;
    let std_upper = (upper - mean) / stddev;
```
The first specific issue that I noticed causing panics when fuzzing: even if lower < upper, std_lower may equal std_upper, making the sampling in NormalTruncatedTwoSided fail.
(I'm not certain how best to resolve this. Perhaps make NormalTruncatedTwoSided::sample sample based on the original lower..upper range?)
The implementation throws away the original finite interval too early and then assumes the standardized interval is always representable as a non-empty f64 range.
For the finite-interval path, I think the safer fix is to keep lower and upper in original coordinates and sample the proposal from lower..upper, not from std_lower..std_upper. The current two-sided method is just uniform-rejection with a normal-shaped acceptance ratio; that can be written directly in x-space:
- proposal: sample $x \sim U(\mathrm{lower}, \mathrm{upper})$
- accept with probability proportional to $\exp\left(-\frac{(x-\mu)^2 - m}{2\sigma^2}\right)$
- where $m$ is the minimum squared distance to $\mu$ over the interval:
  - $m = 0$ if `lower <= mu <= upper`
  - $m = (\mathrm{lower} - \mu)^2$ if `mu < lower`
  - $m = (\mathrm{upper} - \mu)^2$ if `mu > upper`
That is algebraically the same sampler, but it avoids the std_lower == std_upper problems entirely.
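A minimal sketch of that x-space sampler as described above (function name hypothetical, finite bounds assumed):

```rust
use rand::Rng;

fn sample_two_sided_xspace<R: Rng + ?Sized>(
    rng: &mut R,
    mean: f64,
    stddev: f64,
    lower: f64,
    upper: f64,
) -> f64 {
    // m = minimum squared distance from the interval to the mean.
    let m = if mean < lower {
        (lower - mean) * (lower - mean)
    } else if mean > upper {
        (upper - mean) * (upper - mean)
    } else {
        0.0
    };
    let two_var = 2.0 * stddev * stddev;
    loop {
        // Uniform proposal in original coordinates; this stays valid even
        // when standardization would collapse to std_lower == std_upper.
        let x = rng.random_range(lower..upper);
        let d = (x - mean) * (x - mean);
        // Accept with probability exp(-(d - m) / (2 sigma^2)), which is
        // <= 1 because d >= m everywhere on the interval.
        if rng.random::<f64>() < ((m - d) / two_var).exp() {
            return x;
        }
    }
}
```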
I’d also suggest handling this in the constructor, not just in sample: if lower < upper but standardization collapses to std_lower == std_upper, force an original-space finite-interval sampler instead of storing only standardized bounds.
```rust
pub fn new(mean: f64, stddev: f64, lower: f64, upper: f64) -> Result<Self, Error> {
    if !(stddev > 0.0) {
        return Err(Error::InvalidStdDev);
    }
```
It may slightly improve API consistency to have NormalTruncated generalize Normal and allow stddev = 0.0 iff mean is in [lower, upper). The Wikipedia page is also careful to define the real-valued truncated normal distribution in a way that allows this.
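As a sketch of those semantics (function name hypothetical): with stddev = 0.0 all mass sits at the mean, so sampling reduces to a bounds check.

```rust
// Hypothetical helper: the stddev == 0.0 truncated normal is a point mass
// at `mean`, valid only when that point lies inside [lower, upper).
fn sample_degenerate(mean: f64, lower: f64, upper: f64) -> Option<f64> {
    if lower <= mean && mean < upper {
        Some(mean) // the single admissible value
    } else {
        None // no mass inside the truncation range; constructor should reject
    }
}
```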
```rust
(0.0, 1.0, 0.1, 0.9),
(0.0, 1.0, 0.35, 1.5),
```
The rejection sampling loop for NormalTruncatedTwoSided is effectively 3 separate cases in one (left of the mean, including the mean, right of the mean). Could you add a few more test parameters so that all three are covered?
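Illustrative (mean, stddev, lower, upper) values that would cover all three branches (suggestions, not from the PR):

```rust
let parameters = [
    (0.0, 1.0, -0.9, -0.1), // interval entirely left of the mean
    (0.0, 1.0, -0.5, 0.5),  // interval straddling the mean
    (0.0, 1.0, 0.1, 0.9),   // interval entirely right of the mean
];
```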
(I think this is the last of my initial comment round.)
Caellian left a comment:
Some small remarks before this gets merged.
Also, I'd really prefer the Truncated<D> syntax. I get that this PR is introducing only one distribution, but if NormalTruncated is added, it will be a breaking change to remove it later. Truncated<D> can be specialized for only the normal distribution for now and extended incrementally, but it also leaves the path open for composable types like Sum<Truncated<D1>, D2>.
It shouldn't have a catch-all case for sampling unhandled distributions, because that would require specialization to work. But the library doesn't expose that many. A trait like Truncate that allows per-distribution strategy enums:

```rust
pub struct Truncated<D: Truncate> {
    inner: D,
    lower: f64,
    upper: f64,
    strategy: D::Strategy,
}
```

This would suffice in allowing other crates to define their own distributions and handle truncation how they wish, but also allow the rand_distr crate (or something downstream) to have much richer syntax for handling more complex operations down the line.
```rust
#[derive(Debug)]
enum Method {
    Rejection(NormalTruncatedRejection),
    OneSided(bool, NormalTruncatedOneSided), // bool indicates if lower bound is used
```
Using bool to encode direction makes the variants hard to read at call sites. I'd strongly prefer separate enum variants here, e.g. LowerOneSided / UpperOneSided, so the dispatch stays self-describing (see the sketch below).
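A sketch of that variant split (payload types stubbed; TailInterval is the analogous variant quoted next):

```rust
// Stubs standing in for the PR's sampler types.
struct NormalTruncatedRejection;
struct NormalTruncatedOneSided;
struct NormalTruncatedTailInterval;

// Direction lives in the variant name instead of a bool flag, so match
// arms read as LowerOneSided(..) rather than OneSided(true, ..).
enum Method {
    Rejection(NormalTruncatedRejection),
    LowerOneSided(NormalTruncatedOneSided),
    UpperOneSided(NormalTruncatedOneSided),
    LowerTailInterval(NormalTruncatedTailInterval),
    UpperTailInterval(NormalTruncatedTailInterval),
}
```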
```rust
enum Method {
    Rejection(NormalTruncatedRejection),
    OneSided(bool, NormalTruncatedOneSided), // bool indicates if lower bound is used
    TailInterval(bool, NormalTruncatedTailInterval), // bool indicates mirrored upper-tail proposal
```
```rust
/// Constructs a new `NormalTruncated` distribution with the given
/// mean, standard deviation, lower bound, and upper bound.
pub fn new(mean: f64, stddev: f64, lower: f64, upper: f64) -> Result<Self, Error> {
    if !(stddev > 0.0) {
```
This only checks stddev > 0.0, but the constructor later relies on other distributions via unwrap(). Can we validate finiteness here as well and keep new panic-free, like the rest of the crate's Result constructors?
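A sketch of up-front validation that would keep new panic-free (error messages are placeholders, not the crate's Error variants):

```rust
fn validate(mean: f64, stddev: f64, lower: f64, upper: f64) -> Result<(), &'static str> {
    // Reject non-finite parameters before any downstream constructor runs.
    if !mean.is_finite() || !stddev.is_finite() || !(stddev > 0.0) {
        return Err("mean must be finite, stddev finite and > 0");
    }
    // Bounds may be infinite but not NaN, and must be ordered.
    if lower.is_nan() || upper.is_nan() || !(lower < upper) {
        return Err("bounds must be ordered and not NaN");
    }
    Ok(())
}
```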
```rust
// We use naive rejection sampling
Ok(NormalTruncated(Method::Rejection(
    NormalTruncatedRejection {
        normal: crate::Normal::new(mean, stddev).unwrap(),
```
This unwrap() keeps a panic path inside a fallible constructor. I’d prefer to validate inputs up front and never rely on Normal::new(...).unwrap() here.
```rust
// Also catches the case where both bounds are infinite
Ok(NormalTruncated(Method::Rejection(
    NormalTruncatedRejection {
        normal: crate::Normal::new(mean, stddev).unwrap(),
```
The constructor is already fallible, so this should not be another hidden panic edge.
```rust
impl Distribution<f64> for NormalTruncatedTwoSided {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> f64 {
        loop {
            let z = rng.random_range(self.standard_lower..self.standard_upper);
```
This assumes the standardized interval is non-empty. Given the notes by mstoeckl about std_lower == std_upper, I think this line needs an explicit guard or a construction invariant documented nearby.
```rust
#[test]
fn truncated_normal() {
    let parameters = [
```
Could we add a few parameters here for the numerically awkward cases too: extreme tails like [20, inf), very narrow finite intervals, and constructor-rejection cases for non-finite inputs? Those are where truncated-normal samplers usually fail first.
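Illustrative additions (values are suggestions, not from the PR):

```rust
// Numerically awkward (mean, stddev, lower, upper) cases:
let extra_parameters = [
    (0.0, 1.0, 20.0, f64::INFINITY),      // extreme upper tail
    (0.0, 1.0, f64::NEG_INFINITY, -20.0), // extreme lower tail
    (0.0, 1.0, 5.0, 5.0 + 1e-12),         // very narrow finite interval
];
// Constructor-rejection cases (should return Err, not panic):
// NaN bounds, lower >= upper, non-finite mean or stddev.
```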
```rust
for (seed, (mu, sigma, lower, upper)) in parameters.into_iter().enumerate() {
    let dist = rand_distr::NormalTruncated::new(mu, sigma, lower, upper).unwrap();
    dbg!(&dist);
```
```rust
if upper == f64::INFINITY {
    // This threshold depends on how fast normal vs exponential sampling is.
    // This value was found empirically, but it can probably be tuned better.
    if std_lower > 0.3 {
```
What does "empirically" mean here? I don't see it in the discussion, and the cited sources use different values. Given that the cutoff can affect the stability of returned values, I think "empirical" should be linked to tests or somehow point to where 0.3 comes from.
CHANGELOG.md entry

Summary

Added a TruncatedNormal distribution

Motivation

#7

Details

It follows Robert, Christian P. (1995). "Simulation of truncated normal variables"
The test still needs to be improved. The code seems to be working, but it is still a draft.