Perhaps this example is what you describe? flatMap (which the for comprehension desugars into) is what lets monads combine in powerful ways. The first part shows uncorrelated X and Y, where Y has the same distribution as -X but is sampled independently; the second part shows fully correlated Y = -X, where every draw satisfies y = -x.
{
  val X: Podel[Int] = Bernoulli(p = 0.5)
  // Y has the same distribution as -X, but each draw is independent of X.
  val Y: Podel[Int] = X.map(-1 * _)
  val d: Podel[Int] = for {
    x <- X // one draw from X
    y <- Y // an independent draw from Y
  } yield x + y
  d.hist(sampleSize = 10000, optLabel = Some("Uncorrelated X, Y"))
}
{
  val X: Podel[Int] = Bernoulli(p = 0.5)
  val d: Podel[Int] = for {
    x <- X      // a single draw from X
    y = -1 * x  // y is computed from that same draw, so y = -x always
    z = x + y   // hence z = 0 on every draw
  } yield z
  d.hist(sampleSize = 10000, optLabel = Some("y = -x"))
}
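Podel and Bernoulli aren't standard types, so in case it helps, here is a minimal sketch of how such a sampling monad could look. This assumes Podel simply wraps a zero-argument sample function; the real implementation may differ.

import scala.util.Random

// Sketch of a Podel-like sampling monad (assumed API, not the real Podel).
// A Podel[A] is just a recipe for drawing samples of A; map and flatMap
// re-sample their inputs, which is why `x <- X` and `y <- Y` above are
// independent draws.
trait Podel[A] { self =>
  def sample(): A

  def map[B](f: A => B): Podel[B] =
    () => f(self.sample())

  def flatMap[B](f: A => Podel[B]): Podel[B] =
    () => f(self.sample()).sample()

  // Crude text histogram: one '#' per percentage point, as in the output below.
  def hist(sampleSize: Int, optLabel: Option[String] = None): Unit = {
    optLabel.foreach(println)
    val counts = Seq.fill(sampleSize)(sample()).groupMapReduce(identity)(_ => 1)(_ + _)
    counts.toSeq.sortBy(_._1.toString).foreach { case (value, count) =>
      val pct = 100.0 * count / sampleSize
      println(f"$value%3s $pct%6.2f%% " + "#" * pct.toInt)
    }
  }
}

object Bernoulli {
  // 1 with probability p, otherwise 0.
  def apply(p: Double): Podel[Int] = () => if (Random.nextDouble() < p) 1 else 0
}

With this sketch both snippets above run as written and print histograms like the ones below.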
/*
Uncorrelated X, Y
-1 25.52% #########################
0 49.60% #################################################
1 24.88% ########################

y = -x
0 100.00% ####################################################################################################
*/
So far, the weakness I find with the probability monad, and with simulation in general, is lack of precision. That makes them useless when the probability you are after is a precise, tiny number such as 5.324e-11; only a formula can give you that answer.
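To make that concrete: with 10,000 samples, an event whose probability is far below 1/10000 will typically never be observed at all, so the histogram cannot distinguish 5.324e-11 from zero. A hypothetical sketch in plain Scala (n and runs are my names):

{
  import scala.util.Random

  // Probability of 35 heads in a row with a fair coin: the closed form
  // 0.5^35 ≈ 2.9e-11 is the same order of magnitude as the 5.324e-11 above.
  val n     = 35
  val exact = math.pow(0.5, n)

  // 10,000 simulated runs will almost surely contain zero successes,
  // so the sampled estimate is 0 -- the tiny probability is invisible.
  val runs = 10000
  val hits = Seq.fill(runs)(Seq.fill(n)(Random.nextBoolean()).forall(identity)).count(identity)

  println(f"exact:     $exact%.3e")                  // 2.910e-11
  println(f"simulated: ${hits.toDouble / runs}%.3e") // almost certainly 0.000e+00
}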