/*

  Medical diagnosis in Picat.

  From https://probmods.org/chapters/conditioning.html (WebPPL):
  """
  This classic Bayesian inference task is a special case of conditioning.
  Kahneman and Tversky, and Gigerenzer and colleagues, have studied how
  people make simple judgments like the following:

    The probability of breast cancer is 1% for a woman at 40 who participates
    in a routine screening. If a woman has breast cancer, the probability is
    80% that she will have a positive mammography. If a woman does not have
    breast cancer, the probability is 9.6% that she will also have a positive
    mammography. A woman in this age group had a positive mammography in a
    routine screening. What is the probability that she actually has breast
    cancer?

  What is your intuition? Many people without training in statistical
  inference judge the probability to be rather high, typically between 0.7
  and 0.9. The correct answer is much lower, less than 0.1, as we can see
  by running this WebPPL inference:
  """

  Exact probabilities (from my Gamble model):

  var : breastCancer
  #f: 0.922360248447205
  #t: 0.07763975155279508
  mean: 0.07763975155279508

  var : positiveMammogram
  #t: 1.0
  mean: 1.0

  Cf my Gamble model gamble_medical_diagnosis.rkt

  This program was created by Hakan Kjellerstrand, hakank@gmail.com
  See also my Picat page: http://www.hakank.org/picat/

*/

import ppl_distributions, ppl_utils.
import util.

main => go.

/*
  Sample run (100_000 samples):

  var : breast cancer
  Probabilities:
  false: 0.9274661508704062
  true: 0.0725338491295938
  mean = [false = 0.927466,true = 0.0725338]

  var : positive mammogram
  Probabilities:
  true: 1.0000000000000000
  mean = [true = 1.0]
*/
go ?=>
  reset_store,
  run_model(100_000,$model,[show_probs_trunc,mean]),
  nl,
  % show_store_lengths,
  % fail,
  nl.
go => true.

model() =>
  % Prior: 1% of women in this group have breast cancer.
  BreastCancer = flip(0.01),
  % Likelihood: a positive mammogram with probability 0.8 given cancer,
  % and with probability 0.096 (false positive) given no cancer.
  PositiveMammogram = condt(BreastCancer, flip(0.8), flip(0.096)),
  % Condition on the observed positive mammogram.
  observe(PositiveMammogram==true),
  if observed_ok then
    add("breast cancer",BreastCancer),
    add("positive mammogram",PositiveMammogram)
  end.
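
% Sanity check (a sketch, not part of the original program): the exact
% posterior by Bayes' rule,
%   P(C|+) = P(+|C)*P(C) / (P(+|C)*P(C) + P(+|~C)*P(~C))
%          = 0.8*0.01 / (0.8*0.01 + 0.096*0.99)
% which gives the exact value 0.07763975155279508 quoted in the header.
go2 =>
  PC    = 0.01,   % prior P(cancer)
  PPosC = 0.8,    % P(positive | cancer)
  PPosN = 0.096,  % P(positive | no cancer)
  Posterior = PPosC*PC / (PPosC*PC + PPosN*(1.0-PC)),
  printf("Exact P(breast cancer | positive mammogram) = %w\n", Posterior).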