/*
  Family out problem in Picat.

  From Eugene Charniak, "Bayesian Networks without Tears", page 51:
  """
  Suppose when I go home at night, I want to know if my family is home
  before I try the doors. (Perhaps the most convenient door to enter is
  double locked when nobody is home.) Now, often when my wife leaves the
  house, she turns on an outdoor light. However, she sometimes turns on
  this light if she is expecting a guest. Also, we have a dog. When nobody
  is home, the dog is put in the back yard. The same is true if the dog
  has bowel troubles. Finally, if the dog is in the backyard, I will
  probably hear her barking (or what I think is her barking), but
  sometimes I can be confused by other dogs barking.

  This example, partially inspired by Pearl’s (1988) earthquake example,
  is illustrated in figure 1. There we find a graph not unlike many we see
  in AI. We might want to use such diagrams to predict what will happen
  (if my family goes out, the dog goes out) or to infer causes from
  observed effects (if the light is on and the dog is out, then my family
  is probably out).

  The important thing to note about this example is that the causal
  connections are not absolute. Often, my family will have left without
  putting out the dog or turning on a light. Sometimes we can use these
  diagrams anyway, but in such cases, it is hard to know what to infer
  when not all the evidence points the same way. Should I assume the
  family is out if the light is on, but I do not hear the dog? What if I
  hear the dog, but the light is out? Naturally, if we knew the relevant
  probabilities, such as P(family-out | light-on, ¬hear-bark), then we
  would be all set. However, typically, such numbers are not available for
  all possible combinations of circumstances. Bayesian networks allow us
  to calculate them from a small set of probabilities, relating only
  neighboring nodes.
  """

  This is a port of my Gamble model gamble_family_out_problem.rkt

  This program was created by Hakan Kjellerstrand, hakank@gmail.com
  See also my Picat page: http://www.hakank.org/picat/

*/
import ppl_distributions, ppl_utils.

main => go.

/*
  var : family out
  Probabilities:
  true: 0.5108084098312111
  false: 0.4891915901687889
  mean = [true = 0.510808,false = 0.489192]

  var : bowel problem
  Probabilities:
  false: 0.9928931003849570
  true: 0.0071068996150429
  mean = [false = 0.992893,true = 0.0071069]

  var : light on
  Probabilities:
  true: 1.0000000000000000
  mean = [true = 1.0]

  var : dog out
  Probabilities:
  false: 0.5519692034350014
  true: 0.4480307965649985
  mean = [false = 0.551969,true = 0.448031]

  var : hear bark
  Probabilities:
  false: 1.0000000000000000
  mean = [false = 1.0]

*/
go ?=>
  reset_store,
  time2(run_model(50_000,$model,[show_probs_trunc,mean,
                                 presentation=["family out","bowel problem",
                                               "light on","dog out","hear bark"]])),
  nl.
go => true.

model() =>
  FamilyOut = flip(0.15),
  BowelProblem = flip(0.01),
  LightOn = cond(FamilyOut==true, flip(0.6),flip(0.05)),
  DogOut = cases([ [(FamilyOut,BowelProblem), flip(0.99)],
                   [(FamilyOut,not BowelProblem), flip(0.90)],
                   [(not FamilyOut,BowelProblem), flip(0.97)],
                   [(not FamilyOut,not BowelProblem), flip(0.30)],
                   [true,false]
                 ]),
  HearBark = cond(DogOut == true, flip(0.7), flip(0.01)),
  /*
    Op.cit:
    """
    To take the earlier example, if I observe that the light is on
    (light-on = true) but do not hear my dog (hear-bark = false), I can
    calculate the conditional (observe/fail) probability of family-out
    given these pieces of evidence. (For this case, it is .5.)
    """
  */
  observe(LightOn,not HearBark),

  if observed_ok then
    add_all([ ["family out",FamilyOut],
              ["bowel problem",BowelProblem],
              ["light on",LightOn],
              ["dog out",DogOut],
              ["hear bark",HearBark]])
  end.