/*
  Coins learning 2 in Picat.

  From srl-pp-tutorial-wasp-stockholm.pdf
  "Statistical Relational learning and Probabilistic Programming"
  by Luc De Raedt, Anton Dries, and Angelika Kimmig
  https://dtai.cs.kuleuven.be/problog/wasp17-tutorial.html
  slide 394f:
  """
  - Flipping a coin with unknown weight
  - Prior: uniform distribution on (0,1)
  - Observation: 5x heads in a row
  """

  The ProbLog model returns the following, which corresponds to the
  density of the probabilities:
  """
  weight(c1,0.1): 3.3994697e-13
  weight(c1,0.3): 2.1679411e-06
  weight(c1,0.5): 0.0041497433
  weight(c1,0.7): 0.1317485
  weight(c1,0.9): 0.86409959   <----

  weight(c2,0.1): 3.2276726e-06
  weight(c2,0.3): 0.024109997
  weight(c2,0.5): 0.66724754   <----
  weight(c2,0.7): 0.30628626
  weight(c2,0.9): 0.0023529733
  """

  Here we observe 13 tosses and estimate the probability of throwing
  a head (true). The corresponding values of the ProbLog model are
  in "params ix".

  Cf my Gamble model gamble_coins_learning2.rkt

  This program was created by Hakan Kjellerstrand, hakank@gmail.com
  See also my Picat page: http://www.hakank.org/picat/

*/

import ppl_distributions, ppl_utils.
import util.

main => go.

/*
  test = [true,true,true,true,true,true,true,true,true,true,true,true,true]
  var : params ix
  Probabilities:
  0.9: 0.8695751484696208
  0.7: 0.1251713111009593
  0.5: 0.0052535404294198
  mean = 0.872864
  variance = 0.00511107

  Variable lengths:
  params ix = 4378

  test = [true,false,true,true,true,true,true,false,false,true,false,false,true]
  var : params ix
  Probabilities:
  0.5: 0.6800000000000000
  0.7: 0.2400000000000000
  0.3: 0.0800000000000000
  mean = 0.532
  variance = 0.011776

  Variable lengths:
  params ix = 25
*/
go ?=>
  Test1 = [true,true,true,true,true,true,true,true,true,true,true,true,true],
  Test2 = [true,false,true,true,true,true,true,false,false,true,false,false,true],
  member(Test,[Test1,Test2]),
  println(test=Test),
  reset_store,
  run_model(300_000,$model(Test),[show_probs_trunc,mean,variance]),
  show_store_lengths,
  nl,
  fail,
  nl.
go => true.
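/*
  Since the parameter space is a small discrete grid, the posterior can
  also be computed exactly: multiply the categorical prior by the
  Bernoulli likelihood p^heads * (1-p)^tails and renormalize. The
  Python sketch below (not part of the Picat model) does this for the
  prior used here and reproduces the ProbLog "weight(c1,...)" and
  "weight(c2,...)" values quoted above:

  ```python
  # Exact posterior over the five candidate coin weights.
  # posterior(p) is proportional to prior(p) * p^heads * (1-p)^tails.
  # The prior below is the ParamsPrior used in the Picat model.

  def posterior(heads, tails):
      params = [0.1, 0.3, 0.5, 0.7, 0.9]
      prior  = [0.05, 0.2, 0.5, 0.2, 0.05]
      unnorm = [q * p**heads * (1 - p)**tails
                for p, q in zip(params, prior)]
      total = sum(unnorm)
      return {p: u / total for p, u in zip(params, unnorm)}

  # Test1: 13 heads in a row -> weight 0.9 dominates (cf. weight(c1,0.9))
  print(posterior(13, 0))
  # Test2: 8 heads, 5 tails  -> weight 0.5 dominates (cf. weight(c2,0.5))
  print(posterior(8, 5))
  ```

  The exact values agree with the ProbLog output above to the printed
  precision, and the MCMC estimates from go/ below approximate them.
*/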
model(Test) =>
  Params = [0.1,0.3,0.5,0.7,0.9],
  ParamsPrior = [0.05,0.2,0.5,0.2,0.05],
  Len = Params.len,
  Ix = categorical(ParamsPrior,1..Len),
  Data = [flip(Params[Ix]) : I in 1..Test.len],
  foreach(I in 1..Test.len)
    observe(Data[I] == Test[I])
  end,
  if observed_ok then
    % add("ix",Ix),
    % add("ix == 1",cond(Ix == 1,true,false)),
    % add("ix == 2",cond(Ix == 2,true,false)),
    % add("ix == 3",cond(Ix == 3,true,false)),
    % add("ix == 4",cond(Ix == 4,true,false)),
    % add("ix == 5",cond(Ix == 5,true,false)),
    add("params ix",Params[Ix])
  end.

/*
  Using observe_abc instead and 10_000 accepted parameters

  test = [1,1,1,1,1,1,1,1,1,1,1,1,1]
  min_accepted_samples = 10000
  Stdev is very small: 0.00000 and might give spurious results. Please adjust the scale parameter
  var : params ix
  Probabilities:
  0.9: 0.8613000000000000
  0.7: 0.1348000000000000
  0.5: 0.0039000000000000
  mean = 0.87148
  variance = 0.00520261

  test = [1,0,1,1,1,1,1,0,0,1,0,0,1]
  min_accepted_samples = 10000
  var : params ix
  Probabilities:
  0.5: 0.5132000000000000
  0.7: 0.2155000000000000
  0.3: 0.1993000000000000
  0.9: 0.0509000000000000
  0.1: 0.0211000000000000
  mean = 0.51516
  variance = 0.0278822
*/
go2 ?=>
  Test1 = [1,1,1,1,1,1,1,1,1,1,1,1,1],
  Test2 = [1,0,1,1,1,1,1,0,0,1,0,0,1],
  member(Test,[Test1,Test2]),
  println(test=Test),
  reset_store,
  run_model(300_000,$model2(Test),[show_probs_trunc,mean,variance,
                                   min_accepted_samples=10000,show_accepted_samples=false]),
  % show_store_lengths,nl,
  fail,
  nl.
go2 => true.

model2(Test) =>
  Params = [0.1,0.3,0.5,0.7,0.9],
  ParamsPrior = [0.05,0.2,0.5,0.2,0.05],
  Len = Params.len,
  Ix = categorical(ParamsPrior,1..Len),
  Data = [bern(Params[Ix]) : I in 1..Test.len],
  observe_abc(Test,Data,1.0),
  if observed_ok then
    % add("ix",Ix),
    % add("ix == 1",cond(Ix == 1,true,false)),
    % add("ix == 2",cond(Ix == 2,true,false)),
    % add("ix == 3",cond(Ix == 3,true,false)),
    % add("ix == 4",cond(Ix == 4,true,false)),
    % add("ix == 5",cond(Ix == 5,true,false)),
    add("params ix",Params[Ix])
  end.
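/*
  model2/ replaces the pointwise observe/1 with observe_abc/3, i.e.
  approximate Bayesian computation: draw a parameter from the prior,
  simulate a full data set, and keep the draw only if the simulated
  data are close enough to the observations. A rough stdlib-Python
  analogue of that rejection loop (not the ppl_* implementation; it
  uses exact-match acceptance, which for binary data corresponds to a
  zero ABC tolerance) might look like:

  ```python
  import random
  from collections import Counter

  def abc_rejection(observed, n_samples, rng):
      """ABC rejection sketch: draw a coin weight from the categorical
      prior, simulate len(observed) Bernoulli tosses, and accept the
      draw only if the simulated sequence matches the observations."""
      params = [0.1, 0.3, 0.5, 0.7, 0.9]
      prior  = [0.05, 0.2, 0.5, 0.2, 0.05]
      accepted = []
      for _ in range(n_samples):
          p = rng.choices(params, weights=prior)[0]
          simulated = [1 if rng.random() < p else 0 for _ in observed]
          if simulated == observed:
              accepted.append(p)
      return accepted

  rng = random.Random(1)
  observed = [1] * 13                    # 13 heads in a row (Test1)
  accepted = abc_rejection(observed, 200_000, rng)
  counts = Counter(accepted)
  for p in sorted(counts, key=counts.get, reverse=True):
      print(p, counts[p] / len(accepted))
  ```

  With enough samples the accepted fraction for weight 0.9 approaches
  the exact posterior value of about 0.864. The scale argument in
  observe_abc(Test,Data,1.0) presumably relaxes this exact-match
  criterion to a distance threshold, trading accuracy for a higher
  acceptance rate.
*/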