Picat PPL - A Lightweight Probabilistic Programming Toolkit
Picat PPL (Probabilistic Programming Light) is a lightweight probabilistic programming
framework implemented entirely in Picat. It is designed for exploring uncertainty, solving
probability puzzles, and experimenting with small-scale probabilistic models - rather than
performing heavy Bayesian data analysis.
Picat PPL was created to support two use cases in Picat:
- To model probability puzzles/problems using (as far as possible) the same syntax and functions as in Gamble and WebPPL (my two favorite probabilistic programming systems). See the listing of the models below.
- To play with exact probabilities (up to Picat's precision of about 1.0e-16) from probability distributions, either in probabilistic models or at the REPL.
Picat PPL's syntax is intentionally similar to that of Gamble, WebPPL, and Turing.jl (see my Gamble models, WebPPL models, and Turing.jl models), offering an intuitive and declarative style while staying true
to Picat's expressive logic-functional foundation. The supported functions can be evaluated interactively in Picat's REPL,
used as components in a (probabilistic) model, or integrated into other Picat programs.
The inference engine in Picat PPL is based on simple rejection sampling. As a result, probabilities computed within probabilistic models are approximate and depend on the number of generated samples. In contrast, the probability distribution functions (such as the PDF, CDF, quantile, mean, and variance functions) yield exact analytical results wherever possible. This distinction makes the system both educational and precise: exact for pure distribution analysis, and stochastic for probabilistic reasoning.
Picat PPL provides a comprehensive collection of probability distributions, both discrete and continuous, each supporting:
- Random generation
- PDF / PMF
- CDF
- Quantile function
- Mean and variance
These functions can be used independently or composed within probabilistic models, allowing for clear experimentation and exploration of probabilistic concepts.
In essence, Picat PPL brings a transparent, compact, and fully self-contained probabilistic layer to Picat - ideal for simulation, reasoning, and curiosity-driven research into probability and uncertainty.
Note and some background on Picat PPL: There is actually already a probabilistic programming system in Picat: PicatPRISM, a Picat port of one of the earlier probabilistic programming systems, PRISM, created by Taisuke Sato and implemented in Neng-Fa Zhou's B-Prolog system. Neng-Fa was involved in the development of PRISM as well.
I've tested PicatPRISM (as well as PRISM), but realized that I missed too many features from the PPL systems I'm used to (Gamble, WebPPL, Turing.jl, BLOG, etc.), so I decided to roll my own. Hence Picat PPL.
The main modules in Picat PPL are ppl_distributions.pi, ppl_utils.pi, and ppl_common_utils.pi.
These modules are planned to be continuously improved with more utilities and probability distributions.
Exact probability distributions
Picat PPL contains (in ppl_distributions.pi) quite a few probability distributions for calculating exact probabilities. Almost all of them support the following:
- Random generation
- PDF/PMF
- CDF
- Quantiles
- Mean and Variance
See ppl_distributions.pi (and ppl_distributions_test.pi) for details.
The supported probability distributions:
- benford_dist(B)
- bernoulli_dist(P), bernoulli_dist(), aliased as bern(P), bern()
- beta_binomial_dist(N,Alpha,Beta)
- beta_negative_binomial_dist(Alpha,Beta,N)
- beta_prime_dist(P,Q)
- beta_prime_dist(P,Q,B)
- beta_prime_dist(P,Q,A,B)
- binomial_dist(N,P). For random generation there are two alternatives: binomial_dist_smart(N,P) and binomial_dist_btpe(N,P), which use heuristics to decide which generation method to use.
- binomial_process_dist(P,T)
- birthday_dist(Classes,Coincident), birthday_dist() (defaults to 365, 2). Only dist, pdf, and quantile.
- categorical_dist(Probs,Values) (for generation also categorical(Probs,Values))
- cauchy_dist(A,B)
- chi_dist(Nu)
- chi_squared_dist(Nu)
- coupon_collector_dist(M,N)
- crp_dist(Theta,N) (Chinese Restaurant Process, CRP)
- dice(), dice(N) (the same as random_integer1(N))
- dirichlet_dist(Alpha)
- discrete_laplace_dist(Mu,P)
- discrete_markov_process_dist(Mu,P) (related: stationary_dist)
- discrete_uniform_dist(Low,High)
- erlang_dist(K,Lambda)
- exponential_dist(Lambda). Note: the mean of exponential_dist(1/100) is 100.0 (not 1/100 as in Gamble).
- extreme_value_dist(), extreme_value_dist(Alpha,Beta)
- f_dist(Nu1,Nu2)
- flip(), flip(P)
- frechet_dist(Alpha,Beta), frechet_dist(Alpha,Beta,Mu)
- gamma_dist(Alpha,Theta)
- geometric_dist(P)
- generalized_extreme_value_dist(Mu,Sigma,Xi) (no variance)
- gumbel_dist(Alpha,Beta)
- hypergeometric1(Kk,N,K,Nn), hypergeometric_dist(N,NSucc,NTot,K)
- k_record_dist(K,N)
- kumaraswamy_dist(Alpha,Beta)
- laplace_dist(Mu,P)
- log_gamma_dist(Mu,Sigma)
- log_normal_dist(Mu,Sigma) (also as lognormal_dist(Mu,Sigma))
- logistic_dist(Mu,S), logistic()
- logseries_dist(Theta)
- matching_dist(N)
- max_stable_dist(Mu,Sigma)
- min_stable_dist(Mu,Sigma)
- multinomial_dist(N,Ps)
- multivariate_hypergeometric_dist(N,NumBalls) (random, pdf, (cdf), mean)
- n_heads_in_a_row_after_k_tosses_dist (only generation, pdf, cdf, quantile). Related: expected_tosses_needed_for_n_heads(N), expected_tosses_needed_for_n_successes(N,P), probability_of_run_size(N,P,R)
- negative_binomial_dist(Alpha,Beta,N)
- negative_hypergeometric_dist(R,NSucc,NTot)
- normal_dist(Mu,Sigma)
- num_records_dist(Mu,Sigma)
- order_statistics_estimator_of_m_u(Xs,I), order_statistics_estimator_of_m_u_all(Xs), order_statistics_estimator_of_m_v(Xs). Note: these do not support PDF, CDF, etc.
- order_statistics_continuous(PDF,CDF,N,R)
- order_statistics_with_replacement_discrete(PDF,CDF,N,K)
- order_statistics_without_replacement(M,N,I)
- pareto1_dist(K,Alpha), pareto2_dist(K,Alpha,Mu), pareto3_dist(K,Gamma,Mu), pareto4_dist(K,Alpha,Gamma,Mu)
- pascal_dist(N,P)
- poisson_dist(Lambda)
- poisson_process_dist(Mu,T)
- probability_of_run_size(N,P,R) (not a proper distribution)
- rademacher_dist()
- random_integer(N), random_integer1(N)
- shifted_geometric_dist(P)
- random_walk_process_dist(P,T)
- student_t_dist(Nu), student_t_dist(Mu,Sigma,Nu)
- sum_prob_dist(A,B,N)
- triangular_dist(), triangular_dist(Min,Max)
- uniform_dist(Low,High) (for generation, also uniform(Low,High))
- weibull_dist(Alpha,Beta), weibull_dist(Alpha,Beta,Mu)
- wiener_dist(Mu,Sigma,T), wiener_dist(T)
- zipf_dist(S), zipf_mma_dist(S), zipf_dist(N,S), zipf_mma_dist(N,Rho)
Please note that some of the "wilder" distributions might be ... wild, and - in certain cases - give errors due to too large values.
Most random generation functions have a variant for generating N samples. These are named with an _n suffix, e.g. categorical_n([1/2,1/3,1/6],[a,b,c],100) generates 100 samples.
Example using binomial distribution:
% Generate a random sample
Picat> X=binomial_dist(10,0.49)
X = 5
% Generate 10 random samples
Picat> X=binomial_dist_n(10,0.49,10)
X = [4,5,4,7,5,4,7,6,5,4]
% PDF
Picat> X=binomial_dist_pdf(10,0.49,8)
X = 0.038897483585189
% CDF
Picat> X=binomial_dist_cdf(10,0.49,8)
X = 0.990897167987681
Picat> X=1-binomial_dist_cdf(10,0.49,8)
X = 0.009102832012319
% Quantile
Picat> X=binomial_dist_quantile(10,0.49,0.99)
X = 8
% Mean and variance
Picat> X=binomial_dist_mean(10,0.49)
X = 4.9
Picat> X=binomial_dist_variance(10,0.49)
X = 2.499
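As a cross-check of the values above, the same quantities follow from the closed-form binomial formulas. Here is a short Python sketch (the helper names are mine, not part of Picat PPL):

```python
from math import comb

def binom_pmf(n, p, k):
    # C(n,k) * p^k * (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(n, p, k):
    # P(X <= k)
    return sum(binom_pmf(n, p, j) for j in range(k + 1))

n, p = 10, 0.49
print(binom_pmf(n, p, 8))      # matches binomial_dist_pdf(10,0.49,8)
print(binom_cdf(n, p, 8))      # matches binomial_dist_cdf(10,0.49,8)
print(n * p, n * p * (1 - p))  # mean 4.9 and variance 2.499
```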
Picat PPL also includes some convenience functions for easier handling of these probability functions:
- pdf, pdf_all
- cdf, cdf_all
- quantile, quantile_all
- meanf (note the f)
- variancef (note the f)
The first argument to these functions is a probability distribution (as a term), and it must be '$'-escaped, e.g. $binomial_dist(10,0.49).
Some examples:
% A single PDF:
Picat> X=pdf($binomial_dist(10,0.49),8)
X = 0.038897483585189
% All PDFs
Picat> pdf_all($binomial_dist(10,0.49)).printf_list
1 0.011437409348143
2 0.04944997571109
3 0.126695362606191
4 0.213022104774134
5 0.245601956092531
6 0.196642089028334
7 0.107960362603791
8 0.038897483585189
% In decreasing probability order
Picat> pdf_all($binomial_dist(10,0.49)).sort_down(2).printf_list
5 0.245601956092531
4 0.213022104774134
6 0.196642089028334
3 0.126695362606191
7 0.107960362603791
2 0.04944997571109
8 0.038897483585189
1 0.011437409348143
% The default quantiles for the shown values are 0.01 .. 0.99.
% This can be changed by adding parameters for the lower and upper quantiles
% to use.
Picat> pdf_all($binomial_dist(10,0.49),0.00001, 0.99999).sort_down(2).printf_list
5 0.245601956092531
4 0.213022104774134
6 0.196642089028334
3 0.126695362606191
7 0.107960362603791
2 0.04944997571109
8 0.038897483585189
1 0.011437409348143
9 0.008304909349343
0 0.001190424238276
10 0.000797922662976
% CDF all
Picat> cdf_all($binomial_dist(10,0.49),0.00001, 0.99999).sort_down(2).printf_list
10 1.0
9 0.999202077337024
8 0.990897167987681
7 0.951999684402491
6 0.8440393217987
5 0.647397232770366
4 0.401795276677834
3 0.1887731719037
2 0.062077809297509
1 0.012627833586419
0 0.001190424238276
% Quantiles
Picat> quantile_all($binomial_dist(10,0.49)).printf_list
0.000001 0
0.00001 0
0.001 0
0.01 1
0.025 2
0.05 2
0.25 4
0.5 5
0.75 6
0.84 6
0.975 8
0.99 8
0.999 9
0.99999 10
0.999999 10
Picat> X=meanf($binomial_dist(10,0.49))
X = 4.9
Picat> X=variancef($binomial_dist(10,0.49))
X = 2.499
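The default 0.01..0.99 range used by pdf_all can be illustrated with a Python sketch (my own helper names; assuming the quantile is the smallest k whose CDF reaches q, which matches the quantile_all output above):

```python
from math import comb

def pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def quantile(n, p, q):
    # Smallest k whose cumulative probability reaches q
    acc = 0.0
    for k in range(n + 1):
        acc += pmf(n, p, k)
        if acc >= q:
            return k
    return n

print(quantile(10, 0.49, 0.01))  # -> 1
print(quantile(10, 0.49, 0.99))  # -> 8, so pdf_all lists k = 1..8 by default
```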
Modeling in Picat PPL
One of the reasons that Picat PPL was created was to support probabilistic programming ideas for solving probability puzzles etc. in Picat. Its syntax is inspired by higher-level PPL systems such as Racket/Gamble and WebPPL.
Here is a simple Picat PPL model: the Monty Hall problem (cf. The Monty Hall problem). For more information about the supported model functions, see the examples below and ppl_distributions.pi and ppl_utils.pi.
% Import the PPL modules
import ppl_distributions, ppl_utils, ppl_common_utils.
go ?=>
reset_store, % Clears everything for a new experiment
NumRuns = 10_000, % 10 000 runs
run_model(NumRuns,
$model, % The model we run
[show_probs_rat,mean] % The statistics to show
),
% fail, % if there are more than one parameter to check
nl.
go => true.
model() =>
% WLOG: We always select door d1
Doors = [d1,d2,d3],
% Randomly select the prize door
Prize = uniform_draw(Doors),
% Which door does Monty open?
MontyOpen = cond(Prize == d1,
uniform_draw([d2,d3]), % Monty must pick D2 or D3 randomly
cond(Prize==d2,d3,d2)), % Monty must not open the prize door
% We observe that Monty opens door 2
observe(MontyOpen == d2),
% If there is an observation in the model,
% then the observed_ok function is used to filter out
% the rejected solutions.
if observed_ok then
add("prize",Prize) % Add the value of Prize to the store
end.
Run this as:
$ picat ppl_monty_hall.pi
A sample output for the model
var : prize
Probabilities:
d3: 0.66331860184813174 (1651 / 2489)
d1: 0.33668139815186821 (838 / 2489)
mean = [d3 = 0.663319,d1 = 0.336681]
Note: Picat PPL uses rejection sampling for calculating the probabilities of the random variables in a model, so the values are not exact. Increasing the number of runs gives more precise results, but it also takes longer to run. I tend to use 10 000 runs for an experiment, but sometimes 1 000 samples are enough. And sometimes much more is needed, e.g. 100 000 or 1 000 000 runs, to get a fairly reasonable result, especially for models with observations since these can reduce the number of accepted solutions drastically.
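To illustrate what this rejection sampling does under the hood, here is a minimal Python sketch of the same Monty Hall model (a re-implementation for illustration, not the Picat code):

```python
import random

def monty_hall(num_runs=10_000, seed=1):
    # Plain rejection sampling: run the generative model num_runs times
    # and keep only the runs consistent with the observation.
    rng = random.Random(seed)
    accepted = []
    for _ in range(num_runs):
        prize = rng.choice(["d1", "d2", "d3"])     # random prize door
        if prize == "d1":
            monty_open = rng.choice(["d2", "d3"])  # Monty picks d2/d3 freely
        else:
            monty_open = "d3" if prize == "d2" else "d2"  # avoid the prize
        if monty_open == "d2":                     # observe(MontyOpen == d2)
            accepted.append(prize)                 # add("prize", Prize)
    return {d: accepted.count(d) / len(accepted) for d in set(accepted)}

print(monty_hall())  # switching (door d3) wins about 2/3 of the time
```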
The workhorse of modeling is the run_model/2-3 procedure:
- run_model(NumRuns,Model): Run the model NumRuns times.
- run_model(NumRuns,Model,Options): Run the model NumRuns times and output the results using Options (see below for more on Options).
Model is the name of the model to run, and there are two principal cases:
- run_model(NumRuns,model): This runs the procedure called model/0 (without parameters).
- run_model(NumRuns,$model(Parameter1,Parameter2,...,ParameterN)): This runs the procedure called model/N with N parameters. Note that the model must be escaped with '$' since otherwise Picat will interpret it as a function to be evaluated.
Here are some of the most common random generation functions/utilities to be used in a Picat PPL model:
- flip(P): randomly returns true/false depending on the probability P. flip() has P = 1/2.
- bernoulli(P): randomly returns 0/1 depending on the probability P. Alias: bern(P); bern() has P = 1/2.
- random_integer(N): returns a random number between 0 and N-1. random_integer1(N) returns a random number between 1 and N.
- uniform_draw(List): randomly returns one of the elements in the list List.
- categorical(Probs,Values): returns an element from the list Values according to the probabilities in Probs. Example: categorical([1/2,1/3,1/6],[a,b,c])
- binomial_dist(N,P): Binomial distribution
- poisson_dist(Lambda): Poisson distribution
- normal_dist(Mu,Sigma): Normal (Gaussian) distribution
- beta_dist(A,B): Beta distribution
- uniform(A,B): Uniform distribution
- exponential_dist(Lambda): Exponential distribution
- draw_without_replacement(N,List): draws N random elements from List without replacement. draw_without_replacement(List): draws List.len elements from List without replacement (i.e. it's a shuffle).
- shuffle(List): returns a shuffled version of the elements in List.
- resample(N,List): draws N elements from List with replacement (the same element might be returned many times). resample(List): returns a list of List.len samples from List (with replacement).
- count_occurrences(What,L): returns the number of occurrences of the element What in the list L.
- draw_until_one_pattern(Lst,Patterns,Choices): draws elements from the list Choices until one of the patterns in Patterns occurs, then returns the list of the drawn elements.
- simplex(List): normalizes the values in List so they sum to 1.
- cases(L): the list L contains lists of [Condition,When]. cases(L) returns the When for the first Condition that is successful.
- case(Val,L): the list L contains lists of [Value,When]. case(Val,L) returns the When for the first Value == Val.
- condt(Condition,When,Else): this is similar to Picat's cond/3, but it does not require "==true" etc. Example: condt(flip(),head,tail). Compare with cond(flip()==true,head,tail).
- check(Condition): returns true/false depending on whether Condition is true or false. This is the go-to function when one wants to get the probability of Condition in a model.
- ones(N,What): returns a list of N copies of What.
- argmin(L), argmax(L): returns the indices of the minimum/maximum values in a list. Note: it returns a list of indices since there might be more than one min/max value.
- sublists(L,N): returns all the sublists of size N in the list L.
All the probability distributions listed above can be used as random generators as well.
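As an illustration of how a utility like draw_until_one_pattern can work, here is a Python analogue (my own implementation, not the library's) applied to the classic waiting time for two heads in a row, whose expected length is 6:

```python
import random

def draw_until_one_pattern(patterns, choices, rng):
    # Draw from choices until the sequence drawn so far ends with one
    # of the patterns; return the whole drawn sequence.
    drawn = []
    while True:
        drawn.append(rng.choice(choices))
        for pat in patterns:
            if len(drawn) >= len(pat) and drawn[-len(pat):] == list(pat):
                return drawn

rng = random.Random(42)
lengths = [len(draw_until_one_pattern([("h", "h")], ["h", "t"], rng))
           for _ in range(20_000)]
print(sum(lengths) / len(lengths))  # close to the theoretical value 6
```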
A note on observe and add
observe and add are two very important modeling functions in Picat PPL (especially observe differs in certain ways compared to Gamble and WebPPL):
- add(Name,Variable): adds the value of Variable to the store list of the random variable Name. Variant: add_all(List) adds all [Name,Variable] pairs to the store list of each Name. If the model has an observe statement, then the add statements are usually put inside if observed_ok then ... end. However, it is also possible to add "unobserved" values outside the observed_ok statement, for example for debugging purposes (it is then good practice to give these a clear name, e.g. "X (unobserved)").
- observe(Condition): adds the condition to the "condition store" for the model, which instructs Picat PPL to reject all instances that do not satisfy the Condition.
Using observe for random variables with continuous probability distributions needs special care: the probability of an exact value for a continuous probability distribution is 0, so there must be an interval of valid values for it to be considered observed. For example: observe(abs(X-ObservedVal) <= 0.1). See below for some comments on observe on larger datasets (BDA).
- observed_ok: this is used for adding values of random variables to the store list when an observe has been used. The use case is if observed_ok then add("name",Variable) end. Note: this differs from how Gamble and WebPPL work (in these systems, an observation reduces samples automatically).
- observe_abc(Data,Samples), observe_abc(Data,Samples,Scale), observe_abc(Data,Samples,Scale,Methods): this is a convenience function inspired by ABC (Approximate Bayesian Computation) for forcing (observing) that a generated sample (Samples) is sufficiently similar to a given data set (Data). The Scale parameter scales the limit. The Methods parameter can include:
  - mean
  - stdev
  - variance
  - median
  - quantiles: the common ones q10, q25, q50, q75, q90, as well as $q(N) where N is a number from 1..99. Note: these q/1 variants must be $-escaped.
  - moments: raw moments ($moment(K)), central moments ($central_moment(K)), and standardized moments ($standardized_moment(K)). Note: all of these must be $-escaped.
The default is mean and stdev. The (experimental) helper function [Methods,Explain]=recommend_abc_stats_explained(Data) might give some useful hints about which statistics to use.
For some (especially discrete) datasets the standard deviation might be very small (sometimes even 0). Therefore this function checks for this and gives a warning for standard deviations less than 0.01.
It should be noted (nay, emphasized) that some of this should be considered experimental (also: the user might have to experiment to get good enough results), and it's probably not as good as the PPL systems that focus on these tasks. See more on this in the BDA / parameter recovery section below.
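The core acceptance test behind observe_abc can be sketched as follows (a simplified Python sketch assuming the defaults mean and stdev, with the data's standard deviation as the base limit):

```python
from statistics import mean, pstdev

def abc_accept(data, sample, scale=1.0):
    # Accept the generated sample if its mean and standard deviation are
    # both within scale * stdev(data) of the data's own mean and stdev.
    limit = scale * pstdev(data)
    return (abs(mean(data) - mean(sample)) <= limit and
            abs(pstdev(data) - pstdev(sample)) <= limit)

data = [10, 12, 14, 16, 18]
print(abc_accept(data, [11, 13, 15, 15, 17]))  # similar stats: accepted
print(abc_accept(data, [40, 41, 42, 43, 44]))  # mean far off: rejected
```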
And - of course - most of Picat's other features can be used in a Picat PPL model such as:
- functions and procedures
- foreach and while loops
- list comprehensions
- etc
It is often a matter of style whether to use recursive functions (the style of Gamble), Prolog-style recursion, or foreach/while loops to implement a model. Personally I tend to play with different styles/approaches.
Warning: The use of Picat's nondeterministic features (such as member/2) in a model might not work as expected and should be used with care: the Picat PPL engine assumes that models run deterministically apart from the randomness of the supported random functions. However, using member/2 and fail/0 outside a model - such as before/after run_model/2-3 - is very useful for running several experiments with different parameters.
Options to run_model
For the output of a Picat PPL model there are several different options. They are listed in the third Options parameter of run_model(NumRuns,Model,Options).
- show_probs: shows the probabilities for each random variable, in decreasing order of probability. Variant: show_probs_trunc reduces the listing of the probabilities to a smaller number of values, by default the largest 4 and the smallest 4; this can be set with the option truncate_size=.... show_probs_rat and show_probs_rat_trunc: these two variants reduce the probabilities to rational numbers with small numerator and denominator.
- mean: shows the mean value of the random variable.
- variance: shows the variance of the random variable.
- show_percentiles: shows the percentiles for the random variable. The default percentile values are [0.001,0.01,0.025,0.05,0.10,0.25,0.50,0.75,0.84,0.90,0.95,0.975,0.99,0.999,0.9999,0.99999], but they can be set with the option percentiles=[...].
- show_hpd_intervals: shows the HPD intervals for the random variables. Default: [0.5,0.84,0.90,0.95,0.99,0.999,0.9999,0.99999], but they can be set with the option hpd_intervals=[...].
- show_histogram: shows a histogram for the random variables. If the number of unique values is larger than histogram_limit (default 40), then the program generates fewer bins.
- show_summary_stats: shows a variety of statistics for the random variable: count, min, max, sum, mean, variance (population), standard deviation (population), variance (sample), standard deviation (sample), skewness, kurtosis, median, Q1, Q3.
- show_simple_stats: shows a simpler list of statistics as a list: length, min, median, max, variance, standard deviation.
- show_scatter_plot: shows a simple ASCII scatter plot of the data. Note: this requires numeric data. Configuration: scatter_plot_size=[Width,Height].
- random_seed: sets the random seed for the random generation. Default: there is no fixed random seed; each run starts with the Picat function random2() to get a starting random seed.
- min_accepted_samples=MinSamples: a special parameter that ignores the number of runs given by the first parameter of run_model. Instead it generates samples until there are MinSamples accepted samples. For debugging, there is also an option show_accepted_samples=true that simply prints the number of accepted samples so far. Note: this is only useful when using observe (and observed_ok), since without observations all samples are accepted.
- use_local_tabling: (default false). If true: for each run, call Picat's initialize_table to clear the table area. This can be used to memoize recursive function calls "locally" when one function calls another function which in turn calls the first function. Note: this might have some effect on other parts of Picat PPL, so it should be used with care and is to be considered experimental. See ppl_hurricane.pi and ppl_italian_murder.pi (go3/0) for some examples.
Here is a simple model with many of these options activated:
go =>
reset_store,
run_model(10_000,$model,[show_probs_trunc,truncate_size=6,
mean,variance,
show_percentiles,
show_hpd_intervals,hpd_intervals=[0.84,0.94],
show_histogram,
show_summary_stats]),
nl.
model() =>
X = poisson_dist(10),
add("x",X).
A sample output running this model:
var : x
Probabilities (truncated):
9: 0.1291000000000000
10: 0.1265000000000000
8: 0.1115000000000000
11: 0.1092000000000000
12: 0.0974000000000000
7: 0.0880000000000000
.........
20: 0.0019000000000000
21: 0.0014000000000000
23: 0.0002000000000000
1: 0.0002000000000000
25: 0.0001000000000000
22: 0.0001000000000000
mean = 10.0202
variance = 9.97979
Percentiles:
(0.001 2)
(0.01 4)
(0.025 4)
(0.05 5)
(0.1 6)
(0.25 8)
(0.5 10)
(0.75 12)
(0.84 13)
(0.9 14)
(0.95 15)
(0.975 17)
(0.99 18)
(0.999 21)
(0.9999 23.000199999998586)
(0.99999 24.800020000002405)
HPD intervals:
HPD interval (0.84): 6.00000000000000..14.00000000000000
HPD interval (0.94): 4.00000000000000..15.00000000000000
Histogram (total 10000)
1: 2 (0.000 / 0.000)
2: 20 # (0.002 / 0.002)
3: 73 ### (0.007 / 0.009)
4: 202 ######### (0.020 / 0.030)
5: 388 ################## (0.039 / 0.069)
6: 595 ############################ (0.059 / 0.128)
7: 880 ######################################### (0.088 / 0.216)
8: 1115 #################################################### (0.112 / 0.328)
9: 1291 ############################################################ (0.129 / 0.457)
10: 1265 ########################################################### (0.127 / 0.583)
11: 1092 ################################################### (0.109 / 0.692)
12: 974 ############################################# (0.097 / 0.790)
13: 737 ################################## (0.074 / 0.863)
14: 543 ######################### (0.054 / 0.918)
15: 346 ################ (0.035 / 0.952)
16: 201 ######### (0.020 / 0.972)
17: 128 ###### (0.013 / 0.985)
18: 67 ### (0.007 / 0.992)
19: 44 ## (0.004 / 0.996)
20: 19 # (0.002 / 0.998)
21: 14 # (0.001 / 1.000)
22: 1 (0.000 / 1.000)
23: 2 (0.000 / 1.000)
25: 1 (0.000 / 1.000)
Summary statistics
Count: 10000
Min: 1
Max: 25
Sum: 100202
Mean: 10.020200000000024
Variance (pop): 9.9797919599999876
Std dev (pop): 3.1590808726590063
Variance (sample): 9.9807900390038888
Std dev (sample): 3.1592388385501797
Skewness (unbiased): 0.0031929254256123467
Kurtosis excess: -3.0005891334657218
Median: 10
Q1 (25%): 8
Q3 (75%): 12
"observe as a constraint" / "reversibility"
I put these terms in quotes since I don't want to draw the similarity between probabilistic programming and logic programming/constraint modeling too far. But there are some similarities between these two paradigms, which are some of the reasons that I like them both.
Let's take "observe as a constraint" first. The role of a constraint in constraint programming is to reduce the domains of some decision variables. Similarly, in probabilistic programming, an observation is a way to force the system to set a random variable to a certain value, or to a range of values.
"Reversibility" is a concept from logic programming which means that one can use the variables in a predicate in several ways, i.e. there is no fixed direction of the variables: they can be input variables as well as output variables. One of the best examples of this reversibility in Picat (and Prolog) is the non-deterministic member(Element,List) predicate:
- member(X,[1,2,3,4]) will non-deterministically generate the unification of X with 1, 2, 3, and 4.
- member(2,[1,2,3,4]) will check if the element 2 is a member of the list [1,2,3,4] (which it is, so this example succeeds).
- L = new_list(4), member(3,L) will (non-deterministically) create 4 lists with 3 as an element in the first, second, third, and fourth position respectively.
In probabilistic programming there is a similar idea: there is no fixed direction of the probabilistic function. For example, in X = binomial_dist(N,P), all three variables can be fixed or be random variables themselves. (Note: for continuous probability distributions, e.g. X = normal_dist(Mu,Sigma), we cannot simply state this with =; instead one has to use some acceptable interval which defines how "similar" X is, for example abs(X-normal_dist(Mu,Sigma)) <= 0.001.)
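This reversed use of binomial_dist can be sketched with rejection sampling in Python (a hypothetical setup: N unknown with a uniform prior on 1..20, P fixed at 0.5, and X observed to be 5):

```python
import random

def posterior_n(observed_x=5, p=0.5, runs=50_000, seed=7):
    # Treat N as the unknown: draw N from its prior, run the binomial
    # "forward", and keep only the Ns that reproduce the observation.
    rng = random.Random(seed)
    accepted = []
    for _ in range(runs):
        n = rng.randint(1, 20)                       # prior on N
        x = sum(rng.random() < p for _ in range(n))  # X = binomial_dist(n,p)
        if x == observed_x:                          # observe(X == 5)
            accepted.append(n)
    return accepted

ns = posterior_n()
print(min(ns), max(ns))  # every accepted N is at least 5
```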
A simple example of both these features (observation as constraint and reversibility) is this simple problem: how to identify a person's sex by their height. This is a slightly altered model from ppl_gender_height.pi.
model() =>
ObservedHeight = 180, % The observed value, 180 cm.
% Gender is even distributed
Gender = uniform_draw(["Male","Female"]),
% Define the height from the gender
% From https://en.wikipedia.org/wiki/List_of_average_human_height_worldwide
% Here are the values for Sweden.
Height = cond(Gender == "Male",
normal_dist(181.5,sqrt(50)), % in cm
normal_dist(166.8,sqrt(50))),
% The constraint: Observe a height (interval +/- 1.0)
observe(abs(Height-ObservedHeight) <= 1.0),
% If the obervation ("constraint") is successful (satisfied), add it
% to the store
if observed_ok then
add("gender",Gender)
% add("height",Height)
end.
The output of this model:
observedHeight: 180 (cm)
var : gender
Probabilities:
Male: 0.8294314381270903
Female: 0.1705685618729097
Which means that there is a probability of about 83% that this person is a male.
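Since this model is simple enough, the sampled result can be checked against an exact Bayes computation (a Python sketch using the same height parameters as the model; with equal priors the posterior is just the ratio of the two normal densities):

```python
from math import exp, sqrt, pi

def normal_pdf(x, mu, sigma):
    # Density of the normal distribution
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

sigma  = sqrt(50)
male   = normal_pdf(180, 181.5, sigma)   # height density for males
female = normal_pdf(180, 166.8, sigma)   # height density for females
p_male = male / (male + female)          # equal priors cancel out
print(p_male)  # about 0.85, in line with the sampled ~0.83
```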
We could also use the same model - with some simple changes - to observe that this is a female (and then male), and then get the ranges of probable heights for the specific gender. This is model2 in the same program (in go2/0).
model2(ObservedGender) =>
Gender = uniform_draw(["Male","Female"]),
Height = cond(Gender == "Male",
normal_dist(181.5,sqrt(50)), % in cm
normal_dist(166.8,sqrt(50))),
observe(Gender == ObservedGender),
if observed_ok then
add("gender",Gender),
add("height",Height)
end.
The output is:
observedGender: Male
var : gender
Probabilities:
Male: 1.0000000000000000
mean = [Male = 1.0]
var : height
Probabilities (truncated):
208.085784765518241: 0.0001964250638381
204.926380749458076: 0.0001964250638381
204.779039077674639: 0.0001964250638381
204.243852168352419: 0.0001964250638381
.........
158.284888115629229: 0.0001964250638381
157.440393568938504: 0.0001964250638381
155.607331346145116: 0.0001964250638381
153.755536221383153: 0.0001964250638381
mean = 181.399
observedGender: Female
var : gender
Probabilities:
Female: 1.0000000000000000
mean = [Female = 1.0]
var : height
Probabilities (truncated):
191.013584387093658: 0.0001980590215884
187.854280695754255: 0.0001980590215884
187.74447212062779: 0.0001980590215884
187.658912843387753: 0.0001980590215884
.........
143.750190582955781: 0.0001980590215884
143.295663612388978: 0.0001980590215884
142.903471658505879: 0.0001980590215884
141.829663018950583: 0.0001980590215884
mean = 166.841
Here are some other fun height-related problems:
To conclude, there are quite a few similarities between constraint programming (CP) and probabilistic programming (PP):
- both define allowed domains for their variables (random variables and constraint variables)
- both have a (pre)defined set of high-level functions that interact with these variables
- both have a way to reduce/constrain the variables: observe in Picat PPL, the constraints in CP
- both support "reversible thinking", i.e. using the variables as input and/or output
- in some ways the thinking is the same, but in other ways they are quite different
- and: both are very fun to work with!
Using Picat PPL for Bayesian Data Analysis / Recovering distribution parameters
One use case - and in certain research areas the most important one - for a probabilistic programming system
is to do Bayesian Data Analysis (BDA) on a
dataset, for example to recover the parameters of a probability distribution in a model given
a dataset to observe (using observe). As mentioned above, Picat PPL is in general not especially
good at this. (I should mention that I'm not very much into BDA,
and use it just for recreational/educational purposes.)
The reason is that the Picat PPL mechanism for observe inference
is simply too simple to handle large data sets with its rejection sampling method.
However, there are some tricks that might give some results, at least for
recreational/educational purposes.
One thing that might work - and can actually be quite fast - is to use the ABC (Approximate Bayesian Computation) inspired observation method observe_abc(Data,Sample), which restricts the difference between the mean and standard deviation (or some other statistical measure) of the data and the generated sample. Here's an example using this (from ppl_gumbel_recover.pi). Things to notice:
- observe( abs(Mean-YMean) < Stdev) and observe( abs(Stdev-YStdev) < Stdev)
- The limit used (here Stdev) might have to be adjusted to be wider or narrower depending on time, accuracy, etc. Starting with the standard deviation seems to be a good rule of thumb.
- observe_abc(Data,Sample) and observe_abc(Data,Sample,Scale) are two convenience procedures that do this. The Scale parameter might be tweaked to get a better (smaller or wider) acceptable error interval. See more on this above.
- min_accepted_samples=1000, show_accepted_samples=true: ensure that 1000 accepted samples are generated (and print a message each time a sample is accepted).
go ?=>
Data = gumbel_dist_n(12,3,100),
println(data=Data),
println([data_len=Data.len,mean=Data.mean,variance=Data.variance,stdev=Data.stdev]),
reset_store,
run_model(10_000,$model(Data),[show_probs_trunc,mean,
show_hpd_intervals,hpd_intervals=[0.84],
min_accepted_samples=1000,show_accepted_samples=true
]),
nl,
nl.
go => true.
model(Data) =>
Len = Data.len,
Mean = Data.mean,
Stdev = Data.stdev,
Variance = Data.variance,
A = uniform(1,20),
B = uniform(1,20),
% A = normal_dist(Mean,4),
% B = normal_dist(Mean,4),
Y = gumbel_dist_n(A,B,Len),
YMean = Y.mean,
YStdev = Y.stdev,
% observe( abs(Mean-YMean) < Stdev),
%observe( abs(Stdev-YStdev) < Stdev),
% Using the convenience function
observe_abc(Data,Y),
Post = gumbel_dist(A,B),
if observed_ok then
add("a",A),
add("b",B),
add("post",Post)
end.
Output of the means from a sample run:
a: 12.686
b: 3.28662
post: 10.8731
This took about 4s on my machine.
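For comparison, the same ABC-style recovery loop can be sketched in plain Python (an illustrative re-implementation, not the Picat code; the Gumbel samples are drawn by inversion, and the priors and limits follow the model above):

```python
import random
from math import log
from statistics import mean, pstdev

def gumbel(a, b, rng):
    # Inverse-CDF sampling for the Gumbel (max) distribution
    return a - b * log(-log(rng.random()))

def recover(data, runs=10_000, seed=3):
    # ABC-style rejection: draw (a,b) from the priors, generate a sample
    # of the same size, and accept if mean and stdev are close enough.
    rng = random.Random(seed)
    limit = pstdev(data)                 # rule of thumb: the data's stdev
    acc_a, acc_b = [], []
    for _ in range(runs):
        a, b = rng.uniform(1, 20), rng.uniform(1, 20)   # priors
        y = [gumbel(a, b, rng) for _ in data]
        if (abs(mean(data) - mean(y)) < limit and
                abs(pstdev(data) - pstdev(y)) < limit):
            acc_a.append(a)
            acc_b.append(b)
    return mean(acc_a), mean(acc_b)

rng = random.Random(1)
data = [gumbel(12, 3, rng) for _ in range(100)]
print(recover(data))  # rough estimates of a=12 and b=3
```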
The option min_accepted_samples=MinSamples (see above) might be helpful for forcing a minimum number of accepted samples (instead of playing with different values for the number of runs). This will not yield a faster solution, but it might save some time when testing the model, since by changing its parameters - such as the scale or the statistical measure methods - it can be made faster.
Here are some more models using this technique for BDA/parameter recovery problems:
Note: For discrete models, two things are worth considering:
- The standard deviation might be 0. In that case a pseudo standard deviation is useful:
Stdev = max(1,Data.stdev)
- The observations should probably be done with =< instead of <:
observe(abs(Mean-YMean) =< Stdev)
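A minimal sketch of this discrete-case tweak (in Python for illustration; abc_accept is a hypothetical name, not a Picat PPL procedure):

```python
import statistics

def abc_accept(data, sample):
    """ABC-style acceptance test for discrete data: guard against a zero
    standard deviation with max(1, stdev), and compare with <= rather
    than a strict <."""
    stdev = max(1, statistics.pstdev(data))  # pseudo standard deviation
    return (abs(statistics.mean(data) - statistics.mean(sample)) <= stdev
            and abs(statistics.pstdev(data) - statistics.pstdev(sample)) <= stdev)

# With constant data the plain stdev is 0, so a strict '<' would reject
# even a perfect match; the pseudo stdev and '<=' keep close samples.
print(abc_accept([4, 4, 4, 4], [3, 4, 4, 5]))  # True
```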
Here are some discrete models using this "ABC inspired" method:
If that does not work, there might be some other things that help. For models with a large domain, one can widen the acceptable interval and see how it works. For example, change observe(normal_dist(Mu,Sigma)-Data[I] =< 0.1) to observe(normal_dist(Mu,Sigma)-Data[I] =< 1).
If possible/applicable, sorting the dataset (as well as the generated samples) might also give better results.
Another - and perhaps the last - tip is to drastically reduce the dataset to, say, the first 3 or 4 data points. By selecting these points carefully - for example just the smallest, middle, and largest values in the dataset - this might even give fairly good results.
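Both tips can be sketched as small helpers (hypothetical Python illustrations; sorted_pointwise_ok and key_points are not Picat PPL functions): sorting before comparing pairs like with like, and reducing a dataset to its smallest, middle, and largest values keeps the most informative points.

```python
def sorted_pointwise_ok(data, sample, tol):
    """Compare sorted data against a sorted synthetic sample point by
    point, so the smallest is matched against the smallest, etc."""
    return all(abs(d - s) <= tol
               for d, s in zip(sorted(data), sorted(sample)))

def key_points(data):
    """Drastically reduce a dataset to three carefully chosen points:
    the smallest, middle, and largest values."""
    s = sorted(data)
    return [s[0], s[len(s) // 2], s[-1]]

print(key_points([5, 1, 9, 3, 7]))  # [1, 5, 9]
```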
For more advanced use of BDA, it is definitely recommended to use systems such as PyMC, Stan, Turing.jl (see my Turing.jl page), etc. Also, Gamble and WebPPL are often better than Picat PPL for these BDA tasks, even if they are not really designed for large datasets. See my Gamble and WebPPL pages for some examples.
Picat PPL models/files
Here are some Picat PPL models. Most of them are ported from my Gamble models.
These Picat PPL files are available in the zip file all_public.zip.
- ppl_6_digit_numbers.pi: 6 digit numbers (Blom et.al)
- ppl_7_deaths_in_one_month.pi: 7 deaths in one month (Fenton)
- ppl_8_boys_and_2_girls.pi: 8 boys and 2 girls
- ppl_8_schools.pi: 8 schools (BDA problem)
- ppl_24_game.pi: 24 game
- ppl_100_coins.pi: 100 coins (Litt)
- ppl_100_heads_in_a_row.pi: 100 heads in a row (van Jouanne-Diedrich)
- ppl_a_marble_chance_puzzle.pi: A marble chance puzzle (Cole Frederick)
- ppl_ab_testing.pi: A/B testing (Bååt)
- ppl_ab_testing2.pi: A/B testing
- ppl_ab_test_simple.pi: A/B test simple (Bååt)
- ppl_ab_test_simple2.pi: A/B test simple (Bååt)
- ppl_abc_test.pi: ABC test (test of observe_abc/2-4)
- ppl_actors_award.pi: Actor's Award (Figaro)
- ppl_adding_to_7.pi: Adding to 7 (Wood)
- ppl_aircraft_position.pi: Aircraft position (BLOG)
- ppl_aircraft_static.pi: Aircraft static (BLOG)
- ppl_alarm_multi.pi: Alarm multi (ProbLog)
- ppl_alarm.pi: Alarm problem (AIMA)
- ppl_alien_extinction_riddle.pi: Alien extinction riddle (MindYourDecisions)
- ppl_all_girls_world.pi: All Girls World? (BrainStellar)
- ppl_always_take_the_middle_taxi.pi: Always take the middle taxi (JaneStreet)
- ppl_an_experiment_in_personal_taste_for_money.pi: An experiment in personal taste for money (Mosteller)
- ppl_animal_population.pi: Animal population
- ppl_appears_tall.pi: Appears tall (ProbLog)
- ppl_ar1.pi: AR(1) (Stan)
- ppl_area_under_normal_curve.pi: Area under Normal curve (Statistics101 Resampling)
- ppl_ascii_plot.pi: ASCII Scatter plot
- ppl_asia.pi: Asia (Bayesian network)
- ppl_bag_of_marbles.pi: Bag of marbles (De Raedt)
- ppl_ball_box.pi: Ball box (ProbLog)
- ppl_ball_entering_q.pi: Ball entering Q
- ppl_ball_selection_game.pi: Ball selection game
- ppl_banachs_match_box_problem.pi: Banach's Match box problem (Blom et.al.)
- ppl_bar_visiting.pi: Bar visiting
- ppl_baseball_payroll.pi: Baseball payroll (Resampling Stats)
- ppl_battalion.pi: Battalion (BLOG)
- ppl_battery_comparison.pi: Battery comparison (Resampling Stats)
- ppl_bayesian_linear_regression.pi: Bayesian linear regression
- ppl_bayesian_network.pi: Bayesian Network (ProbLog)
- ppl_bayesian_null_hypothesis_test.pi: Bayesian null hypothesis test (WebPPL)
- ppl_bayes.pi: Bayes (Resampling Stats)
- ppl_bda.pi: BDA (Bayesian Data Analysis) (WebPPL)
- ppl_bda2.pi: BDA (WebPPL)
- ppl_bda3.pi: BDA3 (WebPPL)
- ppl_bda_presidential_election.pi: BDA Presidential election (ProbMods)
- ppl_beaver_fever.pi: Beaver Fever (madeofmistak3)
- ppl_benford_dist.pi: Benford dist
- ppl_bernoulli_test.pi: Bernoulli test
- ppl_bertrands_paradox.pi: Bertrand's paradox
- ppl_bertrands_paradox_resampling.pi: Bertrand's paradox (Resampling Stats)
- ppl_beta_binomial_dist.pi: Beta binomial dist
- ppl_beta_binomial_recover.pi: Beta binomial recover
- ppl_beta_binomial_urn_model.pi: Beta binomial urn model
- ppl_beta_comparison.pi: Beta comparison (infer.net)
- ppl_biased_coin.pi: Biased coin (cplint)
- ppl_binomial_basketball.pi: Binomial basketball (Mathematica)
- ppl_binomial_coin.pi: Binomial coin (Mathematica)
- ppl_binomial_dice2.pi: Binomial dice (Mathematica)
- ppl_binomial_dist.pi: Binomial distribution
- ppl_binomial_process.pi: Binomial process distribution
- ppl_binomial_trial_count.pi: Binomial trial count (infer.net)
- ppl_birthday.pi: Birthday (BLOG)
- ppl_birthday_dist.pi: Birthday distribution (R)
- ppl_birthday2.pi: Birthday (PSI)
- ppl_birthday4.pi: Birthday paradox
- ppl_birthday_coincidence.pi: Birthday coincidence
- ppl_birth_death_model.pi: Birth-death model
- ppl_book_bags.pi: Book bags (Netica)
- ppl_book_sorting_puzzle.pi: Book sorting puzzle (Saxena)
- ppl_brain_twister_44_dice_and_cards.pi: Brain twister #44: Dice and cards
- ppl_breaking_a_stick_in_three_pieces.pi: Breaking a stick in three pieces
- ppl_break_a_stick_in_two.pi: Break a stick in two (Julian Simon)
- ppl_breaking_stick.pi: Breaking stick (BrainStellar)
- ppl_brian_ate_pizza_last_night.pi: Brian ate pizza last night (Pfeffer)
- ppl_bridge.pi: Bridge (Resampling Stats)
- ppl_bugs_book_2_1_2.pi: BUGS book, 2.1.2
- ppl_bugs_book_2_3_1.pi: BUGS book, 2.3.1
- ppl_bugs_book_2_4_1.pi: BUGS book, 2.4.1
- ppl_bugs_book_2_5_1.pi: BUGS book, 2.5.1
- ppl_bugs_book_2_6_1.pi: BUGS book, 2.6.1
- ppl_bugs_book_2_7_1.pi: BUGS book, 2.7.1
- ppl_bugs_book_2_7_2.pi: BUGS book, 2.7.2
- ppl_bugs_book_3_3_2.pi: BUGS book, 3.3.2
- ppl_bugs_book_3_3_3.pi: BUGS book, 3.3.3
- ppl_bugs_book_3_4_1.pi: BUGS book, 3.4.1
- ppl_bugs_book_3_4_1b.pi: BUGS book, 3.4.1b
- ppl_bugs_book_3_5_1.pi: BUGS book, 3.5.1
- ppl_bullets_of_fate.pi: Bullets of fate
- ppl_burglary_multihouse.pi: Burglary multihouse (BLOG)
- ppl_caesar.pi: Simple Caesar cipher (Dice)
- ppl_car_caravans.pi: Car caravans (Glick)
- ppl_car_in_box.pi: Car in box
- ppl_card_draw_problem.pi: Card draw problem
- ppl_card_problem.pi: Card problem
- ppl_cards.pi: Cards
- ppl_casino_puzzle.pi: Casino puzzle (Sarwar)
- ppl_cat.pi: Cat problem
- ppl_cats_rats_and_elephants.pi: Cats, rats, and elephants (Downey)
- ppl_cauchy_dist.pi: Cauchy distribution
- ppl_causal_model1.pi: Causal model
- ppl_changepoint.pi: Changepoint
- ppl_cdf_from_data.pi: CDF from data
- ppl_cheating.pi: Cheating
- ppl_cheating2.pi: Cheating (Davidson-Pilon)
- ppl_chess_tournament.pi: Chess tournament (BrainStellar)
- ppl_chi_dist.pi: Chi distribution
- ppl_chi_squared_dist.pi: Chi^2 distribution
- ppl_chi_squared_inverse_dist.pi: Chi squared inverse
- ppl_chi_squared_test.pi: Chi square test
- ppl_chicken_pecking.pi: Chicken pecking (PSI)
- ppl_chinese_restaurant_process.pi: Chinese restaurant process
- ppl_chinese_restaurant_process2.pi: Chinese restaurant process, including the crp_dist probability distribution
- ppl_chuck_a_luck.pi: Chuck-a-luck (Mosteller)
- ppl_clan_size.pi: Clan size (BrainStellar)
- ppl_click_graph.pi: Click graph (PSI)
- ppl_clinical_trial.pi: Clinical trial (infer.net)
- ppl_clinical_trial_r2.pi: Clinical trial (R2)
- ppl_cloud_duration.pi: Cloud duration (Mathematica)
- ppl_cluster_watching_birth_days.pi: Cluster watching birth days (Brignell)
- ppl_cluster_watching_birth_months.pi: Cluster watching birth months (Brignell)
- ppl_coin_bias.pi: Coin bias (R2)
- ppl_coin_competition.pi: Coin competition (Bercker)
- ppl_coin_flip_game.pi: Coin flip game (Ogborn)
- ppl_coin_flip_probability_independent_or_not.pi: Coin flip probability independent or not
- ppl_coin_flips.pi: Coin flips (pedrozudo)
- ppl_coin_flips2.pi: Coin flips (Pickover)
- ppl_coin_hh_vs_ht.pi: Coin HH vs HT (Litt)
- ppl_coin_paradox.pi: Coin paradox
- ppl_coins_in_sequences.pi: Coins in sequences
- ppl_coins_learning.pi: Coins learning (ProbLog)
- ppl_coins_learning2.pi: Coins learning (ProbLog)
- ppl_coin_tosses.pi: Coin tosses
- ppl_coin_toss.pi: Coin toss (Stan)
- ppl_coincidences.pi: Coincidences (Diaconis, Mosteller)
- ppl_coincidences2.pi: Coincidences (Diaconis, Mosteller)
- ppl_collecting_lucky_coupons.pi: Collecting lucky coupons (BrainStellar)
- ppl_color_of_the_taxi.pi: Color of the taxi (Kahneman, et.al)
- ppl_colored_runs_of_cards.pi: Colored runs of cards (BrainStellar)
- ppl_colored_switches.pi: Colored switches (BrainStellar)
- ppl_comparing_two_proportions.pi: Comparing two proportions (Resampling Stats)
- ppl_cond_exponential.pi: Cond exponential (PSI)
- ppl_consecutive_heads.pi: Consecutive heads (BrainStellar)
- ppl_consecutive_numbers_in_lotto_ticket.pi: Consecutive numbers in Lotto ticket
- ppl_continuous_weight.pi: Continuous weight (Church)
- ppl_cookie_problem.pi: Cookie problem (Downey)
- ppl_correct_given_intelligence.pi: Correct given intelligence
- ppl_coupon_collector_probability.pi: Coupon collector problem (distribution)
- ppl_coupon_collectors2.pi: Coupon collectors
- ppl_coupon_collectors3.pi: Coupon collectors
- ppl_coupon_collectors_problem.pi
- ppl_covid_prob.pi: Covid prob (Hua)
- ppl_craps.pi: Craps
- ppl_craps2.pi: Craps (a more elaborate model)
- ppl_crazy_postman.pi: Crazy postman (BrainStellar)
- ppl_csi.pi: CSI, Context-Specific Independence (BLOG)
- ppl_cycling_time.pi: Cycling time (infer.net)
- ppl_dartboard.pi: Dartboard (Resampling Stats)
- ppl_daughter_or_son.pi: Daughter or son (BrainStellar)
- ppl_derangements.pi: Derangements (and matching distribution)
- ppl_dice.pi: Dice problem (Gamble)
- ppl_dice_6_throws.pi: Dice 6 throws (cplint)
- ppl_dice_6_throws2.pi: Dice 6 throws (cplint)
- ppl_dice_6_throws3.pi: Dice until a 6 (Mosteller)
- ppl_dice_game.pi: Dice game (Informs)
- ppl_dice_minimum_valie_of_four_dice.pi: Dice: minimum (and maximum) value of 4 dice
- ppl_dice_problem.pi: Dice problem (Downey)
- ppl_dice_puzzle.pi: Dice puzzle (de Mere)
- ppl_dice_with_reroll.pi: Dice with reroll
- ppl_difference_dice.pi: Difference dice
- ppl_dirichlet_dist.pi: Dirichlet distribution
- ppl_discarding_n_cards_in_poker.pi: Discarding n cards in Poker
- ppl_discrete_uniform_dist.pi: Discrete uniform dist
- ppl_discrete_markov_process.pi: Discrete Markov process
- ppl_discrete_markov_process_biased_coin.pi: Discrete Markov process: Biased coin (Mathematica)
- ppl_discrete_markov_process_gamblers_ruin.pi: Discrete Markov process: Gambler's ruin (Mathematica)
- ppl_discrete_markov_process_marsian_messages.pi: Discrete Markov process - Martian messages (Mathematica)
- ppl_disease_infection.pi: Disease infection (SPPL)
- ppl_distinct_number_draws.pi: Distinct number draws (BrainStellar)
- ppl_distinct_six_dice.pi: Distinct six dice
- ppl_distributions.pi: Distributions: The main module for the probability distributions
- ppl_distributions_test.pi: Test of ppl_distributions.pi
- ppl_doomsday.pi: Doomsday
- ppl_drug_trial_evaluation.pi: Drug trial evaluation (PyMC)
- ppl_drunk_ant.pi: Drunk ant (BrainStellar)
- ppl_drunk_man_and_keys_problem.pi: Drunk man and keys problem
- ppl_drunk_passenger.pi: Drunk passenger (BrainStellar)
- ppl_duck_hunter_problem.pi: Duck hunter problem (Siegrist)
- ppl_duelling_cowboys.pi: Duelling cowboys (Katoen)
- ppl_ehrenfest_urn_scheme.pi: Ehrenfest urn scheme
- ppl_either_a_spade_or_and_ace.pi: Either a spade or an ace (Resampling Stats)
- ppl_election.pi: Election (SPPL)
- ppl_empty_boxes.pi: Empty boxes (Molignini)
- ppl_erlang_dist.pi: Erlang distribution
- ppl_euro_coin_problem.pi: Euro coin problem (Downey)
- ppl_euro_coin_problem_unreliable_measurements.pi: Euro coin problem unreliable measurements (Downey)
- ppl_expected_breakup_length.pi: Expected breakup length (BrainStellar)
- ppl_exponential_dist.pi: Exponential distribution
- ppl_exponential_inverse_dist.pi: Inverse exponential dist (only random generation)
- ppl_extreme_value_dist.pi: Extreme value distribution (Mathematica)
- ppl_extreme_value_test.pi: Extreme value distribution, some tests
- ppl_extreme_value_maximum_wind_speed.pi: Extreme value distribution: Extreme wind speed (Mathematica)
- ppl_extreme_value_test.pi: Extreme value test (Mathematica)
- ppl_extreme_value_test2.pi: Extreme value test (Keating)
- ppl_fair_coin_from_a_biased_coin.pi: Fair coin from a biased coin (von Neumann, cplint)
- ppl_fair_coin_tosses.pi: Fair coin tosses (Mcnulty)
- ppl_fair_coin2.pi: Fair coin (Resampling Stats)
- ppl_fair_game.pi: Fair coin game (BL)
- ppl_fairness_hiring_model1.pi: Fairness hiring model 1 (SPPL)
- ppl_fairness_hiring_model2.pi: Fairness hiring model 2 (SPPL)
- ppl_fairness_income_model.pi: Fairness income model (SPPL)
- ppl_false_coin.pi: False coin (Winkler)
- ppl_false_coin2.pi: False coin (Winkler)
- ppl_family_out_problem.pi: Family out problem (Charniak)
- ppl_father_of_lies.pi: Father of lies (BrainStellar)
- ppl_firing_squad.pi: Firing squad (Pearl)
- ppl_firings.pi: Firings (Resampling Stats)
- ppl_five_coins.pi: Five coins
- ppl_five_qualities.pi: Five qualities
- ppl_flights_noshow.pi: Flights noshow (Statistics101 Resampling)
- ppl_flipping_coins_until_pattern.pi: Flipping until pattern (Litt)
- ppl_flipping_three_coins.pi: Flipping three coins (Resampling Stats)
- ppl_football_bet_simple.pi: Football bet simple (Netica)
- ppl_four_cards.pi: Four cards (Bar-Hilel, Ruma Falk)
- ppl_four_dice.pi: Four dice
- ppl_four_girls_and_one_boy.pi: Four girls and one boy (Resampling Stats)
- ppl_frechet_dist.pi: Frechet distribution (Mathematica)
- ppl_frechet_windspeed.pi: Frechet wind speed (Mathematica)
- ppl_frustration_patience.pi: Frustration patience (Grinstead, Snell)
- ppl_galaxy.pi: Galaxy (BLOG)
- ppl_galileos_dice.pi: Galileo's dice (Mathematica)
- ppl_game_of_ur_problem.pi: Game of Ur problem (Downey)
- ppl_game_of_ur_problem2.pi: Game of Ur problem (Downey)
- ppl_game_show_problem.pi: Game show problem
- ppl_gamma_dist.pi: Gamma distribution
- ppl_gamma_dist_test.pi: Gamma dist test
- ppl_gaussian_dist.pi: Gaussian (Normal) distribution
- ppl_gaussian_mixture_model.pi: Gaussian mixture model
- ppl_gaussian_mixture_model2.pi: Gaussian mixture model
- ppl_gender_height.pi: Gender height
- ppl_generalized_extreme_value_dist.pi: Generalized extreme value dist
- ppl_generalized_extreme_value_maximum_wind_speed.pi: Generalized extreme value maximum wind speed (Mathematica)
- ppl_geometric_cereal_box.pi: Geometric cereal box (Mathematica)
- ppl_geometric_coin.pi: Geometric coin (Mathematica)
- ppl_geometric_counting_cars.pi: Geometric counting cars (Mathematica)
- ppl_geometric_dist.pi: Geometric distribution
- ppl_german_tank_problem.pi: German tank problem
- ppl_girl_births.pi: Girl births (Gelman et.al)
- ppl_grass.pi: Grass (R2)
- ppl_greed_for_an_ace.pi: Greed for an ace (BrainStellar)
- ppl_growth_in_yeast_culture.pi: Growth in Yeast culture
- ppl_guess_the_toss.pi: Guess the toss (BrainStellar)
- ppl_gumbel_dist.pi: Gumbel distribution
- ppl_gumbel_earthquake.pi: Gumbel: parameter recover for earthquakes (Mathematica)
- ppl_gumbel_minimum_daily_flows.pi: Gumbel distribution: Minimum daily flow (Mathematica)
- ppl_gumbel_recover.pi: Gumbel distribution: recover parameters
- ppl_half_time.pi: Half time (BrainStellar)
- ppl_handedness.pi: Handedness (infer.net)
- ppl_healthiness.pi: Healthiness (BLOG)
- ppl_heart_disease_regression.pi: Heart disease regression (Hugin)
- ppl_hedge_fund_managers.pi: Hedge fund managers (Mathematica)
- ppl_heights.pi: Heights (Winn, Minka)
- ppl_highest_dice_roller.pi: Highest dice roller
- ppl_hiring_young_or_old.pi: Hiring young or old
- ppl_histogram_test.pi: Histogram test
- ppl_hmm_weather.pi: HMM weather (ProbLog)
- ppl_holmes_clock_problem.pi: Holmes' clock problem (Grinstead,Snell)
- ppl_honest_successor.pi: Honest successor (Bassey John)
- ppl_hop_the_lily_pad.pi: Hop the lily pad (Lessard)
- ppl_how_long_until_two_consecutive_sixes.pi: How long until two consecutive sixes (Kubler)
- ppl_how_many_cards_can_you_draw.pi: How many cards can you draw? (Kubler)
- ppl_how_many_sons.pi: How many sons (Ruma Falk)
- ppl_how_many_times_did_i_flip_the_coin.pi: How many times did I flip the coin? (Lambert, via Pascal Bercker)
- ppl_how_many_times_was_the_coin_tossed.pi: How many times was the coin tossed?
- ppl_how_much_does_each_kid_weigh.pi: How much does each kid weigh? (MindYourDecisions)
- ppl_how_similar_are_people.pi: How similar are people?
- ppl_how_tall_is_a.pi: How tall is A? (Downey)
- ppl_hpd_interval.pi: HPD interval
- ppl_hpd_intervals.pi: HPD intervals
- ppl_hurricane.pi: Hurricane (BLOG)
- ppl_hypergeometric1_dist.pi: Hypergeometric1 distribution
- ppl_hypergeometric_dist.pi: Hypergeometric distribution
- ppl_hypothesis_testing.pi: Hypothesis testing (AgenaRisk)
- ppl_hypothesis_testing2.pi: Hypothesis testing (Resample Stats)
- ppl_icy_road.pi: Icy road (Hugin)
- ppl_index_of_smallest_value.pi: Index of smallest (/largest) value in a list
- ppl_indian_gpa.pi: Indian GPA (cplint)
- ppl_indistinguishable_dice.pi: Indistinguishable dice
- ppl_infinite_dice.pi: Infinite dice
- ppl_infinite_gaussian_mixture.pi: Infinite gaussian mixture (BLOG)
- ppl_innocent_monkey.pi: Innocent monkey (BrainStellar)
- ppl_insurance_cost.pi: Insurance cost (Resample Stats)
- ppl_intelligence_test.pi: Intelligence test
- ppl_invisible_dice.pi: Invisible dice (Brain Stellar)
- ppl_iq_best.pi: IQ best (comparing max of two population with different variances)
- ppl_iq_over_years.pi: IQ over years
- ppl_italian_murder.pi: Italian murder (Bellodi et.al)
- ppl_jungs_fish_stories.pi: Jung's fish stories (Diaconis)
- ppl_kruskal_count.pi: Kruskal count
- ppl_kumaraswamy_dist.pi: Kumaraswamy distribution
- ppl_landing_on_25.pi: Landing on 25 (BL)
- ppl_laplace_births.pi: Laplace births (Stan)
- ppl_laplace_dist.pi: Laplace distribution
- ppl_left_some_candies.pi: Left some candies (BrainStellar)
- ppl_librarian_or_farmer.pi: Librarian or farmer (Kahneman, Davidson-Pilon)
- ppl_light_bulbs.pi: Light bulbs
- ppl_linear_regression2.pi: Linear regression (WebPPL)
- ppl_lions_tigers_and_bears2.pi: Lions, tigers, and bears (Downey)
- ppl_loaded_coin.pi: Loaded coin (Osvaldo Martin)
- ppl_locomotive_problem.pi: Locomotive problem (Mosteller, Downey)
- ppl_log_gamma_dist.pi: Log gamma distribution
- ppl_log_normal_dist.pi: Log normal distribution
- ppl_logistic_dist.pi: Logistic distribution
- ppl_logistic_recover.pi: Logistic distribution: recover parameters
- ppl_logistic_regression_challenger.pi: Logistic regression: Challenger (Zinkov)
- ppl_logit_dist.pi: Logit dist (only random sampler)
- ppl_london_blitz.pi: London blitz (Church)
- ppl_lottery.pi: Lottery (Mathematica)
- ppl_lotto_not_two_consecutive_numbers.pi: Lotto not two consecutive numbers
- ppl_lucky_dip_task.pi: Lucky dip task (Pascal Bercker)
- ppl_lucky_throw.pi: Lucky throw (genfer, Prodigy)
- ppl_m_and_m_problem.pi: M&M problem (Downey)
- ppl_machine_probability_puzzle.pi: A Machine Probability Puzzle (Frederick)
- ppl_machine_working.pi: Machine working (pedrozudo)
- ppl_machine_working_gaussian.pi: Machine working (gaussian) (pedrozudo)
- ppl_machine_working_gaussian2.pi: Machine working (gaussian) (pedrozudo)
- ppl_mailing.pi: Mailing (Resampling Stats)
- ppl_markov_process.pi: Markov process distribution
- ppl_martin_gardners_odds_on_kings.pi: Martin Gardner's Odds on Kings (BL)
- ppl_matching_distribution.pi: Matching distribution
- ppl_max_stable_dist.pi: Max stable distribution (Mathematica)
- ppl_max_stable_maximum_wind_speed.pi: Max stable dist maximum wind speed
- ppl_mean_and_stdev_for_10_flips_of_a_fair_coin.pi: Mean and stdev for 10 flips of a fair coin (Statistics101 Resampling)
- ppl_mean_of_uniform.pi: Mean of uniform
- ppl_medical_diagnosis.pi: Medical diagnosis (ProbMods)
- ppl_medical_diagnosis2.pi: Medical diagnosis
- ppl_medical.pi: Medical (Church)
- ppl_medical_test.pi: Medical test
- ppl_meeting_collegues_at_office.pi: Meeting colleagues at office
- ppl_meeting_problem.pi: Meeting problem
- ppl_meeting_problem2.pi: Meeting problem
- ppl_meeting_under_the_clock.pi: Meeting under the clock (Julian Simon)
- ppl_meeting_under_the_clock2.pi: Meeting under the clock (using intervals) (Julian Simon)
- ppl_messing_with_envelopes.pi: Messing with envelopes (BrainStellar)
- ppl_min_stable_dist.pi: Min stable distribution (Mathematica)
- ppl_mixture_of_gaussian.pi: Mixture of gaussian (BLOG)
- ppl_mixture_of_gaussian2.pi: Mixture of gaussian (BLOG)
- ppl_monty_hall.pi: Monty Hall problem (PyMC)
- ppl_monty_hall_problem.pi: Monty Hall problem (Resampling Stats)
- ppl_mr_shearers_class.pi: Mr Shearer's class
- ppl_multinomial_balls.pi: Multinomial balls (Mathematica)
- ppl_multinomial_callcenter.pi: Multinomial callcenter (Mathematica)
- ppl_multinomial_dist.pi: Multinomial distribution (Mathematica)
- ppl_multinomial_voting.pi: Multinomial voting
- ppl_multinomial_voting2.pi: Multinomial voting (Mathematica)
- ppl_multiplying_dice.pi: Multiplying dice (BrainTwister,EnigmaCode)
- ppl_multivariate_hypergeometric_dist.pi: Multivariate hypergeometric distribution (random, pdf, (cdf), mean)
- ppl_murder_mystery.pi: Murder mystery (Andy Gordon)
- ppl_murder_mystery_mbmlbook.pi: Murder mystery (mbmlbook)
- ppl_murphys_law_of_queues.pi: Murphy's law of queues (Robert Matthews)
- ppl_murphys_knots.pi: Murphy's knots (Robert Matthews)
- ppl_my_neighbour.pi: My neighbour ("life")
- ppl_n_heads_in_a_row_after_k_tosses.pi: N heads in a row after K tosses distribution
- ppl_negative_binomial_basketball.pi: Negative Binomial basketball (Mathematica)
- ppl_negative_binomial_basketball2.pi: Negative binomial basketball II (Mathematica)
- ppl_negative_binomial_coins.pi: Negative binomial coins (Mathematica)
- ppl_negative_binomial_coins2.pi: Negative binomial coins II (Mathematica)
- ppl_negative_binomial_dist.pi: Negative binomial distribution
- ppl_negative_binomial_selling_candies.pi: Negative Binomial selling candies
- ppl_negative_binomial_test.pi: Negative binomial test
- ppl_negative_hypergeometric_dist.pi: Negative hypergeometric distribution
- ppl_newton_pepys_problem.pi: Newton-Pepy's problem
- ppl_nine_spades_four_clubs.pi: Nine spades and four clubs (Statistics101 Resampling)
- ppl_no_birthdays.pi: No birthdays (Paulos)
- ppl_noisy_or.pi: Noisy OR (Dice)
- ppl_number_guessing_game.pi: Number guessing game (Stuhlmuller, Goodman)
- ppl_number_of_dice_throws_to_target.pi: Number of dice throws to target
- ppl_number_of_double_heads.pi: Number of double heads (BrainStellar)
- ppl_number_of_walks_until_no_shoes.pi: Number of walks until no shoes (Blom et.al)
- ppl_oil_rig.pi: Oil rig (BayesiaLab)
- ppl_one_ace.pi: One ace (Statistics101 Resampling)
- ppl_one_rigged_coin.pi: One rigged coin (Litt, Bercker)
- ppl_one_spade_or_one_club.pi: One spade or one club (Statistics101 Resampling)
- ppl_orchs.pi: Orchs (Church)
- ppl_orchs_church.pi: Orchs (The Battle of the Two Towers problem) (Church)
- ppl_order_statistics_continuous_dist.pi: Order statistics continuous distribution
- ppl_order_statistics_estimator_of_m.pi: Order statistics estimator of M
- ppl_order_statistics_without_replacement_dist.pi: Order statistics without replacement distribution
- ppl_order_statistics_without_replacement.pi: Order statistic without replacement
- ppl_order_statistics_with_replacement_discrete_dist.pi: Order statistics with replacement discrete distribution
- ppl_papers_under_9_hours.pi: Papers under 9 hours (Pascal Bercker)
- ppl_pareto_dist.pi: Pareto device life time (Mathematica)
- ppl_pareto_dist.pi: Pareto distribution
- ppl_pareto2_dist.pi: Pareto2 distribution
- ppl_pareto3_dist.pi: Pareto3 distribution
- ppl_pareto4_dist.pi: Pareto4 distribution
- ppl_pareto_earthquakes.pi: Pareto earthquakes (Mathematica)
- ppl_pareto_recover.pi: Pareto I recover
- ppl_parking_cars.pi: Parking cars
- ppl_pascal_dist.pi: Pascal distribution
- ppl_pascal_number_of_fair_coin_flips_before_n_heads.pi: Pascal distribution: Number of fair coin flips before n heads (Mathematica)
- ppl_pass_the_ball.pi: Pass the ball (quantquide, via Pascal Bercker)
- ppl_path.pi: Path (ProbLog)
- ppl_pdf_all.pi: Test of pdf_all
- ppl_pennies.pi: Pennies (Statistics101 Resampling)
- ppl_pepperoni_pizza.pi: Pepperoni pizza (Statistics101 Resampling (from Downing and Clark))
- ppl_person_login.pi: Personal login (BLOG)
- ppl_pesticides.pi: Pesticides (Statistics101 Resampling)
- ppl_pig_food.pi: Pig food (Julian Simon, via Statistics101 Resampling)
- ppl.pi: Wrapper for importing ppl_utils.pi and ppl_distributions.pi
- ppl_pi.pi: Wrapper for loading the Picat PPL modules
- ppl_picking_3_of_the_same_color.pi: Picking 3 of the same color
- ppl_piecewise_transformation.pi: Piecewise transformation (SPPL)
- ppl_pill_puzzle.pi: Pill puzzle
- ppl_piranha_puzzle.pi: Piranha puzzle (Katoen)
- ppl_placebo_and_drugs.pi: Placebo and drugs (Shasha and Wilson)
- ppl_poisson_ball.pi: Poisson ball (BLOG)
- ppl_poisson_dist.pi: Poisson distribution
- ppl_poisson_fishing_problem.pi: Poisson fishing problem
- ppl_poisson_horse_kicks.pi: Poisson Horse kicks
- ppl_poisson_mean_inference.pi: Poisson mean inference (SPPL)
- ppl_poisson_min_max.pi: Poisson min, max (comparing with orderstatistics)
- ppl_poisson_process_dist.pi: Poisson process distribution
- ppl_poisson_recover.pi: Poisson distribution, parameter recovery
- ppl_poker.pi: Poker hands
- ppl_political_survey.pi: Political survey (Statistics101 Resampling)
- ppl_population.pi: Population (genfer)
- ppl_probabilistic_graphs.pi: Probabilistic graphs (ProbLog)
- ppl_probability_challenge.pi: Probability Challenge
- ppl_probability_of_missing_values.pi: Probability of missing values
- ppl_profits.pi: Profits (Statistics101 Resampling)
- ppl_quake_probability.pi: Quake probability (Statistics101 Resampling)
- ppl_queens.pi: N-queens problem
- ppl_rademacher_dist.pi: Rademacher distribution
- ppl_random_ratio.pi: Random ratio (BrainStellar)
- ppl_random_shuffle_spotify.pi: Random shuffle Spotify
- ppl_random_walk_1.pi: Random walk (Blom et.al)
- ppl_random_walk_2.pi: Random walk 2D
- ppl_random_walk_3.pi: Random walk (BrainStellar)
- ppl_random_walk_process.pi: Random walk process distribution
- ppl_random_walk_roulette.pi: Random walk roulette (Curing the Compulsive Gambler) (Mosteller)
- ppl_rat_tumor.pi: Rat tumor (PyMC)
- ppl_record_kth_differences.pi: Record: kth differences (including a formula which is fairly accurate for K=1)
- ppl_ratio_between_1_and_2.pi: Ratio between 1 and 2 (Pascal Bercker)
- ppl_record_kth_record.pi: Record: kth record (distribution)
- ppl_record_number_of_records.pi: Record: number of records (distribution)
- ppl_record_permutations.pi: Record permutations
- ppl_records.pi: Records (Robert Matthews)
- ppl_relative_survival_rate.pi: Relative survival rate (Mathematica)
- ppl_repeated_iq_measurements.pi: Repeated IQ measurements
- ppl_robot_localization.pi: Robot localization (SPPL)
- ppl_rolling_dice3.pi: Rolling dice (ProbLog)
- ppl_rolling_dice4.pi: Rolling dice (ProbLog)
- ppl_rolling_dice5.pi: Rolling dice (ProbLog)
- ppl_rolling_multiple_dice_and_picking_the_highest.pi: Rolling multiple dice and picking the highest value
- ppl_rolling_the_bullet.pi: Rolling the bullet (BrainStellar)
- ppl_rope_length.pi: Rope length (Mathematica)
- ppl_ruin_problem2.pi: Ruin problem
- ppl_rumor.pi: Rumor (Feller)
- ppl_run_size_probability.pi: Run size probability (exact)
- ppl_run_until_blue_ball.pi: Run until blue ball
- ppl_runs.pi: Runs, some experiments in "iterated runs"
- ppl_russian_roulette.pi: Russian roulette
- ppl_same_rank.pi: Same rank (Blom et.al)
- ppl_schelling_coordination_game.pi: Schelling coordination game (Church)
- ppl_second_chance.pi: Second chance (BrainStellar)
- ppl_secretary_problem.pi: Secretary problem (Siegrist)
- ppl_sequence_waiting_times_1.pi: Sequence waiting times (Blom et.al)
- ppl_seven_scientists.pi: Seven scientists (BDA problem)
- ppl_simple_aircraft.pi: Simple aircraft (BLOG)
- ppl_simple_mixture_model.pi: Simple mixture model
- ppl_simpson.pi: Simpson (cplint)
- ppl_six_districts_six_robberies.pi: Six districts, six robberies
- ppl_sixty_boys_out_of_next_100_births.pi: Sixty boys out of next 100 births (Statistics101 Resampling)
- ppl_size_of_groups_of_people.pi: Size of groups of people
- ppl_size_of_material.pi: Size of material
- ppl_sleeping_beauty_problem.pi: Sleeping Beauty Problem
- ppl_snake_eyes.pi: Snake eyes (Matt Parker)
- ppl_spread_of_traits.pi: Spread of traits (God plays dice)
- ppl_sprinkler.pi: Sprinkler problem
- ppl_sqrt_and_max_of_0_to_1.pi: Sqrt and max of 0 to 1 (Matt Parker)
- ppl_squid_game.pi: Squid game (PSI)
- ppl_statistical_dependence.pi: Statistical dependence (WebPPL)
- ppl_stick_to_triangle.pi: Stick to triangle (BrainStellar)
- ppl_streaks.pi: Streaks (Grinstead,Peterson and Snell)
- ppl_streaks_chess.pi: Streaks chess
- ppl_streaks_probability.pi: Streaks probability
- ppl_student_interviews.pi: Student interviews (SPPL)
- ppl_student_mood_after_exam.pi: Student mood after exam
- ppl_student_performance.pi: Student performance (linz07m)
- ppl_student_t_dist.pi: Student's T distribution
- ppl_successive_wins.pi: Successive wins (Mosteller)
- ppl_sultans_children.pi: Sultan's children
- ppl_sultans_dowry2.pi: Sultan's dowry
- ppl_sultans_dowry_probabilities.pi: Sultan's dowry probabilities
- ppl_sum_pareto.pi: Sum Pareto (PSI)
- ppl_sum_prob_dist.pi: Sum prob distribution
- ppl_sum_to_one.pi: Sum to one (BrainStellar)
- ppl_telephone_operator.pi: Telephone operator (genfer, Prodigy)
- ppl_the_blind_archer.pi: The blind archer (BrainStellar)
- ppl_the_car_and_the_goats.pi: The car and the goats (Blom et.al)
- ppl_the_red_and_the_black.pi: The Red and the Black (Gardner)
- ppl_thermometer.pi: Thermometer (Hakuru)
- ppl_thermostat.pi: Thermostat
- ppl_three_biased_coins.pi: Three biased coins (Serrabo.Academi, via Pascal Bercker)
- ppl_three_cards.pi: Three cards
- ppl_three_children.pi: Three children paradox
- ppl_three_dice_product.pi: Three dice product
- ppl_three_men_with_hats.pi: Three men with hats (Ross)
- ppl_three_players_coin_toss.pi: Three players coin toss
- ppl_three_people_ten_floors.pi: Three people, ten floors (Molignini)
- ppl_three_urns.pi: Three urns (Stuhlmuller)
- ppl_three_urns_church.pi: Three urns (Stuhlmuller)
- ppl_three_way_election.pi: Three way election (Statistics101 Resampling)
- ppl_thrice_exceptional.pi: Thrice exceptional (cremieuxrecueil)
- ppl_to_begin_or_not_to_begin.pi
- ppl_tornado_poker.pi: Tornado Poker (Simon)
- ppl_tourist_with_a_short_memory.pi
- ppl_triangular_dist.pi: Triangular distribution (Mathematica)
- ppl_trick_coin.pi: Trick coin (Church)
- ppl_trip.pi: Trip (ProbLog)
- ppl_true_skill.pi: True skill (Borgstrom et.al)
- ppl_trueskill_poisson_binomial.pi: Trueskill poisson binomial (SPPL)
- ppl_true_skill_simple.pi: Trueskill simple (R2)
- ppl_tug_of_war.pi: Tug of war (Hakuru)
- ppl_tug_of_war3.pi: Tug of war (Church)
- ppl_two_aces.pi: Two aces (Statistics101 Resampling)
- ppl_two_children_problem.pi: Two children problem
- ppl_two_coins.pi: Two coins (R2)
- ppl_two_dice_sum_8.pi: Two dice sum
- ppl_two_dice_wager.pi: Two dice wager
- ppl_two_dimensional_mixture_model.pi: Two dimensional mixture model (SPPL)
- ppl_two_heads_in_three_coin_tosses.pi: Two heads in three coin tosses (Statistics101 Resampling)
- ppl_unbiased_coin.pi: Unbiased coin
- ppl_unbiased_die.pi: Unbiased die
- ppl_unfair_coin.pi: Unfair coin (Brilliant)
- ppl_uniform_ball.pi: Uniform ball (BLOG)
- ppl_urn_large_balls.pi: Urn large balls (BLOG)
- ppl_urn_model_generalized.pi: Urn model generalized
- ppl_urn_model1.pi: Urn model
- ppl_urn_puzzle.pi: Urn puzzle
- ppl_urn_puzzle2.pi: Urn puzzle (Litt)
- ppl_urn_puzzle3.pi: Urn puzzle (Litt)
- ppl_urn_puzzle4.pi: Urn puzzle (Litt)
- ppl_urns_and_balls.pi: Urns and balls
- ppl_utils.pi: PPL's module for the general utilities
- ppl_vending_machine1.pi: Vending machine (WebPPL)
- ppl_vending_machine2.pi: Vending machine (WebPPL)
- ppl_vending_machine3.pi: Vending machine (WebPPL)
- ppl_virus_infection.pi: Virus infection
- ppl_voting_probabilities.pi: Voting probabilities
- ppl_voting_trump_harris.pi: Voting probability: Trump vs Harris (Silver)
- ppl_waiting_for_a_truck.pi: Waiting for a truck (BrainStellar)
- ppl_wason_selection_test.pi: Wason Selection test (via Pascal Bercker)
- ppl_weather_figaro.pi: Weather figaro (Figaro)
- ppl_weekend.pi: Weekend (PSI)
- ppl_weibull_dist.pi: Weibull distribution
- ppl_weibull_earthquake.pi: Weibull distribution: Earthquake recover parameters (Mathematica)
- ppl_weight_scale.pi: Weight scale (Pyro)
- ppl_where_is_my_bag.pi: Where is my bag? (Bayesia)
- ppl_wiener_process.pi: Wiener process distribution
- ppl_will_the_witches_meet.pi: Will the witches meet? (Pascal Bercker)
- ppl_winning_gold_medals.pi: Winning gold medals
- ppl_who_killed_the_bosmer.pi: Who killed the Bosmer?
- ppl_you_have_a_train_to_catch.pi: You have a train to catch (BrainStellar)
- ppl_youtube1.pi: Probability problem (PSI)
- ppl_youtube5.pi: Probability problem (PSI)
- ppl_youtube6.pi: Probability problem (PSI)
- ppl_zener_test.pi: Zener test (Matt Parker)
- ppl_zipf_dist.pi: Zipf distribution
- ppl_zipf_recover.pi: Zipf distribution recover parameters
- ppl_zipf_max_values.pi: Zipf distribution max values