
Commit 5d3a95c

dmeoli authored and antmarakis committed
added csp, logic, planning and probability .ipynb (#1130)
* changed queue to set in AC3 (as in the pseudocode of the original algorithm) to reduce the number of consistency checks caused by duplicate arcs in the queue. For example, on the harder1 configuration of the Sudoku CSP the number of consistency checks has been reduced from 40464 to 12562! (See the AC3 sketch after this list.)
* re-added test commented out by mistake
* added the mentioned AC4 algorithm for constraint propagation: the AC3 algorithm has non-optimal worst-case time complexity O(cd^3), while AC4 runs in O(cd^2) worst-case time
* added doctest in Sudoku for AC4 and the possibility of choosing the constraint propagation algorithm in mac inference
* removed useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py
* added map coloring SAT problems
* fixed typo errors and removed unnecessary brackets
* reformulated the map coloring problem
* Revert "reformulated the map coloring problem". This reverts commit 20ab0e5.
* Revert "fixed typo errors and removed unnecessary brackets". This reverts commit f743146.
* Revert "added map coloring SAT problems". This reverts commit 9e0fa55.
* Revert "removed useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py". This reverts commit b3cd24c.
* Revert "added doctest in Sudoku for AC4 and the possibility of choosing the constraint propagation algorithm in mac inference". This reverts commit 6986247.
* Revert "added the mentioned AC4 algorithm for constraint propagation". This reverts commit 03551fb.
* added map coloring SAT problem
* fixed build error
* Revert "added map coloring SAT problem". This reverts commit 93af259.
* Revert "fixed build error". This reverts commit 6641c2c.
* added map coloring SAT problem
* removed redundant parentheses
* added Viterbi algorithm
* added monkey & bananas planning problem
* simplified condition in search.py
* added tests for monkey & bananas planning problem
* removed monkey & bananas planning problem
* Revert "removed monkey & bananas planning problem". This reverts commit 9d37ae0.
* Revert "added tests for monkey & bananas planning problem". This reverts commit 24041e9.
* Revert "simplified condition in search.py". This reverts commit 6d229ce.
* Revert "added monkey & bananas planning problem". This reverts commit c74933a.
* defined the PlanningProblem as a specialization of a search.Problem & fixed typos
* fixed doctest in logic.py
* fixed doctest for cascade_distribution
* added ForwardPlanner and tests
* added __lt__ implementation for Expr
* added more tests
* renamed forward planner
* Revert "renamed forward planner". This reverts commit c4139e5.
* renamed forward planner class & added doc
* added backward planner and tests
* fixed mdp4e.py doctests
* removed ignore_delete_lists_heuristic flag
* fixed heuristic for forward and backward planners
* added SATPlan and tests
* fixed ignore delete lists heuristic in forward and backward planners
* fixed backward planner and added tests
* updated doc
* added n-ary csp definition and examples
* added CSPlan and tests
* fixed CSPlan
* added book's cryptarithmetic puzzle example
* fixed typos in test_csp
* fixed #1111
* added sortedcontainers to yml and doc to CSPlan
* added tests for n-ary csp
* fixed utils.extend
* updated test_probability.py
* converted static methods to functions
* added AC3b and AC4 with heuristic and tests
* added conflict-driven clause learning SAT solver
* added tests for CDCL and heuristics
* fixed probability.py
* fixed import
* fixed kakuro
* added Martelli and Montanari rule-based unification algorithm
* removed duplicate standardize_variables
* renamed variables that shadowed built-in functions
* fixed typos in learning.py
* renamed some files and fixed typos
* fixed typos
* fixed typos
* fixed tests
* removed unify_mm
* removed unnecessary brackets
* fixed tests
* moved utility functions to utils.py
* fixed typos
* moved utils functions to utils.py, separated probability learning classes from learning.py, fixed typos and fixed imports in .ipynb files
* added missing learners
* fixed Travis build
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* fixed typos in agents files
* fixed imports in agent files
* fixed deep learning .ipynb imports
* fixed typos
* added .ipynb and fixed typos
* adapted code for .ipynb
* fixed typos
* updated .ipynb
* updated .ipynb
* updated logic.py
* updated .ipynb
* updated .ipynb
* updated planning.py
* updated inf definition
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* Revert "fixed typos". This reverts commit 658309d.
* Revert "fixed typos". This reverts commit 08ad660.
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* fixed typos and utils imports in *4e.py files
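The set-based AC3 described in the first item can be sketched as follows. This is a minimal sketch of the idea, not the repository's exact code; the CSP interface it assumes (variables, neighbors, curr_domains, constraints, support_pruning, prune) mirrors the one in csp.py.

```python
def AC3(csp, queue=None, removals=None):
    """Arc consistency with a set as the agenda: re-adding an arc that is
    already pending is a no-op, so duplicate consistency checks are skipped."""
    if queue is None:
        queue = {(Xi, Xk) for Xi in csp.variables for Xk in csp.neighbors[Xi]}
    csp.support_pruning()
    while queue:
        (Xi, Xj) = queue.pop()  # arbitrary order: a set, not a FIFO queue
        if revise(csp, Xi, Xj, removals):
            if not csp.curr_domains[Xi]:
                return False  # Xi's domain was wiped out: inconsistent
            for Xk in csp.neighbors[Xi]:
                if Xk != Xj:
                    queue.add((Xk, Xi))  # duplicates absorbed by the set
    return True


def revise(csp, Xi, Xj, removals):
    """Remove values from Xi's domain that have no support in Xj's domain."""
    revised = False
    for x in csp.curr_domains[Xi][:]:
        # prune x if no value y in Xj's domain satisfies the constraint
        if all(not csp.constraints(Xi, x, Xj, y) for y in csp.curr_domains[Xj]):
            csp.prune(Xi, x, removals)
            revised = True
    return revised
```

With a FIFO queue, the same arc can be enqueued many times and rechecked each time; with a set it is checked once per pending entry, which is where the drop from 40464 to 12562 consistency checks on the harder1 Sudoku comes from.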
1 parent 9c2ffe3 · commit 5d3a95c

29 files changed: +7,976 −475 lines

arc_consistency_heuristics.ipynb (+1,999): large diff not rendered by default.

classical_planning_approaches.ipynb (+2,402): large diff not rendered by default.

csp.py (+91 −63): large diff not rendered by default.

deep_learning4e.py (+2 −2)

@@ -10,7 +10,7 @@
 from keras.models import Sequential
 from keras.preprocessing import sequence
 
-from utils4e import (sigmoid, dotproduct, softmax1D, conv1D, GaussianKernel, element_wise_product, vector_add,
+from utils4e import (sigmoid, dot_product, softmax1D, conv1D, GaussianKernel, element_wise_product, vector_add,
                      random_weights, scalar_vector_product, matrix_multiplication, map_vector, mse_loss)
 
 
@@ -107,7 +107,7 @@ def forward(self, inputs):
         res = []
         # get the output value of each unit
         for unit in self.nodes:
-            val = self.activation.f(dotproduct(unit.weights, inputs))
+            val = self.activation.f(dot_product(unit.weights, inputs))
             unit.val = val
             res.append(val)
         return res
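The dotproduct → dot_product rename in this commit is mechanical. For reference, every call site in these diffs applies it to two equal-length numeric sequences; a minimal stand-in consistent with those uses (an assumption: the actual definition lives in utils.py / utils4e.py, which this page does not render) would be:

```python
def dot_product(xs, ys):
    """Return the sum of element-wise products of two equal-length sequences."""
    return sum(x * y for x, y in zip(xs, ys))

# e.g. the forward pass above computes activation.f(dot_product(unit.weights, inputs))
```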

games.py (+1 −2)

@@ -4,9 +4,8 @@
 import random
 import itertools
 import copy
-from utils import argmax, vector_add
+from utils import argmax, vector_add, inf
 
-inf = float('inf')
 GameState = namedtuple('GameState', 'to_move, utility, board, moves')
 StochasticGameState = namedtuple('StochasticGameState', 'to_move, utility, board, moves, chance')
 

games4e.py (+3 −4)

@@ -4,9 +4,8 @@
 import random
 import itertools
 import copy
-from utils import argmax, vector_add, MCT_Node, ucb
+from utils4e import argmax, vector_add, MCT_Node, ucb, inf
 
-inf = float('inf')
 GameState = namedtuple('GameState', 'to_move, utility, board, moves')
 StochasticGameState = namedtuple('StochasticGameState', 'to_move, utility, board, moves, chance')
 
@@ -187,8 +186,8 @@ def select(n):
     def expand(n):
         """expand the leaf node by adding all its children states"""
         if not n.children and not game.terminal_test(n.state):
-            n.children = {MCT_Node(state=game.result(n.state, action), parent=n): action for action in
-                          game.actions(n.state)}
+            n.children = {MCT_Node(state=game.result(n.state, action), parent=n): action
+                          for action in game.actions(n.state)}
         return select(n)
 
     def simulate(game, state):

improving_sat_algorithms.ipynb (+2,539): large diff not rendered by default.

knowledge.py (+12 −12)

@@ -14,7 +14,8 @@
 
 
 def current_best_learning(examples, h, examples_so_far=None):
-    """ [Figure 19.2]
+    """
+    [Figure 19.2]
     The hypothesis is a list of dictionaries, with each dictionary representing
     a disjunction."""
     if examples_so_far is None:

@@ -124,7 +125,8 @@ def add_or(examples_so_far, h):
 
 
 def version_space_learning(examples):
-    """ [Figure 19.3]
+    """
+    [Figure 19.3]
     The version space is a list of hypotheses, which in turn are a list
     of dictionaries/disjunctions."""
     V = all_hypotheses(examples)

@@ -241,7 +243,7 @@ def consistent_det(A, E):
 # ______________________________________________________________________________
 
 
-class FOIL_container(FolKB):
+class FOILContainer(FolKB):
     """Hold the kb and other necessary elements required by FOIL."""
 
     def __init__(self, clauses=None):

@@ -255,7 +257,7 @@ def tell(self, sentence):
             self.const_syms.update(constant_symbols(sentence))
             self.pred_syms.update(predicate_symbols(sentence))
         else:
-            raise Exception("Not a definite clause: {}".format(sentence))
+            raise Exception('Not a definite clause: {}'.format(sentence))
 
     def foil(self, examples, target):
         """Learn a list of first-order horn clauses

@@ -280,15 +282,14 @@ def new_clause(self, examples, target):
         The horn clause is specified as [consequent, list of antecedents]
         Return value is the tuple (horn_clause, extended_positive_examples)."""
         clause = [target, []]
-        # [positive_examples, negative_examples]
         extended_examples = examples
         while extended_examples[1]:
             l = self.choose_literal(self.new_literals(clause), extended_examples)
             clause[1].append(l)
             extended_examples = [sum([list(self.extend_example(example, l)) for example in
                                       extended_examples[i]], []) for i in range(2)]
 
-        return (clause, extended_examples[0])
+        return clause, extended_examples[0]
 
     def extend_example(self, example, literal):
         """Generate extended examples which satisfy the literal."""

@@ -344,9 +345,8 @@ def gain(self, l, examples):
             represents = lambda d: all(d[x] == example[x] for x in example)
             if any(represents(l_) for l_ in post_pos):
                 T += 1
-        value = T * (
-                log(len(post_pos) / (len(post_pos) + len(post_neg)) + 1e-12, 2) - log(pre_pos / (pre_pos + pre_neg),
-                                                                                      2))
+        value = T * (log(len(post_pos) / (len(post_pos) + len(post_neg)) + 1e-12, 2) -
+                     log(pre_pos / (pre_pos + pre_neg), 2))
         return value
 
     def update_examples(self, target, examples, extended_examples):

@@ -411,12 +411,12 @@ def guess_value(e, h):
 
 
 def is_consistent(e, h):
-    return e["GOAL"] == guess_value(e, h)
+    return e['GOAL'] == guess_value(e, h)
 
 
 def false_positive(e, h):
-    return guess_value(e, h) and not e["GOAL"]
+    return guess_value(e, h) and not e['GOAL']
 
 
 def false_negative(e, h):
-    return e["GOAL"] and not guess_value(e, h)
+    return e['GOAL'] and not guess_value(e, h)
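For reference, the expression being reflowed in gain above is FOIL's information-gain measure. Writing $T$ for the number of positive examples still covered after adding literal $l$, $(p_0, n_0)$ for the positive/negative counts before the extension (pre_pos, pre_neg) and $(p_1, n_1)$ for the counts after (len(post_pos), len(post_neg)), the code computes:

$$\mathrm{Gain}(l) = T \left( \log_2 \frac{p_1}{p_1 + n_1} - \log_2 \frac{p_0}{p_0 + n_0} \right)$$

The $10^{-12}$ added inside the first logarithm is only a guard against $\log 0$ when no positive examples survive the extension.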

learning.py (+15 −15)

@@ -8,7 +8,7 @@
 from statistics import mean, stdev
 
 from probabilistic_learning import NaiveBayesLearner
-from utils import (remove_all, unique, mode, argmax, argmax_random_tie, isclose, dotproduct, vector_add,
+from utils import (remove_all, unique, mode, argmax, argmax_random_tie, isclose, dot_product, vector_add,
                    scalar_vector_product, weighted_sample_with_replacement, num_or_str, normalize, clip, sigmoid,
                    print_table, open_data, sigmoid_derivative, probability, relu, relu_derivative, tanh,
                    tanh_derivative, leaky_relu_derivative, elu, elu_derivative, mean_boolean_error, random_weights)

@@ -536,17 +536,17 @@ def LinearLearner(dataset, learning_rate=0.01, epochs=100):
         # pass over all examples
         for example in examples:
             x = [1] + example
-            y = dotproduct(w, x)
+            y = dot_product(w, x)
             t = example[idx_t]
             err.append(t - y)
 
         # update weights
         for i in range(len(w)):
-            w[i] = w[i] + learning_rate * (dotproduct(err, X_col[i]) / num_examples)
+            w[i] = w[i] + learning_rate * (dot_product(err, X_col[i]) / num_examples)
 
     def predict(example):
         x = [1] + example
-        return dotproduct(w, x)
+        return dot_product(w, x)
 
     return predict
 
@@ -578,19 +578,19 @@ def LogisticLinearLeaner(dataset, learning_rate=0.01, epochs=100):
         # pass over all examples
         for example in examples:
             x = [1] + example
-            y = sigmoid(dotproduct(w, x))
+            y = sigmoid(dot_product(w, x))
             h.append(sigmoid_derivative(y))
             t = example[idx_t]
             err.append(t - y)
 
         # update weights
         for i in range(len(w)):
             buffer = [x * y for x, y in zip(err, h)]
-            w[i] = w[i] + learning_rate * (dotproduct(buffer, X_col[i]) / num_examples)
+            w[i] = w[i] + learning_rate * (dot_product(buffer, X_col[i]) / num_examples)
 
     def predict(example):
         x = [1] + example
-        return sigmoid(dotproduct(w, x))
+        return sigmoid(dot_product(w, x))
 
     return predict
 
@@ -624,7 +624,7 @@ def predict(example):
     for layer in learned_net[1:]:
         for node in layer:
             inc = [n.value for n in node.inputs]
-            in_val = dotproduct(inc, node.weights)
+            in_val = dot_product(inc, node.weights)
             node.value = node.activation(in_val)
 
     # hypothesis

@@ -672,7 +672,7 @@ def BackPropagationLearner(dataset, net, learning_rate, epochs, activation=sigmoid):
         for layer in net[1:]:
             for node in layer:
                 inc = [n.value for n in node.inputs]
-                in_val = dotproduct(inc, node.weights)
+                in_val = dot_product(inc, node.weights)
                 node.value = node.activation(in_val)
 
         # initialize delta

@@ -706,19 +706,19 @@ def BackPropagationLearner(dataset, net, learning_rate, epochs, activation=sigmoid):
             w = [[node.weights[k] for node in nx_layer] for k in range(h_units)]
 
             if activation == sigmoid:
-                delta[i] = [sigmoid_derivative(layer[j].value) * dotproduct(w[j], delta[i + 1])
+                delta[i] = [sigmoid_derivative(layer[j].value) * dot_product(w[j], delta[i + 1])
                             for j in range(h_units)]
             elif activation == relu:
-                delta[i] = [relu_derivative(layer[j].value) * dotproduct(w[j], delta[i + 1])
+                delta[i] = [relu_derivative(layer[j].value) * dot_product(w[j], delta[i + 1])
                             for j in range(h_units)]
             elif activation == tanh:
-                delta[i] = [tanh_derivative(layer[j].value) * dotproduct(w[j], delta[i + 1])
+                delta[i] = [tanh_derivative(layer[j].value) * dot_product(w[j], delta[i + 1])
                             for j in range(h_units)]
             elif activation == elu:
-                delta[i] = [elu_derivative(layer[j].value) * dotproduct(w[j], delta[i + 1])
+                delta[i] = [elu_derivative(layer[j].value) * dot_product(w[j], delta[i + 1])
                             for j in range(h_units)]
             else:
-                delta[i] = [leaky_relu_derivative(layer[j].value) * dotproduct(w[j], delta[i + 1])
+                delta[i] = [leaky_relu_derivative(layer[j].value) * dot_product(w[j], delta[i + 1])
                             for j in range(h_units)]
 
         # update weights

@@ -746,7 +746,7 @@ def predict(example):
 
     # forward pass
     for node in o_nodes:
-        in_val = dotproduct(example, node.weights)
+        in_val = dot_product(example, node.weights)
         node.value = node.activation(in_val)
 
     # hypothesis
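For context on the renamed calls: LinearLearner's inner loop is batch gradient descent on squared error. With prediction $y_j = \mathbf{w} \cdot \mathbf{x}_j$ and error $e_j = t_j - y_j$, the update implemented by dot_product(err, X_col[i]) is

$$w_i \leftarrow w_i + \frac{\alpha}{N} \sum_{j=1}^{N} e_j \, x_{j,i}$$

where $\alpha$ is learning_rate and $N$ is num_examples. The logistic variant passes $y_j$ through sigmoid and weights each error by sigmoid_derivative(y_j) (the buffer list) before applying the same update.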

learning4e.py (+10 −10)

@@ -1,4 +1,4 @@
-"""Learning from examples. (Chapters 18)"""
+"""Learning from examples (Chapters 18)"""
 
 import copy
 import heapq

@@ -9,9 +9,9 @@
 
 from probabilistic_learning import NaiveBayesLearner
 from utils import sigmoid, sigmoid_derivative
-from utils4e import (remove_all, unique, mode, argmax_random_tie, isclose, dotproduct, weighted_sample_with_replacement,
-                     num_or_str, normalize, clip, print_table, open_data, probability, random_weights,
-                     mean_boolean_error)
+from utils4e import (remove_all, unique, mode, argmax_random_tie, isclose, dot_product,
+                     weighted_sample_with_replacement, num_or_str, normalize, clip, print_table, open_data, probability,
+                     random_weights, mean_boolean_error)
 
 
 class DataSet:

@@ -531,17 +531,17 @@ def LinearLearner(dataset, learning_rate=0.01, epochs=100):
         # pass over all examples
         for example in examples:
             x = [1] + example
-            y = dotproduct(w, x)
+            y = dot_product(w, x)
             t = example[idx_t]
             err.append(t - y)
 
         # update weights
         for i in range(len(w)):
-            w[i] = w[i] + learning_rate * (dotproduct(err, X_col[i]) / num_examples)
+            w[i] = w[i] + learning_rate * (dot_product(err, X_col[i]) / num_examples)
 
     def predict(example):
         x = [1] + example
-        return dotproduct(w, x)
+        return dot_product(w, x)
 
     return predict
 
@@ -573,19 +573,19 @@ def LogisticLinearLeaner(dataset, learning_rate=0.01, epochs=100):
         # pass over all examples
         for example in examples:
             x = [1] + example
-            y = sigmoid(dotproduct(w, x))
+            y = sigmoid(dot_product(w, x))
             h.append(sigmoid_derivative(y))
             t = example[idx_t]
             err.append(t - y)
 
         # update weights
         for i in range(len(w)):
             buffer = [x * y for x, y in zip(err, h)]
-            w[i] = w[i] + learning_rate * (dotproduct(buffer, X_col[i]) / num_examples)
+            w[i] = w[i] + learning_rate * (dot_product(buffer, X_col[i]) / num_examples)
 
     def predict(example):
         x = [1] + example
-        return sigmoid(dotproduct(w, x))
+        return sigmoid(dot_product(w, x))
 
     return predict
