
Commit ec65cd3

refer to 'priority' rather than 'scaled_loss'

1 parent 2f70e6a

1 file changed: paper.md (+7 −7)
@@ -188,16 +188,16 @@ For the one-dimensional case this could be achieved by using a red--black tree t
So far, the description of the general algorithm did not include parallelism.
In order to include parallelism we need to allow for points that are "pending", i.e. whose value has been requested but is not yet known.
In the sequential algorithm subdomains only contain points on their boundaries.
-In the parallel algorithm *pending* points are placed in the interior of subdomains, and the loss of the subdomain is reduced to take these pending points into account.
+In the parallel algorithm *pending* points are placed in the interior of subdomains, and the priority of the subdomains in the queue is reduced to take these pending points into account.
Later, when a pending point $x$ is finally evaluated, we *split* the subdomain that contains $x$ such that it is on the boundary of new, smaller, subdomains.
-We then calculate the loss of these new subdomains, and insert them into the priority queue, and update the losses of neighboring subdomains if required.
+We then calculate the priority of these new subdomains, insert them into the priority queue, and update the priority of neighboring subdomains if required.

#### We summarize the algorithm with pseudocode
The parallel version of the algorithm can be summarized by the following pseudocode.
In the following `queue` is the priority queue of subdomains, `domain` is an object that allows efficiently querying the neighbors of a subdomain and creating new subdomains by adding a point $x$, `data` is a hashmap storing the points and their values, `executor` allows offloading evaluation of a function `f` to external computing resources, and `loss` is the loss function, with `loss.n_neighbors` being the degree of neighboring subdomains that the loss function uses.

```python
-def scaled_loss(domain, subdomain, data):
+def priority(domain, subdomain, data):
    subvolumes = domain.subvolumes(subdomain)
    max_relative_subvolume = max(subvolumes) / sum(subvolumes)
    L_0 = loss(domain, subdomain, data)
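The renamed function above is truncated by the diff, but it can be fleshed out into a self-contained sketch. The `Domain` class and `loss` function below are stand-ins invented for illustration, and the final return expression is an assumption: the hunk cuts off before the function's return statement.

```python
# Hedged sketch of the renamed `priority` function from the hunk above.
# `Domain` and `loss` are illustrative stand-ins, and the returned
# combination of `max_relative_subvolume` and `L_0` is an assumption:
# the diff truncates the function body after `L_0`.

def loss(domain, subdomain, data):
    # Stand-in loss: the subdomain's total volume (interval length in 1D).
    return sum(domain.subvolumes(subdomain))

class Domain:
    """Toy 1D domain: a subdomain is an interval (a, b), and pending
    points sit in its interior, partitioning it into subvolumes."""
    def __init__(self, pending):
        self.pending = pending  # subdomain -> list of pending interior points

    def subvolumes(self, subdomain):
        a, b = subdomain
        points = sorted([a, *self.pending.get(subdomain, []), b])
        return [q - p for p, q in zip(points, points[1:])]

def priority(domain, subdomain, data):
    subvolumes = domain.subvolumes(subdomain)
    max_relative_subvolume = max(subvolumes) / sum(subvolumes)
    L_0 = loss(domain, subdomain, data)
    # Assumed combination: scale the loss down as pending points shrink
    # the largest undivided piece of the subdomain.
    return max_relative_subvolume * L_0

# A subdomain with a pending midpoint gets half the priority of one without,
# so the queue favors subdomains with no work already in flight.
d = Domain({(0.0, 1.0): [0.5]})
print(priority(d, (0.0, 1.0), {}))           # 0.5
print(priority(Domain({}), (0.0, 1.0), {}))  # 1.0
```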
@@ -212,7 +212,7 @@ for x in new_points:
    data[x] = None
    executor.submit(f, x)

-queue.insert(first_subdomain, priority=scaled_loss(domain, subdomain, data))
+queue.insert(first_subdomain, priority=priority(domain, subdomain, data))

while executor.n_outstanding_points > 0:
    x, y = executor.get_one_result()
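The `executor` in the pseudocode exposes `submit`, `n_outstanding_points`, and a blocking `get_one_result`; the last two are not part of Python's standard executors. One way to emulate that interface on top of `concurrent.futures` is sketched below (an assumption for illustration; the paper's actual executor wrapper may differ):

```python
# Emulating the pseudocode's executor interface with concurrent.futures.
# `get_one_result` blocks until any one evaluation finishes and returns
# the pair (x, f(x)), matching `x, y = executor.get_one_result()` above.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

class OneResultExecutor:
    def __init__(self, max_workers=4):
        self._executor = ThreadPoolExecutor(max_workers=max_workers)
        self._pending = {}  # future -> the point x it evaluates

    @property
    def n_outstanding_points(self):
        return len(self._pending)

    def submit(self, f, x):
        self._pending[self._executor.submit(f, x)] = x

    def get_one_result(self):
        # Wait for the first future to complete, then hand back its point
        # and value, removing it from the outstanding set.
        done, _ = wait(self._pending, return_when=FIRST_COMPLETED)
        future = done.pop()
        return self._pending.pop(future), future.result()

executor = OneResultExecutor()
executor.submit(lambda x: x**2, 3.0)
print(executor.get_one_result())  # (3.0, 9.0)
```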
@@ -224,7 +224,7 @@ while executor.n_outstanding_points > 0:
    for subdomain in old_subdomains:
        queue.remove(subdomain)
    for subdomain in new_subdomains:
-        queue.insert(subdomain, priority=scaled_loss(domain, subdomain, data))
+        queue.insert(subdomain, priority(domain, subdomain, data))

    if loss.n_neighbors > 0:
        subdomains_to_update = set()
@@ -233,7 +233,7 @@ while executor.n_outstanding_points > 0:
            subdomains_to_update.update(neighbors)
        subdomains_to_update -= set(new_subdomains)
        for subdomain in subdomains_to_update:
-            queue.update(subdomain, priority=scaled_loss(domain, subdomain, data))
+            queue.update(subdomain, priority(domain, subdomain, data))

    # If it looks like we're done, don't send more work
    if queue.max_priority() < target_loss:
@@ -245,7 +245,7 @@ while executor.n_outstanding_points > 0:
    new_point, = domain.insert_points(subdomain, 1)
    data[new_point] = None
    executor.submit(f, new_point)
-    queue.insert(subdomain, priority=scaled_loss(domain, subdomain, data))
+    queue.insert(subdomain, priority(domain, subdomain, data))
```

# Loss function design
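Throughout the pseudocode, `queue` is assumed to support `insert`, `remove`, `update`, and `max_priority`. Python's `heapq` is a min-heap with no removal, so a concrete queue needs a little scaffolding; the sketch below (an assumption for illustration, not the paper's implementation, which may use a sorted container instead) uses negated priorities and lazy deletion:

```python
# A max-priority queue with insert/remove/update/max_priority/pop,
# matching the operations the pseudocode assumes. Removal is "lazy":
# a removed entry is only marked dead and skipped when it surfaces.
import heapq
import itertools

class Queue:
    def __init__(self):
        self._heap = []     # entries: [-priority, tie_breaker, item]
        self._entries = {}  # item -> its live heap entry
        self._counter = itertools.count()  # breaks ties between equal priorities

    def insert(self, item, priority):
        entry = [-priority, next(self._counter), item]
        self._entries[item] = entry
        heapq.heappush(self._heap, entry)

    def remove(self, item):
        # Mark the entry dead; it is discarded when it reaches the top.
        self._entries.pop(item)[2] = None

    def update(self, item, priority):
        self.remove(item)
        self.insert(item, priority)

    def _prune(self):
        while self._heap and self._heap[0][2] is None:
            heapq.heappop(self._heap)

    def max_priority(self):
        self._prune()
        return -self._heap[0][0]

    def pop(self):
        self._prune()
        neg_priority, _, item = heapq.heappop(self._heap)
        del self._entries[item]
        return item, -neg_priority

q = Queue()
q.insert("a", priority=1.0)
q.insert("b", priority=3.0)
q.update("b", priority=0.5)  # de-prioritize, e.g. after adding a pending point
print(q.pop())  # ('a', 1.0)
print(q.pop())  # ('b', 0.5)
```

The lazy-deletion trick keeps all operations O(log n) amortized, which matters because the algorithm updates priorities of neighboring subdomains on every split.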
