In backfilling, we update only the edges that were chosen during the simulation. But since the same state can be reached from two different states via two different actions, shouldn't we also update the other part of the tree, the one through which we could likewise have arrived at that state? That's my understanding of:
"Action value Q is updated to track the mean of all evaluations V in the subtree below that action."
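To make my reading concrete, here is a minimal sketch of the backup step as I understand it. The `Edge`, `backup`, and `path` names are my own illustrations, not AlphaGo's actual implementation; I'm assuming each edge keeps a visit count `N` and a total value `W`, with `Q = W / N`:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    N: int = 0      # visit count for this (state, action) edge
    W: float = 0.0  # total value accumulated from evaluations below this edge

    @property
    def Q(self) -> float:
        # Mean of all leaf evaluations V backed up through this edge
        return self.W / self.N if self.N > 0 else 0.0

def backup(path: list[Edge], v: float) -> None:
    """Back up leaf evaluation v along the edges traversed in this simulation.

    Only the edges actually selected on the way down are updated; other
    edges that also lead into the same states are left untouched, so each
    edge's Q is the mean over the simulations that passed through it.
    """
    for edge in path:
        edge.N += 1
        edge.W += v

# Two simulations passing through the same edge:
root_to_a = Edge()
backup([root_to_a], v=0.5)
backup([root_to_a], v=1.0)
print(root_to_a.Q)  # 0.75 -- the mean of the evaluations below that edge
```

As the sketch shows, an edge reaching the same state via a different action would accumulate nothing from this simulation, which is exactly what my question is about.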