The lakes and rivers may be represented as a tree rooted at node (lake) \(1\). Let \(D_i\) be the total distance along rivers from the root to each node \(i\).

Let \(Q(i, c)\) be the minimum number of logs that may be lost for a shipment which begins at node \(i\) while being managed by a log driver with a carelessness of \(c\) (such that the answer to each question \(k\) is \(V_k = Q(X_k, Y_k)\)).

If node \(i\) has a log-processing plant (i.e. is a leaf node), then \(Q(i, c) = 0\) for any \(c\). Otherwise, \(Q(i, c)\) may be computed by considering the possible choices for the first "relevant node" \(j\) encountered by a shipment beginning at node \(i\): \(j\) is either a leaf node, or the first node at which a new log driver will be hired (note that \(j\) must be within \(i\)'s subtree, and may be equal to \(i\) itself). For a given choice of \(j\), the number of logs lost by the shipment comes out to \(c(D_j - D_i) + Q(j, H_j)\), so we're looking for the node \(j\) which minimizes this expression.

Rewriting the expression as \(cD_j + Q(j, H_j) - cD_i\) and ignoring the final term (which is independent of \(j\)), we can observe that what remains is a linear function of \(c\), namely \(F_j(c) = m_j c + b_j\) with \(m_j = D_j\) and \(b_j = Q(j, H_j)\). As such, computing \(Q(i, c)\) reduces to finding the minimum value of \(F_j(c)\) over all nodes \(j\) in \(i\)'s subtree. To avoid an infinitely recursive definition, note that we can evaluate \(Q(i, H_i)\) by only considering nodes \(j\) such that \(j \ne i\).
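
In other words, for a non-leaf node \(i\),

\[
Q(i, c) \;=\; \min_{j}\Big[c(D_j - D_i) + Q(j, H_j)\Big] \;=\; \Big(\min_{j} F_j(c)\Big) - cD_i,
\]

where \(j\) ranges over the relevant nodes in \(i\)'s subtree (with \(j \ne i\) excluded when evaluating \(Q(i, H_i)\)).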

Our approach will involve recursively traversing the tree, such that for each node \(i\), we compute \(Q(i, H_i)\) followed by \(Q(X_k, Y_k)\) for each question such that \(X_k = i\). Along the way, we'll need to maintain information about the linear equations \(F_j(c)\) for all nodes \(j\) in \(i\)'s subtree, so that we can efficiently compute each \(Q(i, c)\) value.

Making use of the *dynamic convex hull trick*, we'll store these linear equations in a set ordered by their slopes (\(m_j\)), keeping only equations which yield the minimum result out of all equations when evaluated over some non-empty interval of \(c\) values. Our data structure will also store the "intersection points" of all consecutive pairs of linear equations in the ordered set, which correspond to the intervals of \(c\) values for which the various equations are optimal. With this setup, we're able to insert a new equation into the set and to query the minimum value over all equations at a given value of \(c\) in \(O(\log n)\) time each (where \(n\) is the number of equations in the set).
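
As a concrete illustration (a sketch only, not necessarily how the original solution implements it), this structure can be realized as a "line container": a `std::multiset` of lines ordered by slope, where each line additionally stores the right endpoint of the interval of query points on which it's optimal. This particular variant maintains the *maximum* of \(kc + m\) over its lines, so to obtain the minima we need, each \(F_j\) would be inserted with negated slope and intercept, and query results negated back.

```cpp
#include <bits/stdc++.h>
using namespace std;

// A line y = k*x + m; `p` is the largest query point x at which this
// line is the best (maximal) one currently in the container.
struct Line {
    mutable long long k, m, p;
    bool operator<(const Line& o) const { return k < o.k; }  // ordered by slope for insertions
    bool operator<(long long x) const { return p < x; }      // ordered by p for queries
};

// Dynamic upper envelope ("convex hull trick") over integer query points.
struct LineContainer : multiset<Line, less<>> {
    static constexpr long long INF = LLONG_MAX;
    static long long fdiv(long long a, long long b) {  // floored division
        return a / b - ((a ^ b) < 0 && a % b);
    }
    // Recompute x->p against its successor y; returns true if y is now dominated.
    bool isect(iterator x, iterator y) {
        if (y == end()) { x->p = INF; return false; }
        if (x->k == y->k) x->p = x->m > y->m ? INF : -INF;
        else x->p = fdiv(y->m - x->m, x->k - y->k);
        return x->p >= y->p;
    }
    // Insert the line k*x + m, evicting lines that no longer appear on the envelope.
    void add(long long k, long long m) {
        auto z = insert({k, m, 0}), y = z++, x = y;
        while (isect(y, z)) z = erase(z);
        if (x != begin() && isect(--x, y)) isect(x, y = erase(y));
        while ((y = x) != begin() && (--x)->p >= y->p)
            isect(x, erase(y));
    }
    // Maximum of k*x + m over all stored lines, in O(log n).
    long long query(long long x) {
        assert(!empty());
        auto l = *lower_bound(x);
        return l.k * x + l.m;
    }
};
```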

The remaining question is how we'll build up these sets of equations for entire subtrees of nodes. When handling each node \(i\), we must efficiently merge all of its children's sets together before proceeding to compute and insert \(F_i\) as well. When merging two sets together, we should simply elect to reuse the larger of the two (or an arbitrary one if they have the same size), inserting all equations from the smaller one into it and then discarding that smaller set. Each time a given equation is merged into a different set, the size of its containing set at least doubles, meaning that it may be inserted into sets at most \(O(\log N)\) times. With each set consisting of up to \(N\) equations, the total time complexity of all insertions comes out to \(O(N \log^2 N)\). Factoring in the set queries required to evaluate all \(N + M\) needed \(Q(i, c)\) values, we arrive at a total of \(O(N \log^2 N + M \log N)\) time.
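
Continuing the sketch above (and reusing its `LineContainer`), the traversal with small-to-large merging might look roughly like the following. The names `children`, `D`, `H`, `Qi`, and `hull` are illustrative assumptions rather than the original implementation, and the exact point at which each node's questions are answered is glossed over.

```cpp
// Assumes the LineContainer sketch above is in scope.
vector<vector<int>> children;   // children[i]: child lakes of node i in the rooted tree
vector<long long> D, H;         // D[i]: river distance from the root; H[i]: carelessness of i's driver
vector<long long> Qi;           // Qi[i] = Q(i, H[i])
vector<LineContainer*> hull;    // hull[i]: lines F_j for the nodes j handled so far in i's subtree

void dfs(int i) {
    hull[i] = new LineContainer();
    for (int ch : children[i]) {
        dfs(ch);
        // Small-to-large: keep the larger container and re-insert the smaller one's lines.
        if (hull[ch]->size() > hull[i]->size()) swap(hull[i], hull[ch]);
        for (const auto& ln : *hull[ch]) hull[i]->add(ln.k, ln.m);
        delete hull[ch];
        hull[ch] = nullptr;
    }
    if (children[i].empty()) {
        Qi[i] = 0;  // leaf: a log-processing plant, so no logs are lost
    } else {
        // Q(i, c) = min_j [c*D[j] + Q(j, H[j])] - c*D[i] with c = H[i]; the container
        // stores negated lines and maxima, so we negate the query result back.
        Qi[i] = -hull[i]->query(H[i]) - H[i] * D[i];
    }
    // Questions with X_k = i would be answered from hull[i] in the same way (whether F_i is
    // inserted before or after answering them depends on the problem's exact hiring rules).
    hull[i]->add(-D[i], -Qi[i]);  // insert F_i(c) = D[i]*c + Q(i, H[i]), negated
}
```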