First, notice that if we have two distinct (not necessarily disjoint) subsets of strings with equal total length, we can definitely make two nonempty threads. Consider sets (A) and (B), with intersection (C = A \cap B). If we know (\sum A = \sum B), then we can choose the first thread to be (A-C) and the second thread to be (B-C). Both will still have the same sum, and now there will be no overlap. Neither is empty, either: if (A-C) were empty, then (B-C) would be a nonempty set of positive lengths summing to (0), a contradiction.
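As a concrete illustration, here is a minimal Python sketch of that trick (the function and argument names are just illustrative):

```python
def make_disjoint(a_indices, b_indices, lengths):
    """Given two distinct index sets with equal total length, strip the
    shared indices so the results are disjoint but still equal-sum."""
    a, b = set(a_indices), set(b_indices)
    common = a & b
    a, b = a - common, b - common
    assert sum(lengths[i] for i in a) == sum(lengths[i] for i in b)
    return a, b
```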

Solution when (n<25) (a valid answer may not exist): We can easily solve the problem if (n<25) by computing the sums of all (2^n) subsets and checking whether any two subsets have the same sum. We can do this iteratively without an extra factor of (n) if we want, but those details are left to the reader.
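Those details might look like the following hedged sketch, which builds each subset sum from the sum of the subset with its lowest bit removed (illustrative names; in practice a compiled language would be used, since the (2^{25})-sized array below is large):

```python
def find_equal_sum_subsets(lengths):
    """Enumerate subsets as bitmasks; return two bitmasks with equal
    sums, or None if none exist (only possible when n < 25)."""
    n = len(lengths)
    subset_sum = [0] * (1 << n)
    first_mask = {}  # sum -> first bitmask achieving that sum
    for mask in range(1, 1 << n):
        low = mask & -mask  # lowest set bit
        subset_sum[mask] = subset_sum[mask ^ low] + lengths[low.bit_length() - 1]
        s = subset_sum[mask]
        if s in first_mask:
            return first_mask[s], mask
        first_mask[s] = mask
    return None
```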

Guaranteed solution when (n=25): It turns out that when (n \ge 25), there will always be a way to create two equal-length threads: simply try all subsets until we find a sum we've already seen before! The sum is bounded by (2 \cdot 10^5 \cdot 25 = 5{,}000{,}000), but we have (2^{25} \approx 33{,}000{,}000) subsets to try. By the pigeonhole principle, we are guaranteed there will be at least one match (in fact, there will be millions). This takes (O(2^{25})) steps in the worst case.

Guaranteed solution when (n \le 100): If (n > 25), let's pessimistically assume (k=24), i.e. that we may discard at most (24) strings. (The bounds actually guarantee we could discard up to (25), but that doesn't matter here.) We can handle (n > 25) by repeatedly applying the guaranteed (n=25) solution from the last paragraph: consider the first (25) strings, find two equal-length threads among them, set the strings in those threads aside (we'll append them to the two threads at the end), and repeat until at most (24) strings remain, which we discard. Each round removes at least (2) strings. This step runs in (O(2^{25}(n-25)/C)), where (C) is a constant that's definitely at least (2), but closer to (10) in practice.
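A rough sketch of that reduction, reusing find_equal_sum_subsets() from above (a collision is guaranteed on every window of (25) strings, since their subset sums are bounded by (5{,}000{,}000)):

```python
def reduce_to_few(lengths, window_size=25):
    """Repeatedly pull two disjoint equal-sum subsets out of the first
    window_size remaining strings and assign them to the threads."""
    thread_a, thread_b = [], []  # indices assigned to each thread
    remaining = list(range(len(lengths)))
    while len(remaining) >= window_size:
        window = remaining[:window_size]
        m1, m2 = find_equal_sum_subsets([lengths[i] for i in window])
        common = m1 & m2
        m1, m2 = m1 ^ common, m2 ^ common  # make the subsets disjoint
        for bit in range(window_size):
            if m1 >> bit & 1:
                thread_a.append(window[bit])
            elif m2 >> bit & 1:
                thread_b.append(window[bit])
        used = {window[bit] for bit in range(window_size) if (m1 | m2) >> bit & 1}
        remaining = [i for i in remaining if i not in used]
    return thread_a, thread_b, remaining  # at most 24 leftovers to discard
```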

Guaranteed solution when (n \le 1000): Consider a random subset of all strings, such that each string has a (50%) chance of being in the set. Its total length will be some integer in ([0, 2\cdot 10^8]). By the birthday paradox, we only need to check ~(\sqrt{2\cdot 10^8} \approx 14{,}000) random sets to expect to find two with the same sum. Supposing we did this, each of our (1000) elements now has a ~(50%) chance of being in the first set and a ~(50%) chance of being in the second. This isn't independent for all elements, but it is nearly independent for almost all of them, as long as (n) is much bigger than (20). So, it's likely that ~(50%) of the original elements are in exactly one of the two sets, and those are removed this step (one set's exclusive elements going to each thread). We then recursively solve the problem on the remaining ~(n/2) strings. This repeats (\log(n)) times until (n \le 100), when it's no longer safe to assume independence and we switch to the solution above. Notice that as (n) decreases, the runtime decreases proportionally, so there isn't a log factor in our running time of (O(n \sqrt{2 \cdot 10^8})).
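A hedged sketch of one round of this randomized step (names are assumed; indices is the set of still-unassigned strings):

```python
import random

def birthday_round(indices, lengths):
    """Sample random 50% subsets until two share a total length, then
    return their equal-sum disjoint parts (the symmetric difference)."""
    seen = {}  # total length -> frozenset of sampled indices
    while True:
        sample = frozenset(i for i in indices if random.random() < 0.5)
        total = sum(lengths[i] for i in sample)
        if total in seen and seen[total] != sample:
            other = seen[total]
            # One part extends each thread; the rest gets recursed on.
            return sample - other, other - sample
        seen[total] = sample
```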

Guaranteed solution when (n \le 200{,}000): First, remove duplicates so we have at most (1) of each length. For example, with (5) strings of length (2) cm, we can set (4) aside and add two of them to each thread at the end, which keeps the sums equal. Now, let's find a pair ((a, b)) with the same sum as another pair ((c, d)). These pairs won't share any values, since (a = c) would imply (b = d), i.e. the very same pair, and all duplicates have been removed.
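A short sketch of that duplicate-removal bookkeeping (illustrative names):

```python
from collections import Counter

def remove_duplicates(lengths):
    """Keep one copy of each odd-count length; every pair set aside will
    later contribute one string to each thread, keeping the sums equal."""
    counts = Counter(lengths)
    kept = [v for v, c in counts.items() if c % 2 == 1]
    pairs_set_aside = {v: c // 2 for v, c in counts.items() if c >= 2}
    return kept, pairs_set_aside
```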

We can map any possible sum in the range (2...400{,}000) to the unique pair that has that sum, and then if we ever find two pairs with the same sum, we can remove both of them. This guarantees at most ~(1000) elements remain afterwards: once no two pairs share a sum, the (n(n-1)/2) pair sums are distinct values of at most (400{,}000), forcing (n < 1000), so we can run our solution above. However, we must be careful not to take (O(n^2)) time to remove all pairs. In particular, if we iterate naively with two for-loops, we might queue up (n) pairs that include the first element and then later remove that element, invalidating all (n) of those pairs while removing only (4) elements per step.

Instead, we can first iterate through the pairs as follows:

  • ((1, 2), (2, 3), (3, 4), (4, 5), ..., (n, 1))
  • ((1, 3), (2, 4), (3, 5), ...)
  • ((1, 4), (2, 5), (3, 6), ...)

and so on.
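One way to produce this order is a simple offset loop; a hedged sketch (each unordered pair appears twice, once at offset (d) and once at offset (n-d), which a real implementation would skip or tolerate by checking that both endpoints are still present):

```python
def diagonal_pairs(n):
    """Yield index pairs so that, at any prefix of the stream, every
    element has appeared in roughly the same number of pairs."""
    for d in range(1, n):  # offset between the paired elements
        for i in range(n):
            yield i, (i + d) % n
```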

The virtue of doing this is that, at all times, every element is part of almost the same number of pairs. If we ever remove an element while (n) elements remain, we are removing an equal share of the pairs: about (400{,}000/n) of them. There are at most (400{,}000) pairs at any time, so the total number of removed pairs is roughly (n/n + n/(n-1) + n/(n-2) + ... + n/1), which is about (n\log(n)). We can additionally use some data structure which supports quick remove() and getKth() operations instead of an array (a Fenwick tree would work), and this will let us solve this step in (O(n\log^2(n))).
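That remove()/getKth() structure could look like the following Fenwick-tree sketch over indices (0) to (n-1) (an assumed implementation following the text, not the author's code):

```python
class Fenwick:
    """Presence counts over indices 0..n-1, with O(log n) remove(i) and
    getKth(k): the k-th smallest index still present (k starts at 1)."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)
        for i in range(1, n + 1):  # O(n) build: every index starts present
            self.tree[i] += 1
            parent = i + (i & -i)
            if parent <= n:
                self.tree[parent] += self.tree[i]

    def remove(self, i):  # i is 0-based
        i += 1
        while i <= self.n:
            self.tree[i] -= 1
            i += i & -i

    def getKth(self, k):  # returns a 0-based index
        pos = 0
        for p in range(self.n.bit_length(), -1, -1):
            nxt = pos + (1 << p)
            if nxt <= self.n and self.tree[nxt] < k:
                k -= self.tree[nxt]
                pos = nxt
        return pos
```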

Other Solutions

There are other solutions that were also intended to work, including:

  • Noticing that the above solution for bounds up to (100) actually finds a collision much faster most of the time, so it can be used for bigger bounds, including up to (200{,}000) if implemented carefully.
  • Modifying the solution for bounds up to (1000) to work for larger bounds by trying to flip every bit of the current subset. If we have (n) strings, this lets us check (n) possible sums in (O(n)) time, making it very likely this will work for bounds up to (200{,}000) as well. If we additionally remove duplicates, we get the nice property that all (n) of these sums are different, meaning collisions will allow us to remove about half the elements at each step (see the sketch after this list).
  • Ad-hoc solutions with runtimes that are difficult to prove, but for which one can have strong intuition that they run in time and aren't practical to break.
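A hedged sketch of the bit-flip idea from the second bullet (assumed names; after removing duplicates, the (n) sums probed in one round are all distinct):

```python
import random

def bitflip_search(lengths):
    """Random walk over subsets; each step probes the n sums reachable
    by toggling one element against every subset seen so far."""
    n = len(lengths)
    cur = [random.random() < 0.5 for _ in range(n)]
    total = sum(l for l, used in zip(lengths, cur) if used)
    seen = {total: tuple(cur)}  # sum -> a subset achieving it
    while True:
        for i in range(n):  # probe all one-flip neighbours in O(n)
            t = total - lengths[i] if cur[i] else total + lengths[i]
            if t in seen:
                neighbour = list(cur)
                neighbour[i] = not neighbour[i]
                if tuple(neighbour) != seen[t]:  # two distinct subsets
                    return seen[t], tuple(neighbour)
        i = random.randrange(n)  # no collision: take one random step
        cur[i] = not cur[i]
        total += lengths[i] if cur[i] else -lengths[i]
        seen[total] = tuple(cur)
```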

See David Harmeyer (SecondThread)'s solution video.