Welcome to The Editorial!
Keep the upvotes piling up! muhehe
Mostly just short hints so far. There are too many problems, I don't know what to do next... meh, I'll just go with alphabetical order. Like a boss.
A. Rasheda And The Zeriba
(difficulty: medium)
The first question is: When is it possible to construct a (convex) polygon from sticks of given lengths L_i?
This question is answered by what's sometimes known as the polygon inequality theorem, which states that the necessary and sufficient condition is that every L_i is strictly less than the sum of all the other L_i. You can imagine why it works: for the endpoints of every side, the straight segment between them (whose length is the length of that side) must be shorter than any other path connecting them, including the path along the rest of the perimeter of the polygon. Constructing such a polygon, even a convex one, is pretty easy: just imagine the sticks linked to each other by joints that you can freely rotate. (There's a rigorous proof, but it's unnecessarily complicated.) In other words, if we denote S_L = L_1 + L_2 + ... + L_N, we can't construct a polygon if there's an L_i satisfying L_i ≥ S_L - L_i.
Suppose there's a solution. What now? Obviously, if the solution fits inside a circle with radius R, we can extend some vertices (and sides) outwards to the perimeter of that circle, and the solution still exists and is convex (any polygon inscribed in a circle is). In that case, we get something like this:
Notice the angles a_i corresponding to the sides l_i, where l_i ≥ L_i. For a convex polygon and its circumcircle, the center A of the circle lies in the polygon, so all the angles must sum up to 2π. If their sum is any larger, the polygon can't be inscribed in that small a circle.
From simple trigonometry of isosceles triangles, we get a_i = 2·arcsin(l_i / (2R)) and a_i ≤ π; in the range [0, π / 2], arcsine is increasing (and l_i ≥ L_i), so the condition for R to be a possible solution can be formulated as

2·arcsin(L_1 / (2R)) + 2·arcsin(L_2 / (2R)) + ... + 2·arcsin(L_N / (2R)) ≤ 2π

(with 2R ≥ max L_i, so that all the arcsines are defined).

There is no nice way to simplify this and find a solution, but that's what we have computers for! If we fix R, it's easy to check the validity of this condition; therefore, we can binary-search the minimum possible R.
It's even possible to inscribe the desired polygon in a circle whose radius satisfies this condition with equality: just take the circle and place the sides as chords along its perimeter, one after another, in arbitrary order; the equality guarantees that the last side meets the first one exactly. The order doesn't even matter!
Since it's sufficient to consider R ≤ S_L, this approach has time complexity (per test case) O(N log(S_L / ε)), where ε is the required precision of the answer.
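For concreteness, here is a small self-contained sketch of the feasibility check and the binary search; the stick lengths, the iteration count and the output format are illustrative, not taken from the actual problem.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<double> L = {3, 4, 5, 6};                 // sample stick lengths, not the real input
    double SL = accumulate(L.begin(), L.end(), 0.0);
    double maxL = *max_element(L.begin(), L.end());

    // polygon inequality: every L_i must be strictly smaller than the sum of the others
    if (maxL >= SL - maxL) { cout << "impossible\n"; return 0; }

    // feasible(R): a polygon with sides L_i fits into a circle of radius R
    // iff the central angles 2*asin(L_i / (2R)) sum up to at most 2*pi
    auto feasible = [&](double R) {
        if (2 * R < maxL) return false;              // the longest stick wouldn't even fit as a chord
        double sum = 0;
        for (double l : L) sum += 2 * asin(l / (2 * R));
        return sum <= 2 * acos(-1.0);
    };

    double lo = maxL / 2, hi = SL;                   // the answer lies somewhere in this range
    for (int it = 0; it < 200; ++it) {               // fixed iteration count instead of an explicit epsilon
        double mid = (lo + hi) / 2;
        if (feasible(mid)) hi = mid; else lo = mid;
    }
    cout << fixed << setprecision(9) << hi << "\n";  // minimum possible R
}
```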
B. Egyptian Roads Construction
(diff: medium)
In most problems involving maximum/minimum edges, it's best to use a union-find data structure. That's because if we sort the edges by increasing weight and add them into the graph formed by just the N vertices, two cases can happen for an edge of cost c:
it would connect 2 vertices that have already been connected — in that case, no path will ever need to go through that edge and we can just ignore it
it'll connect two different connected components (of x and y vertices); then, all paths between pairs of vertices from those 2 components will cost c, because without that edge there was no path at all, and adding more expensive edges later can't make it cheaper; this means that the resulting sum for every vertex of the x-vertex component increases by c·y, and for every vertex of the y-vertex component it increases by c·x
So, while adding the edges, we need to remember the connected components found so far (simply as lists of vertices) and the component that each vertex currently belongs to; merging 2 components means moving the vertices of one list to the other and updating the component they belong to. In order to do this quickly, we'll always merge the smaller component into the larger one.
Along with this algorithm, we also need to calculate the answers. We can't just update them for all vertices of both components in every step, though. We can do it for the smaller component, since that doesn't worsen the time complexity, but the larger one can get too large. To get around this, we'll just remember the information as "for component i, the first j vertices get their answers increased by k", where j is the size of this component before the merge. The values of j are even guaranteed to be non-decreasing!
These deferred updates are processed when the component is merged into another as the smaller one: go through its vertices from last to first, keep a running sum of all updates that cover the current vertex, and always check whether the last not-yet-counted update already covers it (and if so, add it to the running sum).
With the "small to large" merging technique, every vertex is merged into another component at most times, because after k merges, it's in a component of size at least 2k, so the sum of sizes of "smaller components" in all steps is
. That's also the time complexity of the union-find, because just a constant number of operations is done for every such vertex. Together with the sorting of edges, we get a complexity of
.
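Here is a compact sketch of the whole algorithm on a hard-coded sample graph (the variable names and the sample edges are illustrative). It follows the bookkeeping described above: deferred "first j members get +k" updates that are settled when a component is dissolved as the smaller one, or at the very end.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int N = 5;
    // (cost, u, v): a tiny sample graph; the real edges come from the input
    vector<array<long long, 3>> edges = {{1,0,1}, {2,1,2}, {3,3,4}, {4,2,3}, {5,0,4}};
    sort(edges.begin(), edges.end());

    vector<int> comp(N);                                  // component id of every vertex
    vector<vector<int>> members(N);                       // vertices of a component, in join order
    vector<vector<pair<size_t, long long>>> upd(N);       // deferred updates (j, k): first j members get +k
    vector<long long> ans(N, 0);
    for (int v = 0; v < N; ++v) { comp[v] = v; members[v] = {v}; }

    // apply all deferred updates of component c to its members (processed from last to first)
    auto flush = [&](int c) {
        long long sum = 0;
        size_t ptr = upd[c].size();
        for (int p = (int)members[c].size() - 1; p >= 0; --p) {
            while (ptr > 0 && upd[c][ptr - 1].first > (size_t)p) sum += upd[c][--ptr].second;
            ans[members[c][p]] += sum;
        }
        upd[c].clear();
    };

    for (auto& e : edges) {
        long long c = e[0];
        int a = comp[e[1]], b = comp[e[2]];
        if (a == b) continue;                             // already connected, the edge is never needed
        if (members[a].size() < members[b].size()) swap(a, b);   // b is the smaller component
        long long sa = members[a].size(), sb = members[b].size();
        upd[a].push_back({members[a].size(), c * sb});    // all current members of a gain c*sb, deferred
        flush(b);                                         // b is dissolved, so settle its pending updates first
        for (int w : members[b]) {
            ans[w] += c * sa;                             // w now reaches the sa vertices of a at cost c
            comp[w] = a;
            members[a].push_back(w);
        }
        members[b].clear();
    }
    for (int c = 0; c < N; ++c)
        if (!members[c].empty()) flush(c);                // settle the remaining deferred updates

    for (int v = 0; v < N; ++v) cout << ans[v] << " \n"[v + 1 == N];
}
```

For the sample graph it prints, for every vertex, the sum of its connection costs to all other vertices.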
C. Tomb Raiders
(diff: medium-hard)
Let's think about a slower solution using dynamic programming. Let's say that dp[t][x][y] is the amount of treasure gained by a thief standing at (x, y) at time t. If there's A[i][j] treasure at (i, j), then we know dp[1][x][y] = A[x][y]; we're also given a way to compute the other values:

dp[t][x][y] = sum of dp[t - 1][i][j] over all cells (i, j) with |i - x| + |j - y| < t
This is a straightforward way to find the result, but it's too slow. To speed it up, we'll introduce a transformation of coordinates: (i, j) → (i + j, i - j).
How does this help? Notice that we can replace the condition |i - x| + |j - y| < t equivalently by 4 conditions, which must hold simultaneously:
i + j > x + y - t,  i + j < x + y + t,  i - j > x - y - t,  i - j < x - y + t.
This means that after our transformation, the set of cells satisfying |i - x| + |j - y| < t (a diamond in the original grid) becomes a rectangle, and its sum can be calculated easily using 2D prefix sums.
One way to implement the DP with this transformation is: we remember the answer for (t, x, y) in dp[t][x + y][x - y + M] (to avoid negative indexing); after calculating all the values for a given t, we construct a 2D array of prefix sums S, where S[u][v] = sum of dp[t][u'][v'] over all u' ≤ u, v' ≤ v. Using the array S built from dp[t - 1], the formula for computing dp[t][x][y] then becomes

S[u2][v2] - S[u1 - 1][v2] - S[u2][v1 - 1] + S[u1 - 1][v1 - 1],

where u1 = x + y - t + 1, u2 = x + y + t - 1, v1 = x - y - t + 1 + M, v2 = x - y + t - 1 + M (all indices clamped to the valid range).
The answer is dp[T][x_0 + y_0][x_0 - y_0 + M] (T is the initial time from the input). Every element of the 3D arrays S and dp (with sizes up to T × (N + M) × (N + M)) can be calculated in O(1) time, so the time is O(T(N + M)^2).
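The exact recurrence depends on the statement, but the core trick (rotate the coordinates, then take rectangle sums in a 2D prefix-sum table) can be isolated. Below is a minimal sketch with illustrative names and a hard-coded sample grid; it computes, for every cell, the sum of all values at Manhattan distance less than t.

```cpp
#include <bits/stdc++.h>
using namespace std;

// for every cell (x, y) of g, return the sum of g[i][j] over |i - x| + |j - y| < t
vector<vector<long long>> diamondSums(const vector<vector<long long>>& g, int t) {
    int N = g.size(), M = g[0].size();
    int U = N + M;                                  // size of the rotated grid
    // rot[u][v] holds g[i][j] at u = i + j, v = i - j + M (shift avoids negative indices)
    vector<vector<long long>> rot(U + 1, vector<long long>(U + 1, 0));
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < M; ++j)
            rot[i + j][i - j + M] = g[i][j];
    // S[u + 1][v + 1] = sum of rot[0..u][0..v]
    vector<vector<long long>> S(U + 2, vector<long long>(U + 2, 0));
    for (int u = 0; u <= U; ++u)
        for (int v = 0; v <= U; ++v)
            S[u + 1][v + 1] = rot[u][v] + S[u][v + 1] + S[u + 1][v] - S[u][v];
    // inclusive rectangle sum in rotated coordinates, clamped to the array
    auto rect = [&](int u1, int v1, int u2, int v2) {
        u1 = max(u1, 0); v1 = max(v1, 0); u2 = min(u2, U); v2 = min(v2, U);
        if (u1 > u2 || v1 > v2) return 0LL;
        return S[u2 + 1][v2 + 1] - S[u1][v2 + 1] - S[u2 + 1][v1] + S[u1][v1];
    };
    vector<vector<long long>> res(N, vector<long long>(M));
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < M; ++y)
            // |i - x| + |j - y| < t  <=>  x + y - t < i + j < x + y + t  and  x - y - t < i - j < x - y + t
            res[x][y] = rect(x + y - t + 1, x - y + M - t + 1,
                             x + y + t - 1, x - y + M + t - 1);
    return res;
}

int main() {
    vector<vector<long long>> g = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    auto s = diamondSums(g, 2);                     // sums over cells at Manhattan distance <= 1
    for (auto& row : s) { for (long long v : row) cout << v << ' '; cout << '\n'; }
}
```

In the solution above, this rectangle-sum step is exactly what makes each dp layer computable in O((N + M)^2).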
D. Bakkar And The Algorithm Quiz
Let's count the number of rows that contain some marked cells (with an STL set, for example), and denote it A; the same with columns, and denote that as B. Now, w.l.o.g., let's assume A ≥ B.
It's now clear that we must use exactly A rooks: we need at least one per marked row, and if we put the first B rooks into the B marked columns (one into each), all B columns will also be covered.
The answer to the 2nd part of the problem lies in there, too. Since we use A rooks, it's exactly one rook per row; the only other condition is for all B significant columns to have rooks in them.
Once again, we'll use dynamic programming. Let P[i][j] denote the number of ways to reach a state where i rooks still remain to be placed and j of the B significant columns still don't have rooks in them. The next rook (we put them into the rows in some fixed order) must go either to one of the j uncovered significant columns, giving us j possibilities to get to the state (i - 1, j - 1), or to one of the other M - j columns, giving M - j possibilities to get to the state (i - 1, j). We can then compute P[i][j] as (M - j)P[i + 1][j] + (j + 1)P[i + 1][j + 1] (if j = B, the 2nd term is 0, so it's just (M - j)P[i + 1][j]).
The starting condition is P[A][B] = 1 (no rooks placed yet), and the result can be found in P[0][0]. The modulo isn't a problem either, as we can just reduce every P[i][j] modulo the required number after computing it. This approach has the complexity of O(AB).
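A minimal sketch of this DP with illustrative values (the modulus and the values of M, A, B are made up; in the real solution they come from the input):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long MOD = 1000000007;   // assumed modulus, replace with the one from the statement
    long long M = 8, A = 5, B = 3;      // sample: 8 columns, 5 marked rows, 3 marked columns (A >= B)

    // P[i][j] = number of ways to reach the state "i rooks still to place,
    //           j significant columns still without a rook", starting from P[A][B] = 1
    vector<vector<long long>> P(A + 1, vector<long long>(B + 1, 0));
    P[A][B] = 1;
    for (long long i = A - 1; i >= 0; --i)
        for (long long j = 0; j <= B; ++j) {
            long long ways = (M - j) * P[i + 1][j] % MOD;                // rook goes to one of the other M - j columns
            if (j < B) ways = (ways + (j + 1) * P[i + 1][j + 1]) % MOD;  // rook covers one of the j + 1 uncovered columns
            P[i][j] = ways;
        }
    cout << A << " " << P[0][0] << "\n";   // minimum number of rooks and the number of such placements
}
```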
E. Ghanophobia
(diff: easy)
The result of one match is given: Egypt:Ghana 1:6. In every test case, we're given the result of the other match. This problem was pretty much just about straightforward implementation of the rules described:
if the total number of goals scored in both matches together is different for both teams, the team that scored more wins
otherwise, the team that scored more goals on enemy ground wins (if they both scored the same, it's a tie)
Time complexity: O(1).
F. Bakkar In The Army
(diff: easy-medium)
Three identities are required for this solution: 1 + 2 + ... + n = n(n + 1) / 2, 1^2 + 2^2 + ... + n^2 = n(n + 1)(2n + 1) / 6, and 1 + 3 + ... + (2n - 1) = n^2.
The sum of the numbers in row number j is j^2.
First, we need to find the smallest a such that doing all the reps in the first a rows is sufficient. That means solving the inequality a(a + 1)(2a + 1) / 6 ≥ k, where k is the required total from the input. While a constant-time solution is possible (it's a 3rd degree polynomial), it's simpler to binary-search the answer.
Now, let's say we know a. Then, it's necessary to do all reps in the first a - 1 rows; after that, r = k - (a - 1)a(2a - 1) / 6 still remains to be done. Now, we need to find the necessary number of reps to do in the a-th row (let's denote it as b). We have 2 cases here:
r ≤ a(a + 1) / 2, so it's sufficient to consider the first a reps (1, 2, ..., a) only; then, we're looking for the smallest b to satisfy b(b + 1) / 2 ≥ r
r > a(a + 1) / 2, so we need to do the first a reps (with sum a(a + 1) / 2) and then b - a more reps (a - 1, a - 2, ..., 2a - b), with sum (3a - b - 1)(b - a) / 2, so we're again looking for the smallest b to satisfy a(a + 1) / 2 + (3a - b - 1)(b - a) / 2 ≥ r
Both cases can be solved by binary search again. Now, we know a and b; all that's left is counting the reps. Since there are 2i - 1 reps in row i, the number of reps in the first a - 1 rows is 1 + 3 + ... + (2(a - 1) - 1) = (a - 1)^2, and there are b more reps in the a-th row, so the answer is (a - 1)^2 + b.
Binary searching takes O(log k) time, so this is the complexity of the whole algorithm.
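A small sketch of both binary searches, under the assumption that row j consists of the reps 1, 2, ..., j, ..., 2, 1 and that we want the minimum number of reps whose values add up to at least the required total k; the value of k and the upper search bound are illustrative.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long k = 100;                                 // required total (sample value)

    // total of the first a full rows: 1^2 + 2^2 + ... + a^2 = a(a+1)(2a+1)/6
    auto rows = [](long long a) { return a * (a + 1) * (2 * a + 1) / 6; };

    // smallest a such that the first a rows are already enough
    long long lo = 1, hi = 1000000;                    // hi chosen large enough for the sample value
    while (lo < hi) {
        long long mid = (lo + hi) / 2;
        if (rows(mid) >= k) hi = mid; else lo = mid + 1;
    }
    long long a = lo, r = k - rows(a - 1);             // r = amount still needed inside row a

    // total of the first b reps of row a, assumed to be 1, 2, ..., a, a-1, ..., 2, 1
    auto prefix = [&](long long b) {
        if (b <= a) return b * (b + 1) / 2;
        long long extra = b - a;                       // the reps a-1, a-2, ..., a-extra
        return a * (a + 1) / 2 + extra * (2 * a - 1 - extra) / 2;
    };
    long long bl = 1, bh = 2 * a - 1;                  // smallest b with prefix(b) >= r
    while (bl < bh) {
        long long mid = (bl + bh) / 2;
        if (prefix(mid) >= r) bh = mid; else bl = mid + 1;
    }
    cout << (a - 1) * (a - 1) + bl << "\n";            // (a-1)^2 reps in the full rows plus b reps in row a
}
```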
G: let X be the number of un-takeable blocks; taking a middle block from some unused level increases X by 2, taking one of the side ones increases it by 1; the player facing X = N loses, so the winning positions are X = N - 1, N - 2 (N - 3 is losing again), N - 4, N - 5, ...; i.e., the losing positions are those where N - X is divisible by 3, so the starting position is losing exactly when N is divisible by 3
H: to be done
I: BFS (store them in an STL set, too) on all achievable prices < 1; for every query, find the 2 closest achievable prices and choose the better solution
J: to be done
K: let's sort the numbers and denote them as 1,2,...,N; the best arrangement is obviously (N-1,N-3,N-5,...1,...,N-6,N-4,N-2,N)
L: represent the queries as states (for queried string i, its first j characters form a subsequence of the already processed part of S) and group the states by the next expected character; construct S and, whenever a character c is appended, advance all states whose next expected character is c, then regroup them under their new next expected character; if a state reaches the end of its string, the answer for that query is YES
I don't understand the analysis of problem G. We can increase the number of un-takeable blocks by 3 or 4 using the condition that it is only allowed to remove a block from a level that is fully covered from the top (how do we use that condition, by the way?). Why is X = N a losing position?
Sorry, levels and blocks got mixed up. I thought this worked as a rough idea, at first. I have a proper strategy and will post it soon.