
BIBOARD - Editorial


Problem Link

Practice

Contest

Author: Tianyi

Tester: Misha Chorniy

Editorialist: Bhuvnesh Jain

Difficulty

HARD

Prerequisites

Flows, Minimum Cut

Problem

You are given a grid consisting of $0$(white), $1$(black) and $?$(unknown). You are required to convert $?$ to $0$ or $1$ such that the following cost is maximised:

$$\sum_{i=1}^{\mathrm{min}(N, M)} C_{B, i} \cdot B_i + C_{W, i} \cdot W_i\,.$$

where $B_i$ denotes the number of ways to select a black square of size $i$, $W_i$ denotes the number of ways to select a white square of size $i$, and the arrays $C_{B, i}$ and $C_{W, i}$ are provided in the input.

Explanation

Subtask 1: N * M ≤ 10

A simple brute force which converts each $?$ to $0$ or $1$ and calculates the cost of the board is sufficient to pass this subtask. The time complexity of the above approach is $O(2^{NM})$.

Subtask 2, 3: N * M ≤ 500

The constraints are small, and since the problem is about maximisation, it generally hints towards either a dynamic programming or a flow-based solution. We will use minimum cut to solve this problem. In case you are not familiar with the topic or its usage in such problems, please go through this awesome video tutorial by Anudeep Nekkanti.

Now, using ideas similar to those in the video, we design the algorithm below.

  1. We iterate through each possible square in the grid. If it consists only of white or black cells, we add the required cost, depending on its size, to the answer. If it also contains some question marks, we check whether it can be converted to only white cells, only black cells, or both. We optimistically add the corresponding white/black contribution to the answer. (Note that this is the standard setup where we first add every possible contribution to the answer and then, to maximise the final answer, use the minimum cut to find the minimum contribution that has to be taken back. This is the most important step of the problem.) Let the optimistic answer calculated so far be called the "initial ans".

  2. Now, we construct a graph as follows. Consider a source node representing white and a sink node representing black. For every optimistic conversion of a square to white/black considered above (by converting some $?$ to $0$ or $1$), add a new node and an edge between it and the corresponding white (source) or black (sink) node, with the weight of the edge equal to the cost of that square. Also, add edges of weight INFINITY (some large constant) between every cell containing $?$ in that square and the new node. To get an understanding of the graph construction, see the diagram below showing the graph for the sample case in the problem; a code sketch of the construction also follows this list.

  3. The answer to the problem is "initial ans - maximum_flow of above graph".
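To make the construction concrete, here is a rough sketch in Python (purely illustrative, not the author's code): add_edge and max_flow are assumed to come from any standard max-flow implementation such as Dinic's, node identifiers are arbitrary hashable labels, and CW[L] / CB[L] stand for the costs $C_{W, L}$ / $C_{B, L}$ indexed by square size.

    INF = float('inf')

    def build_and_solve(grid, CW, CB, add_edge, max_flow):
        # grid[r][c] in {'0', '1', '?'}; CW/CB are assumed 1-indexed by square size
        n, m = len(grid), len(grid[0])
        S, T = 'white_source', 'black_sink'
        cell = lambda r, c: ('cell', r, c)          # one node per '?' cell
        initial_ans = 0
        sq_id = 0
        for L in range(1, min(n, m) + 1):
            for r in range(n - L + 1):
                for c in range(m - L + 1):
                    cells = [(r + dr, c + dc) for dr in range(L) for dc in range(L)]
                    vals = {grid[x][y] for (x, y) in cells}
                    qs = [(x, y) for (x, y) in cells if grid[x][y] == '?']
                    if '1' not in vals:              # square can be made fully white
                        initial_ans += CW[L]
                        if qs:                       # optimistic reward, may be taken back by the cut
                            w = ('white_sq', sq_id); sq_id += 1
                            add_edge(S, w, CW[L])
                            for (x, y) in qs:
                                add_edge(w, cell(x, y), INF)
                    if '0' not in vals:              # square can be made fully black
                        initial_ans += CB[L]
                        if qs:
                            b = ('black_sq', sq_id); sq_id += 1
                            add_edge(b, T, CB[L])
                            for (x, y) in qs:
                                add_edge(cell(x, y), b, INF)
        return initial_ans - max_flow(S, T)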

To understand why the above algorithm works, we will reason about the cutting of each edge in terms of the minimum cut and the rewards/penalties it provides.

Edges of weight INFINITY will surely never be part of the minimum cut of the graph. So we consider the remaining 2 scenarios.

This image shows the scenario where the edge between the white node (source) and the node representing an optimistic white square of length $L$ is cut. This implies that one of the $?$ in the square chose to convert to $1$ instead of $0$. This can happen for whatever reason, for example, it might be costlier to cut the corresponding black edges to which the $?$ node also connects (such a node will exist, otherwise there would be no path from white, i.e. the source, to black, i.e. the sink).

This case is similar to the above one, just that instead one of the $?$ in the optimistic black square of length $L$ chose to convert to $0$ instead of $1$.

Thus the minimum cut in the above graph will contain exactly those edges telling us that an optimistic square which we considered for being white or black will not actually be white or black in the optimal selection, because one of its $?$ chose to convert to the opposite colour. We must thus subtract this minimum optimistic contribution, which we added to the "initial ans", to obtain the solution to the problem.

Let us now finally analyse the complexity of the above solution. In the worst case, the grid consists only of $?$, as no edges come out of cells containing $0$ or $1$. Let $K = min(N, M)$. So there will be $N * M + (N - 1) * (M - 1) + \cdots + (N - K) * (M - K) = O(K^3)$ intermediate white and black nodes, i.e. the ones connected to the grid cells and to the corresponding source or sink. For details of the proof, refer to this. The number of edges in the graph will be $O(K^4)$, as there can be at most $O(L^2)$ edges from an intermediate node representing a square of length $L$. Using Dinic's algorithm, which has a worst-case complexity of $O(V^2 * E)$, the overall complexity of the solution will be $O(K^{10})$ ~ $O({(N * M)}^3)$. The hidden constant factor is very low and Dinic's algorithm suffices to solve the problem.

Extra Observation

Note that the above solution using minimum cut also enables us to print one grid achieving the required cost. The construction of such a grid is simple. Just consider the edges from the white (source) and black (sink) nodes to the intermediate nodes which are not part of the minimum cut. This means that all the cells in the grid connected to such a node actually retained the colour which was optimistically assigned to them.

Once you are clear about the above idea, you can see the editorialist's implementation below for help. It also contains the part for printing a possible grid with the conversion of $?$ to $0$ or $1$.

Author's Editorial for the problem

View Content

Feel free to share your approach, if it was somewhat different.

Time Complexity

$O({(N * M)}^3)$

Space Complexity

$O({(N * M)}^3)$

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.

Editorialist's solution can be found here.


ARCTR - Editorial


Problem Link

Practice

Contest

Author: Igor Barenblat

Tester: Misha Chorniy

Editorialist: Bhuvnesh Jain

Difficulty

MEDIUM-HARD

Prerequisites

Dynamic Convex-Hull Trick, Heavy-Light Decomposition

Problem

You are given a tree with $N$ nodes. $M$ speedsters travel on the tree. The $i^{th}$ speedster starts at time $t_i$ from vertex $u_i$ and travels towards $v_i$ at a constant speed of $s_i$. For every vertex, we need to find the first time any speedster visits it. In case it is not visited by any speedster, report the answer as -1.

Explanation

For simplicity, let us assume that the tree is rooted at node $1$ and the depth of all vertices from the root is calculated. The depth is basically the distance of the vertex from the root of the tree i.e. $1$.

Let us first write the equation for the time taken by speedster $i$ to reach a vertex $x$. If the vertex doesn't lie on the path from $u_i$ to $v_i$, then it is not visited by speedster $i$, i.e. the time taken is INFINITY (a large constant). For all the other vertices on the directed path from $u_i$ to $v_i$, the time taken is given by:

$$\text{Time taken} = t_i + \frac{\text{Distance from vertex }u_i}{s_i}$$

$$\text{Distance between x and y} = \text{Depth[x]} + \text{Depth[y]} - 2 * \text{Depth[lca(x, y)]}$$

where $lca(x, y)$ is the lowest common ancestor of vertices $x$ and $y$.

We can now modify the equation for the time taken to reach any vertex on the path from $u_i$ to $v_i$ as follows:

Let the lowest common ancestor of $u_i$ and $v_i$ be $lca$. Calculate the final time at which we reach vertex $v_i$; let us denote this by $t_f$. We now split the path from $u_i$ to $v_i$ into 2 parts: one from $u_i$ to $lca$ and the other from $lca$ to $v_i$. Note that these paths are directed. The image below shows how to calculate the time at any vertices $x$ and $y$ on the 2 different paths.

From the above figure, for a node $x$ on path from $u_i$ to $lca$, the time to reach it is:

$$\text{Time taken to reach x} = t_i + \frac{(Depth[u] - Depth[x])}{s_i} = \big(t_i + \frac{Depth[u]}{s_i}\big) - \frac{1}{s_i} * Depth[x]$$

Similarly, for a node $y$ on path from $lca$ to $v_i$, the time to reach it is:

$$\text{Time taken to reach y} = t_f - \frac{(Depth[v] - Depth[y])}{s_i} = \big(t_f - \frac{Depth[v]}{s_i}\big) + \frac{1}{s_i} * Depth[y]$$

If we observe carefully, both the above equations have the form $Y = MX + C$: the bracketed part is $C$, the time to be calculated is $Y$, the depth of the node is $X$, and the slope $M$ is $-\frac{1}{s_i}$ in the first equation and $+\frac{1}{s_i}$ in the second.

The problem asks us to find the minimum time at which every node is visited by any speedster, and the above equations clearly show that the time to reach a node depends only on its depth and on the pair $(constant, slope)$, which is known beforehand for every speedster. Thus, this indicates that the final solution will use the dynamic convex-hull trick (the dynamic variant, as the slopes are not guaranteed to arrive in increasing/decreasing order). If you don't know about it or its use case, you can read about it here.

So, let us first try to solve a simpler version of the problem where the tree is a bamboo (a straight path). This basically removes the tree structure from the problem and reduces it to updates and queries of the following form on an array:

  1. Update: Add a line $(M, C)$ denoting $Y = MX + C$ to every index in range $[l, r]$.

  2. Query: Find the minimum value at any index $l$ for a given value of $X = w$.

We have range updates and point queries. So, we will use segment trees for the solution. In each node of the segment tree, we will keep the lines (represented by $(M, C)$) and for querying, we will just use the convex-hull trick to evaluate the minimum at node efficiently. Below is the pseudo-code for the segment-tree:


    def init(t, i, j):
        seg[t].clear()                  # remove all lines stored at this node
        if i == j:
            return
        mid = (i + j) // 2
        init(2*t, i, mid)
        init(2*t + 1, mid + 1, j)

    def update(t, i, j, l, r, M, C):
        if i > r or j < l:
            return
        if l <= i and j <= r:
            # node range lies completely inside the update range:
            # store the line at this node
            seg[t].add_line((M, C))
            return
        mid = (i + j) // 2
        update(2*t, i, mid, l, r, M, C)
        update(2*t + 1, mid + 1, j, l, r, M, C)

    def query(t, i, j, pos, X):
        # lines covering position `pos` may be stored at any node on the
        # root-to-leaf path for `pos`, so evaluate the hull at each of them
        best = seg[t].evaluate_minimum(X)
        if i == j:
            return best
        mid = (i + j) // 2
        if pos <= mid:
            return min(best, query(2*t, i, mid, pos, X))
        else:
            return min(best, query(2*t + 1, mid + 1, j, pos, X))

The time complexity of the above operations on the segment tree is $O(\log{N} * \log{M})$ for both update and query. This is because each update and query visits at most $O(\log{N})$ nodes, and the operation at every node (addition of a line or querying for the minimum) is $O(\log{M})$. For a good reference code for the dynamic convex hull, you can look up this.
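The pseudo-code above assumes that each segment-tree node holds a container supporting add_line, evaluate_minimum and clear. One common way to realise such a container (an illustrative sketch, not the reference code linked above) is a Li Chao tree over the range of possible depths:

    class LiChaoTree:
        # Minimal sketch of a line container over integer X in [lo, hi] supporting
        # add_line((M, C)) and evaluate_minimum(X) for the minimum of M*X + C.
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
            self.line = None                    # the (M, C) line kept at this node
            self.left = self.right = None

        @staticmethod
        def _value(line, x):
            M, C = line
            return M * x + C

        def clear(self):
            self.line = None
            self.left = self.right = None

        def add_line(self, new):
            if self.line is None:
                self.line = new
                return
            lo, hi = self.lo, self.hi
            mid = (lo + hi) // 2
            left_better = self._value(new, lo) < self._value(self.line, lo)
            mid_better = self._value(new, mid) < self._value(self.line, mid)
            if mid_better:                      # keep the line that wins at mid here
                self.line, new = new, self.line
            if lo == hi:
                return
            if left_better != mid_better:       # the loser can still win on the left half
                if self.left is None:
                    self.left = LiChaoTree(lo, mid)
                self.left.add_line(new)
            else:                               # otherwise it can only win on the right half
                if self.right is None:
                    self.right = LiChaoTree(mid + 1, hi)
                self.right.add_line(new)

        def evaluate_minimum(self, x):
            best = self._value(self.line, x) if self.line is not None else float('inf')
            if self.lo < self.hi:
                mid = (self.lo + self.hi) // 2
                child = self.left if x <= mid else self.right
                if child is not None:
                    best = min(best, child.evaluate_minimum(x))
            return best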

Back to the tree problem. We have seen that we can easily handle the queries on an array, and the queries on the tree are basically those on a path. Voila, we can simply use heavy-light decomposition or any other technique you are comfortable with (Euler tour or centroid decomposition). Thus, we solve the problem efficiently.

The overall time complexity of the above approach using heavy-light decomposition will be $O({\log}^{2}{N} * \log{M})$ per update and query, as it divides the path between the vertices $u$ and $v$ into $O(\log{N})$ paths, each of which is a straight path and can be handled using the segment tree mentioned above.
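Since the author's implementation is only linked, here is a rough, illustrative sketch of the heavy-light machinery (an assumption for exposition, not the author's code): it numbers the vertices so that heavy chains occupy contiguous positions and splits any $u$-$v$ path into $O(\log{N})$ contiguous position ranges, which is exactly what the segment tree above needs.

    def build_hld(n, adj, root=0):
        parent = [-1] * n
        depth = [0] * n
        size = [1] * n
        heavy = [-1] * n
        order = []
        seen = [False] * n
        seen[root] = True
        stack = [root]
        while stack:                              # iterative DFS (parents before children)
            v = stack.pop()
            order.append(v)
            for u in adj[v]:
                if not seen[u]:
                    seen[u] = True
                    parent[u] = v
                    depth[u] = depth[v] + 1
                    stack.append(u)
        for v in reversed(order):                 # children are processed before parents
            p = parent[v]
            if p != -1:
                size[p] += size[v]
                if heavy[p] == -1 or size[v] > size[heavy[p]]:
                    heavy[p] = v                  # heaviest child continues the chain
        head = [0] * n
        pos = [0] * n
        cur = 0
        stack = [root]
        while stack:                              # walk down each chain, numbering vertices
            h = stack.pop()
            v = h
            while v != -1:
                head[v] = h
                pos[v] = cur
                cur += 1
                for u in adj[v]:
                    if u != parent[v] and u != heavy[v]:
                        stack.append(u)           # light children start new chains
                v = heavy[v]
        return parent, depth, head, pos

    def path_segments(u, v, parent, depth, head, pos):
        # contiguous (l, r) position ranges that together cover the u-v path
        segments = []
        while head[u] != head[v]:
            if depth[head[u]] < depth[head[v]]:
                u, v = v, u
            segments.append((pos[head[u]], pos[u]))
            u = parent[head[u]]
        segments.append((min(pos[u], pos[v]), max(pos[u], pos[v])))
        return segments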

You can look at the author's implementation for more details.

Feel free to share your approach, if it was somewhat different.

Time Complexity

$O((N + M) * {\log}^{2}{N} * \log{M})$

Space Complexity

$O(N + M)$

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.

VSN has a closed form solution


I'm not sure whether to class this as an editorial or something else, but it's interesting. A few hours well spent digging into this.

I figured during the competition that VSN should have a closed-form expression, a quadratic equation in something like 13 variables. It turns out that that is indeed true. After some Mathematica magic I derived a “beautiful” formula that solves the problem. As mentioned in another question, Python 3 has TLE problems with the binary search method; with the closed-form solution, Python 3 actually passes, albeit with a struggle: 18916971.

The formula in python form is

2*cy**2*dx*px + 2*cz**2*dx*px - 2*cx*cy*dy*px - 2*cx*cz*dz*px + 
2*cy*dy*px**2 + 2*cz*dz*px**2 - 2*cx*cy*dx*py + 2*cx**2*dy*py + 
2*cz**2*dy*py - 2*cy*cz*dz*py - 2*cy*dx*px*py - 2*cx*dy*px*py + 
2*cx*dx*py**2 + 2*cz*dz*py**2 - 2*cx*cz*dx*pz - 2*cy*cz*dy*pz + 
2*cx**2*dz*pz + 2*cy**2*dz*pz - 2*cz*dx*px*pz - 2*cx*dz*px*pz - 
2*cz*dy*py*pz - 2*cy*dz*py*pz + 2*cx*dx*pz**2 + 2*cy*dy*pz**2 - 
2*cy**2*dx*qx - 2*cz**2*dx*qx + 2*cx*cy*dy*qx + 2*cx*cz*dz*qx - 
2*cy*dy*px*qx - 2*cz*dz*px*qx + 4*cy*dx*py*qx - 2*cx*dy*py*qx + 
2*dy*px*py*qx - 2*dx*py**2*qx + 4*cz*dx*pz*qx - 2*cx*dz*pz*qx + 
2*dz*px*pz*qx - 2*dx*pz**2*qx + 2*cx*cy*dx*qy - 2*cx**2*dy*qy - 
2*cz**2*dy*qy + 2*cy*cz*dz*qy - 2*cy*dx*px*qy + 4*cx*dy*px*qy - 
2*dy*px**2*qy - 2*cx*dx*py*qy - 2*cz*dz*py*qy + 2*dx*px*py*qy + 
4*cz*dy*pz*qy - 2*cy*dz*pz*qy + 2*dz*py*pz*qy - 2*dy*pz**2*qy + 
2*cx*cz*dx*qz + 2*cy*cz*dy*qz - 2*cx**2*dz*qz - 2*cy**2*dz*qz - 
2*cz*dx*px*qz + 4*cx*dz*px*qz - 2*dz*px**2*qz - 2*cz*dy*py*qz + 
4*cy*dz*py*qz - 2*dz*py**2*qz - 2*cx*dx*pz*qz - 2*cy*dy*pz*qz + 
2*dx*px*pz*qz + 2*dy*py*pz*qz - 2*dx*px*r**2  - 2*dy*py*r**2  - 
2*dz*pz*r**2  + 2*dx*qx*r**2  + 2*dy*qy*r**2  + 2*dz*qz*r**2  + 
sqrt((-2*cy**2*dx*px - 2*cz**2*dx*px + 2*cx*cy*dy*px + 2*cx*cz*dz*px - 
     2*cy*dy*px**2 - 2*cz*dz*px**2 + 2*cx*cy*dx*py - 2*cx**2*dy*py - 
     2*cz**2*dy*py + 2*cy*cz*dz*py + 2*cy*dx*px*py + 2*cx*dy*px*py - 
     2*cx*dx*py**2 - 2*cz*dz*py**2 + 2*cx*cz*dx*pz + 2*cy*cz*dy*pz - 
     2*cx**2*dz*pz - 2*cy**2*dz*pz + 2*cz*dx*px*pz + 2*cx*dz*px*pz + 
     2*cz*dy*py*pz + 2*cy*dz*py*pz - 2*cx*dx*pz**2 - 2*cy*dy*pz**2 + 
     2*cy**2*dx*qx + 2*cz**2*dx*qx - 2*cx*cy*dy*qx - 2*cx*cz*dz*qx + 
     2*cy*dy*px*qx + 2*cz*dz*px*qx - 4*cy*dx*py*qx + 2*cx*dy*py*qx - 
     2*dy*px*py*qx + 2*dx*py**2*qx - 4*cz*dx*pz*qx + 2*cx*dz*pz*qx - 
     2*dz*px*pz*qx + 2*dx*pz**2*qx - 2*cx*cy*dx*qy + 2*cx**2*dy*qy + 
     2*cz**2*dy*qy - 2*cy*cz*dz*qy + 2*cy*dx*px*qy - 4*cx*dy*px*qy + 
     2*dy*px**2*qy + 2*cx*dx*py*qy + 2*cz*dz*py*qy - 2*dx*px*py*qy - 
     4*cz*dy*pz*qy + 2*cy*dz*pz*qy - 2*dz*py*pz*qy + 2*dy*pz**2*qy - 
     2*cx*cz*dx*qz - 2*cy*cz*dy*qz + 2*cx**2*dz*qz + 2*cy**2*dz*qz + 
     2*cz*dx*px*qz - 4*cx*dz*px*qz + 2*dz*px**2*qz + 2*cz*dy*py*qz - 
     4*cy*dz*py*qz + 2*dz*py**2*qz + 2*cx*dx*pz*qz + 2*cy*dy*pz*qz - 
     2*dx*px*pz*qz - 2*dy*py*pz*qz + 2*dx*px*r**2 + 2*dy*py*r**2 + 
     2*dz*pz*r**2 - 2*dx*qx*r**2 - 2*dy*qy*r**2 - 2*dz*qz*r**2)**2 - 
  4*(cy**2*dx**2 + cz**2*dx**2 - 2*cx*cy*dx*dy + cx**2*dy**2 + 
     cz**2*dy**2 - 2*cx*cz*dx*dz - 2*cy*cz*dy*dz + cx**2*dz**2 + 
     cy**2*dz**2 + 2*cy*dx*dy*px - 2*cx*dy**2*px + 2*cz*dx*dz*px - 
     2*cx*dz**2*px + dy**2*px**2 + dz**2*px**2 - 2*cy*dx**2*py + 
     2*cx*dx*dy*py + 2*cz*dy*dz*py - 2*cy*dz**2*py - 2*dx*dy*px*py + 
     dx**2*py**2 + dz**2*py**2 - 2*cz*dx**2*pz - 2*cz*dy**2*pz + 
     2*cx*dx*dz*pz + 2*cy*dy*dz*pz - 2*dx*dz*px*pz - 2*dy*dz*py*pz + 
     dx**2*pz**2 + dy**2*pz**2 - dx**2*r**2 - dy**2*r**2 - dz**2*r**2)*
   (cy**2*px**2 + cz**2*px**2 - 2*cx*cy*px*py + cx**2*py**2 + 
     cz**2*py**2 - 2*cx*cz*px*pz - 2*cy*cz*py*pz + cx**2*pz**2 + 
     cy**2*pz**2 - 2*cy**2*px*qx - 2*cz**2*px*qx + 2*cx*cy*py*qx + 
     2*cy*px*py*qx - 2*cx*py**2*qx + 2*cx*cz*pz*qx + 2*cz*px*pz*qx - 
     2*cx*pz**2*qx + cy**2*qx**2 + cz**2*qx**2 - 2*cy*py*qx**2 + 
     py**2*qx**2 - 2*cz*pz*qx**2 + pz**2*qx**2 + 2*cx*cy*px*qy - 
     2*cy*px**2*qy - 2*cx**2*py*qy - 2*cz**2*py*qy + 2*cx*px*py*qy + 
     2*cy*cz*pz*qy + 2*cz*py*pz*qy - 2*cy*pz**2*qy - 2*cx*cy*qx*qy + 
     2*cy*px*qx*qy + 2*cx*py*qx*qy - 2*px*py*qx*qy + cx**2*qy**2 + 
     cz**2*qy**2 - 2*cx*px*qy**2 + px**2*qy**2 - 2*cz*pz*qy**2 + 
     pz**2*qy**2 + 2*cx*cz*px*qz - 2*cz*px**2*qz + 2*cy*cz*py*qz - 
     2*cz*py**2*qz - 2*cx**2*pz*qz - 2*cy**2*pz*qz + 2*cx*px*pz*qz + 
     2*cy*py*pz*qz - 2*cx*cz*qx*qz + 2*cz*px*qx*qz + 2*cx*pz*qx*qz - 
     2*px*pz*qx*qz - 2*cy*cz*qy*qz + 2*cz*py*qy*qz + 2*cy*pz*qy*qz - 
     2*py*pz*qy*qz + cx**2*qz**2 + cy**2*qz**2 - 2*cx*px*qz**2 + 
     px**2*qz**2 - 2*cy*py*qz**2 + py**2*qz**2 - px**2*r**2 - 
     py**2*r**2 - pz**2*r**2 + 2*px*qx*r**2 - qx**2*r**2 + 
     2*py*qy*r**2 - qy**2*r**2 + 2*pz*qz*r**2 - qz**2*r**2)))
   /
(2*(cy**2*dx**2 + cz**2*dx**2 - 2*cx*cy*dx*dy + cx**2*dy**2 + 
    cz**2*dy**2 - 2*cx*cz*dx*dz - 2*cy*cz*dy*dz + cx**2*dz**2 + 
    cy**2*dz**2 + 2*cy*dx*dy*px - 2*cx*dy**2*px + 2*cz*dx*dz*px - 
    2*cx*dz**2*px + dy**2*px**2 + dz**2*px**2 - 2*cy*dx**2*py + 
    2*cx*dx*dy*py + 2*cz*dy*dz*py - 2*cy*dz**2*py - 2*dx*dy*px*py + 
    dx**2*py**2 + dz**2*py**2 - 2*cz*dx**2*pz - 2*cz*dy**2*pz + 
    2*cx*dx*dz*pz + 2*cy*dy*dz*pz - 2*dx*dz*px*pz - 2*dy*dz*py*pz + 
    dx**2*pz**2 + dy**2*pz**2 - dx**2*r**2 - dy**2*r**2 - dz**2*r**2))

One problem is that the formula actually isn't defined at $r$, which can be worked around by evaluating the function at some $r-\epsilon$. Maybe it's possible to rewrite the equation so that this limit problem isn't an issue, but I will not touch that. :P

I would insert the LaTeX formula in my post, but that would probably break something... I tried to make an image of the formula but my tools failed to create the resolution needed to get a crisp image (>30k wide image). In lieu of that here is a pdf version of the beautiful formula and even that required some work since LaTeX really doesn't like super wide equations.

Uncle johny


// https://www.codechef.com/submit/complete/18917369

#include <bits/stdc++.h>

using namespace std;
int main(){
    int t;
    cin>>t;
    while(t--){
        int n;
        cin>>n;
        int a[100];
        int b[100];
        for(int i=1;i<=n;i++){
            cin>>a[i];
        }
        int d;    // Johny's initial position
        cin>>d;
        for(int i=1;i<=n;i++){
            b[i]=a[i];
        }
        sort(a, a+n+1);
        int m=b[d];

        int count=0;
        for(int j=0;j<n;j++){
            if(a[j]!=m){
                count++;
            } else {
                break;
            }
        }
        cout<<count+1<<endl;
    }
    return 0;
}

the output is correct still showing wrong answer in GCD2 easy level


T = int(input())
for i in range(T):
    A, B = list(map(int, input().split()))
    k = []
    x = []
    for i in range(1, 100):
        if A % i == 0:
            k = k + [i]

for i in range(1,100):
    if B%i==0 :
     x = x+ [i]

v=set(k)
w=set(x)
z=v.intersection(w)

p = max(z)
print(p)

BUILDIT - Editorial


Problem Link

Practice

Contest

Author: Teja Vardhan Reddy

Tester: Misha Chorniy

Editorialist: Bhuvnesh Jain

Difficulty

MEDIUM-HARD

Prerequisites

Matrix Multiplication, Recurrences, Linearity of Expectation, Probabilities

Problem

You are given a circle which is equally divided into $H$ parts. There are $N$ buildings placed on the circle at some points, not necessarily distinct. You can start from any point and start shooting within a range of $X$. All buildings within the range collapse. You need to find the expected number of buildings which collapse, where the probability of starting from any point is given by the following probability distribution:

$$a_i \text{ is given for } 1 \le i \le K$$

$$a_i = \sum_{j=1}^{j=K} c_j\cdot a_{i-j} \quad \forall\,i:\;K \lt i \le h\,.$$

Explanation

Subtask 1: N ≤ 1000, H ≤ 10000, K ≤ 10

A simple brute force solution which first finds the probability of starting at each point and then simply iterates through each point and finds the number of buildings which are collapsed will work within the required constraints.

The time complexity of the above approach will be $O(H * K + N * H)$ as $X = H$ in the worst case. The first part is for the pre-computation and the next part is for finding the desired number of buildings which are collapsed. The space complexity is $O(H)$.

Subtask 2: N ≤ 50000, H ≤ 1000000, K ≤ 10

All the further subtasks require the knowledge of linearity of expectation and indicator random variables.

Let us define the indicator random variable $Y_i$. It equals $1$ if there is a building at position $i$, else $0$. Using this definition, the expected number of buildings which collapse is:

$$\text{Expected buildings} = \sum_{i=1}^{i=H} {a_i * \sum_{j=i}^{j=i+X} Y_j}$$

where the second sum is taken in a circular manner.

Using linearity of expectation (or rearrangement of terms), we can rewrite the above expression as:

$$\text{Expected buildings} = \sum_{i=1}^{i=H} {Y_i * \sum_{j=i-X}^{j=i} a_j}$$

where the second sum is again taken in a circular manner.

Since $Y_i$ is $1$ only at the points where buildings are present, the outer sum has only $O(N)$ non-zero terms. For the inner sum, we can just maintain a prefix sum of the array $a$ and thus find the required value in $O(1)$.

Using this approach, the time complexity becomes $O(H * K + H + N)$ which is enough to pass this subtask. The space complexity is $O(H)$.
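A minimal sketch of this subtask-2 computation (illustrative only, with assumed names, using plain floats rather than whatever arithmetic the reference solution uses; positions are 0-indexed here):

    def expected_collapses(a, buildings, X):
        # a[i] = probability of starting at point i (0-indexed, len(a) == H),
        # buildings = list of building positions, X = shooting range (assumes X < H)
        H = len(a)
        pref = [0.0] * (H + 1)
        for i in range(H):
            pref[i + 1] = pref[i] + a[i]        # prefix sums of a
        total = 0.0
        for p in buildings:
            lo = p - X                          # starting points in [p - X, p] hit building p
            if lo >= 0:
                total += pref[p + 1] - pref[lo]
            else:                               # the window wraps around the circle
                total += pref[p + 1] + (pref[H] - pref[H + lo])
        return total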

You can see the editorialist's approach for the first two subtasks below.

Full Solution Approaches

Author/Tester Solution: Matrix multiplication for recurrence solving

The last approach was bad, as it required us to calculate the probability of starting at each point, which is not possible since $H$ is very large. If observed carefully, the way the future $a_i$ are generated is given by a recurrence relation. Don't be confused by the convolution-like form. :(

In case you don't know how to solve recurrence using matrix multiplication, I suggest you go through this blog before.

In the last approach, we needed to calculate the prefix sum of some given range, for the recurrence relation, in an efficient manner. We can extend our matrix from the $K * K$ one for the normal recurrence to include one extra row which will calculate the prefix sum. Below is the idea with a recurrence containing 3 terms. We can generalise it later.

Let the recurrence be $a_m = c_1 * a_{m-1} + c_2 * a_{m-2} + c_3 * a_{m-3}$. As per the blog, the matrix is given by

$$ \begin{bmatrix} c_1 & c_2 & c_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} a_{m-1} \\ a_{m-2} \\ a_{m-3} \end{bmatrix} = \begin{bmatrix} a_m \\ a_{m-1} \\ a_{m-2} \end{bmatrix} $$

To extend it to contain prefix sum as well, the matrix will look like:

$$ \begin{bmatrix} c_1 & c_2 & c_3 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ c_1 & c_2 & c_3 & 1 \end{bmatrix} \begin{bmatrix} a_{m-1} \\ a_{m-2} \\ a_{m-3} \\ \sum_{i=1}^{i=m-1} a_i \end{bmatrix} = \begin{bmatrix} a_m \\ a_{m-1} \\ a_{m-2} \\ \sum_{i=1}^{i=m} a_i \end{bmatrix} $$

The idea is that we store the initial prefix sum as the last value of the vector and add the current value to the prefix sum at every step to extend it further. This can be easily extended to higher-order recurrences as well.
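As a concrete illustration of this augmented matrix (a minimal sketch with assumed helper names and modulus, not the author's code), here is how the prefix sum $a_1 + \cdots + a_m$ of the 3-term example above could be computed:

    MOD = 10**9 + 7          # an assumed modulus, just for illustration

    def mat_mul(A, B):
        n, m, p = len(A), len(B), len(B[0])
        C = [[0] * p for _ in range(n)]
        for i in range(n):
            for k in range(m):
                if A[i][k]:
                    aik = A[i][k]
                    for j in range(p):
                        C[i][j] = (C[i][j] + aik * B[k][j]) % MOD
        return C

    def mat_pow(A, e):
        n = len(A)
        R = [[int(i == j) for j in range(n)] for i in range(n)]     # identity
        while e:
            if e & 1:
                R = mat_mul(R, A)
            A = mat_mul(A, A)
            e >>= 1
        return R

    def prefix_sum_upto(m, c, a):
        # c = [c1, c2, c3], a = [a1, a2, a3]; returns (a_1 + ... + a_m) mod MOD for m >= 3
        R = [
            [c[0], c[1], c[2], 0],
            [1,    0,    0,    0],
            [0,    1,    0,    0],
            [c[0], c[1], c[2], 1],      # extra row accumulates the prefix sum
        ]
        B = [[a[2]], [a[1]], [a[0]], [sum(a) % MOD]]    # (a_3, a_2, a_1, a_1 + a_2 + a_3)
        col = mat_mul(mat_pow(R, m - 3), B)
        return col[3][0]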

Using the above idea naively, we can solve the problem in $O(N * K^3 * \log{H})$, where for every building we use matrix multiplication to calculate the prefix sums. This is enough to pass the ${3}^{rd}$ subtask but not the last one.

To solve the last subtask, we need to look at one important detail of matrix multiplication. Given matrices of sizes $(a, b)$ and $(b, c)$, the complexity of their multiplication is $O(a * b * c)$. We are used to using square matrices, so we say the complexity is always cubic. In recurrences, the last step involves multiplying the recurrence matrix, of size $(K, K)$, with the base vector, of size $(K, 1)$, which takes $O(K^2)$ instead of $O(K^3)$. This gives us a neat optimisation as follows:

$${R}^{n} * B = R^{2^{i_1}} * ( \cdots * ({R}^{2^{i_{\log{H}}}} * B))$$

where $R$ is the recurrent matrix ($(K+1, K+1)$ matrix which is described above), $B$ is the base matrix ($(K+1, 1)$ matrix which is described above) and $n = 2^{i_1} + 2^{i_2} + \cdots + 2^{i_{\log{H}}}$.

An example of above equation is:

$${R}^{11} * B = R^1 * (R^2 * (R^8 * B))$$

Note that the above step now needs $O(K^2 * \log{H})$ time, reducing a factor of $K$ from the previous approach. But we need to precompute the powers $R^{2^w}$ of the recurrence matrix. This can be done in $O(K^3 * \log{H})$.

Using the above ideas of precomputation and matrix multiplication, the complexity is $O(K^3 \log{H} + N * K^2 * \log{H})$. This will easily pass all the subtasks.

For more details, you can refer to the author's or tester's solution below.

Editorialist Solution: GP Sum of a matrix (Bad constant factor)

The ideas of precomputation and matrix multiplication described above hold in the editorialist solution too.

The solution uses the following idea for finding the prefix sums or recurrence:

$$a_1 + a_2 + \cdots + a_m = R^0 * B + R^1 * B + \cdots + R^{(m-1)} * B = (R^0 + R^1 + \cdots + R^{(m-1)}) * B$$

So, we need to find the GP (geometric progression) sum of a matrix. This is a known problem. If you don't know about it, you can read a similar problem here. But the only problem with this approach is the large constant factor involved. The matrix used to calculate the GP sum of a matrix is twice the size of the given matrix, i.e. the matrix used will have size $(2K, 2K)$. Hence, a constant factor of $2^2 = 4$ is added to the complexity of the solution, which is very hard to deal with. But with some neat observations, like some parts of the matrix always retaining particular values, we can reduce the constant factor in the solution too.
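For reference, the standard block-matrix identity behind this geometric sum (a sketch of the known trick, which is where the doubled matrix size comes from) is

$$ {\begin{bmatrix} R & I \\ 0 & I \end{bmatrix}}^{m} = \begin{bmatrix} R^{m} & I + R + \cdots + R^{m-1} \\ 0 & I \end{bmatrix} $$

so a single exponentiation of a matrix of twice the size yields both $R^m$ and the required sum $R^0 + R^1 + \cdots + R^{m-1}$.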

I understand the above is a very brief idea of the solution, but in case you have any problem, you can read through the solution and ask any doubts you have in the comment section below.

The time complexity of the approach will be $O(4 * K^3 * \log{H} + 2 * N * K^2 * \log{H})$. The space complexity will be $O(4 * K^2)$.

Feel free to share your approach, if it was somewhat different.

Extra Tip: Reduce constant factor in modular matrix multiplication

Below is the general code used to calculate matrix multiplication in the modular field:


  for i in [1, n]:
    for j in [1, n]:
      for k in [1, n]:
        c[i][j] = (c[i][j] + a[i][k] * b[k][j]) % mod

It can be easily seen that above code uses $O(N^3)$ mod operations. It should be remembered that mod operations are costly operations as compared to simple operations like addition, subtraction, multiplication etc. We observe the following identity:

$$X \% \text{mod} = (X \% {\text{mod}}^2) \% \text{mod}$$

Above can be easily proved using $X = q * mod + r$. Using this fact, we can reduce the number of mod operations to $O(N^2)$ in matrix multiplication. Below is the pseudo-code for it:


  mod2 = mod * mod
  for i in [1, n]:
    for j in [1, n]:
      c[i][j] = 0
      for k in [1, n]:
        c[i][j] += a[i][k] * b[k][j]    # Take care of overflows in integer multiplication.
        if c[i][j] >= mod2:
          c[i][j] -= mod2
      c[i][j] %= mod

Note that the above trick reduces the running time by approximately 0.8-0.9 seconds. Though it is not required for the complete solution, knowing it can always help and might be useful somewhere else.

Time Complexity

$O(K^3 \log{H} + N * K^2 * \log{H})$

Space Complexity

$O(K^3)$

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.

Editorialist's solution for subtask 1 and 2 can be found here.

Editorialist's solution can be found here.

RUN TIME ERROR in c++ code

Why does CodeChef not make the test cases of contest problems public after the contest gets over?


My humble request to @admin is to share the test cases of contest problems along with the editorial, or to provide a feature to see the test cases of a submission after the contest gets over.
While practicing, when one gets WA, TLE or whatever error, it should show the test cases where the code is failing, similar to platforms like Codeforces, HackerRank, HackerEarth, etc.

The reason is simple: during practice, when we try to upsolve a contest problem, it will save us time in figuring out what's wrong with our code.
Whenever we get WA, we try our best to figure out the corner cases where the code is failing; if we succeed, that's good,
but if we do not succeed, we post a link to our submission on Discuss and have to wait until someone tells us the corner case... Most of the time someone on the CodeChef Discuss forum helps to figure out on which test case the code is failing, but sometimes we do not get replies...
If we knew the test cases, we could figure out by ourselves where the code is failing... It would save us the time of posting "find where my code is failing" type questions on Discuss and would also save others the time of answering those questions.

I am facing this issue while trying to figure out what's wrong with my June Long Challenge "TWOFL" solution.
Sorry for my bad English.


Unfair Codechef Ratings


So, due to a small mistake a while ago when my rating was dropped by around 500 points for plagiarism in a really old contest, I didn't say anything. I gave the June Long Challenge and, to my surprise, despite having 500 points and a ranking of 71, I got an increase of just 200 points, while some other people from my college, some having solved only 2 questions, got a boost of 175 points. Is there any basis behind this? What's the point of a lower-rated person even solving the questions in long challenges then? Someone please look into this, as I am not the only one pissed off by this.

Should we have proper ordering of questions in Short Contests


While trying problems in a short contest, most of you must have been puzzled about where to start. Which is the easiest problem? Initially the questions are sorted randomly, and sometimes the 4th and 5th questions in the list turn out to be the easiest. So it becomes pure luck, which causes a wastage of 5-10 minutes. Shouldn't we have an ordering of questions?

Uncle Johny


https://www.codechef.com/viewsolution/18918873 //

#include <bits/stdc++.h>

using namespace std;
int main(){
    int t;
    cin>>t;
    while(t--){
        int n;
        cin>>n;
        int a[100];
        int b[100];
        for(int i=1;i<=n;i++){
            cin>>a[i];
        }

        int d;    // initial Johny position
        cin>>d;
        for(int i=1;i<=n;i++){
            b[i]=a[i];
        }

        sort(a, a+n+1);
        int m=b[d];

        int count=0;
        for(int j=0;j<n;j++){
            if(a[j]!=m){
                count++;
            } else {
                break;
            }
        }

        cout<<count+1<<endl;
    }
    return 0;
}

New blog for Competitive Programmers


Hey guys, I have started a new blog. The first article is on DFS lowlinks. You can read it here. Feel free to comment if anything is wrong or to suggest any changes you want me to make. Also feel free to suggest topics you want me to write an article on. Thanks. :D

recurrence relation with offset technique


A recurrence relation of this type: $F_n = 2F_{n - 1} + 3F_{n - 2} + 3$, where the sums of the coefficients of the recurring terms on the two sides of the equation are not equal.

I have seen a CodeChef video (by Kevin) where they explain adding an offset to solve this type of recurrence by removing the constant term. The recurrence can be converted to $G_n = 2G_{n-1} + 3G_{n-2}$ and matrix exponentiation can later be used to solve it with the matrix $\begin{bmatrix} 2&3\\ 1&0\\ \end{bmatrix}$ to obtain the value of $G_n \textit{ mod }M$. For example,

$\begin{align*} &\text{Let, }F_n = G_n + C\\ &\Rightarrow G_n + C = 2\left [ G_{n - 1} + C \right ] + 3\left [ G_{n - 2} + C \right ] + 3 \\ &\Rightarrow G_n + C = 2G_{n - 1} + 3G_{n - 2} + 5C + 3 \\ &\Rightarrow G_n= 2G_{n - 1} + 3G_{n - 2} + 4C + 3 \\ &\Rightarrow C = -\frac{3}{4} \\ \end{align*}$

Now after calculating the value of $G_n \textit{ mod }M$ how to recover $F_n$?

Thanks!

Codeforces contest colliding with Codechef contest.


On 23rd June, a Codeforces round is colliding with a rated contest on CodeChef. On 30th June, again, another Codeforces contest is colliding with the CodeChef Lunchtime. Is it possible that the timings might be slightly changed so that they don't collide? I'm sure many participants would like to take part in both contests and don't want to choose between the 2, especially since it is summer vacation and everybody has a lot of free time.

TSORT : Getting Wrong answer


Below is my code for TSORT. I checked it multiple times and it looks good, but I still get "wrong answer". Please help!!

=====

#include <stdio.h>

int main(void) {
    unsigned int array[1000001] = {0};
    unsigned int t;
    unsigned int value = 0;
    char temp = 0;

scanf("%d",&t);

while(t) {
    scanf("%d",&value);
    array[value] = array[value] + 1;
    t--;
}
t = 0;

while(t<=1000000) {
    if(array[t] >= 1) {
        for(temp=1; temp<=array[t]; temp++) {
            printf("%d\n",t);
        }
    }
    t++;
}
return 0;

}


All palindromic subsequences.


Here is a pseudo-code for counting all palindromic subsequences of a given string:

if i == j:
    return 1

else if str[i] == str[j]:
    return countPS(i+1, j) + countPS(i, j-1) + 1

else:
    return countPS(i+1, j) + countPS(i, j-1) - countPS(i+1, j-1)

I couldn't understand why, when the first and last characters are equal, we do not subtract countPS(i+1, j-1) as we do when the first and last characters are not the same.

source : https://www.geeksforgeeks.org/count-palindromic-subsequence-given-string/

Can anyone please explain it ?

Regarding the new rating division system


The March Challenge 2018 witnessed two parallel contests for the two rating divisions for the first time ever in Codechef. However, there are a few issues regarding it which I want to discuss.

Codechef clearly states that the problems in the other division can be practiced and are “NOT part of the contest”.

I had planned to skip this March Challenge as our college fest was going on. However, I tried those div 2 problems in practice and boom, my name appeared on the ranklist with score ‘0’! Now this really needs to be fixed… @admin

The second and last issue is undoubtedly a petty one. The div 2 problems, once solved, show a yellow tick even when fully solved. Even the profile shows the problems in the partially solved section. I don't know the reason behind this.

However, overall, I am satisfied with this division system and it does motivate all the coders to strive hard for that green tick.

what is wrong ?

Getting the right output but CodeChef is showing wrong answer.


#include <stdio.h>
#include <math.h>

int main(void){
    int t, a=0, b=0, n=0, r;
    scanf("%d",&t);
    while(t--){
        scanf("%d %d %d",&a,&b,&n);
        a=pow(a,n);
        b=pow(b,n);
        if(a>b) r=1;
        else if(a<b) r=2;
        else if(a==b) r=0;
        printf("%d",r);
    }
}

DANYANUM - Editorial


PROBLEM LINK:

Div1
Div2
Practice

Setter-Igor Barenblat
Tester-Misha Chorniy
Editorialist-?????

DIFFICULTY:

MEDIUM-HARD

PRE-REQUISITES:

SOS Dp, Square Root Decomposition, Bitwise Operations

PROBLEM:

Given an array $A$ of $N$ numbers, we have to support the following operations-

  • Add a number to it
  • Remove a number from it
  • Calculate $f(x)$ where $f(x)$ is the maximum of "Bitwise-AND" of all sub-sequences of $A$ with length $x$

QUICK EXPLANATION:

Key to Success- Deducing the type of question is important. Those who deduced that it is $SOS-DP$ and were able to derive that the queries have to be broken up using $Square-Root$ $Decomposition$ could get an AC. Such intuition comes with practice, as questions on similar concepts have been asked before.

Some experience with SOS dynamic programming helps here. We will use square-root decomposition along with it. Divide the queries into buckets of size $K$. Updates of the frequency of elements and re-calculation of the SOS dp-table take place once per bucket instead of once per query.

We will iterate over the bits of the answer from the highest to the lowest. Greedy works here! (Why? Q1). For each bit, we first set it and see if it is possible to obtain this answer. If it is, we set that bit of the answer and move on to the next.

EXPLANATION:

I will assume that you guys have at least some idea about SOS-dp. The editorial will have three sections, each describing one step towards the final answer. I will try to give my intuition and help in grasping the concept wherever possible, but please make sure you fulfill the pre-requisites! Implementation is integrated with these sections; we use the tester's code for that.

1.SOS-Dp

The first question to ask is: what is the maximum AND we can obtain from $a \& b$ where $a < b$? Of course, the answer will be $a$! Why? Is the maximum value obtainable from the AND operation of $2$ numbers the minimum of them?

View Content

The function to calculate SOS dp table, along with explanation is below.

void SOS() { // sum-over-subsets
        memcpy(foo, cnt, sizeof(foo));//Initialized with cnt or frequency array of elements
        for (int bit = 0; bit < k; ++bit) {//Iterating from bit 0 to bit k (0-based index)
            for (int mask = 0; mask < 1 << k; ++mask) {
                if (~mask & (1 << bit)) {//If mask has '0' at that position
                    foo[mask] += foo[mask ^ (1 << bit)];
                              //Number of ways to get mask+=Number of ways to get supermask   
                } 
            }
        }
    }

Initially the dp-table is initialized with the frequency array, which has the count of all the numbers up to the present query. Using the SOS-dp, we iterate from bit $0$ to bit $k-1$ (0-based indexing!). Now, in what ways can we obtain the given mask? Note that no new bit is set by the AND operation. A bit with value $0$ remains $0$, and a bit with value $1$ can get reduced to $0$ or may remain $1$. Hence, the given mask can only be obtained from $super-masks$ (numbers which have $1$ at ALL the places where the mask has one, and may have $1$'s at other places where our mask has $0$).

Hence, if the mask has $0$ at the current position, we add the number of ways to obtain the $super-mask$ (or $dp[super-mask]$) to $dp[mask]$. The proof of correctness of this procedure is given in the pre-requisites blog I linked to. (Well, it's the standard SOS dp procedure, so hopefully it should be clear :) )

2. Square Root Decomposition of Queries-

Clearly, updating the SOS-dp table is very costly! We cannot do that for every query. What we do, hence, is divide the queries into "buckets" or groups, each of size $K$. The SOS dp table is recalculated after every bucket (instead of after every query). Tester @mgch chose this $K$ to be around $512$. Now, what does this mean? It means that when we are inside the current bucket, our SOS dp-table is updated only up to the previous bucket, and we just need to do some manipulations so that we can use the data of the SOS dp table, along with the data of the present bucket, to solve the queries. What are they?

3. Answering Queries-

The first thing is, the $SOS$ dp-table is updated after the end of every bucket, but the frequency array of elements can be updated on every query as it is simply $O(1)$. The frequency array is nothing but the count of numbers/elements in the set. Sample code is in the tab below-

View Content

We need the frequency array to be up-to-date for solving the queries of the current bucket. Calculating the answer after that is actually simple!

Now, for every query asking us to calculate $f(x)$, we first set the answer to $0$. Then we start from the highest bit. We see whether it is possible to set the bit or not. Recall that our SOS dp table $(dp[mask])$ stores the number of $super-masks$ which have $mask$ as a $sub-mask$. We have the answers up to the previous bucket in the $SOS-dp$ $table$, and we have the status of the queries of the present bucket stored as well. HOW can we obtain the final answer?

It will do justice to first give the code and then the explanation.

                if (t == 3) {
                int ans = 0;
                //constructing the answer from the highest bit to lowest one bit-by-bit
                //checking if we have 2^(k-1) in the answer,
                //
                for (int bit = k - 1; bit >= 0; --bit) {
                    int taked = ans | (1 << bit);//Checking if we can set this bit in 
                                    //ans
                    int cur = foo[taked]; //cur = the number of elements that has (ans | 
                                    //2^bit) as submask
                    //recalculating in O(2^K)
                    for (int j = i; (j & ((1 << K) - 1)) != 0; --j) {
                        if (T[j] == 1) {//T[j]=T at j'th query
                                                    //X[j]=X at j'th Query
                            if ((X[j] & taked) == taked) {
                             //Recall the maximum possible value of AND of 2 numbers. We are trying to 
                             //see if taked is a sub-mask of X[j]. 
                                ++cur;
                          //If above is true, and X was inserted==>Increase length of sub-sequence by 1.
                            }
                        }
                        if (T[j] == 2) {
                            if ((X[j] & taked) == taked) {
                                --cur;
                                //If above is true, and X[j] was removed, 
                                                            //the length of sub-sequence reduces by 1
                            }
                        }
                    }
                    if (cur >= x) { // checking if this number >= x then the answer for 
                                   //this query |= 2^bit
                        ans |= 1 << bit;//If possible, add 2^bit to ans
                    }
                    //Do we REALLY need to check sub-sequence of length==x?
                }

In the above code, we first initialize the answer to $0$. Now, we iterate from the highest bit to the lowest bit (greedily) and see if we can obtain $\ge x$ (Why not $== x$? Q2) numbers such that $ans$ is their sub-mask. We have the stats up to the previous bucket in the $SOS$ dp table (named $foo[mask]$ in the above code). We then simply iterate and brute-force over the present bucket. Depending on whether a favourable number (a number whose sub-mask is $ans$) was added or removed, we add or subtract $1$ from the present sub-sequence length. Like this, every query is answered in $O(K)$ or $O(\sqrt{N})$, whichever you chose :)

Try to answer the question I asked at end of code. //Do we REALLY need to check sub-sequence of length==x? :)

SOLUTIONS:

The codes of the setter and tester are pasted below for your convenience, so that you can access them while @admin links the solutions to the editorial.

Setter

View Content

Tester

View Content

Editorialist solution will be put on demand.

$Time$ $Complexity=O(\sqrt{m}*{2}^{k}*k+m*\sqrt{m}*k)$

CHEF VIJJU'S CORNER :D

1. Ans of Q1 "Why greedily setting bits works here?"

View Content

2. Ans of Q2- "Do we REALLY need to check sub-sequence of length==x?"

View Content

3. Setter's Note-

View Content

4. TEST CASE BANK-

View Content

5. Related Problems-

  • MONSTER - Based on SOS Dp + Square Root. Should practice this question as well :)