
KIRMEME - Editorial


PROBLEM LINK:

Practice
Contest

DIFFICULTY:

Medium-Hard

PREREQUISITES:

divide and conquer, trees, DFS

PROBLEM:

Given a tree of $N$ vertices and two integers $L$ and $R$, where the $i$-th vertex has value $a_i$, count the number of simple paths $v_1, v_2, \ldots, v_K$ such that the number of indices $j$ with $1 < j < K$ satisfying $a_{v_{j-1}} < a_{v_j}$ and $a_{v_j} > a_{v_{j+1}}$ is at least $L$ and at most $R$.

SHORT EXPLANATION:

Let's call a triple of three consecutive vertices $v_{j-1}, v_j, v_{j+1}$ of a path such that $a_{v_{j-1}} < a_{v_j}$ and $a_{v_j} > a_{v_{j+1}}$ an "interesting triple".

We will use divide and conquer on the tree: first we find the centroid of the tree, then we count the required paths that pass through the centroid, then we delete it and solve the same problem on every sub-tree created by the deletion.

To count the required paths that pass through the centroid $V$, we first run a DFS from the centroid to count the paths that start at the centroid itself. Then we handle the case when the centroid is in the middle of a path; in other words, such a path consists of two sub-paths plus vertex $V$. For this we maintain three arrays $A$, $B$, $C$:

$A_i$ is the number of sub-paths with $i$ interesting triples that start at a neighbor of $V$ whose value is less than $V$'s.

$B_i$ is the number of sub-paths with $i$ interesting triples that start at a neighbor of $V$ whose value is greater than $V$'s, and whose second vertex has value less than that neighbor's.

$C_i$ is the number of sub-paths with $i$ interesting triples that start at a neighbor of $V$ whose value is greater than $V$'s, and whose second vertex does not have value less than that neighbor's.

Then we count the number of pairs of those sub-paths that combine into a path passing through vertex $V$ with between $L$ and $R$ interesting triples.

EXPLANATION

Let's call a triple of three consecutive vertices $v_{j-1}, v_j, v_{j+1}$ of a path such that $a_{v_{j-1}} < a_{v_j}$ and $a_{v_j} > a_{v_{j+1}}$ an "interesting triple".

Special Case: Counting only the paths which start at a particular vertex

This special case lets us solve the problem in O(N^2) and thus get partial score, and it is also part of the full solution.

To count the paths which start at a given vertex $V$, we root the tree at $V$, then do a recursive DFS carrying a parameter for how many interesting triples have been found so far. Here's pseudocode for it:

dfs(int v, int count){
    // count = number of interesting triples on the path root..v
    if(L <= count && count <= R){
        ans = ans + 1
    }
    for each child u of v{
        // (parent[v], v, u) is an interesting triple iff v is a strict peak;
        // the root has no parent, so no triple can form there yet
        if(v != root && a[parent[v]] < a[v] && a[v] > a[u]){
            dfs(u, count + 1)
        } else {
            dfs(u, count)
        }
    }
}
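The pseudocode above can be turned into a runnable sketch. The following Python version (the names `adj` and `val` are assumptions; this is not the setter's code) counts the paths starting at `root` whose number of interesting triples lies in $[L, R]$:

```python
# A runnable sketch of the special case above (adj = adjacency list,
# val = vertex values; both names are assumptions, not the setter's code).
def count_paths_from(adj, val, root, L, R):
    total = 0
    # each stack entry: (vertex, parent, triples on the path root..vertex)
    stack = [(root, -1, 0)]
    while stack:
        v, p, cnt = stack.pop()
        if v != root and L <= cnt <= R:
            total += 1                  # the path root..v qualifies
        for u in adj[v]:
            if u == p:
                continue
            # (p, v, u) is an interesting triple iff v is a strict peak
            peak = p != -1 and val[p] < val[v] and val[v] > val[u]
            stack.append((u, v, cnt + (1 if peak else 0)))
    return total
```

Running this from every vertex gives the O(N^2) partial solution.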

General description of full solution

The full solution uses a technique called divide and conquer on a tree. If we pick some vertex $V$, then every path either passes through $V$ or is completely contained in one of the sub-trees formed by deleting $V$.

So the idea is to select some vertex $V$, count all interesting paths which pass through it, then delete that vertex and solve the problem recursively for each sub-tree that remains after deleting $V$. Selecting the vertex $V$ well is critical to making our solution faster than O(N^2).

Paths which pass through vertex $V$ fall into two cases. First, if $V$ is the first node of the path, we can count them just as described in the previous paragraph. Second, if $V$ is in the middle of the path, then the path starts in one sub-tree, reaches $V$, and continues into another sub-tree; in other words, it consists of two sub-paths plus vertex $V$. So the idea is to count those sub-paths, then count how many pairs of sub-paths can be joined so that the number of interesting triples is between $L$ and $R$. There are three types of sub-paths, giving three arrays:

$A_i$ is the number of sub-paths with $i$ interesting triples that start at a neighbor of $V$ whose value is less than $V$'s.

$B_i$ is the number of sub-paths with $i$ interesting triples that start at a neighbor of $V$ whose value is greater than $V$'s, and whose second vertex has value less than that neighbor's.

$C_i$ is the number of sub-paths with $i$ interesting triples that start at a neighbor of $V$ whose value is greater than $V$'s, and whose second vertex does not have value less than that neighbor's.

These arrays can be obtained by a DFS similar to the one above. Now let's look at the possible pairings of sub-paths:

  • two sub-paths of type A having $i$ and $j$ interesting triples result in a path with $i+j+1$ triples (one extra triple appears at $V$, since both neighbors are smaller than $V$)

  • two sub-paths of type B having $i$ and $j$ interesting triples result in a path with $i+j+2$ triples (each of the two neighbors is a peak between $V$ and its own second vertex)

  • two sub-paths of type C having $i$ and $j$ interesting triples result in a path with $i+j$ triples (no new peak appears at $V$ or at either neighbor)

  • one sub-path of type A and one of type B having $i$ and $j$ interesting triples result in a path with $i+j+1$ triples (the extra triple is at the type-B neighbor)

  • one sub-path of type A and one of type C having $i$ and $j$ interesting triples result in a path with $i+j$ triples

  • one sub-path of type B and one of type C having $i$ and $j$ interesting triples result in a path with $i+j+1$ triples (the extra triple is at the type-B neighbor)
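The pairing step above can be sketched with prefix sums. The following Python helper is a simplification (an assumption): it counts ordered pairs over whole arrays and ignores the exclusion of pairs whose sub-paths come from the same subtree of $V$, which the real solution must handle.

```python
# A sketch of the pairing step (simplified: ordered pairs, no same-subtree
# exclusion).  `shift` is the pairing's extra triple count: +1, +2 or 0,
# depending on the case from the list above.
def count_pairs(X, Y, shift, L, R):
    """Count ordered pairs (i, j), weighted by X[i] * Y[j], such that
    L <= i + j + shift <= R."""
    pre = [0]                       # prefix sums of Y for O(1) range queries
    for y in Y:
        pre.append(pre[-1] + y)
    total = 0
    for i, x in enumerate(X):
        lo = max(0, L - shift - i)              # smallest usable j
        hi = min(len(Y) - 1, R - shift - i)     # largest usable j
        if lo <= hi:
            total += x * (pre[hi + 1] - pre[lo])
    return total
```

For example, two type-B sub-paths would use `count_pairs(B, B, 2, L, R)` (halved appropriately for double counting).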

Choosing the best vertex V

If we choose the vertex $V$ so that every sub-tree remaining after deleting $V$ has at most half the size of the current tree, then our recursion has at most O(log N) levels; each level takes O(N) time in total, so the overall complexity is O(N log N).

But how do we find such a vertex? It can be proved that such a vertex, called the centroid, always exists for any tree; the description of the algorithm for finding it can itself serve as a proof.

Let's start at an arbitrary vertex $V$ and make it the root of the tree, then compute the size of every sub-tree. Now check: is there a child of $V$ whose sub-tree contains more than half of the vertices of $V$'s sub-tree? If not, we are done, because $V$ is the centroid.

If there is, then there can be at most one such child, so we move to that child and repeat until we find the centroid.
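The walk described above can be sketched as follows in Python (`adj` and the `alive` mask are assumptions; `alive` marks vertices not yet deleted during the divide and conquer):

```python
# A sketch of centroid finding in the component containing `start`.
def find_centroid(adj, start, alive):
    # compute subtree sizes with an iterative DFS rooted at `start`
    order, parent, size = [], {start: -1}, {}
    stack = [start]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if alive[u] and u != parent[v]:
                parent[u] = v
                stack.append(u)
    for v in reversed(order):       # children are processed before parents
        size[v] = 1 + sum(size[u] for u in adj[v]
                          if alive[u] and parent.get(u) == v)
    n = size[start]
    v = start
    while True:
        # move to the unique child whose subtree exceeds half, if any
        heavy = next((u for u in adj[v] if alive[u] and parent.get(u) == v
                      and size[u] * 2 > n), None)
        if heavy is None:
            return v
        v = heavy
```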

SETTER'S SOLUTION

Can be found here.

TESTER'S SOLUTION

Can be found here.


A request from CodeChef


Can CodeChef help us in conducting the ZCO? TCS iON has been doing a very bad job for the past two years.

SEQUAT2 - Editorial


PROBLEM LINK:

Contest
Practice

Author: Istvan Nagy
Tester: Kevin Atienza
Translators: Sergey Kulik (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)
Editorialist: Kevin Atienza

DIFFICULTY:

Medium-Hard

PREREQUISITES:

Bitwise operations, sieve, factorization

PROBLEM:

Given $A, B, C, N$, compute all solutions $(x,y)$ to the following equation:

$xy = (x\lor y)(x\land y) + Ax + By + C$

where $\lor$ and $\land$ are bitwise OR and AND, respectively.

Compute the sum of all $x$ and sum of all $y$ across all solutions $(x,y)$ with $0 \le x, y \le N$.

QUICK EXPLANATION:

Let $n = x\land y$, $a = x-n$ and $b = y-n$. Then there's a one-to-one mapping between solutions $(x,y)$ and triples $(a,b,n)$ such that:

  • $a\land b = a\land n = b\land n = 0$.
  • $(a-B)(b-A) = (A+B)n+AB+C$.

The mapping is given by $(x,y) = (a+n,b+n)$.

Let $L = (A+B)N+AB+C$. Notice that $(a-B)(b-A) \le L$, and the smaller of $(a-B)$ and $(b-A)$ is $\le \sqrt{L}$.

To compute all solutions $(a,b,n)$ such that $(a-B) \le (b-A)$:

  • Enumerate all possible values of $d := a-B$ up to $\sqrt{L}$.
  • Compute all numbers $n$ such that $(A+B)n+AB+C$ is divisible by $d$.
  • Let $e = \frac{(A+B)n+AB+C}{d}$. If $d \le e$, then try the solution $(d+B, e+A, n)$.

Similarly, to compute all solutions $(a,b,n)$ such that $(a-B) > (b-A)$:

  • Enumerate all possible values of $e := b-A$ up to $\sqrt{L}$.
  • Compute all numbers $n$ such that $(A+B)n+AB+C$ is divisible by $e$.
  • Let $d = \frac{(A+B)n+AB+C}{e}$. If $e < d$, then try the solution $(d+B, e+A, n)$.

Using this, every solution will be generated exactly once, but you need to check each candidate $(a,b,n)$ if it satisfies all conditions above.

EXPLANATION:

Reparameterization

The equation $xy = (x\lor y)(x\land y) + Ax + By + C$ is very weird because of the bitwise operations. You don't normally see those, which makes this problem unusual. Nevertheless, bitwise operations still behave somewhat well algebraically, so let's see what we can do.

The first thing we note is that if two numbers $a$ and $b$ don't share any bits, then $(a\lor b) = a+b$ and $(a\land b) = 0$. But what if they do share some bits? Then the first identity becomes incorrect, because it fails to take into account the bits shared by $a$ and $b$. Specifically, these bits are counted twice in $a+b$ but only once in $(a\lor b)$.

But we can easily know which bits two numbers share: it's simply $(a\land b)$! Thus, this gives us the following general identity:
$$(a\lor b) = a+b - (a\land b)$$

This is great, because it eliminates one nasty bitwise operator! Now we have one more left. Our equation is now:
$$xy = (x+y - (x\land y))(x\land y) + Ax + By + C$$

(You'll notice that there are too many parentheses in these equations. This is because bitwise operations are that uncommon among math equations.)

If we let $n = x\land y$, then the equation becomes:
$$xy = (x+y - n)n + Ax + By + C$$

Now we're getting somewhere. Notice that the bitwise operations are disappearing.

In fact, let's take this idea further, and reparameterize the solutions so that our variables don't have bits in common. Currently, we have the values $(x,y,n)$, but these numbers may have bits in common. So let's define new variables $a = x-n$ and $b = y-n$ to denote the bits unique to $x$ and $y$, respectively. This way, for a triple $(a,b,n)$, we have $(a\land b) = (a\land n) = (b\land n) = 0$ and

$$(a+n)(b+n) = ((a+n)+(b+n) - n)n + A(a+n) + B(b+n) + C$$

Let's manipulate this further:

$$\begin{align*} (a+n)(b+n) &= ((a+n)+(b+n) - n)n + A(a+n) + B(b+n) + C \\\ (a+n)(b+n) &= (a+b+n)n + A(a+n) + B(b+n) + C \\\ ab+an+bn+n^2 &= an+bn+n^2 + A(a+n) + B(b+n) + C \\\ ab &= A(a+n) + B(b+n) + C \end{align*}$$

Notice the nice cancellation going on! This is certainly easier to solve.
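Both the OR identity and the cancellation $xy - (x\lor y)(x\land y) = ab$ are easy to sanity-check by brute force over small numbers:

```python
# A brute-force sanity check of the derivation above: with n = x AND y,
# a = x - n, b = y - n, we should have
#   (x OR y) = x + y - n   and   x*y - (x OR y)*n = a*b.
for x in range(64):
    for y in range(64):
        n = x & y
        a, b = x - n, y - n
        assert (x | y) == x + y - n
        assert x * y - (x | y) * n == a * b
```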

At this point though, it isn't clear how to proceed. But if you assume for a while that $n$ is a constant, then we notice that the equation becomes a conic section on the variables $a$ and $b$, and it turns out that such equations are usually reducible to one of the following:

  • A linear diophantine equation
  • Generalized Pell equation
  • Factorization problem

Let's see what we got by assuming $n$ is constant and throwing all "constants" to the right side: $$\begin{align*} ab &= A(a+n) + B(b+n) + C\\\ ab - Aa - Bb &= (A+B)n + C \end{align*}$$ Here, we can try to "complete the factorization":
$$\begin{align*} ab - Aa - Bb &= (A+B)n + C\\\ ab - Aa - Bb+AB &= (A+B)n + AB + C \\\ (a-B)(b-A) &= (A+B)n + AB + C \end{align*}$$

This means we have reduced the problem to factorizing all numbers of the form $(A+B)n + AB + C$, where $n \le N$ :)

Enumerating factors

Now we're really getting somewhere. We want to enumerate the triples $(a,b,n)$ satisfying all the following things:

  • $(a-B)(b-A) = (A+B)n + AB + C$.
  • $a+n \le N$ and $b+n \le N$.
  • $(a\land b) = (a\land n) = (b\land n) = 0$.

The remaining two conditions can just be checked afterwards, so let's focus on the first, i.e. enumerating factorizations of $(A+B)n + AB + C$.

This number is always positive, but the factors on the left side may be negative! Thankfully, we can exclude that possibility: if both factors were negative (with $a, b \ge 0$), then $|(a-B)(b-A)| \le AB < (A+B)n + AB + C$, a contradiction. Thus, we can focus on positive divisors.

Next, notice that the largest possible right-hand-side value is $L := (A+B)N + AB + C$. This means that the smaller factor among $(a-B)$ and $(b-A)$ is at most the square root of this number. But the square root of this number is a smallish number considering the problem constraints, which makes it feasible for enumeration!

Specifically, assume first that $(a-B) \le (b-A)$. This means that $(a-B) \le \sqrt{L}$. So let's enumerate all these possible values $d := a-B$. The next step is to enumerate all indices $n \le N$ such that $d$ divides $(A+B)n + AB + C$. Which such indices are there?

If we let $P = A+B$ and $Q = AB + C$, then the following are equivalent:

$$\begin{align*} d &\mid (A+B)n + AB + C \\\ d &\mid Pn + Q \\\ Pn &\equiv -Q \pmod{d} \end{align*}$$

Let $g = \gcd(P,d)$. Then if $g$ doesn't divide $Q$, this has no solutions. Otherwise, we can continue by letting $P' = P/g$, $Q' = Q/g$ and $d' = d/g$:
$$\begin{align*} Pn &\equiv -Q \pmod{d} \\\ P'n &\equiv -Q' \pmod{d'} \\\ n &\equiv -Q'(P')^{-1} \pmod{d'} \end{align*}$$

This means that it's easy to "loop" across all valid indices $n$! The following illustrates how this can be done, in C++-style pseudocode (writing $m = d/g$):

Let g  = gcd(P, d)
Let m  = d / g
Let x  = modInverse((P / g) mod m, m)   // exists since gcd(P/g, m) = 1
Let n0 = ((-(Q / g) mod m) * x) mod m   // normalized into [0, m)
for (n = n0; n <= N; n += m) {
    ...
}
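The same loop can be sketched in Python (the names $P$, $Q$, $d$, $N$ follow the text above; `pow(x, -1, m)` for the modular inverse needs Python 3.8+):

```python
from math import gcd

# A sketch of enumerating all n <= N such that d divides P*n + Q,
# where P = A + B and Q = A*B + C as in the text above.
def valid_indices(P, Q, d, N):
    g = gcd(P, d)
    if Q % g != 0:
        return                      # P*n = -Q (mod d) has no solution
    m = d // g
    inv = pow(P // g % m, -1, m) if m > 1 else 0
    n0 = (-(Q // g) * inv) % m      # smallest non-negative solution
    for n in range(n0, N + 1, m):
        yield n
```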

Having access to $d$ and $n$, we can now compute a candidate solution $(a,b,n)$ as:

  • $a = B + d$ (from $d = a-B$)
  • $b = A + \frac{(A+B)n+AB+C}{d}$
  • $n = n$

(You should only count this if $(a-B) \le (b-A)$ is true.)

Using this method, we can enumerate all candidate solutions $(a,b,n)$ satisfying $(a-B) \le (b-A)$.

We can enumerate candidates $(a,b,n)$ satisfying the other inequality $(b-A) < (a-B)$ similarly:

  • $b = A + e$ (from $e = b-A$)
  • $a = B + \frac{(A+B)n+AB+C}{e}$
  • $n = n$

(You should only count this if $(b-A) < (a-B)$ is true.)

Since every solution $(a,b,n)$ determines a unique value $(a-B)(b-A)$, this means each solution will be generated exactly once! We just need to filter out the candidates with the remaining requirements:

  • $a+n \le N$ and $b+n \le N$.
  • $(a\land b) = (a\land n) = (b\land n) = 0$.
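A sketch of this filtering, as a Python predicate over a candidate triple:

```python
# A candidate triple (a, b, n) is kept only if it satisfies every remaining
# condition listed above.
def is_valid(a, b, n, N):
    return (a >= 0 and b >= 0
            and a + n <= N and b + n <= N
            and (a & b) == 0 and (a & n) == 0 and (b & n) == 0)
```

The surviving triples map back to solutions via $(x,y) = (a+n, b+n)$.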

Running time

The question now is, how fast does this solution run?

Notice that for every number $d \le \sqrt{L}$, the number of candidates we're considering is at most: $$1 + \frac{N}{d/\gcd(A+B,d)}$$ Thus, the running time is proportional to the following sum:
$$\sum_{d\le \sqrt{L}} \left(1 + \frac{N}{d/\gcd(A+B,d)}\right)$$ Which can be simplified to:
$$\begin{align*} T(N)&= O\left(\sum_{d\le \sqrt{L}} \left(1 + \frac{N}{d/\gcd(A+B,d)}\right)\right) \\\&= O\left(\sqrt{L} + \sum_{g \mid A+B} \sum_{\substack{d\le \sqrt{L} \\\ \gcd(A+B,d) = g}} Ng/d\right) \\\&\le O\left(\sqrt{L} + \sum_{g \mid A+B} \sum_{\substack{d\le \sqrt{L} \\\ g \mid d}} Ng/d\right) \\\&= O\left(\sqrt{L} + \sum_{g \mid A+B} N\sum_{d\le \sqrt{L}/g} 1/d\right) \\\&\le O\left(\sqrt{L} + \sum_{g \mid A+B} N\sum_{d\le \sqrt{L}} 1/d\right) \\\&= O\left(\sqrt{L} + \sum_{g \mid A+B} N \log \sqrt{L}\right) \\\&= O\left(\sqrt{L} + N \log L \cdot d(A+B)\right) \end{align*}$$ where $d$ is the divisor count function.

Thus, the time complexity is $O\left(\sqrt{L} + N \log L \cdot d(A+B)\right)$. Note that this is a pretty loose bound, but it still gives an idea of how fast this runs.

Time Complexity:

$O(\sqrt{L}+N \log L \cdot d(A+B))$, where $L = (A+B)N + AB + C$, and $d$ is the divisor count function.

AUTHOR'S AND TESTER'S SOLUTIONS:

setter
tester

SELECT - Editorial


PROBLEM LINK:

Contest
Practice

Author: Istvan Nagy
Tester: Kevin Atienza
Translators: Sergey Kulik (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)
Editorialist: Kevin Atienza

DIFFICULTY:

Medium

PREREQUISITES:

Backtracking

PROBLEM:

Names are encrypted according to some algorithm (specified in the problem statement). Your job is to decrypt them. It's possible that multiple solutions exist; output any one.

QUICK EXPLANATION:

Generate the name with backtracking, starting from the last letter. At every point in the backtracking, we need to remember the following data:

  • The current index.
  • The sum of the letters generated so far.
  • The list of intervals where we can still put some letters.

Each "interval" is a contiguous range of letters, say, lmnopqrs. In each interval, we need to remember the following data:

  • The lowest and highest possible letter that can be used (in the above example, l and s),
  • The number of letters we want to take from this interval,
  • The number of letters we have taken and will want to take below this interval.

All these data can be updated as we try out each letter during the backtracking phase.

EXPLANATION:

Decoding the encryption method

The first step is to understand what the encryption does.

encrypt(S[0..N-1])
    W[0..N-1] = {0,..,0}
    for i=0 to N-1
        if S[i]<'a' or S[i]>'z' then
            return "failure in encryption"
        for j=0 to N-1
            if S[i]>S[j] then
                W[i]++
        for j=i to N-1
            if i != j and S[i] == S[j] then
                return "failure in encryption"
            W[i] = W[i] + S[j] - 'a'
        W[i] = W[i] mod 10
    The encrypted name of person is W[0],W[1]..,W[N-1]

With some simple refactoring, we can split this process into four phases:

encrypt(S[0..N-1])
    W[0..N-1] = {0,..,0}

    # checking phase
    for i=0 to N-1
        if S[i]<'a' or S[i]>'z' then
            return "failure in encryption"
        for j=i to N-1
            if i != j and S[i] == S[j] then
                return "failure in encryption"

    # "rank" phase
    for i=0 to N-1
        for j=0 to N-1
            if S[i]>S[j] then
                W[i]++

    # "cumulative sum" phase
    for i=0 to N-1
        for j=i to N-1
            W[i] = W[i] + S[j] - 'a'

    # "mod" phase
    for i=0 to N-1
        W[i] = W[i] mod 10

    The encrypted name of person is W[0],W[1]..,W[N-1]

It's self-explanatory what each of these phases does. From this, we get our first few observations:

  • The "checking" phase simply ensures that the input is valid, i.e., it is a string of $N$ distinct lowercase letters.
  • The "mod" phase tells us that we are working modulo $10$. (at least when computing $W[i]$)
  • The "rank" and "cumulative sum" phases can be interchanged.
  • The "rank" phase calculates the letters' ranks, i.e. the position of each letter assuming the string is sorted.

Let's refactor it a little bit more:

encrypt(S[0..N-1])
    W[0..N-1] = {0,..,0}

    # checking phase
    for i=0 to N-1
        S[i] = S[i] - 'a' # convert to a number from 0 to 25
        if S[i] is not in the range [0,1,2,...,25] then
            return "failure in encryption"
        for j=i+1 to N-1
            if S[i] == S[j] then
                return "failure in encryption"

    # "rank" phase
    R[0..N-1] = {0,..,0}
    for i=0 to N-1
        for j=0 to N-1
            if S[i]>S[j] then
                R[i]++

    # "cumulative sum" phase
    T[0..N-1] = {0,..,0}
    for i=0 to N-1
        for j=i to N-1
            T[i] = T[i] + S[j]

    # "mod" phase
    for i=0 to N-1
        W[i] = (R[i] + T[i]) mod 10

    The encrypted name of person is W[0],W[1]..,W[N-1]

We can state this procedure in yet another way: we can say that $W[i] = (R[i] + T[i]) \bmod 10$, where

  • $R[i]$ is the rank of $i$, i.e., the (0-indexed) position of $S[i]$ when the string is sorted.
  • $T[i]$ is the cumulative sum starting at $i$, i.e. $S[i] + S[i+1] + S[i+2] + \ldots S[N-1]$, or simply $S[i] + T[i+1]$.
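As a sanity check, the refactored procedure translates directly to Python (a sketch of the editorial's description, not the setter's code):

```python
# W[i] = (R[i] + T[i]) mod 10, with R[i] the 0-indexed sorted position of
# S[i] and T[i] the suffix sum S[i] + S[i+1] + ... + S[N-1].
def encrypt(name):
    s = [ord(c) - ord('a') for c in name]
    assert len(set(s)) == len(s)                # letters must be distinct
    n = len(s)
    rank = [sum(x < v for x in s) for v in s]   # R[i]
    suffix = [0] * (n + 1)                      # T[i] = S[i] + T[i+1]
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + s[i]
    return [(rank[i] + suffix[i]) % 10 for i in range(n)]
```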

Backtracking

Our goal now is to generate a name $S$ that encrypts to the string of digits $W$. But beyond this refactoring, it's hard to analyze the "encrypt" function, so we'll simply resort to backtracking, taking advantage of the smallness of $K$ :)

There are two ways this can be done: either left to right or right to left. Both are possible, but we'll do the right-to-left method. Our goal is to generate the letters $S[N-1], S[N-2], \ldots, S[2], S[1], S[0]$ such that as we generate $S[i]$, we ensure that $S[i]$ decrypts to $W[i]$ correctly. By doing it from right to left, we can keep track of the cumulative sum ($T[i]$) of all the letters so far.

Unfortunately, there's one complication: we can't ensure that $S[i]$ decrypts to $W[i]$ without knowing in advance what the remaining letters will be! (Or at least some information about them.) This is because of the "rank" phase: as we select $S[i]$, we also need to ensure that its rank ($R[i]$) is correct, and $R[i]$ depends on all letters, not just the ones we've already generated. Therefore, whenever we select $S[i]$, we must also specify its rank, and then we must ensure that our future choices will be such that this rank will be correct.

The way we do this is to ensure that the correct number of letters will be taken from certain intervals of the alphabet. Let's give an example. Suppose $N = 8$. Let's try to generate the string, starting with the last letter. Suppose we want the last letter to be k, and we want its rank to be exactly $4$. To ensure that the letter gets rank $4$, we want to make sure that:

  • The letters with ranks $1$ to $3$ will be taken from the interval abcdefghij in the future.
  • The letters with ranks $5$ to $8$ will be taken from the interval lmnopqrstuvwxyz in the future.

Now, we need to select the second letter. Suppose that we want it to be s, and its rank to be exactly $6$. To guarantee this, we want to ensure that:

  • The letters with ranks $1$ to $3$ will be taken from the interval abcdefghij in the future.
  • The letters with ranks $5$ to $5$ will be taken from the interval lmnopqr in the future.
  • The letters with ranks $7$ to $8$ will be taken from the interval tuvwxyz in the future.

And so on.

As you can see, we need to remember all these things as we perform the backtracking. Notice that every time we choose the next letter, we "split" one of the intervals into two. We can continue this until we generate the entire string, or hit a dead end, in which case we backtrack and try other possibilities. Finally, we also need to keep track of $T[i]$ during the backtracking.

On top of this, we need to ensure that $S[i]$ encrypts correctly to $W[i]$. This means that we must select the letter and the rank such that the following is satisfied: $W[i] \equiv R[i] + T[i] \pmod{10}$.

More backtracking details

We now have a general backtracking strategy. It's time to handle the details and formalize it :) For simplicity, we'll assume each letter is actually a number from $0$ to $25$, not a letter from a to z.

Let's call our (recursive) function find. This will have three arguments: i, T and intervals.

  • i is the current index, i.e. we're trying to fill up $S[i]$.
  • T is the cumulative sum of the letters after $S[i]$, i.e. it is equal to $T[i+1]$.
  • intervals is the list of intervals where we can select letters from.

Each "interval" encodes information of the following form:

The letters with ranks R1 to R2 will be taken from the interval [S1,S2] in the future.

Specifically, we represent it as a 4-tuple (S1, S2, R1, R2).

The initial call to our function will be: find(N-1, 0, [(0,25,0,N-1)]). Note that intervals contains a single interval (0,25,0,N-1), which says

The letters with ranks 0 to N-1 will be taken from the interval [0,25] in the future.

Now, let's try to see what happens in the call find(i, T, intervals). During this time, we want to select the letter $S[i]$. We also want to select its rank, $R[i]$. We can only select this letter from one of the intervals in intervals, and once we select it, we need to split that interval into two, as illustrated above.

Specifically, suppose we want our letter to be $S[i]$, and it is taken from some interval (S1, S2, R1, R2). Let's also assume its rank is $R[i]$. This selection of $S[i]$ and $R[i]$ implies all of the following:

  • S[i] must be in the range S1 <= S[i] <= S2.
  • R[i] must be in the range R1 <= R[i] <= R2.
  • T[i] == S[i] + T.
  • The following must be satisfied: W[i] == (R[i] + T[i]) % 10.
  • The interval (S1, S2, R1, R2) will be split into the intervals (S1, S[i]-1, R1, R[i]-1) and (S[i]+1, S2, R[i]+1, R2).

The recursion ends when we reach i = -1, in which case we've already generated the whole string. That's it!

Here's an implementation in Python 2 (intended for PyPy):

def find(i, T, intervals):
   if i == -1:
      # end case: whole string generated
      return True

   # across all intervals
   for idx, (S1, S2, R1, R2) in enumerate(intervals):
      # for all ranks
      for Ri in range(R1,R2+1):
         # try all letters
         for S[i] in range(S1,S2+1):
            Ti = S[i] + T  # cumulative sum
            if W[i] == (Ri + Ti) % 10:
               # recurse
               if find(i-1, Ti, intervals[:idx] + [(S1,S[i]-1,R1,Ri-1), (S[i]+1,S2,Ri+1,R2)] + intervals[idx+1:]):
                  return True


z, n = map(int, raw_input().strip().split())
S = [None]*n
for cas in xrange(z):
   W = map(int, raw_input().strip())
   assert find(n-1, 0, [(0, 25, 0, n-1)])
   print ''.join(chr(s + ord('a')) for s in S)

This search can be optimized further with a couple of ideas; for example, instead of choosing $S[i]$ and $R[i]$ in increasing order, we can try randomizing the order in which we try it. Another is to use more sophisticated improvements like beam search.

AUTHOR'S AND TESTER'S SOLUTIONS:

setter
tester

SECUBE - Editorial


PROBLEM LINK:

Contest
Practice

Author: Istvan Nagy
Tester: Kevin Atienza
Translators: Sergey Kulik (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)
Editorialist: Kevin Atienza

DIFFICULTY:

Easy-Medium

PREREQUISITES:

Precomputation, modular arithmetic

PROBLEM:

A shop is selling cubes of size $K\times K\times K$. Sebi bought one. His sister asked for $C$ cubes of size $1\times 1\times 1$. Is it possible for Sebi to buy some number of cubes so that he can build another cube (of possibly a different size) out of those and the remaining cubes he has?

QUICK EXPLANATION:

The answer is YES iff there is some $x$ such that $x^3 + C \equiv 0 \pmod{K^3}$. Thus, we want to know if $-C$ is a perfect cube mod $K^3$. We only need to try $x$ up to $K^3$.

To speed this up, notice that $K \le 100$, which means we can precompute all cubes modulo $K^3$ quickly, for all $K$, before processing any input.

EXPLANATION:

Let's keep track of what's happening. Initially, Sebi has $K^3$ unit cubes. After his sister takes some cubes, he has exactly $K^3 - C$ left. Now he wants to buy some more $K^3$-sized cubes, say $t$ more, such that he can form another cube out of those cubes and the $K^3 - C$ leftovers he has. In other words, $(K^3 - C) + tK^3$ must be equal to $x^3$ for some integer $x > 0$. Thus, the question becomes: are there integers $t, x > 0$ satisfying $(K^3 - C) + tK^3 = x^3$?

By reducing this equation modulo $K^3$, we get the following: $$-C \equiv x^3 \pmod{K^3}$$

This says that $(-C)$ must be a perfect cube modulo $K^3$. Any solution must satisfy this, so we now have one necessary criterion for the existence of the solution $(x,t)$. But is this enough?

Fortunately yes! If $-C$ is a perfect cube modulo $K^3$, i.e. if $a^3 \equiv -C \pmod{K^3}$, then by definition $a^3 = -C + bK^3$ for some integer $b$. This gives us the solution $(x,t) = (a,b-1)$! Note that if $x$ or $t$ is not positive, we can simply increase $x$ by multiples of $K^3$ (which preserves $x^3 \bmod K^3$) until both become positive.

Thus, the remaining task is to determine whether a number is a perfect cube modulo $K^3$. Here we use the constraint $K \le 100$: even though there are many test cases, there can only be a few distinct values of $K$, and for each $K$ we can precompute the set of perfect cubes modulo $K^3$! To do so, notice that we only need to consider $x^3$ for $x$ in the range $[0,K^3)$, because $x^3$ and $(x \pm c\cdot K^3)^3$ are the same modulo $K^3$.

So how fast does this run? The precomputation part runs in:

$$O(1^3 + 2^3 + 3^3 + \ldots + K^3) = O((1 + 2 + 3 + \ldots + K)^2) = O((K^2)^2) = O(K^4)$$

After that, each test case can be answered in $O(1)$ with just a simple lookup!
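The approach above can be sketched in Python (hypothetical function names; in a real solution the residue set would be computed once per distinct $K$ and cached):

```python
# All perfect cubes modulo K^3; x ranges over [0, K^3) since x^3 mod K^3
# has period K^3.
def cube_residues(K):
    m = K ** 3
    return {x * x * x % m for x in range(m)}

def sebi_can(K, C, cubes=None):
    """YES iff -C is a perfect cube modulo K^3."""
    m = K ** 3
    if cubes is None:
        cubes = cube_residues(K)
    return (-C) % m in cubes
```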

Fun fact: We note that there exist faster solutions for this problem. For example, there exists a solution which runs in $O(K \log \log K)$ time precomputation and $O(\log^2 K)$ time per query. It uses the Chinese remainder theorem, Hensel's lifting lemma, and the following theorem about existence of powers modulo primes:

If $n$ is of the form $2$, $4$, $p^k$ or $2p^k$ for $k \ge 1$ and some odd prime $p$, and if $\gcd(a,n) = 1$, then at least one solution to $x^t \equiv a \pmod{n}$ exists iff: $$a^{\phi(n)/d} \equiv 1 \pmod{n}$$where $\phi$ is Euler's totient function and $d = \gcd(t,\phi(n))$. If there is at least one solution, then there are exactly $d$ solutions.

We leave this algorithm for the reader to discover.

During the round however, it might actually cost you more time pursuing this fast solution instead of the simpler solution above, so this is a lesson to not overthink things :)

Time Complexity:

Either

  • $O(K_{\max}^4)$ preprocessing, $O(1)$ query
  • $O(K_{\max} \log \log K_{\max})$ preprocessing, $O(\log^2 K)$ query

AUTHOR'S AND TESTER'S SOLUTIONS:

setter
tester

TREEDIAM - Editorial


PROBLEM LINK:

Practice
Contest

Author: Sergey Kulik
Tester: Harshil Shah
Editorialist: Pawel Kacprzak

DIFFICULTY:

MEDIUM

PREREQUISITES:

Trees, Union-Find, LCA

PROBLEM:

Given a tree on $N$ nodes and a sequence of $N - 1$ edge delete operations, for each $i = 0, 1, \ldots, N - 1$, find the product of diameters of all trees formed after performing the first $i$ delete operations. In this problem, the diameter of a tree is defined as the maximum sum of weights of nodes taken over all simple paths in the tree.

QUICK EXPLANATION:

Compute the required diameters in reverse order: begin with no edges (a forest of single-node trees) and add edges in the reverse of the order in which they are deleted. At each step, compute the diameter of the newly formed tree by combining the diameters of the two smaller trees it merges, and accumulate it into the result sequence. At the end, print the resulting sequence in reverse order.

EXPLANATION:

In all subtasks we use the same approach; the only difference is the complexity of computing the diameter of the newly formed tree.

The general method is the following. Given a sequence of edge delete operations $e_{i_1}, e_{i_2}, \ldots, e_{i_{N-1}}$, we perform these operations in reverse order. That is, instead of starting with a single tree, we start with $N$ trees consisting of single nodes, and at each step we merge two trees into a bigger one using the edge deleted at the corresponding step of the delete sequence.

Notice that if, before a merge, the product of diameters is $P$, the diameters of the trees being merged are $D_A$ and $D_B$, and the diameter of the newly formed tree is $D_C$, then the product of diameters after the merge is $P / D_A / D_B \cdot D_C$. In the problem we want this result modulo $10^9 + 7$, which is easily achieved by representing division as multiplication by the modular multiplicative inverse modulo $10^9 + 7$.

Using this method we get the resulting sequence of products of diameters in reverse order; the last thing to do is to reverse that sequence and return it as the result. It remains to show how to compute the diameter $D_C$ of the tree formed by merging two smaller trees; each subtask corresponds to solving this last problem within a particular time complexity. Notice that we perform O(N) merge operations in each subtask.
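The product maintenance can be sketched in Python, using Fermat's little theorem for the inverse (this assumes, as the constraints should guarantee, that no diameter is divisible by the prime modulus):

```python
MOD = 10 ** 9 + 7

# Replace the factors d_a and d_b of the running product P by d_c, mod MOD.
# pow(x, MOD - 2, MOD) is the modular inverse by Fermat's little theorem.
def merge_product(P, d_a, d_b, d_c):
    inv = pow(d_a * d_b % MOD, MOD - 2, MOD)
    return P * inv % MOD * d_c % MOD
```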

Subtask 1

In the first subtask we have N ≤ 100, so any method of computing the diameter of the newly formed tree in quadratic time gets the job done. We can even explicitly recompute the set of trees after each merge from scratch. The easiest method of computing the diameter of a single tree is to perform a DFS from each of its nodes to compute the maximum-weight path starting at the selected node, and take the maximum of these weights over all nodes. This results in $O(N^3)$ time, because we perform O(N) merge operations and each merge, along with the computation of the new diameter, takes $O(N^2)$ time.

Subtask 2

In the second subtask we have N ≤ 5000, so the O(N^3) method is definitely too slow. However, merging two trees in O(N) time may be acceptable if it is implemented efficiently (if not, union-find can be used as described in the third subtask). So the goal is to compute the diameter of the newly formed tree in O(N) time, and this can be done with a well-known, standard linear algorithm using two combined DFS calls: first, from an arbitrary node v, we compute the node u for which the path v -> u has the largest weight among all paths starting at v. After that, starting at node u, we compute the node w for which the path u -> w has the largest weight among all paths starting at u; this weight is the resulting diameter. Since each merge now runs in O(N), the total running time is O(N^2), which should pass for N ≤ 5000.
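The two-DFS diameter computation can be sketched like this (an illustrative sketch, not the author's code):

```python
from collections import defaultdict

def tree_diameter(edges):
    # edges: list of (u, v, w) for a weighted tree; returns the diameter weight
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def farthest(src):
        # iterative DFS: returns (node, distance) of the farthest node from src
        best_node, best_dist = src, 0
        stack = [(src, None, 0)]
        while stack:
            node, parent, dist = stack.pop()
            if dist > best_dist:
                best_node, best_dist = node, dist
            for nxt, w in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node, dist + w))
        return best_node, best_dist

    u, _ = farthest(next(iter(adj)))  # first DFS finds one diameter endpoint
    _, d = farthest(u)               # second DFS measures the diameter
    return d
```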

Subtask 3

In the last subtask we need a really efficient method of computing the diameter of the tree TC formed by merging two smaller trees TA and TB by adding a single edge between them. We are going to perform this merge, along with the computation of the diameter of TC, in O(log(N)) time. First, notice that the merge itself can be done using a union-find data structure in O(log(N)) time, or faster if necessary. So the only remaining thing is to show how to compute the diameter of TC. In order to do it, for each tree ever formed in the computation, we store the two endpoints of its diameter (if the tree has only a single node, these endpoints are the same). The crucial observation is that the diameter of TC can be computed once we know the endpoints of the diameters of TA and TB. More specifically, the diameter of TC is either the diameter of TA, or the diameter of TB, or a path between one endpoint of the diameter of TA and one endpoint of the diameter of TB. This is easy to show, because if any other path had a larger weight, then the diameter of either TA or TB would have to be larger, which is a contradiction. Since the diameters of TA and TB are already computed, the only remaining thing is to compute the weights of the paths between the endpoints of these two diameters. In order to do that we use an LCA data structure, which can compute the weight of a path between any two nodes of a tree in O(log(N)) time after O(N * log(N)) precomputation, which is sufficient here. Thus the total running time is O(N * log(N)), since each merge, along with the computation of the diameter of the newly formed tree, takes O(log(N)) time. Please refer to the author's and tester's solutions for implementation details.
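Combining the endpoint candidates can be sketched as below; `dist` stands in for the LCA-based path-weight query and is an assumption of this sketch:

```python
def merged_diameter(diam_a, diam_b, dist):
    # diam_a = (a1, a2, wa): endpoints and weight of T_A's diameter,
    # diam_b = (b1, b2, wb): likewise for T_B.
    # dist(x, y) returns the path weight between x and y in the merged
    # tree (an LCA structure answers this in O(log N)).
    a1, a2, wa = diam_a
    b1, b2, wb = diam_b
    best = (a1, a2, wa) if wa >= wb else (b1, b2, wb)
    for x in (a1, a2):            # only 4 cross candidates need checking
        for y in (b1, b2):
            d = dist(x, y)
            if d > best[2]:
                best = (x, y, d)
    return best                   # endpoints and weight of T_C's diameter
```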

AUTHOR'S AND TESTER'S SOLUTIONS:


Tester's solution can be found here.
Editorialist's solution can be found here.

ALEXTASK - Editorial


PROBLEM LINK:

Practice
Contest

DIFFICULTY:

Simple

PREREQUISITES:

LCM, basic programming skills

PROBLEM:

You are given N events; the i-th event occurs every Ai milliseconds. Find the first millisecond at which at least two events occur.

EXPLANATION:

First subtask

A very straightforward way to solve this problem is to check, for every millisecond starting from the first, whether at least two events occur at that millisecond. We stop at the first millisecond at which at least two events occur, and that is the answer.

For example, say one event occurs at the first millisecond; we then check the second millisecond and, say, no events occur; we then check the third millisecond and, say, four events occur there; then we stop, and the answer is 3.

But how do we calculate the number of events which occur at a particular millisecond (say the x-th millisecond)? Since the i-th event occurs only at milliseconds divisible by Ai, we should count the number of events for which x mod Ai = 0.

So for each millisecond, we iterate over the array A with a loop and keep a counter of the events whose period divides the current millisecond. After that we check the counter: if it is bigger than 1, the answer is the current millisecond; otherwise we go and check the next millisecond.

The complexity of this solution is O(N * answer). Unfortunately, since the answer can be large, this solution will not get full marks.

Full solution

Since the required millisecond can be large, we need an approach that does not iterate over the milliseconds one by one.

One idea is to compute, for a particular pair of events, the first millisecond at which both of them occur. Other events might coincidentally occur at that millisecond too, but that does not matter to us. We iterate over all pairs of events, apply this computation to each pair, and then select the minimum millisecond among all pairs.

Now, given two events with indices i and j, what is the first millisecond at which both occur? Such a millisecond must be a multiple of both Ai and Aj; among all such milliseconds we should pick the minimum, so we just need to calculate the least common multiple (LCM) of Ai and Aj, and this is the answer for this particular pair of events.

Now let's explain how to calculate the LCM of two numbers, say A and B. There is a well-known formula for it:
LCM(A,B) = A * B / GCD(A,B), where GCD(A,B) is the greatest common divisor of A and B. To calculate it we can use the well-known Euclidean algorithm, described by the pseudocode below; it makes use of the mathematical fact that GCD(A,B) = GCD(B, A mod B).

gcd(a,b):
    if b==0 then return a
    otherwise return gcd(b,a mod b)
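Putting it together, the full solution is just the minimum LCM over all pairs; a short sketch:

```python
from math import gcd
from itertools import combinations

def first_common_millisecond(periods):
    # The answer is the minimum LCM(Ai, Aj) over all pairs (i, j),
    # using LCM(a, b) = a * b / GCD(a, b).
    return min(a * b // gcd(a, b) for a, b in combinations(periods, 2))
```

This examines all O(N^2) pairs, each with a logarithmic-time GCD, which is fast enough for the full constraints.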

SETTER'S SOLUTION

Can be found here.

TESTER'S SOLUTION

Can be found here.

SETELE - Editorial


PROBLEM LINK:

Contest
Practice

Author:Istvan Nagy
Tester:Kevin Atienza
Translators:Sergey Kulik (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)
Editorialist:Kevin Atienza

DIFFICULTY:

Easy-Medium

PREREQUISITES:

Minimum spanning tree, Kruskal's algorithm, union-find

PROBLEM:

You are given a weighted undirected tree. Two nodes are randomly chosen to be connected with an edge of $0$ cost. Find the expected cost of the MST of the resulting graph.

QUICK EXPLANATION:

We want the value $C_{MST} - \frac{S}{T}$ where

  • $C_{MST}$ is the MST cost of the original graph (i.e. just the sum of the costs of all edges).
  • $S$ is the sum of the largest edge weight across all paths.
  • $T$ is the number of paths (i.e. $N(N-1)/2$).

$S$ is the only nontrivial thing to compute. Perform Kruskal's algorithm, but for each connected component, also keep track of its size. Then, $S$ is the sum of $\mathrm{size}(a)\cdot \mathrm{size}(b)\cdot c$ for every step $(a,b,c)$ of the Kruskal algorithm.

EXPLANATION:

Updating the MST

Suppose we want to connect nodes $i$ and $j$ with an edge of $0$ cost. How will the MST be updated?

Here's one equivalent characterization of a (finite) tree: A tree is an acyclic graph with exactly $N-1$ edges. This gives us some clear requirements on how to turn the graph back into a tree again. Specifically, upon adding the edge $(i,j,0)$:

  • We are creating exactly one cycle, namely the cycle $i \rightarrow \ldots \rightarrow j \rightarrow i$, where the $\ldots$ represents the original path from $i$ to $j$. Thus, we need to remove at least one edge from this cycle.
  • The number of edges is now $N$, which means we must remove exactly one edge.

Together, this means that we must remove one edge from that cycle. Which edge? Well, we want the resulting graph to have the smallest possible cost, so naturally we want to remove the edge with the largest cost!

To summarize, by adding the edge $(i,j,0)$, the cost of the MST reduces by exactly the largest edge weight in the path from $i$ to $j$!

Expected value of the new MST

To answer the question, we want to find the expected cost of the new MST. Let $C_{MST}$ be the cost of the original tree. Then from the above, and from the fact that $(i,j)$ is uniformly chosen, we see that this expected value is simply $C_{MST} - \frac{S}{T}$, where

  • $S$ is the sum of the largest edge weight across all paths.
  • $T$ is the number of paths (i.e. $\binom{N}{2} = N(N-1)/2$).

$T$ is pretty easy to compute, (just be careful with overflow!) so all that remains is to compute $S$. Unfortunately, naïve ways of computing this would be too slow, because there are $O(N^2)$ paths! So instead of computing the sum across paths, let's try to compute the sum across all edges, and just figure out how many paths have that edge as the maximum-cost edge. In other words,

$$S = \sum_{\text{$(a,b,c)$ is an edge}} c\cdot F(a,b,c)$$

where $F(a,b,c)$ is the number of paths whose largest-cost edge is $(a,b,c)$.

How do we compute $F(a,b,c)$? Let's consider some path $x \leftrightarrow y$. This path has $(a,b,c)$ as its largest-cost edge if and only if the following two conditions are satisfied:

  • $x$ is connected to $a$ with edges of cost $< c$.
  • $y$ is connected to $b$ with edges of cost $< c$.

(Note that it might be the other way around, i.e. $x$ is connected to $b$ and $y$ is connected to $a$, but then again, paths are symmetric, so $x \leftrightarrow y$ should be considered the same as $y \leftrightarrow x$.)

So we find that $F(a,b,c)$ is simply $R(a,c)\cdot R(b,c)$, where $R(x,c)$ is the number of nodes reachable from $x$ with edges of cost $< c$. But how do we compute $R(x,c)$? Amazingly, Kruskal's algorithm can help us here. Remember that Kruskal's algorithm considers the edges in increasing order and unites the nodes into components in that order. This means that, during the step where we process the edge $(a,b,c)$, $R(a,c)$ and $R(b,c)$ are simply the sizes of the components currently containing $a$ and $b$!

This gives us the following solution: Perform Kruskal's algorithm, but for each connected component, also keep track of its size. Then, $S$ is the sum of $\mathrm{size}(a)\cdot \mathrm{size}(b)\cdot c$ for every step $(a,b,c)$ of the Kruskal algorithm!

In pseudocode:

size[1..N] = [1,1,...,1]
parent[1..N] = [1,2,...,N]

# 'find' operation in 'union-find'
def find(n):
    if parent[n] == n:
        return n
    parent[n] = find(parent[n])  # path compression
    return parent[n]


S = T = C = 0
for all edges (a,b,c) sorted according to c:
    # find
    a = find(a)
    b = find(b)

    # update values
    S += size[a]*size[b]*c
    T += size[a]*size[b]
    C += c

    # union
    if size[a] < size[b]:
        parent[a] = b
        size[b] += size[a]
    else:
        parent[b] = a
        size[a] += size[b]

print C - S/T  # use exact division!
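A runnable version of the pseudocode above might look like this in Python (a sketch; `Fraction` provides the exact division the last line asks for):

```python
from fractions import Fraction

def expected_mst_cost(n, edges):
    # edges: list of (a, b, c) with 1-based nodes; returns C - S/T exactly
    parent = list(range(n + 1))
    size = [1] * (n + 1)

    def find(x):                    # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    S = T = C = 0
    for a, b, c in sorted(edges, key=lambda e: e[2]):
        a, b = find(a), find(b)
        S += size[a] * size[b] * c  # c is the max edge on size[a]*size[b] paths
        T += size[a] * size[b]
        C += c
        if size[a] < size[b]:       # union by size
            a, b = b, a
        parent[b] = a
        size[a] += size[b]
    return C - Fraction(S, T)
```

For the path 1-2 (cost 1), 2-3 (cost 2), the three paths have maxima 1, 2, 2, so the result is 3 - 5/3 = 4/3.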

Time Complexity:

$O(N \log N)$

AUTHOR'S AND TESTER'S SOLUTIONS:

setter
tester


Data Structure Tutorial : Introduction


I want to post a series of tutorials on data structures. I think it is useful for programming contests. Please comment if you find any errors.

Overview
A data structure is a particular way of storing and organizing data. Data structures provide a means to manage large amounts of data efficiently, for uses such as large databases and internet services. There can be many ways to organize the same data, and sometimes one way is better than the others in some contexts.

Operations of data structure
We can also perform operations on a data structure, such as insertion (addition), deletion (removal), searching (locating), sorting (arranging), merging (combining), etc.

Types
There are two types of data structures:

1. Linear data structure: A data structure that traverses the data elements sequentially. Example: array, linked list, stack, queue, etc.

2. Non-linear data structure: A data structure in which the data elements are not arranged sequentially. Example: tree, graph, etc.

Algorithm
An algorithm is a list of instructions that can be followed to perform a task. To write an algorithm we do not need to strictly follow the grammar of any particular language; it may merely be close to a programming language.

Complexity of algorithm
It is a function which measures the time and/or space used by an algorithm. There are two types of complexity:

1. Time complexity: This complexity is related to the execution time of an algorithm. It depends on the number of element comparisons and the number of element movements.

2. Space complexity: Space complexity is related to the space needed in main memory for the data used to implement the algorithm.

N.B. Please wait for the next tutorial.

Why am I getting WA?


Can someone tell me why I am getting WA?

My Program for Problem SEBIHWY

Data Structure Tutorial : Array

If you find any errors, please comment and I will try to update this post.

An array is a collection of homogeneous data elements. It is a very simple data structure. The elements of an array are stored in successive memory locations. An array is referred to by a name and an index number. Arrays are nice because of their simplicity and are well suited for situations where the number of elements is known. Array operations:

Traverse
Insert
 Delete
Sort
Search


There are two types of array: one-dimensional and two-dimensional. A one-dimensional array is represented and stored in linear form. Array indices start at zero.

Declaration:

    datatype arrayname[size];
    int arr[10];

Input:

    for (int i = 0; i < 10; i++)
        cin >> arr[i];

We can store integer data in the array arr using the segment above.

Traverse: Traversal is easy in a linear array.

C++ implementation:

    void traverse(int arr[], int n) {
        for (int i = 0; i < n; i++)
            cout << arr[i] << " ";
    }

Insertion: Inserting an element at the end of a linear array is easy, provided the memory space allocated for the array is large enough to accommodate the additional element. Inserting an element in the middle requires shifting.

Algorithm: Insertion(arr, n, k, item). Here arr is a linear array with n elements and k is the index at which item is inserted.

Step 1: Start
Step 2: Repeat for i = n-1 down to k: [shift the element down by one position] arr[i+1] = arr[i] [End of loop]
Step 3: Set arr[k] = item
Step 4: n++
Step 5: Exit

C++ implementation:

    void insert(int arr[], int &n, int k, int item) {
        for (int i = n - 1; i >= k; i--)
            arr[i + 1] = arr[i];
        arr[k] = item;
        n++;
    }

Deletion: Deletion is very easy in a linear array.

Algorithm: Deletion(arr, n, k). Here arr is a linear array with n items and k is the position of the element to be deleted.

Step 1: Start
Step 2: Repeat for i = k up to n-2: [move the elements upward] arr[i] = arr[i+1] [End of loop]
Step 3: n--
Step 4: Exit

C++ implementation:

    void deletion(int arr[], int &n, int k) {
        for (int i = k; i < n - 1; i++)
            arr[i] = arr[i + 1];
        n--;
    }

Searching: Searching means finding a particular element in a linear array. Linear search and binary search are common algorithms for linear arrays; we discuss both.

Linear search: Linear search is a simple search algorithm that checks every record until it finds the target value.

Algorithm: LinearSearch(arr, n, item)
Step 1: Start
Step 2: Initialize loc = -1
Step 3: Repeat for i = 0 up to n-1: if arr[i] == item then loc = i [End of loop]
Step 4: If loc is not -1 then print found, otherwise print not found
Step 5: Exit

C++ implementation:

    void linearSearch(int arr[], int n, int item) {
        int loc = -1;
        for (int i = 0; i < n; i++)
            if (arr[i] == item)
                loc = i;
        if (loc != -1) cout << "Found" << endl;
        else cout << "Not found" << endl;
    }

Binary search: Binary search works on a sorted array. It compares the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated, and the search continues on the remaining half until it succeeds or the range becomes empty.

Algorithm: BinarySearch(arr, n, item)
Step 1: Start
Step 2: Initialize low = 0 and high = n-1
Step 3: While low <= high: mid = (low + high) / 2; if arr[mid] == item, return mid; else if arr[mid] < item, low = mid + 1; else high = mid - 1
Step 4: If item is not found in the array, return -1
Step 5: End

C++ implementation:

    int binarySearch(int a[], int n, int item) {
        int low = 0, high = n - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (a[mid] == item) return mid;
            else if (a[mid] < item) low = mid + 1;
            else high = mid - 1;
        }
        return -1;  // not found
    }
Sorting: There are various sorting algorithms for linear arrays. We discuss bubble sort and quick sort in this post.

Bubble sort: Bubble sort is a simple example of a sorting algorithm. In this method we first compare the data element in the first position with the one in the second position and arrange them in the desired order; then we compare with the third data element, and so on. The same process continues until the elements at the second-to-last and last positions have been compared.

Algorithm: BubbleSort(arr, n)
Step 1: Start
Step 2: Repeat for i = 0 to n-1
Step 3: Repeat for j = 0 to n-2: if arr[j] > arr[j+1] then interchange arr[j] and arr[j+1] [End of inner loop] [End of outer loop]
Step 4: Exit

C++ implementation:

    void bubbleSort(int arr[], int n) {
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - 1 - i; j++)
                if (arr[j] > arr[j + 1])
                    swap(arr[j], arr[j + 1]);
    }

Quick sort: Quick sort follows the divide-and-conquer paradigm. One element is chosen as the partitioning element (the pivot), and the whole array is divided into two parts with respect to it: data smaller than or equal to the pivot remain in the first part, and data greater than the pivot remain in the second part. Moving the data is done by exchanging positions between elements of the two parts. By repeating this process recursively on each part, we can sort the whole array.

Algorithm: QuickSort(arr, l, h)
If l < h then pi = Partition(arr, l, h)
QuickSort(arr, l, pi-1)
QuickSort(arr, pi+1, h)

C++ implementation:

    int partition(int arr[], int start, int end) {
        int pivotValue = arr[start];
        int pivotPosition = start;
        for (int i = start + 1; i <= end; i++) {
            if (pivotValue > arr[i]) {
                swap(arr[pivotPosition + 1], arr[i]);
                swap(arr[pivotPosition], arr[pivotPosition + 1]);
                pivotPosition++;
            }
        }
        return pivotPosition;
    }

    void quickSort(int arr[], int low, int high) {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }






A C++ example of a simple sorting program using the STL sort function:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int n, arr[100];
        cin >> n;
        for (int i = 0; i < n; i++)
            cin >> arr[i];
        sort(arr, arr + n);
        for (int i = 0; i < n; i++)
            cout << arr[i] << " ";
        cout << endl;
        return 0;
    }

Codechef Problem : SMPAIR, Ups and Downs, KTTABLE, TLG ,FORESTGA
Spoj Problem : AGGRCOW - Aggressive cows
Hackerrank Problem : Arrays - DS, Quicksort 1 - Partition, Quicksort 2 - Sorting

Suggestions for learning algorithms


I have done a couple of contests on CodeChef and I have found that my main handicap is algorithms. Please suggest a book for learning about algorithms related to optimization, DP, and data structures like trees and graphs.

ZCO: Is it fair??


@all, you have heard about the issues at today's ZCO. Technical problems. Again?
This is becoming a serious issue now; the IARCS committee has to look into it. This is the scenario of the past 3 years:
ZCO 2015:
I gave ZCO 2015, and back then I also faced the same issues: website crashes, session expired, etc. After the time was over, they mailed and informed us that the time had been extended by half an hour. Now, who is supposed to look at mail at that point of time? And as for an announcement on the website: for that, the website should have loaded. And they thought it was fair.
ZCO 2016:
Last year (ZCO 2016) they granted qualification to all students and thought it was fair. But the question is: is it really fair for all?
ZCO 2017:
This year it will be a serious issue, since back in my time one could have opted for both ZCO and ZIO. But this year it is an exclusive or. So if they grant qualification to everyone giving ZCO, it will be unfair to the ones giving ZIO. And if they grant it to all, what is the point of having it at all?

So this is not going to be fair. It has been 3 years of the same issues and the same suggestions of shifting it to an already existing online judge, such as CodeChef, but still no progress. This is a request to all of you to mail IARCS in as large numbers as possible. And if someone here on CodeChef is in contact with the organizing committee, please request them, because it seems they are not learning at all from past experiences.

How many top contributors get laddus?


On the goodies.codechef.com page it is stated that the top contributor on Discuss gets 300 laddus. How many top contributors get that? Like in contests, where the top 10 get it.

Declaring priority queue?


When we declare a std::priority_queue, why do we sometimes type priority_queue<int, vector<int>, greater<int> > q, and other times priority_queue<int> q? Can anyone explain what priority_queue<int, vector<int>, greater<int> > means and why we declare it like that?


What is the problem in my code?


Hi, I am trying to solve the SUMTRIAN problem. It works well on my computer, but when I submit the code on the website I get a time limit problem. This is my code; can anyone help me, please? (I am sorry about my English.)

package codechef;
import java.util.Scanner;
class SUMTRIAN {
private static int Trian[][];
//private static int SumTrian[];
private static int rows;
public static void main(String[] args) {
    Scanner input = new Scanner(System.in);
    int case1 = input.nextInt();
    for (int s = 0;s<case1;s++){
        rows= input.nextInt();
        Trian = new int[rows*rows][rows*rows];
        //SumTrian = new int[rows];
        for(int row = 0;row <rows;row++)
            for (int col = 0;col<=row;col++)
                Trian[row][col]=input.nextInt();
        System.out.println(solve(0,0));

    }
}
public static int solve(int row,int col){
    if(row>rows-1)
        return 0;
    else{
        int t1 = solve(row+1,col);
        int t2 = solve(row+1,col+1);
        int t = Trian[row][col]+max(t1,t2);
        return t;
    }
}
public static int max(int t1,int t2){
    int max = t1;
    if(t2>max)
        max = t2;
    return max;

}
}

SEBIHWY - Editorial


PROBLEM LINK:

Contest
Practice

Author:Istvan Nagy
Tester:Kevin Atienza
Translators:Sergey Kulik (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)
Editorialist:Kevin Atienza

DIFFICULTY:

Cakewalk

PREREQUISITES:

Ad hoc

PROBLEM:

Sebi and his father are in a car, and they want to guess the speed of another car. Both cars travel at a constant speed, and the second is faster than the first car. There are markers on the highway every 50 meters.

They start a timer at the instant at which their car and the other car are parallel to each other. After $T$ seconds, they observe that both the cars are next to some markers and the number of markers between them is $D - 1$. The speed of Sebi's father's car is $S$.

Sebi and his father guess that the speed is $SG$ and $FG$, respectively. Determine who has the better guess.

QUICK EXPLANATION:

The correct speed of the other car is $S_{\text{other}} := S + \frac{180D}{T}$, in the right units. (Be careful: This speed isn't necessarily an integer.) Thus:

  • If $|SG - S_{\text{other}}| < |FG - S_{\text{other}}|$, the answer is SEBI.
  • If $|SG - S_{\text{other}}| > |FG - S_{\text{other}}|$, the answer is FATHER.
  • If $|SG - S_{\text{other}}| = |FG - S_{\text{other}}|$, the answer is DRAW.

EXPLANATION:

To answer the problem, we need to compute the speed of the other car exactly. Let's denote that speed by $S_{\text{other}}$, in kph. If we can do this, then answering the problem boils down to computing which one has the "better guess": we compute the "error" of Sebi's and his father's guess as:

  • $S_{\text{err}} = |SG - S_{\text{other}}|$
  • $F_{\text{err}} = |FG - S_{\text{other}}|$

Then:

  • If $S_{\text{err}} < F_{\text{err}}$, the answer is SEBI.
  • If $S_{\text{err}} > F_{\text{err}}$, the answer is FATHER.
  • If $S_{\text{err}} = F_{\text{err}}$, the answer is DRAW.

Computing $S_{\text{other}}$

We know how fast the first car travels, and that the second car is faster. Thus, we only need to know exactly how much faster the second car is than the first.

They start out in the same place, and then after $T$ seconds, they end up next to some markers. Since there are $D - 1$ markers between them, it means that after $T$ seconds, the cars are $50D$ meters away from each other. But the speeds of the cars are constant, which means this completely determines how much faster the second one is. Specifically, the second one is exactly $\frac{50D}{T}$ meters per second faster than the first!

Using this, and the fact that the first car has speed $S$ kph, we can simply add them to compute the speed of the second car, $S_{\text{other}}$. But we can't simply add them, because they are in different units! In order to add them properly and compare with $SG$ and $FG$, we need to convert $\frac{50D}{T}$ meters per second into kph. This can be done as follows:

$$\frac{\text{$50D$ m}}{\text{$T$ sec}} = \frac{\text{$50D$ m}}{\text{$T$ sec}} \frac{\text{$60$ sec}}{\text{$1$ min}} \frac{\text{$60$ min}}{\text{$1$ hr}} \frac{\text{$1$ km}}{\text{$1000$ m}} = \frac{\text{$180D$ km}}{\text{$T$ hr}}$$

This means that the second car is exactly $S_{\text{other}} := S + \frac{180D}{T}$ kph.

Here's an implementation: (in Python 3)

from fractions import Fraction as F
def solve():
    s, sg, fg, d, t = map(int, input().strip().split())
    actual = s + F(d*50*60*60, t*1000)
    sdist = abs(sg - actual)
    fdist = abs(fg - actual)
    return 'SEBI' if sdist < fdist else 'FATHER' if sdist > fdist else 'DRAW'


for cas in range(int(input())):
    print(solve())

Other possible problems

Here are a few possible bugs:

  • We need to divide $\frac{180D}{T}$ exactly. This means the data type for $S_{\text{other}}$ should not be an integer. You could use, for example, float or double. (The solution above uses fractions.) An alternative is to scale the speeds by $T$, so that they all become integers.
  • Even if you use a double, you might still get an issue if you write your code as follows: double S_other = 180*D / T; This is because even though S_other is a double, D and T might not be, so 180 * D / T might round off unexpectedly. To avoid this, you can

    • declare D and T to be doubles, or
    • use casting ((double) 180 * D / T), or
    • simply write it as 180.0 * D / T. Note that D / T * 180.0 might still fail. (Why?)
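The same pitfall can be reproduced in Python, where `//` is truncating division (illustrative values only):

```python
D, T = 5, 7

wrong = 180 * D // T       # truncating division: 900 // 7 == 128
right = 180 * D / T        # true value: 900 / 7 ≈ 128.571...

# Order matters with truncating division too:
also_wrong = D // T * 180  # D // T is already 0, so the whole product is 0
```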

Time Complexity:

$O(1)$

AUTHOR'S AND TESTER'S SOLUTIONS:

setter
tester

BAADSHAH - EDITORIAL( IEMCO16)


PROBLEM LINK:

Practice
Contest

Author:Rohit Anand
Tester:Ankit Raj Gupta
Editorialist:Rohit Anand

DIFFICULTY:

EASY-MEDIUM

PREREQUISITES:

BIT/SEGMENT TREE,BINARY SEARCH

PROBLEM:

You have been provided with an array of numbers. There are two types of queries. In the first, you update a particular index of the array with a given number. In the second, a prefix sum S is given; you have to check whether this prefix sum exists in the array and, if it does, print the last index of the prefix sum.

QUICK EXPLANATION:

Construct a segment tree or BIT (Binary Indexed Tree) from the given array, where internal nodes represent range sums. Use point updates for query 1; for query 2, binary search over range-sum queries (with the starting range 1 to n) to check whether the prefix sum exists.

EXPLANATION:

The given array has n elements and we have to perform two kinds of operations. Let's start from the worst-complexity solution.

Naive Approach
For the given array, first we will store the prefix sum in another array as,

prefix[1]=ar[1];
for(int i=2;i<=n;i++)
prefix[i]=ar[i]+prefix[i-1];

Here, for query 1, we can update the array in constant time, i.e. we can directly do ar[p]=q. But now we can see that after updating an index of the array, the prefix array also gets modified, so we have to update the prefix array as well:

Before updation, let prev=ar[p]
After updation,
ar[p]=q,diff=q-prev;
for(int i=p;i<=n;i++)
prefix[i]+=diff;


Now, for query 2, the given sum to be found is S. We can perform a linear search on the prefix array to check whether S exists:
bool found=false;
for(int i=1;i<=n;i++)
if(prefix[i]==S)
found=true,pos=i;


To optimise further, we can also use binary search on the prefix array.
After doing all these operations, when you submit your code it shows Time Limit Exceeded. So sad :( Now, have a look at the Constraints section. Aren't the values of m and n too high to pass in 1 second with the above approach? Yes! The time complexity of the above code is O(m*n), i.e. you are performing approximately 10^10 operations, which never executes in 1 second. To get it to pass within 1 second, we have to reduce the number of operations to approximately 10^7. So here is the optimised approach.

OPTIMISED APPROACH
For the given array, first we construct the segment tree or BIT (Fenwick tree). Here, each internal node stores the sum of its two children (leaf nodes or other internal nodes), i.e. nodes are merged by summation.
For the first query we perform point updates using the BIT:

//Point update in BIT-------

void update(int x, int val) {
while (x <= n) {
bit[x] += val;
x += x & -x;
}
}

The time complexity of an update is O(log n).

For query 2, we can search for the given sum S by defining our range with binary search as follows:


Pseudo Code:
l=1;
r=n;
int mid;
bool flag=0;

//Binary Search-----
while(l<=r)
{
mid=l+(r-l)/2;
ans=query(mid);
if(ans>S)
r=mid-1;

else if(ans< S)
l=mid+1;
else {
pos=mid;
flag=1;
break;
}
}

//Range-Sum query ------

long long query(int x) {
long long sum = 0;
while (x) {
sum += bit[x];
x -= x & -x;
}
return sum;
}

The time complexity of query 2 is O((log n)^2): log n for the prefix-sum query and log n for the binary search. Because the prefix sums are monotonic, we can safely use binary search here.
So the overall time complexity for m queries is O(m (log n)^2). Clearly, we have reduced the number of operations to less than 10^7, which is enough to pass within the given time constraint (1 sec).
For a better understanding of BIT concepts, you can refer to BIT. You can also use a segment tree for the above operations.
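Putting the two pieces together, a compact sketch of the whole approach (an illustrative sketch, not the author's code; it assumes positive elements, so prefix sums are strictly increasing):

```python
class BIT:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):      # point update: add delta at index i (1-based)
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def query(self, i):              # prefix sum of a[1..i]
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

def find_prefix_sum(bit, s):
    # Binary search over prefix sums; valid because the elements are
    # positive, so query() is monotonically increasing in the index.
    lo, hi = 1, bit.n
    while lo <= hi:
        mid = (lo + hi) // 2
        cur = bit.query(mid)
        if cur > s:
            hi = mid - 1
        elif cur < s:
            lo = mid + 1
        else:
            return mid               # prefix sum S found at index mid
    return -1                        # S does not occur as a prefix sum
```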

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here
Tester's solution can be found here

ZIO 2017 Discussion


What are the answers to the questions of ZIO 2017?

What is the possible cut-off for class 8?

When will the results come out?

ZCO 2017 Discussion


What do all of you think about ZCO: will it be reorganized, or will all who participated in ZCO be entitled to INOI?
