Matrix Operations

  1. Runtime analysis (i.e., a description of how much time the algorithm will take to run on a computer):
    We will compute the runtime based on how many comparisons we perform:
    1. If the matrices are not of the same size, we will discover this by noticing that, for $A_{n_1, m_1}$ and $B_{n_2, m_2}$, either $n_1 \neq n_2$ or $m_1 \neq m_2$. That is, we perform either $1$ or $2$ comparisons. Since this is a constant number $c \le 2$, the runtime is $\mathcal{O}(1)$ (or, more precisely, $\Theta(1)$), since, as we learned in Topic $8$, $c = \mathcal{O}(1)$ no matter how large or small the constant $c$ is.
    2. If the matrices are of the same size, we compare all of their elements. Assuming each matrix has $n \times m$ elements, this results in $nm$ comparisons in the worst case, so the runtime is $\mathcal{O}(nm)$ (or, more precisely, $\Theta(nm)$).
    [Note: If we were to implement this algorithm in Java, for example, then besides performing the actual comparisons we would also need to increment the row and column indices $i$ and $j$, which also takes time on a computer. However, notice that each comparison is accompanied by $1$ or $2$ increments, which is a constant (call it $c_2$), and $1 + c_2$ is still a constant, so the increments preserve the same runtime: $(1 + c_2)nm = \mathcal{O}(nm)$.]
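
    Since the note above mentions a Java implementation, here is a minimal sketch of the two cases described: a constant-time size check followed by an element-wise scan. The class and method names are illustrative (not from the notes), and we assume non-empty rectangular `int` matrices:

    ```java
    public class MatrixEquals {
        // Returns true iff a and b have the same dimensions and identical entries.
        // Assumes both matrices are non-empty and rectangular.
        static boolean equalMatrices(int[][] a, int[][] b) {
            // Case 1: size check -- at most 2 comparisons, so O(1).
            if (a.length != b.length || a[0].length != b[0].length) {
                return false;
            }
            // Case 2: element-wise check -- at most n*m comparisons, so O(nm).
            // The loop bookkeeping (incrementing i and j) adds only a constant
            // amount of work per comparison, as argued in the note above.
            for (int i = 0; i < a.length; i++) {
                for (int j = 0; j < a[0].length; j++) {
                    if (a[i][j] != b[i][j]) {
                        return false;
                    }
                }
            }
            return true;
        }

        public static void main(String[] args) {
            int[][] a = {{1, 2}, {3, 4}};
            int[][] b = {{1, 2}, {3, 4}};
            int[][] c = {{1, 2}, {3, 5}};
            System.out.println(equalMatrices(a, b)); // prints true
            System.out.println(equalMatrices(a, c)); // prints false
        }
    }
    ```

    Note that the method can return `false` early: a size mismatch or the first differing element ends the scan, so $nm$ comparisons is the worst case, reached exactly when the matrices are equal.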