Unrolling the uppercase boolean AND
One of the exercises in Rosen (2019) asks that “[i]f $p_1$, $p_2$, $\ldots$, $p_n$ are $n$ propositions, explain why
$$ \bigwedge\limits_{i \ = \ 1}^{n - 1} \bigwedge\limits_{j \ = \ i + 1}^n (\lnot p_i \lor \lnot p_j) \tag{1} $$
is true if and only if at most one of $p_1$, $p_2$, $\ldots$, $p_n$ is true.” Solving this task requires understanding the notation, so let’s unwrap the thing.
Writing code for a simple table
The $\LaTeX$ syntax for writing a table is simple. One needs to tell
$\LaTeX$ that a table is about to be rendered using the
\begin{array} and \end{array} declarations. The alignment of each
column is given by a specifier written in curly braces immediately
after \begin{array}: l aligns a column's elements to the left, and
to center them, c can be used, like in the following example:
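For instance, a small array whose three columns are all centered could look like this (a minimal sketch; in a document the array must sit inside math mode):

```latex
\begin{array}{ccc}
1 & 2 & 3 \\
4 & 5 & 6
\end{array}
```

Here {ccc} declares three centered columns, & separates the cells of a row, and \\ ends a row.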
Raspberry Pi computing cluster, Part 1
There are multiple ways to create a local area network (LAN) for a Raspberry Pi computing cluster (RPCC). One of them is to assign a static IP address to each Raspberry Pi computing unit in the RPCC and then access the RPCC’s computing units via these addresses through the Linux terminal. Here are the steps.
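On Raspberry Pi OS releases that use dhcpcd, one common way to do the static-address part is to append a stanza like the following to /etc/dhcpcd.conf on each computing unit (the interface name and the 192.168.1.x addresses below are placeholders; pick values that fit your LAN):

```
interface eth0
static ip_address=192.168.1.10/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```

After a reboot the unit answers at the chosen address, e.g. via ssh pi@192.168.1.10.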
Nested multiplication
How many multiplication and addition operations are needed to evaluate a polynomial like
$$ P(x) = a_0 + a_5x^5 + a_{10}x^{10} + a_{15}x^{15}, $$
and how can the number of these operations be reduced?
Simple Markov chain
An exercise in the book Introduction to Linear Algebra by Strang (2016) first defines a matrix of coefficients ($\mathbf{A}$) and a vector of starting values ($\mathbf{u}_1$), and then asks us to compute the successive values $\mathbf{Au}_1 = \mathbf{u}_2$, $\mathbf{Au}_2 = \mathbf{u}_3$, $\mathbf{Au}_3 = \mathbf{u}_4$ and to explore whether any interesting properties appear. The exercise also asks for a program that does the computation in some programming language, so let’s see how to do this in C++.
Generating random numbers in C++
As described in Forsyth (2018), the normal distribution describes the distribution of a random variable ($x$) with a given mean ($\mu$) and standard deviation ($\sigma$) as
$$ f(x) = \dfrac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}. $$
When $\mu = 0$ and $\sigma = 1$, the previous equation can be written as
$$ f(x) = \dfrac{1}{\sqrt{2\pi}} \, e^{-\frac{x^2}{2}}. $$
1st Derivative of the Sigmoid Function
In neural networks, an activation function is a function that defines the threshold at which a node of a neural network activates. One example of such an activation function is the sigmoid function
$$ \sigma(x) = \dfrac{1}{1 + e^{-x}}. $$
When training a neural network, the activation function’s derivative is needed, but what is it for the sigmoid function?
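One way to get there: writing $\sigma(x) = (1 + e^{-x})^{-1}$ and applying the chain rule gives
$$ \sigma'(x) = \dfrac{e^{-x}}{(1 + e^{-x})^2} = \dfrac{1}{1 + e^{-x}} \cdot \dfrac{e^{-x}}{1 + e^{-x}} = \sigma(x)\left(1 - \sigma(x)\right), $$
where the last step uses $1 - \sigma(x) = \dfrac{e^{-x}}{1 + e^{-x}}$. This is why the derivative is so cheap to compute during training: it reuses the already-computed value of $\sigma(x)$.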