Matti's CS Notebook

Raspberry Pi computing cluster, Part 1

There are multiple ways to create a local area network (LAN) for a Raspberry Pi computing cluster (RPCC). One of them is to assign a static IP address to each computing unit in the RPCC and then use the units via these addresses through a Linux terminal. Here are the steps.
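As a sketch of what such a step can look like: on Raspberry Pi OS releases that use dhcpcd, a static address can be set per unit in `/etc/dhcpcd.conf`. The interface name and the `192.168.1.x` addresses below are illustrative placeholders, not values from this post.

```
# /etc/dhcpcd.conf on one computing unit (illustrative values)
interface eth0
static ip_address=192.168.1.11/24   # must be unique per unit
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```

After a reboot, each unit is then reachable from the terminal at its fixed address, e.g. via `ssh`.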

Nested multiplication

How many multiplication and addition operations are needed in order to evaluate a polynomial like

$$ P(x) = a_0 + a_5x^5 + a_{10}x^{10} + a_{15}x^{15} $$

and how to reduce the number of these operations?

Simple Markov chain

An exercise in the book Introduction to Linear Algebra by Strang (2016) first defines a matrix of coefficients ($\mathbf{A}$) and a vector of starting values ($\mathbf{u}_1$), then asks for the successive values $\mathbf{Au}_1 = \mathbf{u}_2$, $\mathbf{Au}_2 = \mathbf{u}_3$, $\mathbf{Au}_3 = \mathbf{u}_4$, and whether any interesting properties appear. The exercise also asks for a program that does the computation in some programming language, so let’s see how to do this in C++.

Generating random numbers in C++

As described in Forsyth (2018), the normal distribution of a random variable ($x$) with mean ($\mu$) and standard deviation ($\sigma$) has the density function

$$ f(x \ | \ \mu, \ \sigma) = \dfrac{1}{\sqrt{2 \pi \sigma^2}} \exp \left( - \dfrac{(x - \mu)^2}{2 \sigma^2} \right). \tag{1} $$

When $\mu = 0$ and $\sigma = 1$, the previous equation reduces to the density of the standard normal distribution,

$$ f(x \ | \ 0, \ 1) = \dfrac{1}{\sqrt{2 \pi}} \exp \left( - \dfrac{x^2}{2} \right). $$

1st Derivative of the Sigmoid Function

In neural networks an activation function defines the threshold that makes a node of the network activate. One example of such an activation function is the sigmoid function

$$ \sigma(x) = \dfrac{1}{1 + e^{-x}}. $$
(Aggarwal, 2023; Rojas, 1996).

When training a neural network, the derivative of the activation function is needed, but what is it for the sigmoid function?