scalars on Max Bartolo
https://www.maxbartolo.com/tags/scalars/
Recent content in scalars on Max Bartolo
Scalars
https://www.maxbartolo.com/ml-index-item/scalars/
Tue, 26 Feb 2019

A scalar is a single number. Scalars are usually written in italics with lowercase variable names.
$$s = 1.7$$
In Python, a scalar can be represented as a floating-point variable:
s = 1.7
print("Scalar {} has type {}".format(s, type(s)))
# Scalar 1.7 has type <class 'float'>
Vectors
https://www.maxbartolo.com/ml-index-item/vectors/
Wed, 06 Mar 2019

A vector is an array of scalar numbers. We can identify each individual number by its index in that ordering. Typically, we give vectors lowercase names in bold typeface, such as $\mathbf{x}$.
Individual elements in the vector can be identified by the name in italics with a subscript indicating the element position. Vectors are conventionally $1$-indexed and are typically assumed to be column vectors.
$$\mathbf{x} = \begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
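In Python, a vector can be represented as a NumPy array (a sketch; the post itself only uses plain Python, and note that Python indexing is $0$-based, unlike the $1$-based mathematical convention):

```python
import numpy as np

x = np.array([1.0, 2.5, 3.7])  # a vector with n = 3 elements
print("Vector {} has {} elements".format(x, x.shape[0]))
print("x_1 = {}".format(x[0]))  # the mathematical x_1 is x[0] in Python
```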
Matrices
https://www.maxbartolo.com/ml-index-item/matrices/
Wed, 06 Mar 2019

A matrix is a $2$-dimensional (2D) array of numbers. This means that every element in the matrix is identified by two indices, commonly $i$ representing the row index and $j$ representing the column index.
Matrices are usually given uppercase variable names in bold, such as $\mathbf{A}$.
$$\mathbf{A} = \begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix}$$
We use a colon “:” to represent all the elements across an axis. So, $\mathbf{A}_{i,:}$ identifies all the elements in the $i$th row and $\mathbf{A}_{:,j}$ identifies all the elements in the $j$th column.
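In NumPy (a sketch, not from the original post), the colon plays exactly this role when indexing an array:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])  # a 2x2 matrix

print(A[0, 1])  # the element A_{1,2} (0-based indexing in Python), prints 2
print(A[0, :])  # all the elements in the first row: [1 2]
print(A[:, 1])  # all the elements in the second column: [2 4]
```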
Tensors
https://www.maxbartolo.com/ml-index-item/tensors/
Wed, 06 Mar 2019

In the context of Machine Learning, it is convenient to think of a tensor as an $n$-dimensional array. Tensor dimensionality is also commonly referred to as its order, degree or rank, which formally is the sum of the tensor's contravariant and covariant indices.
Scalars are $0$th-order tensors. Vectors can be represented as $1$-dimensional arrays and are therefore $1$st-order tensors. In a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a $2$-dimensional array) and is therefore a $2$nd-order tensor.
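In NumPy (a sketch using an assumption the post doesn't state explicitly), the order of a tensor in this array sense corresponds to the number of array dimensions, `ndim`:

```python
import numpy as np

s = np.array(1.7)           # scalar: 0th-order tensor
v = np.array([1.0, 2.0])    # vector: 1st-order tensor
M = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # matrix: 2nd-order tensor

for t in (s, v, M):
    print("order (ndim) = {}".format(t.ndim))  # prints 0, then 1, then 2
```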
Dot (Scalar) Product
https://www.maxbartolo.com/ml-index-item/dot-scalar-product/
Wed, 06 Mar 2019

The dot product is an algebraic operation that takes two equal-sized vectors and returns a single scalar (which is why it is sometimes referred to as the scalar product). In Euclidean geometry, the dot product between the Cartesian components of two vectors is often referred to as the inner product.
The dot product is represented by a dot operator: $$s = \mathbf{x} \cdot \mathbf{y}$$
It is defined as: $$s = \mathbf{x} \cdot \mathbf{y} = \sum_{i=1}^{n}x_iy_i = x_1y_1 + x_2y_2 + \dots + x_ny_n$$
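A sketch of this definition in Python, both as the explicit sum from the formula and using NumPy's `dot` (the NumPy usage is an assumption; the post's own code uses plain Python):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# explicit sum from the definition: x_1*y_1 + x_2*y_2 + x_3*y_3
s_manual = sum(x_i * y_i for x_i, y_i in zip(x, y))

# equivalent NumPy one-liner (np.dot(x, y) or x @ y)
s_numpy = np.dot(x, y)

print(s_manual, s_numpy)  # both give 32.0
```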