* Linear hull (span): the set of all linear combinations $\sum_i \lambda_i x_i$, $\lambda_i \in \mathbb{R}$
* Affine hull: all combinations $\sum_i \lambda_i x_i$, where $\sum_i \lambda_i = 1$
* Convex hull: all combinations $\sum_i \lambda_i x_i$, where $\sum_i \lambda_i = 1$ and $\lambda_i \geq 0$
* Conic hull: all combinations $\sum_i \lambda_i x_i$, where $\lambda_i \geq 0$
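
The four hulls differ only in which constraints are placed on the combination weights. A minimal NumPy sketch (the triangle vertices and weights are illustrative choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # vertices of a triangle

# Convex combination: nonnegative weights summing to 1.
lam = rng.random(3)
lam /= lam.sum()
x_convex = lam @ points          # lies inside the triangle

# Affine combination: weights sum to 1 but may be negative.
mu = np.array([1.5, -0.25, -0.25])
x_affine = mu @ points           # may lie outside the triangle

# Conic combination: nonnegative weights, no sum constraint.
theta = np.array([2.0, 3.0, 0.5])
x_conic = theta @ points
```

For these vertices, the convex combination always lands in the unit simplex, while the affine and conic combinations can escape it.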

Optimality conditions

For a convex, differentiable $f$ over a convex feasible set $\mathcal{X}$, a point $x^*$ is optimal if and only if
$\nabla f(x^*)^T (y - x^*) \geq 0$ for all $y \in \mathcal{X}$;
in the unconstrained case this reduces to $\nabla f(x^*) = 0$.

Inequality-constrained

$\nabla f(x^*) + \sum_{i \in \mathcal{A}(x^*)} \lambda_i \nabla g_i(x^*) = 0$,
$\lambda_i \geq 0$ for $i \in \mathcal{A}(x^*)$,
where $\mathcal{A}(x^*) = \{i : g_i(x^*) = 0\}$ is the set of active constraints.
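
As a quick numerical sketch of the stationarity condition (the particular objective and constraint below are made-up illustrations): minimize $(x_1-2)^2 + (x_2-2)^2$ subject to $x_1 + x_2 \leq 2$. The constraint is active at the optimum $x^* = (1, 1)$, with multiplier $\lambda = 2$.

```python
import numpy as np

# min (x1-2)^2 + (x2-2)^2  s.t.  x1 + x2 <= 2
x_star = np.array([1.0, 1.0])        # optimizer (constraint is active)
lam = 2.0                            # multiplier of the active constraint

grad_f = 2.0 * (x_star - 2.0)        # gradient of the objective at x*
grad_g = np.array([1.0, 1.0])        # gradient of g(x) = x1 + x2 - 2

stationarity = grad_f + lam * grad_g # ~0 at the optimum
active = x_star.sum() - 2.0          # g(x*) = 0: constraint is active
```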

Duality

Consider a primal problem in standard form: $p^* = \min_x f_0(x)$, s.t. $f_i(x) \leq 0$, $i = 1, \dots, m$, $h_j(x) = 0$, $j = 1, \dots, q$.

Lagrangian

$L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{j=1}^{q} \nu_j h_j(x)$,
where $\lambda \in \mathbb{R}^m$ and $\nu \in \mathbb{R}^q$ are called the Lagrange multipliers, or dual variables, of the problem.

The (Lagrange) dual function is $g(\lambda, \nu)$, where
$g(\lambda, \nu) = \inf_x L(x, \lambda, \nu)$.

Lower bound property of the dual function

The dual function is jointly concave in $(\lambda, \nu)$. Moreover, for any $\lambda \geq 0$ and any $\nu$ it holds that
$g(\lambda, \nu) \leq p^*$.

Dual optimization problem and weak duality

$d^* = \max_{\lambda, \nu} g(\lambda, \nu)$ s.t. $\lambda \geq 0$.
It is remarkable that the dual problem is always a convex optimization problem (maximization of a concave function over a convex set), even when the primal problem is not convex. The weak duality property of the dual problem is: $d^* \leq p^*$.
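
Weak duality can be checked numerically on a toy problem (the specific problem is an illustrative choice): minimize $x^2$ subject to $1 - x \leq 0$, so $p^* = 1$. Minimizing the Lagrangian over $x$ gives $g(\lambda) = \lambda - \lambda^2/4$.

```python
import numpy as np

# Primal: min x^2  s.t.  1 - x <= 0   ->  p* = 1 at x = 1.
p_star = 1.0

# Lagrangian L(x, lam) = x^2 + lam*(1 - x); inner minimizer is x = lam/2.
def dual(lam):
    return lam - lam**2 / 4.0        # g(lam) = inf_x L(x, lam)

lams = np.linspace(0.0, 10.0, 101)   # sample dual-feasible multipliers
gaps = p_star - dual(lams)           # weak duality: every gap is >= 0
d_star = dual(2.0)                   # dual optimum at lam = 2; here p* = d*
```

Every sampled $g(\lambda)$ lower-bounds $p^*$, and for this convex problem the bound is tight at $\lambda = 2$.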

Strong duality and Slater's condition for convex programs

Suppose the primal problem is convex and the first $k$ constraint functions $f_1, \dots, f_k$ are affine. If there exists a point $x$ such that
$f_i(x) \leq 0$ for $i = 1, \dots, k$, $\quad f_i(x) < 0$ for $i = k+1, \dots, m$, $\quad Ax = b$,
i.e., a strictly feasible point (strict inequality is only required for the non-affine constraints), then strong duality holds between the primal and dual problems, that is, $p^* = d^*$.

Complementary slackness

A primal inequality constraint and the corresponding dual variable cannot be slack simultaneously.

If $\lambda_i^* > 0$, then it must be $f_i(x^*) = 0$. If $f_i(x^*) < 0$, then it must be $\lambda_i^* = 0$. Equivalently, $\lambda_i^* f_i(x^*) = 0$ for all $i$.
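
Continuing the toy problem (an illustrative choice, with a second, inactive constraint added): minimize $x^2$ subject to $1 - x \leq 0$ and $x - 5 \leq 0$. At $x^* = 1$ the first constraint is active with $\lambda_1^* = 2 > 0$, the second is slack so $\lambda_2^* = 0$, and every product $\lambda_i^* f_i(x^*)$ vanishes:

```python
import numpy as np

# min x^2  s.t.  1 - x <= 0  and  x - 5 <= 0;  optimum x* = 1.
x_star = 1.0
f_vals = np.array([1.0 - x_star, x_star - 5.0])  # constraint values: [0, -4]
lam = np.array([2.0, 0.0])                       # multipliers: active, inactive

slack_products = lam * f_vals                    # lambda_i * f_i(x*) = 0 for all i
```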

Norm

• Definition
A function $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ is a norm if
$\|x\| \geq 0$, with $\|x\| = 0$ if and only if $x = 0$;
$\|\alpha x\| = |\alpha| \, \|x\|$ for any scalar $\alpha$ and any $x$;
$\|x + y\| \leq \|x\| + \|y\|$ for any $x, y$ (triangle inequality).
• Examples
$\ell_1$ norm: $\|x\|_1 = \sum_i |x_i|$; $\quad \ell_2$ norm: $\|x\|_2 = \sqrt{\sum_i x_i^2}$; $\quad \ell_\infty$ norm: $\|x\|_\infty = \max_i |x_i|$.
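
The example norms and the axioms above can be checked directly with NumPy (the vectors are illustrative):

```python
import numpy as np

x = np.array([3.0, -4.0])

l1 = np.linalg.norm(x, 1)         # |3| + |-4| = 7
l2 = np.linalg.norm(x)            # sqrt(9 + 16) = 5
linf = np.linalg.norm(x, np.inf)  # max(|3|, |-4|) = 4

# Norm axioms: homogeneity and the triangle inequality.
y = np.array([1.0, 2.0])
homog = np.isclose(np.linalg.norm(-2.0 * x), 2.0 * l2)
triangle = np.linalg.norm(x + y) <= l2 + np.linalg.norm(y)
```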

Matrix Norm

• Frobenius Norm
$\|A\|_F = \sqrt{\sum_{i,j} A_{ij}^2} = \sqrt{\operatorname{tr}(A^T A)}$
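
The two expressions for the Frobenius norm (and a third, via singular values) agree numerically; a small sketch with an arbitrary matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

fro = np.linalg.norm(A, 'fro')                 # sqrt of sum of squared entries
via_trace = np.sqrt(np.trace(A.T @ A))         # sqrt(tr(A^T A))
sv = np.linalg.svd(A, compute_uv=False)
via_sv = np.sqrt((sv ** 2).sum())              # sqrt of sum of squared singular values
```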

Cauchy-Schwarz inequality and definition of angles

For any $x, y \in \mathbb{R}^n$, $|x^T y| \leq \|x\|_2 \, \|y\|_2$.
We can define the corresponding angle as $\theta$ such that
$\cos \theta = \dfrac{x^T y}{\|x\|_2 \, \|y\|_2}$.
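
Cauchy-Schwarz guarantees the ratio lies in $[-1, 1]$, so the angle is well defined; a minimal sketch (the vectors are illustrative):

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

cos_theta = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards against rounding
```

Here $\cos\theta = 1/\sqrt{2}$, so $\theta = \pi/4$.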

Range, nullspace, and rank

The range of $A \in \mathbb{R}^{m \times n}$ is $\mathcal{R}(A) = \{Ax : x \in \mathbb{R}^n\}$, and the nullspace is $\mathcal{N}(A) = \{x \in \mathbb{R}^n : Ax = 0\}$. The rank of matrix $A$ is the dimension of its range.

Symmetric matrix

A matrix $A \in \mathbb{R}^{n \times n}$ is symmetric if $A = A^T$.

Examples
* Diagonal matrix
Any quadratic function can be written as
$q(x) = \frac{1}{2} x^T H x + c^T x + d$, where $H$ is a symmetric matrix.

Congruence transformations

For any matrix $A \in \mathbb{R}^{m \times n}$ it holds that:
* $A^T A \succeq 0$, and $A A^T \succeq 0$;
* $A^T A \succ 0$ if and only if $A$ is full-column rank, i.e., $\operatorname{rank}(A) = n$;
* $A A^T \succ 0$ if and only if $A$ is full-row rank, i.e., $\operatorname{rank}(A) = m$.
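
A numerical sketch of these facts for a generic tall matrix (random with a fixed seed, so full column rank holds here): the $3 \times 3$ Gram matrix $A^T A$ is positive definite, while the $5 \times 5$ matrix $A A^T$ is p.s.d. but singular, since $A$ cannot have full row rank.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))    # tall: full column rank, not full row rank

gram = A.T @ A                     # 3x3, positive definite here
outer = A @ A.T                    # 5x5, p.s.d. but singular (rank 3)

eig_gram = np.linalg.eigvalsh(gram)
eig_outer = np.linalg.eigvalsh(outer)
```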

Eigenvalue and eigenvector

$\lambda$ is an eigenvalue of $A$ with eigenvector $u \neq 0$ if $Au = \lambda u$; equivalently, $\det(A - \lambda I) = 0$.
• Therefore $\det(A) = 0$ means at least one eigenvalue is 0. Also, $A$ is invertible when $\det(A) \neq 0$, i.e., when no eigenvalue is 0.
• Eigenvalues of symmetric matrices are real. If $A$ is positive semi-definite, then $\lambda_i \geq 0$ for all $i$.
• Eigenvalues of positive definite matrices are positive. If $A$ is positive definite, then $\lambda_i > 0$ for all $i$.
• From the above, a p.s.d. matrix $A$ is invertible if and only if $A$ is positive definite.

Spectral decomposition (a.k.a. eigendecomposition) for symmetric matrix

Any symmetric matrix $A \in \mathbb{R}^{n \times n}$ can be decomposed as a weighted sum of normalized dyads,
$A = \sum_{i=1}^{n} \lambda_i u_i u_i^T$,
so $A$ can be described by the eigenvalues $\lambda_i$ and orthonormal eigenvectors $u_i$ of $A$.

Or, in compact form, $A = U \Lambda U^T$, where $U = [u_1, \dots, u_n]$ is orthogonal and $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$.
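
Both forms of the decomposition can be verified with `numpy.linalg.eigh` (the matrix is a random symmetric example):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0                       # symmetrize a random matrix

w, U = np.linalg.eigh(A)                  # A = U diag(w) U^T, U orthogonal

recon = U @ np.diag(w) @ U.T              # compact form
dyads = sum(w[i] * np.outer(U[:, i], U[:, i]) for i in range(4))  # sum of dyads
```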

Singular value decomposition

In words, the singular value theorem states that any matrix $A \in \mathbb{R}^{m \times n}$ can be written as a sum of simpler matrices (orthogonal dyads):
$A = \sum_{i=1}^{r} \sigma_i u_i v_i^T$, where $r = \operatorname{rank}(A)$.

Then $A$ can be described by the singular values $\sigma_i$ of $A$ (i.e., $\sigma_i = \sqrt{\lambda_i}$, where $\lambda_i$ are the eigenvalues of $A^T A$) and the eigenvectors of $A A^T$ and $A^T A$ as follows.

Or, in compact form,
$A = U \Sigma V^T$.

(The singular values are real and nonnegative because $A^T A$ and $A A^T$ are p.s.d.)
$U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices.
* SVD, range, and nullspace
The first $r$ columns of $U$ are an orthonormal basis of the range space (column space) of $A$.
The last $n - r$ columns of $V$ are an orthonormal basis of the nullspace of $A$.
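
A sketch of the range/nullspace reading of the SVD, on a small rank-2 matrix (chosen for illustration: its third row is the sum of the first two):

```python
import numpy as np

# Rank-2 matrix in R^{3x3}: its nullspace is one-dimensional.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)
r = int((s > 1e-10).sum())          # numerical rank

range_basis = U[:, :r]              # first r columns of U span range(A)
null_basis = Vt[r:, :].T            # last n - r columns of V span null(A)

# Every column of A lies in span(range_basis): the residual of the
# projection onto that span is zero.
residual = A - range_basis @ (range_basis.T @ A)
```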

Cholesky decomposition of p.s.d. and p.d. matrices

If there exists $P$ such that $A = P^T P$, then $A$ is positive semi-definite.
Conversely, if $A$ is positive semi-definite, then there exists $P$ such that $A = P^T P$. That is, any p.s.d. matrix can be written as a product $P^T P$; $P$ is not unique. If $A$ is positive definite, then we can choose a lower triangular matrix for the decomposition, as $A = L L^T$, where $L$ is invertible.
Example of p.s.d. matrix
The variance-covariance matrix is a notable example of a p.s.d. matrix:
$\Sigma = \mathbb{E}\left[(x - \mu)(x - \mu)^T\right]$, where $\mu = \mathbb{E}[x]$.
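
The triangular factorization of a positive definite matrix is computed by `numpy.linalg.cholesky`; a minimal check on an illustrative $2 \times 2$ matrix:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # positive definite

L = np.linalg.cholesky(A)                # lower triangular, A = L L^T
recon = L @ L.T
```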

Rayleigh quotient

Given a symmetric matrix $A \in \mathbb{R}^{n \times n}$, it holds that
$\lambda_{\min}(A) \leq \dfrac{x^T A x}{x^T x} \leq \lambda_{\max}(A)$ for all $x \neq 0$.

Therefore we can solve optimization problems in quadratic form by finding eigenvalues: the minimum and maximum of $x^T A x$ over $\|x\|_2 = 1$ are $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$, attained at the corresponding eigenvectors.
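
The bounds can be checked empirically by sampling random directions (the matrix and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2.0                        # symmetric matrix

w = np.linalg.eigvalsh(A)                  # eigenvalues, ascending order

X = rng.standard_normal((5, 1000))         # 1000 random nonzero directions
rq = np.einsum('ij,ij->j', X, A @ X) / np.einsum('ij,ij->j', X, X)
```

Every sampled Rayleigh quotient lies between the smallest and largest eigenvalue.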

Properties of eigenvalues and eigenvectors

| Type of matrix | Eigenvalues | Eigenvectors |
| --- | --- | --- |
| Symmetric: $A = A^T$ | real | orthogonal |
| Orthogonal: $Q^T Q = I$ | all $\lvert \lambda \rvert = 1$ | orthogonal |
| Positive definite: $A \succ 0$ | all $\lambda > 0$ | orthogonal |
| Similar matrix: $B = M^{-1} A M$ | $\lambda(B) = \lambda(A)$ | eigenvectors of $B$ are $M^{-1} x$ |
| Projections | $\lambda = 1;\ 0$ | column space; nullspace |
| Every matrix: $A = U \Sigma V^T$ | $\operatorname{rank}(A) = \operatorname{rank}(\Sigma)$ | eigenvectors of $A^T A$, $A A^T$ in $V$, $U$ |

Convexity

Convex set and convex function

• A set $C$ is convex if and only if
$\forall x, y \in C, \ \forall \theta \in [0, 1] : \ \theta x + (1 - \theta) y \in C$.
• $f$ is a convex function if and only if $\operatorname{dom} f$ is convex and
$\forall x, y \in \operatorname{dom} f, \ \forall \theta \in [0, 1] : \ f(\theta x + (1 - \theta) y) \leq \theta f(x) + (1 - \theta) f(y)$.
• $f$ is strongly convex if and only if
$\exists m > 0$ s.t. $f(x) - \frac{m}{2} \|x\|_2^2$ is convex.
That is,
$f(y) \geq f(x) + \nabla f(x)^T (y - x) + \frac{m}{2} \|y - x\|_2^2$.

Examples of convex sets
• empty set
• a single point
• Halfspace: $\{x : a^T x \leq b\}$
• Polyhedra
• Balls and ellipsoids

Examples of convex function
* All norms on $\mathbb{R}^n$
* Affine function: $a^T x + b$
* Max function: $\max_i x_i$

Epigraph condition

$f$ is convex if and only if its epigraph
$\operatorname{epi} f = \{(x, t) : x \in \operatorname{dom} f, \ f(x) \leq t\}$
is convex.

Restriction to a line

$f$ is convex if and only if, for every $x \in \operatorname{dom} f$ and every direction $v$, the function $g(t) = f(x + t v)$ is convex on $\{t : x + t v \in \operatorname{dom} f\}$.

First-order condition

If $f$ is differentiable, $f$ is convex if and only if
$f(y) \geq f(x) + \nabla f(x)^T (y - x)$ for all $x, y \in \operatorname{dom} f$.

Second-order condition

If $f$ is twice differentiable, $f$ is convex if and only if
$\nabla^2 f(x) \succeq 0$ for all $x \in \operatorname{dom} f$,
i.e., the Hessian is positive semidefinite. Positive semidefiniteness of the Hessian is the key to the convexity of a function.
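
A sketch tying the second-order condition back to the definition (the quadratic is an illustrative choice): $f(x) = x^T Q x$ has Hessian $2Q$, so $Q \succeq 0$ implies convexity, which the midpoint inequality confirms empirically.

```python
import numpy as np

# f(x) = x^T Q x with Q positive definite, hence convex.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])

def f(x):
    return x @ Q @ x

rng = np.random.default_rng(3)
ok = True
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    # Midpoint form of the convexity inequality (small slack for rounding).
    ok &= f((x + y) / 2.0) <= 0.5 * f(x) + 0.5 * f(y) + 1e-12
```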

Operations that preserve convexity

• Intersection
• Nonnegative linear combination
• Affine variable transformation
• Pointwise maximum

Separation theorem

Theorem 1: Separating hyperplane

If $C, D$ are two convex subsets of \mathbb{R}^{n} that do not intersect, then there is a hyperplane that separates them, that is, there exist $a \neq 0$ and $b$ such that $a^T x \leq b$ for all $x \in C$, and $a^T x \geq b$ for all $x \in D$.

Theorem 2: Supporting hyperplane

If $C \subseteq \mathbb{R}^n$ is convex and nonempty, then for any $x_0$ at the boundary of $C$, there exists a supporting hyperplane to $C$ at $x_0$, meaning that there exists $a \neq 0$ such that $a^T x \leq a^T x_0$ for every $x \in C$.

Composite function and convexity

* If $h$ is convex and nondecreasing and $g$ is convex, then $f(x) = h(g(x))$ is convex.
* If $g$ is concave and $h$ is convex and nonincreasing, then $f(x) = h(g(x))$ is convex.

Examples
* If $g$ is convex, then $e^{g(x)}$ is convex.
* If $g$ is concave and positive, then $\log g(x)$ is concave.
* If $g$ is concave and positive, then $1 / g(x)$ is convex.
* If $g$ is convex and nonnegative and $p \geq 1$, then $g(x)^p$ is convex.
* If $g$ is convex, then $-\log(-g(x))$ is convex on $\{x : g(x) < 0\}$.
* If $g_i$ are convex, then $\log \sum_i e^{g_i(x)}$ is convex.

Jensen's inequality

Let $f$ be a convex function, and let $X$ be a random variable.
Then,
$f(\mathbb{E}[X]) \leq \mathbb{E}[f(X)]$.

For a discrete probability distribution, take $X = x_i$ with probability $\theta_i$. Then, $f\left(\sum_i \theta_i x_i\right) \leq \sum_i \theta_i f(x_i)$, where $\sum_i \theta_i = 1$, $\theta_i \geq 0$.
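
The discrete case can be checked directly; a sketch with $f = \exp$ and an illustrative three-point distribution:

```python
import numpy as np

f = np.exp                           # a convex function

x = np.array([-1.0, 0.0, 2.0])       # outcomes of a discrete random variable
theta = np.array([0.2, 0.5, 0.3])    # probabilities: nonnegative, sum to 1

lhs = f(theta @ x)                   # f(E[X])
rhs = theta @ f(x)                   # E[f(X)]  -- Jensen: lhs <= rhs
```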