
# Introduction and Definition of Linear Programming – Problem Solving [SIMPLEX METHOD]

Posted On :  22.06.2018 10:37 pm


## Introduction

The Simplex Method, also called the ‘Simplex Technique’ or the Simplex Algorithm, is an iterative procedure for solving a linear programming problem (LPP) in a finite number of steps. The method provides an algorithm that moves from one vertex (corner point) of the region of feasible solutions to another vertex in such a manner that the value of the objective function at the succeeding vertex is better than at the preceding vertex [smaller in the case of minimization and larger in the case of maximization]. This procedure of jumping from one vertex to another is then repeated. Since the number of vertices is finite, the method reaches an optimal vertex in a finite number of steps or indicates the existence of an unbounded solution.
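The vertex-to-vertex procedure described above can be sketched as a minimal tableau simplex in Python with NumPy. This is an illustrative sketch only: it handles maximization problems with ‘≤’ constraints and non-negative right-hand sides, uses Dantzig's most-negative-coefficient entering rule, and omits anti-cycling safeguards that a production solver would need.

```python
import numpy as np

def simplex(c, A, b):
    """Tableau simplex for: max c @ x  s.t.  A @ x <= b, x >= 0 (with b >= 0)."""
    m, n = A.shape
    # Tableau layout: [A | I | b] with the objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)          # slack columns give the starting basis
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))       # slack variables are basic initially
    while True:
        j = int(np.argmin(T[-1, :-1]))  # entering variable: most negative cost
        if T[-1, j] >= -1e-9:
            break                       # no improving direction: optimal vertex
        ratios = np.full(m, np.inf)
        pos = T[:m, j] > 1e-9
        ratios[pos] = T[:m, -1][pos] / T[:m, j][pos]
        i = int(np.argmin(ratios))      # leaving row: minimum ratio test
        if ratios[i] == np.inf:
            raise ValueError("unbounded solution")
        T[i] /= T[i, j]                 # pivot: move to the adjacent vertex
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Illustrative problem: max 15X1 + 10X2
# s.t. 4X1 + 6X2 <= 360, 3X1 <= 180, 5X2 <= 200, X1, X2 >= 0
x, z = simplex(np.array([15.0, 10.0]),
               np.array([[4.0, 6.0], [3.0, 0.0], [0.0, 5.0]]),
               np.array([360.0, 180.0, 200.0]))
# x -> [60, 20], z -> 1100
```

Each pass of the loop performs exactly one "jump" between adjacent vertices; the loop ends when no objective-row coefficient is negative, i.e., no neighbouring vertex improves the objective.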

The following paragraphs introduce the various concepts and definitions related to the simplex method.

## Definitions

### 1. Objective Function

The function that is to be either minimized or maximized is called the objective function. For example, it may represent the cost you are trying to minimize, the total revenue to be maximized, and so on.

### 2. Constraints

The set of equalities and inequalities that a feasible solution must satisfy is called the constraints of the problem.

### 3. Optimal Solution

A vector X that is both feasible (satisfying all the constraints of the given problem) and optimal (attaining the largest or smallest value of the objective function, depending on the case) is known as an optimal solution.

### 4. Feasible Solution

A solution vector X that satisfies all the constraints of the given problem is called a feasible solution to the given LPP.

### 5. Basic Solution

A solution X of AX = b is a basic solution if the n components of X can be partitioned into m “basic” and n − m “non-basic” variables in such a way that the m columns of A corresponding to the basic variables form a nonsingular basis and the value of each “non-basic” variable is 0. Here the constraint matrix A has m rows (constraints) and n columns (variables).
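The definition above translates directly into code: pick m columns of A as the basis, set the remaining n − m variables to zero, and solve the resulting m × m system. A small NumPy sketch, using illustrative data (the matrix and the chosen basis are assumptions, not from the text):

```python
import numpy as np

# Illustrative constraint system AX = b with m = 2 rows and n = 4 columns.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([10.0, 15.0])

def basic_solution(A, b, basic_idx):
    """Set the n - m non-basic variables to 0 and solve for the m basic ones."""
    m, n = A.shape
    B = A[:, basic_idx]                 # basis matrix: must be nonsingular
    x = np.zeros(n)
    x[basic_idx] = np.linalg.solve(B, b)
    return x

# Choosing columns 0 and 1 as the basis yields one basic solution:
x = basic_solution(A, b, [0, 1])
# x -> [4, 3, 0, 0]
```

Different choices of the m basic columns yield different basic solutions; the simplex method moves among those that are also feasible (all components non-negative).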

### 6. Basis

The set of basic variables is called the basis for the given problem.

### 7. Basic Variables

Basic variables are the set of variables obtained by setting the n − m non-basic variables to zero and solving the resulting system.

### 8. Non-basic Variables

A variable that is not in the basis, i.e., not part of the basic solution, is called a non-basic variable.

### 9. Slack Variable

If we have a ‘less than or equal to’ constraint, a variable is added to the left-hand side to convert the constraint into an equation; this new variable added to the left-hand side is called a slack variable.

Ex: 2X1 + 5X2 ≤ 10

2X1 + 5X2 + X3 = 10

The variable X3 is called the slack variable for the given constraint.

### 10. Surplus Variable

If we have a ‘greater than or equal to’ constraint, a variable is subtracted from the left-hand side to convert the constraint into an equation; this new variable subtracted from the left-hand side is called a surplus variable.

Ex: 2X1 + 5X2 ≥ 10

2X1 + 5X2 − X4 = 10

The variable X4 is called the surplus variable for the given constraint. It is subtracted from the problem's greater-than-or-equal-to constraints to turn them into equations.
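Both conversions append one extra column per constraint: +1 for a slack variable, −1 for a surplus variable. A minimal sketch (the function name `to_equation` is a hypothetical helper, not from the text):

```python
import numpy as np

def to_equation(coeffs, sense, rhs):
    """Append a slack (+1) or surplus (-1) coefficient to turn one constraint
    coeffs @ x <= rhs  or  coeffs @ x >= rhs  into an equation."""
    extra = 1.0 if sense == "<=" else -1.0
    return np.append(coeffs, extra), rhs

# 2X1 + 5X2 <= 10  ->  2X1 + 5X2 + X3 = 10   (slack)
row_le, b_le = to_equation(np.array([2.0, 5.0]), "<=", 10.0)
# 2X1 + 5X2 >= 10  ->  2X1 + 5X2 - X4 = 10   (surplus)
row_ge, b_ge = to_equation(np.array([2.0, 5.0]), ">=", 10.0)
```

In a full conversion, each constraint gets its own slack or surplus column, and the objective function assigns those new variables a coefficient of zero.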

### 11. Artificial Variable

To obtain an initial basis for a ‘greater than or equal to’ constraint, an additional variable is added along with the surplus variable. This additional variable added to a linear programming problem is called an ‘artificial variable’.

### 12. Unbounded Solution

For some linear programs it is possible for the objective function to achieve arbitrarily large values (in a maximization problem) or arbitrarily small values (in a minimization problem). Such an LP is said to have an unbounded solution.

### 13. Standard form of LPP

Let the objective function be

Max Z = CX

And the set of constraints are represented as

AX ≤ b

where b is the vector obtained by collecting the right-hand sides of all the constraints.

If we add a slack variable to each constraint so that every constraint becomes an equation, the resulting form is called the standard form of the linear programming problem.

Therefore, Max Z = CX and the set of constraints is represented as

AX=b

Example

Consider the following LPP

Maximize Z = 15X1 + 10X2

Subject to constraints

4X1 + 6X2 ≤ 360

3X1 + 0X2 ≤ 180

0X1 + 5X2 ≤ 200

X1, X2 ≥ 0

The standard form of the given problem is obtained by adding slack variable X3 to the first constraint, X4 to the second and X5 to the third constraint.

4X1 + 6X2 + X3 = 360

3X1 + 0X2 + X4 = 180

0X1 + 5X2 + X5 = 200

X1, X2, X3, X4, X5 ≥ 0

The modified objective function is

Maximize Z = 15X1 + 10X2 + 0X3 + 0X4 + 0X5
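The same example can be checked numerically with SciPy's `linprog`, a standard LP solver. `linprog` minimizes by default, so we negate the objective coefficients to maximize Z = 15X1 + 10X2:

```python
from scipy.optimize import linprog

c = [-15, -10]                        # negated objective: minimize -Z
A_ub = [[4, 6], [3, 0], [0, 5]]       # coefficients of the <= constraints
b_ub = [360, 180, 200]                # right-hand sides
bounds = [(0, None), (0, None)]       # X1, X2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
# res.x holds the optimal [X1, X2]; -res.fun is the maximum value of Z.
```

The solver itself converts the problem to an internal standard form, appending one slack variable per ‘≤’ constraint, exactly as done by hand above.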

### 14. Canonical form of LPP

Let the objective function be

Max Z = CX

And the set of constraints are represented as

AX ≤ b

where b is the vector obtained by collecting the right-hand sides of all the constraints.

This form, in which all the constraints are of ‘≤’ type for a maximization problem and of ‘≥’ type for a minimization problem, is known as the canonical form of the LPP.
Tags : Operations Management - Introduction to Operations Research