DYNAMIC PROGRAMMING

Dynamic programming (referred to as DP) is a very powerful technique for solving a particular class of problems. It demands an elegant formulation of the approach and simple thinking. The idea is very simple: if you have solved a problem for a given input, save the result for future reference, so as to avoid solving the same problem again. If the given problem can be broken into smaller sub-problems, and these smaller sub-problems are in turn divided into smaller ones, and in this process you observe some overlapping sub-problems, then that is a big hint for DP. In addition, the optimal solutions to the sub-problems must contribute to the optimal solution of the given problem.

There are two ways of doing this.
1.) Top-Down: Start solving the given problem by breaking it down. If you see that a sub-problem has already been solved, just return the saved answer; if it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. It is referred to as memoization.
2.) Bottom-Up: Analyze the problem, determine the order in which the sub-problems are solved, and start solving from the trivial sub-problem up towards the given problem. In this process, it is guaranteed that the sub-problems are solved before they are needed. This is what is usually referred to as dynamic programming.
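The two approaches can be contrasted on the classic Fibonacci example (a minimal sketch, not tied to any particular problem in this post):

```python
from functools import lru_cache

# Top-down (memoization): recurse from the original problem,
# caching each sub-problem's answer so it is solved only once.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up (dynamic programming): start from the trivial cases
# and build upward, so every sub-problem is solved before it is used.
def fib_bottom_up(n):
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

Both compute the same values; the bottom-up version additionally shows that only the last two sub-problem answers need to be kept, reducing memory from O(n) to O(1).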

Obtaining a solution to a dynamic programming problem

(1) Find a recursive relation. If you already have your recursive relation, go to the implementation stage that starts at (4). Read the problem carefully and find out whether you can divide the problem into sub-problems. The most important thing for dynamic programming is that you should prove that the solution of the higher-level problem, expressed in terms of optimal solutions of the sub-problems, is itself optimal. This part might be tough; if you can't figure out a recursive relation, try the "Backtrack Branch and Bound" method. By doing that you might find a recursive relation between sub-problems.

(2) What is the answer to the problem? Now express your global optimal solution in terms of the recursive relation that you found in stage (1).

(3) Try an example to verify the recursive relation. Walk through the recursion on a simple example. This gives you insight into whether your relation is right or wrong.

(4) Express the recursive relation top-down. First write down the trivial cases (i.e., boundary cases), and then build the top-level function in terms of them.

(5) Use memoization to eliminate redundancy. After writing down the recursion, cache the result of each recursive call so that the solution for each stage is calculated only once.

(6) Express the recursive relation bottom-up. The top-down approach divides the problem top to bottom, and after hitting the trivial cases (i.e., boundary cases) it climbs up the ladder. The bottom-up approach starts from the trivial cases and goes up.

(7) Parallelize. If your solution is not fast enough, you have two options. Either find a more efficient recursive relation in terms of time complexity and start again from step (1), or parallelize your program.
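The steps above can be sketched on a hypothetical example, the minimum-coin-change problem (the problem and denominations are illustrative, not from the text). Step (1) gives the relation f(a) = 1 + min(f(a − c)) over coins c ≤ a; step (2) says the answer is f(amount); step (6) expresses it bottom-up:

```python
# (1) Recursive relation: f(a) = 1 + min(f(a - c) for each coin c <= a)
# (4)/(6) Bottom-up: trivial case f(0) = 0, then climb toward the target.
def min_coins(coins, amount):
    INF = float("inf")
    f = [0] + [INF] * amount          # f[a] = fewest coins summing to a
    for a in range(1, amount + 1):    # sub-problems solved in increasing order
        for c in coins:
            if c <= a and f[a - c] + 1 < f[a]:
                f[a] = f[a - c] + 1
    return f[amount] if f[amount] != INF else -1
```

Step (3) can be checked by hand: with coins {1, 5, 10, 25} and amount 30, the relation yields 2 (one 25 plus one 5).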

Invariant

1. Pre-condition. The problem has a provable optimal substructure, which can be used to get a globally optimal solution.
2. Invariant. Locally optimal solutions are computed in the order defined by the structure of the problem, so as to lead to a globally optimal solution.
3. Post-condition. The globally optimal solution is found.

Bellman's Optimality Principle
The principle that an optimal sequence of decisions in a multistage decision process has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

Deterministic Dynamic Programming
Deterministic problems are the ones where the state at the next stage is completely determined by the state and policy decision at the current stage. Deterministic dynamic programming problems are categorized by the form of the objective function. For example, the objective might be to minimize the sum of the contributions from the individual stages (as for the stagecoach problem), or to maximize such a sum, or to minimize a product of such terms, and so on. Another categorization is in terms of the nature of the set of states for the respective stages.
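A deterministic multistage problem such as the stagecoach problem reduces to a backward recursion: the optimal cost from a state is the cheapest decision plus the optimal cost from the resulting (fully determined) next state. A minimal sketch, with illustrative stage costs not taken from the text:

```python
# costs[s] maps a state to the travel cost of each reachable next-stage state.
# Each decision deterministically fixes the next state (hypothetical data).
costs = {
    "A": {"B": 2, "C": 4},      # stage 1 -> stage 2
    "B": {"D": 7, "E": 3},      # stage 2 -> stage 3
    "C": {"D": 1, "E": 6},
    "D": {"F": 5},              # stage 3 -> destination
    "E": {"F": 4},
}

def min_cost(state, goal="F"):
    # Backward recursion: f(s) = min over next states of (cost + f(next)).
    if state == goal:
        return 0
    return min(c + min_cost(nxt, goal) for nxt, c in costs[state].items())
```

Here the objective is to minimize the sum of the stage contributions, the first form of objective function mentioned above.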

Probabilistic Dynamic Programming
Probabilistic dynamic programming differs from deterministic dynamic programming in that the state at the next stage is not completely determined by the state and policy decision at the current stage. Rather, there is a probability distribution for what the next state will be. However, this probability distribution still is completely determined by the state and policy decision at the current stage. 
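In the probabilistic case, each decision yields a probability distribution over next states, and the recursion optimizes the expected value rather than a deterministic sum. A minimal sketch with hypothetical states, decisions, and numbers:

```python
# transitions[state][decision] = list of (probability, next_state, reward).
# The distribution over next states is fully determined by the current
# state and decision, as described above (all data here is illustrative).
transitions = {
    "s0": {
        "safe":  [(1.0, "s1", 1.0)],
        "risky": [(0.5, "s1", 3.0), (0.5, "s2", 0.0)],
    },
    "s1": {"safe": [(1.0, "end", 1.0)]},
    "s2": {"safe": [(1.0, "end", 0.5)]},
}

def best_expected(state):
    # f(s) = max over decisions of sum_i p_i * (reward_i + f(next_i))
    if state == "end":
        return 0.0
    return max(
        sum(p * (r + best_expected(nxt)) for p, nxt, r in outcomes)
        for outcomes in transitions[state].values()
    )
```

With these numbers the "risky" decision at s0 has expected value 0.5·(3+1) + 0.5·(0+0.5) = 2.25, beating the "safe" value of 2.0, so the optimal policy takes the risk.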
