
Measures of Central Tendency



According to Prof. Bowley, averages are statistical constants which give us an idea about the concentration of the values in the central part of the distribution. The five common measures of central tendency are:

1. Mean (arithmetic mean)
2. Median
3. Mode
4. Harmonic mean
5. Geometric mean
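
As a quick illustration, all five of these measures are available in Python's standard statistics module. The sample data here is made up purely for demonstration:

import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # made-up sample observations

print(statistics.mean(data))             # arithmetic mean -> 5.0
print(statistics.median(data))           # median -> 4.5
print(statistics.mode(data))             # mode -> 4 (most frequent value)
print(statistics.harmonic_mean(data))    # harmonic mean
print(statistics.geometric_mean(data))   # geometric mean (Python 3.8+)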

According to Prof. Yule, the following are the characteristics required of a measure of central tendency:

1. It should be rigidly defined
2. It should be comprehensible and easy to calculate
3. It should be based on all observations
4. It should not be much affected by fluctuations of sampling
5. It should not be much affected by extreme values


Let us discuss these measures in detail:

1. Arithmetic mean: The arithmetic mean of a set of observations x1, x2, ..., xn is their sum divided by the number of observations:

   Mean = (x1 + x2 + ... + xn) / n

In case fi is the frequency of the value xi, the mean of the frequency distribution becomes:

   Mean = (f1x1 + f2x2 + ... + fnxn) / (f1 + f2 + ... + fn)
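
A small sketch of both formulas (the values are made up for illustration):

# Simple arithmetic mean: sum of observations divided by their count
x = [50, 60, 70]
mean = sum(x) / len(x)                      # (50 + 60 + 70) / 3 = 60.0

# Frequency-weighted mean: sum(fi * xi) / sum(fi)
values = [10, 20, 30]
freqs = [1, 3, 6]
weighted_mean = sum(f * v for f, v in zip(freqs, values)) / sum(freqs)
# (1*10 + 3*20 + 6*30) / 10 = 25.0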

Properties of Arithmetic Mean

1. The algebraic sum of the deviations of a set of values from their arithmetic mean is zero
2. The sum of squares of the deviations of a set of values is minimum when the deviations are taken about the mean (both properties are verified in the sketch below)
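
Both properties are easy to check numerically; this sketch uses made-up sample data:

x = [3, 7, 8, 12, 20]
m = sum(x) / len(x)                    # arithmetic mean = 10.0

# Property 1: deviations from the mean sum to zero
print(sum(v - m for v in x))           # 0.0 (up to floating-point rounding)

# Property 2: sum of squared deviations is smallest about the mean
sse = lambda a: sum((v - a) ** 2 for v in x)
print(sse(m), sse(m - 1), sse(m + 1))  # 166.0 is less than 171.0 and 171.0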


Merits and Demerits of Arithmetic Mean


The arithmetic mean has many merits. It is rigidly defined, easy to understand and calculate, and based upon all observations. It is amenable to further mathematical treatment. Further, among all the available averages it is least affected by fluctuations of sampling; that is why it is also called a stable average.

However, it has some demerits. It cannot be determined by mere inspection, nor can it be located graphically. The arithmetic mean cannot be used for qualitative data: if we have data dealing with intelligence, happiness, or honesty, we cannot use the mean as the measure of central tendency. Further, the arithmetic mean cannot be computed if even a single observation is missing, lost, or wrong. Another demerit is that the arithmetic mean is highly affected by extreme values; if there are extreme values it will give a distorted picture of the distribution and may not represent it. The arithmetic mean may also lead to wrong conclusions if the details of the data from which it is computed are not given. Consider the following example:

Let the exam marks for two students over three years be as follows:


name     year 1   year 2   year 3
Rama     50       60       70
Shyam    70       60       50

In both cases, Rama's and Shyam's average marks (arithmetic mean) are 60; however, from the data we can see that Rama is improving while Shyam is declining.
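
This is easy to check with the marks from the table above:

rama = [50, 60, 70]
shyam = [70, 60, 50]
print(sum(rama) / 3, sum(shyam) / 3)   # 60.0 60.0 -- equal means, opposite trends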

In case of extremely asymmetrical (skewed) distributions, the arithmetic mean is not a suitable measure of location.
