Data analysis is an attempt by the researcher to summarize collected data, whether quantitative or qualitative. Generally, quantitative analysis is simply a way of measuring things; more specifically, it can be considered a systematic approach to investigation in which numerical data is collected, or in which the researcher transforms collected or observed data into numerical data. It is ideal for finding out when and where, who and what, and any relationships and patterns between variables. This is research that involves measuring or counting attributes (i.e., quantities). It can be defined as: "The numerical representation and manipulation of observations for the purpose of describing and explaining the phenomena that those observations reflect." Quantitative analysis provides the basis of quantitative geography and is considered one of the important parts of geographical research, as the subject matter of quantitative geography is comprehended by the following key issues:
Collection of empirical data
Analysis of numerical spatial data
Development of spatial methods for measurement, theories, and hypotheses

Construction and testing of mathematical models of spatial theory
Concisely, all the activities mentioned above develop an understanding of spatial processes. Quantitative geography is not bound by a deep-rooted philosophical stance as its most obvious, efficient, and reliable means of obtaining knowledge; thus all quantitative researchers might be labeled positivists or naturalists (Graham, 1997). Its purpose is not to produce flawless data but rather to maximize knowledge with a minimum of error. Therefore, quantitative research can be verified by determining its significance within the discipline.

Sources of quantitative data: We can gather quantitative data in a variety of ways and from a number of different sources. Many of these are similar to sources of qualitative data, for example:
a) Questionnaires: a series of questions and other prompts for the purpose of gathering information from respondents.
b) Interviews: a conversation between two or more people (the interviewer and the interviewee) in which the interviewer asks questions to obtain information from the interviewee.
c) Observation: a group of participants, or a single participant, is manipulated by the researcher, for example, asked to perform a specific task or action.

Observations are then made of their behavior, processes, workflow, etc., either in a controlled situation (e.g., lab-based) or in a real-world situation (e.g., the workplace).
d) Transaction logs: recordings or logs of system or website activity.
e) Documentary research: analysis of documents belonging to an organization.
Levels of measurement
Quantitative data is associated with one of the following four levels of measurement:
Nominal: data has no logical order and is classified into separate groups which are not interlinked. For example, male or female, as mentioned below:

There is no order associated with male or female; each category is assigned an arbitrary value (male = 0, female = 1).
Ordinal: data has a logical order, but the differences between values are not constant. Example: T-shirt size (small, medium, large). Example: economic activities (from primary to quaternary).
Interval: data is continuous and has a logical order, with standardized differences between values but no natural zero. Example: degrees Fahrenheit.
Ratio (scale): data is continuous, ordered, has standardized differences between values, and a natural zero. Example: height, weight, age, length.
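The four levels can be sketched in a few lines of Python. This is a minimal illustration with made-up values (the variable names and data are hypothetical, not from the text):

```python
# Nominal: arbitrary codes, no order (male = 0, female = 1).
gender_codes = {"male": 0, "female": 1}

# Ordinal: logical order, but the gaps between values are not constant.
tshirt_order = ["small", "medium", "large"]

# Interval: ordered, equal differences, but no natural zero (Fahrenheit).
temps_f = [32.0, 50.0, 68.0]
# Differences are meaningful, but ratios are not:
# 64 F is not "twice as hot" as 32 F.

# Ratio: ordered, equal differences, and a natural zero (height in cm),
# so ratios are meaningful: 170 cm really is taller than 85 cm by a factor of 2.
heights_cm = [150.0, 160.0, 170.0]
```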

Quantitative data analysis procedure:
Data tabulation (frequency distributions and percent distributions)
Descriptive statistics
Data disaggregation
Moderate and advanced analytical methods
Data tabulation: The first thing a researcher should do with the data is tabulate the results for the different variables in the data set. This process gives a comprehensive picture of what the data looks like and assists in identifying patterns. The best ways to do this are by constructing frequency and percent distributions. A frequency distribution is an organized tabulation of the number of individuals located in each category.
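Frequency and percent distributions can be tabulated directly with the Python standard library. A small sketch using hypothetical survey responses:

```python
from collections import Counter

# Hypothetical survey responses for a single variable (gender).
responses = ["male", "female", "female", "male",
             "female", "female", "male", "female"]

# Frequency distribution: number of individuals in each category.
freq = Counter(responses)

# Percent distribution: proportion of participants per category.
n = len(responses)
percent = {cat: 100 * count / n for cat, count in freq.items()}

print(freq)     # Counter({'female': 5, 'male': 3})
print(percent)  # {'male': 37.5, 'female': 62.5}
```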

It is also known as univariate analysis, in which a single variable is analyzed for purposes of description. Example: gender, where the number of men in a sample/population and the number of women in a sample/population are analyzed. A percent distribution displays the proportion of participants who are represented within each category (see below). From the table, you can see that 75% of the students surveyed (n = 20) who participated in the summer program reported being satisfied with the experience.
Descriptive statistics: A descriptive statistic refers to a calculation that is used to "describe" the data set.

The most common descriptive statistics used are:
Mean: the numerical average of scores for a particular variable
Standard deviation: the square root of the variance
Minimum and maximum values: the highest and lowest values for a particular variable
Median: the numerical middle point, or the score that cuts the distribution in half, for a particular variable
Mode: the most common score or value for a particular variable
Fig.: actual heights (at the shoulders) with the mean value
Fig.: standard deviation and variance
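All of the descriptives listed above are available in Python's built-in statistics module. A short sketch over a hypothetical set of scores (population standard deviation is used here, matching "square root of the variance"):

```python
import statistics

# Hypothetical scores for one variable.
scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(scores)       # 5.0
stdev = statistics.pstdev(scores)    # population SD = sqrt(variance) = 2.0
minimum, maximum = min(scores), max(scores)
median = statistics.median(scores)   # 4.5 (middle point of the distribution)
mode = statistics.mode(scores)       # 4  (most common value)

print(mean, stdev, minimum, maximum, median, mode)
```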

However, depending on the level of measurement, you may not be able to compute descriptive statistics for all variables in the data set.
The mean can only be calculated from interval and ratio data.
Minimum and maximum values can be calculated for all levels of measurement.
The median can only be calculated from ordinal, interval, and ratio data.
The mode can be calculated for all levels of measurement.
Data disaggregation: After tabulating the data, you can continue to explore it by disaggregating it across different variables and subcategories of variables.
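These rules about which descriptives each level of measurement supports can be encoded as a small lookup table. A sketch following the statements above (the function and table names are my own, for illustration):

```python
# Which descriptives are meaningful at each level of measurement,
# per the rules stated above.
VALID_STATS = {
    "nominal":  {"mode", "min", "max"},
    "ordinal":  {"mode", "min", "max", "median"},
    "interval": {"mode", "min", "max", "median", "mean"},
    "ratio":    {"mode", "min", "max", "median", "mean"},
}

def can_compute(stat: str, level: str) -> bool:
    """Return True if `stat` is meaningful at this level of measurement."""
    return stat in VALID_STATS[level]

print(can_compute("mean", "ordinal"))   # False
print(can_compute("median", "ordinal")) # True
```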

Crosstabs allow you to disaggregate the data across multiple categories. You can also disaggregate the data by subcategories within a variable, which allows you to take a deeper look at the units that make up that category.
Moderate and advanced analytical methods
Correlation: A correlation is a statistical calculation that describes the nature of the relationship between two variables (i.e., strong and negative, weak and positive, statistically significant). It is also known as bivariate analysis.
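A standard measure of such a relationship is the Pearson correlation coefficient, which can be computed from first principles. A minimal sketch, assuming equal-length numeric sequences and hypothetical data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Values increase together -> positive correlation (here perfectly so).
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
# One value falls as the other rises -> negative correlation.
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```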

An important thing to remember when using correlations is that a correlation does not explain causation. A correlation merely indicates that a relationship or pattern exists, but it does not mean that one variable is the cause of the other. A correlation is positive when the values increase together, and negative when one value decreases as the other increases.
Analysis of variance: An analysis of variance (ANOVA) is used to determine whether the difference in means (averages) between two groups is statistically significant.
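The F statistic at the heart of a one-way ANOVA compares between-group variation to within-group variation. A minimal sketch with hypothetical group data (significance testing against the F distribution would normally follow, and is omitted here):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across two or more groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Two well-separated groups give a large F (big between-group differences
# relative to within-group spread).
print(one_way_anova_f([1, 2, 3], [4, 5, 6]))   # 13.5
```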

Regression: Regression is an extension of correlation and is used to determine whether one variable is a predictor of another. It is part of multivariate analysis, in which simultaneous relationships among several variables can be analyzed. A regression can be used to determine how strong the relationship is between your intervention and your outcome variables. More importantly, a regression will tell you whether a variable (e.g., participation in your program) is a statistically significant predictor of the outcome variable (e.g., GPA, SAT score, etc.). A variable can have a positive or negative influence, and the strength of the effect can be weak or strong. However, these types of analyses generally require computer software (e.g., SPSS, SAS, Stata, or Minitab) and a solid understanding of statistics to interpret the results.
Significance: Quantitative data analysis is widely used in many fields, including economics, sociology, psychology, market research, health, development, and many different branches of science.
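While dedicated statistical software handles significance tests, the core of simple linear regression, fitting a least-squares line so one variable can predict another, is a short calculation. A sketch with hypothetical intervention/outcome data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical: hours of program participation vs. outcome score.
hours = [1, 2, 3, 4, 5]
score = [52, 54, 56, 58, 60]
a, b = fit_line(hours, score)
print(a, b)   # 50.0 2.0  -> each extra hour predicts 2 more points
```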

Quantitative data is generally more reliable than qualitative data, and good analysis relies on good data. Quantitative data refers to numbers and statistics and is very useful for finding patterns of behavior or dominant themes. This can be more useful than qualitative research, as the latter can be vague, depending on the methods used to collect it. Thus, quantitative data analysis:
makes sense of the data that is currently available to the researcher;
organizes, summarizes, and prepares the data for dissemination to others;
finds patterns in the data;
helps make results more accurate, as it is very difficult to introduce personal bias into numbers obtained when correct data-gathering procedures have been followed;
allows findings to be extrapolated to a larger sample than the one from which the data was collected, owing to its unbiased approach;
can be applied to large amounts of raw data so that the research can be displayed in a friendlier fashion, meaning you can compute percentages and statistics and analyze results using graphs and charts.