

100424P - EVIDENCE BASED DECISIONS IN HEALTHCARE




Presented at a training program on Evidence based decision making in Health care organized by the Ministry of Health 24-27 April 2010

By

Professor Omar Hasan Kasule Sr
MB ChB (MUK), MPH (Harvard) DrPH (Harvard)
Professor of Epidemiology and Bioethics
Faculty of Medicine
King Fahd Medical City
Riyadh

TABLE OF CONTENTS

·        Research Questions and Hypotheses

·        Observational Studies

·        Experimental Studies

·        Data Collection and Management

·        Data Presentation

·        Data Summary

·        Data Analysis

·        Tests of Association and Measures of Effect

·        Sources and Treatment of Bias

·        Data Storage and Retrieval

·        Critical Review of a Journal Article

·        Public Health Concerns & Evidence

·        Decision Making and Problem Solving


1004 RESEARCH QUESTIONS AND HYPOTHESES
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 RESEARCH QUESTIONS
An investigator starts with a substantive question that is formulated as a statistical question. Data is then collected and is analyzed to reach a statistical conclusion. The statistical conclusion is used with other knowledge to reach a substantive conclusion.
The substantive question may be posed as a simple yes/no question, for example: is this policy better than that policy? It may also be posed in a comparative way requiring a unit of measurement and comparison, for example: one policy may be twice as effective as another.

2.0 HYPOTHESES AND THE SCIENTIFIC METHOD
The scientific method consists of hypothesis formulation, experimentation to test the hypothesis, and drawing conclusions. Hypotheses are statements of prior belief. They are modified by results of experiments to give rise to new hypotheses. The new hypotheses then in turn become the basis for new experiments.

3.0 NULL HYPOTHESIS (H0) & ALTERNATIVE HYPOTHESIS (HA):
The null hypothesis, H0, states that there is no difference between the two comparison groups and that any apparent difference is due to sampling error. The alternative hypothesis, HA, disagrees with the null hypothesis and usually corresponds to the research hypothesis.

4.0 HYPOTHESIS TESTING USING P-VALUES
The main parameters of hypothesis testing are the significance level and the p-value. The pre-set level of significance, customarily set at 0.05, is the accepted probability of wrongly rejecting a true H0, i.e. 5% of the time or a ratio of 1:20. The p-value is calculated from the data using the appropriate test statistic. It can be understood in a commonsense way as the probability of rejecting a true null hypothesis by mistake. The decision rules are: if p < 0.05, H0 is rejected (the test is statistically significant); if p ≥ 0.05, H0 is not rejected (the test is not statistically significant).
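To make the decision rule concrete, here is a minimal illustration in Python (the measurements are invented and the scipy package is assumed):

# Hypothetical example: comparing outcomes in two groups with a t-test.
from scipy import stats

group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]   # invented measurements
group_b = [5.9, 6.1, 5.7, 6.3, 5.8, 6.0]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
alpha = 0.05                                # pre-set significance level
if p_value < alpha:
    print(f"p = {p_value:.4f} < 0.05: reject H0 (statistically significant)")
else:
    print(f"p = {p_value:.4f} >= 0.05: do not reject H0 (not statistically significant)")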

5.0 CONCLUSIONS and INTERPRETATIONS
A statistically significant test implies the following: H0 is rejected and considered false, the observations are not compatible with H0, the observations are not due to sampling variation, and the observations reflect real biological phenomena. A statistically non-significant test implies the following: H0 is not rejected (we do not say it is true), the observations are compatible with H0, the observations could be due to sampling variation or random errors of measurement, and the observations may be apparent rather than real biological phenomena. Statistical significance may have no clinical or practical importance; this may be because other factors are involved that were not studied, or because of invalid measurements. Conversely, practically important differences may fail to reach statistical significance because of a small sample size or because the measurements are not discriminating enough.


1004 STUDY DESIGN 1: OBSERVATIONAL STUDIES
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 CROSS SECTIONAL STUDY
The cross-sectional study, also called the prevalence study or naturalistic sampling, has the objective of determination of prevalence of risk factors and prevalence of disease at a point in time (calendar time or an event like birth or death).  Disease and exposure are ascertained simultaneously.

A cross-sectional study can be descriptive or analytic or both.  It may be done once or may be repeated. Individual-based studies collect information on individuals. Group-based (ecologic) studies collect aggregate information about groups of individuals.

Cross-sectional studies are used in community diagnosis, preliminary study of disease etiology, assessment of health status, disease surveillance, public health planning, and program evaluation.

Cross-sectional studies have the advantages of simplicity, and rapid execution to provide rapid answers. Their disadvantages are: inability to study etiology because the time sequence between exposure and outcome is unknown, inability to study diseases with low prevalence, high respondent bias, poor documentation of confounding factors, and over-representation of diseases of long duration.

2.0 HEALTH SURVEYS
Surveys involve more subjects than the usual epidemiological sample and are used for the measurement of health and disease, assessment of needs, and assessment of service utilization and care. They may be population or sample surveys.

Planning of surveys includes: literature survey, stating objectives, identifying and prioritizing the problem, formulating a hypothesis, defining the population, defining the sampling frame, determining the sample size and sampling method, training study personnel, considering logistics (approvals, manpower, materials and equipment, finance, transport, communication, and accommodation), and preparing and pre-testing the study questionnaire.

Surveys may be cross sectional or longitudinal. The household is the usual sampling unit. Sampling may be simple random sampling, systematic sampling, stratified sampling, cluster sampling, or multistage sampling. Existing data may be used or new data may be collected using a questionnaire (postal, telephone, diaries, and interview), physical examinations, direct observation, and laboratory investigations. Structure and contents of the survey report is determined by potential readers. The report is used to communicate information and also apply for funding.

3.0 CASE-CONTROL DESIGN
The case-control study is popular because of its low cost, rapid results, and flexibility. It uses a small number of subjects. It is used for disease (rare and non-rare) as well as non-disease situations. A case-control study can be exploratory or definitive.

The variants of case-control studies are the case-base, the case-cohort, the case-only, and the crossover designs. In the case-base design, cases are all diseased individuals in the population and controls are a random sample of disease-free individuals in the same base population. The case-cohort design samples from a cohort (closed or open). The case-only design is used in genetic studies in which the control exposure distribution can be worked out theoretically. The crossover design is used for sporadic exposures; the same individual can serve as a case or as a control several times without any prejudice to the study.

The source population for cases and controls must be the same. Cases are sourced from clinical records, hospital discharge records, disease registries, surveillance programs, employment records, and death certificates. Cases are either all cases of a disease or a sample thereof. Only incident (new) cases are selected. Controls must be from the same population base as the cases and must be like the cases in everything except having the disease being studied. Information comparability between the case series and the control series must be assured. Hospital, community, neighborhood, friend, dead, and relative controls are used. There is little gain in efficiency beyond a 1:2 case-control ratio unless control data is obtained at no cost.

Confounding can be prevented or controlled by stratification and matching. Exposure information is obtained from interviews, hospital records, pharmacy records, vital records, disease registries, employment records, environmental data, genetic determinants, biomarkers, physical measurements, and laboratory measurements.

The case-control study design has the following strengths/advantages: computation of the OR as an approximation of the RR, low cost, short duration, and convenience for subjects because they are contacted/interviewed only once. The case-control design has several disadvantages: the RR is approximated and not measured directly, Pr(E+|D+) is computed instead of the more informative Pr(D+|E+), rates cannot be obtained because the marginal totals are artificial (fixed by design) rather than natural, the time sequence between exposure and disease outcome is not clear, vulnerability to bias (misclassification, selection, and confounding), inability to study multiple outcomes, lack of precision in evaluating rare exposures, inability to validate historical exposure information, and inability to control for all relevant confounding factors.
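As a rough illustration of how the OR is computed from a 2 x 2 case-control table (all counts invented):

# Hypothetical 2 x 2 case-control table:
#                exposed   unexposed
# cases (D+)        a=40       b=60
# controls (D-)     c=20       d=80
a, b, c, d = 40, 60, 20, 80
odds_ratio = (a * d) / (b * c)   # exposure odds in cases / exposure odds in controls
print(f"OR = {odds_ratio:.2f}")  # approximates the RR when the disease is rare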

4.0 FOLLOW-UP DESIGN
A follow-up study (also called a cohort study, incidence study, prospective study, or longitudinal study) compares disease in exposed groups to disease in non-exposed groups after a period of follow-up. Follow-up can be prospective (forward), retrospective (backward), or ambispective (both forward and backward).

The study population is divided into the exposed and unexposed populations. A sample is taken from the exposed and another sample is taken from the unexposed. Both the exposed and unexposed samples are followed for appearance of disease. The study may include matching, (one-to-one or one-to-many), pre and post comparisons, multiple control groups, and stratification. The study cohort is from special exposure groups, such as factory workers, or groups offering special resources, such as health insurance subscribers. Information on exposure can be obtained from the following sources: existing records, interviews/questionnaires, medical examinations, laboratory tests for biomarkers, testing or evaluation of the environment. The time of occurrence of the outcome must be defined precisely. The ascertainment of the outcome event must be standardized with clear criteria. Follow-up can be achieved by letter, telephone, surveillance of death certificates and hospitals. Care must be taken to make sure that surveillance, follow-up, and ascertainment for the 2 groups are the same.

The cohort design has 4 advantages: it gives a true risk ratio based on incidence rates, the time sequence is clear since exposure precedes disease, incidence rates can be determined directly, and several outcomes of the same exposure can be studied simultaneously. It has several disadvantages: loss of subjects and of interest due to long follow-up, inability to compute the prevalence of the risk factor, the need for large samples to ensure enough cases of the outcome, and high cost. The cost can be decreased by using existing monitoring/surveillance systems, historical cohorts, general population information instead of studying the unexposed population, and the nested case-control design. Follow-up studies are not suitable for the study of diseases with low incidence.
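A minimal sketch of the direct computation of risks and the risk ratio from hypothetical cohort counts (invented figures):

# Hypothetical cohort data (invented counts for illustration only).
exposed_cases, exposed_total = 30, 1000       # disease among the exposed
unexposed_cases, unexposed_total = 10, 1000   # disease among the unexposed

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
risk_ratio = risk_exposed / risk_unexposed    # true RR, computable because incidence is measured directly
risk_difference = risk_exposed - risk_unexposed
print(f"RR = {risk_ratio:.2f}, RD = {risk_difference:.4f}")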



1004 STUDY DESIGN 2: EXPERIMENTAL STUDIES
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 RANDOMIZED DESIGN: COMMUNITY TRIALS
A community intervention study targets the whole community rather than individuals. It has 3 advantages over individual intervention: it is easier to change the community social environment than to change individual behavior; high-risk lifestyles and behaviors are influenced more by community norms than by individual preferences; and interventions are tested in the actual, natural conditions of the community, which is also cheaper.

The Salk vaccine trial carried out in 1954 had about 200,000 subjects in the experimental group and a similar number in the control group. The Aspirin Myocardial Infarction Study was a therapeutic intervention that randomized 4,524 men to two groups: the intervention group received 1.0 gram of aspirin daily whereas the reference group received a placebo. The Women's Health Study randomized about 40,000 healthy women into two groups to study prevention of cancer and cardiovascular disease; one group received vitamin E and low-dose aspirin and the other received a placebo. The Alpha-Tocopherol, Beta-Carotene cancer prevention trial randomized 19,233 middle-aged men who were cigarette smokers.

There are basically 4 different study designs. In a single-community design, disease incidence is measured before and after the intervention. In a two-community design, one community receives the intervention while another serves as the control. In a one-to-many design, the intervention community has several control communities. In a many-to-many design there are multiple intervention communities and multiple control communities. Allocation of a community to either the intervention or the control group is by randomization.

The intervention and the assessment of the outcome may involve the whole community or a sample of the community. Outcome measures may be individual level measures or community level measures.

The strength of the community intervention study is that it can evaluate a public health intervention in natural field circumstances. It however suffers from 2 main weaknesses: selection bias and controls getting the intervention. Selection bias is likely to occur when allocation is by community. People in the control community may receive the intervention under study on their own because tight control as occurs in laboratory experimental or animal studies is not possible with humans.

2.0 RANDOMIZED DESIGN: CLINICAL TRIALS
The study protocol describes the objectives, the background, the sample, the treatments, data collection and analysis, informed consent, regulatory requirements, and drug ordering.
Trials may be single center or multi-center, single-stage or multi-stage, factorial, or crossover.

The aim of randomization in controlled clinical trials is to make sure that there is no selection bias and that the two series are as alike as possible by randomly balancing confounding factors. Equal allocation in randomization is the most efficient design. Methods of randomization include alternate cases and sealed serially numbered envelopes.
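A minimal sketch of equal (1:1) random allocation, shown here with Python's random module and an invented list of 20 patients; it is not the allocation method of any particular trial:

# Equal (1:1) random allocation of 20 hypothetical patients to two arms.
import random

random.seed(42)                        # fixed seed so the allocation list is reproducible
assignments = ["treatment"] * 10 + ["control"] * 10
random.shuffle(assignments)            # random order removes selection bias in allocation
for patient_id, arm in enumerate(assignments, start=1):
    print(patient_id, arm)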

Data collected is on patients (eg weight), tumors (eg TNM staging); tumor markers (eg AFP), response to treatment (complete response, partial response, no response, disease progression, no evidence of disease, recurrence), survival (disease-free survival, time to recurrence, survival until death), adverse effects (type of toxicity, severity, onset, duration), and quality of life (clinical observation, clinical interview, self report by patient).

In single blinding the patient does not know which treatment is being given. In double blinding neither the patient nor the investigator assessing the outcome knows the treatment assignment.

The trial is stopped when there is evidence of a difference or when there is risk to the treatment group.

Quality control involves measures to ensure that information is not lost. Institutional differences in reporting and patient management must be analyzed and eliminated if possible. A review panel may carry out inter-observer rating to assure data consistency.

Comparison of response proportions is by chi-square, exact test, chi-square for trend. Drawing survival curves is by K-M & life-table methods. Comparing survival & remission is by the Wilcoxon and log-rank tests. Prognostic factors of response, remission, duration, and survival times are investigated using Cox’s proportional hazards regression model.

Meta-analysis combines data from several related clinical trials. Differences between the treatment and control groups may be due to sampling variation/chance, inherent differences not controlled by randomization, unequal evaluation not controlled by double-blinding, true effects of the treatment, or non-compliance.

Problems in trials are: incomplete patient accounting, removing 'bad' cases from the series, failure to censor the dead, removing cases due to 'competing causes of death', analysis before study maturation, misuse of the p-value, lack of proper statistical questions and conclusions, lack of proper substantive questions and conclusions, use of partial data, use of inappropriate formulas, errors in measuring response, and censoring of various types.



1004 DATA COLLECTION AND MANAGEMENT
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 SAMPLE SIZE DETERMINATION
The size of the sample depends on the hypothesis, the budget, the study duration, and the precision required. If the sample is too small the study will lack sufficient power to answer the study question. A sample bigger than necessary is a waste of resources.

Power is ability to detect a difference and is determined by the significance level, magnitude of the difference, and sample size. The bigger the sample size the more powerful the study. Beyond an optimal sample size, increase in power does not justify costs of larger sample.

There are procedures, formulas, and computer programs for determining sample sizes for different study designs.
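As one example of such a program, the statsmodels package (assumed installed) can solve for the sample size per group needed to detect an assumed standardized difference at a given significance level and power:

from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t-test, assuming a medium effect size.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # standardized difference (assumed)
                                   alpha=0.05,       # significance level
                                   power=0.80)       # desired power
print(f"Approximately {n_per_group:.0f} subjects per group")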

2.0 SOURCES OF SECONDARY DATA
Secondary data comes from decennial censuses, vital statistics, routinely collected data, epidemiological studies, and special health surveys. Census data is reliable and wide in scope, covering demographic, social, economic, and health information. The census describes population composition by sex, race/ethnicity, residence, marriage, and socio-economic indicators. Vital events are births, deaths, marriages and divorces, and some disease conditions. Routinely collected data are cheap but may be unavailable or incomplete; they are obtained from medical facilities, life and health insurance companies, institutions (such as prisons, the army, and schools), disease registries, and administrative records. Observational epidemiological studies are of 3 types: cross-sectional, case-control, and follow-up/cohort studies. Special surveys cover a larger population than epidemiological studies and may be health, nutritional, or socio-demographic surveys.

3.0 PRIMARY DATA COLLECTION BY QUESTIONNAIRE
Questionnaire design involves content, wording of questions, format and layout. The reliability and validity of the questionnaire as well as practical logistics should be tested during the pilot study. Informed consent and confidentiality must be respected. A protocol sets out data collection procedures. Questionnaire administration by face-to-face interview is the best but is expensive. Questionnaire administration by telephone is cheaper. Questionnaire administration by mail is very cheap but has a lower response rate. Computer-administered questionnaire is associated with more honest responses.

4.0 PHYSICAL PRIMARY DATA COLLECTION
Data can be obtained by clinical examination, standardized psychological/psychiatric evaluation, measurement of environmental or occupational exposure, assay of biological specimens (endobiotic or xenobiotic), and laboratory experiments. Pharmacological experiments involve bioassay, quantal dose-effect curves, dose-response curves, and studies of drug elimination. Physiology experiments involve measurements of parameters of the various body systems. Microbiology experiments involve bacterial counts, immunoassays, and serological assays. Biochemical experiments involve measurements of concentrations of various substances. Statistical and graphical techniques are used to display and summarize this data.

5.0 DATA ENTRY
Self-coding or pre-coded questionnaires are preferable. Data is input as text, multiple choice, numeric, date and time, and yes/no responses. In double entry techniques, 2 data entry clerks enter the same data and a check is made by computer on items on which they differ. Data in the computer can be checked manually against the original questionnaire. Interactive data entry enables detection and correction of logical and entry errors immediately. Data replication is a copy management service that involves copying the data and also managing the copies. Synchronous data replication is instantaneous updating with no latency in data consistency. In asynchronous data replication the updating is not immediate and consistency is loose.

6.0 DATA EDITING
Data editing is the process of correcting data collection and data entry errors. The data is 'cleaned' using logical, statistical, range, and consistency checks. All values are at the same level of precision (number of decimal places) to make computations consistent and decrease rounding off errors. Data editing identifies and corrects errors such as invalid or inconsistent values. Data is validated and its consistency is tested.

The main data problems are missing data, coding and entry errors, inconsistencies, irregular patterns, digit preference, outliers, rounding-off / significant figures, questions with multiple valid responses, and record duplication.

7.0 DATA TRANSFORMATION
Data transformation is the process of creating new derived variables preliminary to analysis and includes mathematical operations such as division, multiplication, addition, or subtraction; mathematical transformations such as logarithmic, trigonometric, power, and z-transformations.
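A short illustrative sketch of common transformations on an invented variable (numpy and pandas assumed):

import numpy as np
import pandas as pd

# Hypothetical skewed variable (e.g. a laboratory value); values are invented.
df = pd.DataFrame({"value": [1.2, 3.4, 2.8, 10.5, 7.1, 2.2]})
df["log_value"] = np.log(df["value"])                                   # logarithmic transformation
df["z_value"] = (df["value"] - df["value"].mean()) / df["value"].std()  # z-transformation
df["proportion"] = df["value"] / df["value"].sum()                      # derived variable by division
print(df)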

8.0 DATA ANALYSIS
Data analysis consists of data summarization, estimation and interpretation. Simple manual inspection of the data is needed before statistical procedures.

Preliminary examination consists of looking at tables and graphics. Descriptive statistics are used to detect errors, ascertain the normality of the data, and know the size of cells. Missing values may be imputed or incomplete observations may be eliminated.

Tests for association, effect, or trend involve construction and testing of hypotheses. The tests for association are the t, chi-square, linear correlation, and logistic regression tests or coefficients.

The common effect measures are the Odds Ratio, the Risk Ratio, and the Rate Difference. Measures of trend can discover relationships that are not picked up by association and effect measures. Analytic procedures and computer programs vary for continuous and discrete data, for person-time and count data, for simple and stratified analysis, for univariate, bivariate, and multivariate analysis, and for polychotomous outcome variables. Procedures also differ for large samples and small samples.


1004 DATA PRESENTATION
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 DATA GROUPING
Data grouping summarizes data but leads to loss of information due to grouping errors. The suitable number of classes is 10-20. The bigger the class interval, the bigger the grouping error. Classes should be mutually exclusive, of equal width, and cover all the data. The upper and lower class limits can be true or approximate. The approximate limits are easier to tabulate. Data can be dichotomous (2 groups), trichotomous (3 groups) or polychotomous (>3 groups).
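A minimal sketch of grouping a continuous variable into mutually exclusive classes of equal width (invented ages, pandas assumed):

import pandas as pd

# Grouping hypothetical ages into equal-width classes: [20,30), [30,40), ...
ages = pd.Series([23, 35, 47, 52, 61, 68, 74, 30, 44, 59])
classes = pd.cut(ages, bins=range(20, 90, 10), right=False)  # left-closed class intervals
print(classes.value_counts().sort_index())                   # frequency per class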

2.0 DATA TABULATION
Tabulation summarizes data in logical groupings for easy visual inspection. A table shows cell frequency (cell number), cell number as a percentage of the overall total (cell %), cell number as a row percentage (row%), cell number as a column percentage (column %), cumulative frequency, cumulative frequency%, relative (proportional) frequency, and relative frequency %. Ideal tables are simple, easy to read, correctly scaled, titled, labeled, self explanatory, with marginal and overall totals. The commonest table is the 2 x 2 contingency table. Other configurations are the 2 x k table and the r x c table.
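A brief illustration of building a contingency table with marginal totals, row percentages, and column percentages from invented records (pandas assumed):

import pandas as pd

# Hypothetical 2 x 2 table of exposure by disease (invented records).
df = pd.DataFrame({"exposed": ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
                   "disease": ["yes", "no", "no", "no", "yes", "yes", "no", "no"]})
table = pd.crosstab(df["exposed"], df["disease"], margins=True)           # cell counts with totals
row_pct = pd.crosstab(df["exposed"], df["disease"], normalize="index")    # row %
col_pct = pd.crosstab(df["exposed"], df["disease"], normalize="columns")  # column %
print(table, row_pct, col_pct, sep="\n\n")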

3.0 DATA DIAGRAMS SHOWING ONE QUANTITATIVE VARIABLE
Diagrams present data visually. An ideal diagram is self-explanatory, simple, not crowded, of appropriate size, and emphasizes data and not graphics. The 1-way bar diagram, the stem and leaf, the pie chart, and a map are diagrams showing only 1 variable.

A bar diagram uses ‘bars’ to indicate frequency and is classified as a bar chart, a histogram, or a vertical line graph. The bar chart, with spaces between bars, and the line graph, with vertical lines instead of bars, are used for discrete, nominal or ordinal data. The histogram, with no spaces between bars, is used for continuous data. The area of the bar and not its height is proportional to frequency. If the class intervals are equal, the height of the bar is proportional to frequency. The bar diagram is intuitive for the non specialist.

The stem and leaf diagram shows actual numerical values with the aid of a key and not their representation as bars. It has equal class intervals, shows the shape of the distribution with easy identification of the minimum value, maximum value, and modal class.

The pie chart (pie diagram) shows relative frequency % converted into angles of a circle (called sector angle). The area of each sector is proportional to the frequency. Several pie charts make a doughnut chart. Values of one variable can be indicated on a map by use of different shading, cross-hatching, dotting, and colors.

A pictogram uses pictures of the variable being measured instead of bars.



1004 DATA SUMMARY
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 DISCRETE / CATEGORICAL DATA SUMMARY
Discrete data is categorized in groups and can be qualitative or quantitative. It can be categorized before summarization. The main types of statistics used are measures of location such as rates, ratios, and proportions.

A rate is the number of events in a given population over a defined time period and has 3 components: a numerator, a denominator, and time. The numerator is included in the denominator. The incidence rate of disease is defined as a /{(a+b)t} where a = number of new cases, b = number free of disease at start of time interval, and t = duration of the time of observation.
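A worked example with invented figures, following the formula above:

# Incidence rate = a / ((a + b) * t), with invented counts.
a = 25        # new cases during the interval
b = 4975      # free of disease at the start of the interval
t = 2         # years of observation
incidence_rate = a / ((a + b) * t)
print(f"{incidence_rate * 1000:.1f} new cases per 1,000 person-years")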

A crude rate for a population assumes homogeneity and ignores subgroups differences. It is therefore un-weighted, misleading, and unrepresentative. Inference and population comparisons based on crude rates are not valid. Rates can be specific for age, gender, race, and cause.

Specific rates are more informative than crude rates, but it is cognitively difficult to internalize, digest, and understand many separate rates and still reach overall conclusions.

An adjusted or standardized rate is a representative summary that is a weighted average of specific rates free of the deficiencies of both the crude and specific rates. Standardization eliminates the ‘confusing’ or ‘confounding’ effects due to subgroups.
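A minimal sketch of direct standardization with invented specific rates and an invented standard population; the adjusted rate is the weighted average of the specific rates:

# Hypothetical age-specific death rates (per 1,000) and a standard population.
specific_rates = {"<40": 1.0, "40-64": 5.0, "65+": 30.0}        # per 1,000, invented
standard_pop   = {"<40": 50000, "40-64": 35000, "65+": 15000}   # standard population weights

total = sum(standard_pop.values())
adjusted_rate = sum(specific_rates[g] * standard_pop[g] for g in specific_rates) / total
print(f"Age-adjusted rate = {adjusted_rate:.2f} per 1,000")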

A ratio is generally defined as a : b where a= number of cases of a disease and b = number without disease. Examples of ratios are: the proportional mortality ratio, the maternal mortality ratio, and the fetal death ratio. The proportional mortality ratio is the number of deaths in a year due to a specific disease divided by the total number of deaths in that year. This ratio is useful in occupational studies because it provides information on the relative importance of a specific cause of death. The maternal mortality ratio is the total number of maternal deaths divided by the total live births. The fetal death ratio is the ratio of fetal deaths to live births.

A proportion is the number of events expressed as a fraction of the total population at risk without a time dimension. The formula of a proportion is a/(a+b) and the numerator is part of the denominator. Examples of proportions are: prevalence proportion, neonatal mortality proportion, and the perinatal mortality proportion. The term prevalence rate is a common misnomer since prevalence is a proportion and not a rate. Prevalence describes a still/stationary picture of disease. Like rates, proportions can be crude, specific, and standard.


2.0 CONTINUOUS DATA SUMMARY 1: MEASURES OF CENTRAL TENDENCY

The arithmetic mean is the sum of the observations' values divided by the total number of observations and reflects the impact of all observations. The robust (trimmed) arithmetic mean is the mean of the remaining observations when a fixed percentage of the smallest and largest observations is eliminated. The mid-range is the arithmetic mean of the values of the smallest and the largest observations. The weighted arithmetic mean is used when there is a need to place extra emphasis on some values by using different weights. The indexed arithmetic mean is stated with reference to an index mean.

The mode is the value of the most frequent observation. It is rarely used in science and its mathematical properties have not been explored. It is intuitive, easy to compute, and is the only average suitable for nominal data. It is useless for small samples because it is unstable due to sampling fluctuation. It cannot be manipulated mathematically. It is not a unique average; one data set can have more than one mode.
 
The median is the value of the middle observation in a series ordered by magnitude. It is intuitive and is best used for erratically spaced or heavily skewed data. The median can be computed even if the extreme values are unknown in open-ended distributions. It is less stable to sampling fluctuation than the arithmetic mean.

3.0 CONTINUOUS DATA SUMMARY 2: MEASURES OF DISPERSION/VARIATION

3.1 MEASURES OF VARIATION BASED ON THE MEAN
Mean deviation is the arithmetic mean of the absolute differences of each observation from the mean. It is simple to compute but is rarely used because it is not intuitive and allows no further mathematical manipulation. The variance is the sum of the squared deviations of each observation from the mean divided by the sample size, n (for large samples), or by n-1 (for small samples). It can be manipulated mathematically but is not intuitive because it is in squared units. The standard deviation, the commonest measure of variation, is the square root of the variance; it is intuitive and is in linear rather than squared units. The standard deviation, s, describes variation among individual observations, whereas the standard error of the mean, SE, describes the precision of the sample mean and is smaller than s. The relation between the two is given by the expression SE = s/√n where n = sample size.

The percentage of observations covered by the mean +/- 1 SD is approximately 68%, by the mean +/- 2 SD approximately 95%, and by the mean +/- 3 SD approximately 99.7%.

The standardized z-score defines the distance of a value of an observation from the mean in SD units.

The coefficient of variation (CV) is the ratio of the standard deviation to the arithmetic mean usually expressed as a percentage. CV is used to compare variations among samples with different units of measurement and from different populations.
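A short illustration computing the standard deviation, standard error, z-scores, and coefficient of variation for an invented sample (numpy assumed):

import numpy as np

values = np.array([4.2, 5.1, 4.8, 5.5, 4.9, 5.3, 4.7, 5.0])  # invented sample
mean = values.mean()
sd = values.std(ddof=1)                 # sample standard deviation (n - 1 denominator)
se = sd / np.sqrt(len(values))          # standard error of the mean = s / sqrt(n)
z_scores = (values - mean) / sd         # distance of each value from the mean in SD units
cv = sd / mean * 100                    # coefficient of variation, %
print(mean, sd, se, cv)
print(z_scores)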

3.2 MEASURES OF VARIATION BASED ON QUANTILES
Quantiles (quartiles, deciles, and percentiles) are measures of variation based on division of a set of observations (arranged in order by size) into equal intervals and stating the value of observation at the end of the given interval. Quantiles have an intuitive appeal.
Quartiles are based on dividing the observations into 4 equal intervals, deciles on 10, and percentiles on 100. The inter-quartile range, Q3 - Q1, and the semi inter-quartile range, ½(Q3 - Q1), have the advantages of being simple, intuitive, related to the median, and less sensitive to extreme values. Quartiles have the disadvantages of being unstable for small samples and not allowing further mathematical manipulation.

Deciles are rarely used.

Percentiles, also called centile scores, are a form of cumulative frequency and can be read off a cumulative frequency curve. They are direct and very intelligible. The 2.5th percentile corresponds to mean - 2SD. The 16th percentile corresponds to mean - 1SD. The 50th percentile corresponds to mean + 0 SD. The 84th percentile corresponds to mean + 1SD. The 97.5th percentile corresponds to mean + 2SD. The percentile rank indicates the percentage of the observations exceeded by the observation of interest. The percentile range gives the difference between the values of any two centiles.
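A brief illustration of quartiles and the inter-quartile range on invented values (numpy assumed):

import numpy as np

values = np.array([12, 15, 17, 19, 21, 22, 25, 28, 31, 40])   # invented, ordered for clarity
q1, median, q3 = np.percentile(values, [25, 50, 75])
iqr = q3 - q1                      # inter-quartile range
print(q1, median, q3, iqr)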


1004 DATA ANALYSIS
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 DISCRETE DATA ANALYSIS
Discrete data, also called categorical data, is based on counting and has no fractions or decimals. It is analyzed using the chi-square statistic for large samples and the exact test for small samples.

The Mantel-Haenszel chi-square is used to test 2 proportions in stratified data.

The McNemar chi-square is used for pair-matched data.
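A minimal sketch of these tests on invented tables (scipy and statsmodels assumed):

from scipy.stats import chi2_contingency, fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

table = [[20, 30],    # hypothetical 2 x 2 counts
         [10, 40]]
chi2, p, dof, expected = chi2_contingency(table)     # chi-square for large samples
odds_ratio, p_exact = fisher_exact(table)            # exact test for small samples

paired = [[15, 5],    # hypothetical pair-matched counts (discordant pairs drive the test)
          [9, 21]]
result = mcnemar(paired, exact=True)                 # McNemar test for paired data
print(p, p_exact, result.pvalue)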

2.0 CONTINUOUS DATA
Inference on numeric continuous data is based on the comparison of sample means. Three test statistics are commonly used: the z-, the t- and the F-statistics. The z-statistic is used for large samples. The t and F are used for small or moderate samples. The z-statistic and the t-statistic are used to compare 2 samples. The F statistic is used to compare 3 or more samples.

The Student t-test is the most commonly used test statistic for inference on continuous numerical data. It is used for independent and paired samples. It is used uniformly for sample sizes below 60 and for larger samples if the population standard deviation is not known. The F-test is used to compare 3 or more groups.

The formulas for the z, t, and F statistics vary depending on whether the samples are paired or unpaired. They also vary depending on whether the samples have equal numbers of observations or the number of observations in each sample is different.
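A short illustration of the F-test for three invented groups and of a paired t-test (scipy assumed):

from scipy.stats import f_oneway, ttest_rel

# Hypothetical measurements in three treatment groups (invented values).
group1 = [5.0, 5.2, 4.9, 5.4, 5.1]
group2 = [5.6, 5.8, 5.5, 6.0, 5.7]
group3 = [4.6, 4.8, 4.5, 4.9, 4.7]
f_stat, p_anova = f_oneway(group1, group2, group3)   # F-test for 3 or more sample means
t_stat, p_paired = ttest_rel(group1, group2)         # paired t-test (same subjects measured twice)
print(p_anova, p_paired)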

3.0 CORRELATION ANALYSIS
Correlation analysis is used as preliminary data analysis before applying more sophisticated methods. Correlation describes the relation between 2 random variables (a bivariate relation) measured on the same person or object with no prior evidence of inter-dependence. Correlation indicates only association; the association is not necessarily causative. The first step in correlation analysis is to inspect a scatter plot of the data to obtain a visual impression of the data layout and to identify outliers. Pearson's coefficient of correlation (product-moment correlation), r, is the commonest statistic for linear correlation.
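A minimal sketch of this two-step approach on invented paired observations (scipy and matplotlib assumed):

from scipy.stats import pearsonr
import matplotlib.pyplot as plt

x = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]   # invented paired observations
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]
plt.scatter(x, y)                     # inspect the scatter plot first for layout and outliers
plt.show()
r, p = pearsonr(x, y)                 # Pearson product-moment correlation
print(f"r = {r:.2f}, p = {p:.4f}")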

4.0 REGRESSION ANALYSIS
The simple linear regression equation is y = a + bx where y is the dependent/response variable, a is the intercept, b is the slope/regression coefficient, and x is the independent/predictor variable.

Multiple linear regression, a form of multivariate analysis, is defined by y = a + b1x1 + b2x2 + … + bnxn. Linear regression is used for prediction (interpolation and extrapolation) and for analysis of variance.

Logistic regression is non-linear regression with y dichotomous/binary being predicted by one x or several x's.
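A minimal sketch of fitting a logistic regression to an invented binary outcome (statsmodels assumed); exp(b) gives the odds ratio per unit increase in x:

import numpy as np
import statsmodels.api as sm

# Hypothetical data: exposure level x predicting a binary outcome y.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])
X = sm.add_constant(x)                      # adds the intercept term a
model = sm.Logit(y, X).fit(disp=False)      # non-linear model for a dichotomous outcome
print(model.params)                         # intercept a and regression coefficient b
print(np.exp(model.params[1]))              # exp(b): odds ratio per unit of x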

5.0 TIME SERIES ANALYSIS
Longitudinal data is summarized as a time series plot of y against time showing time trends, seasonal patterns, random / irregular patterns, or mixtures of the above. Moving averages may be plotted instead of raw scores for a more stable curve. Time series plots are used for showing trends and forecasting.
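A brief illustration of smoothing an invented monthly time series with a 3-month moving average (pandas assumed):

import pandas as pd

# Hypothetical monthly case counts (invented) forming a time series.
cases = pd.Series([30, 42, 38, 55, 61, 58, 70, 66, 73, 80, 77, 85],
                  index=pd.date_range("2009-01-01", periods=12, freq="MS"))
moving_avg = cases.rolling(window=3).mean()   # 3-month moving average gives a more stable curve
print(pd.DataFrame({"cases": cases, "moving_avg": moving_avg}))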

6.0 SURVIVAL ANALYSIS
Survival analysis is used to study survival duration and the effects of various factors on survival. Two non-regression methods are used in survival analysis: the life-table and the Kaplan-Meier methods. The life-table method works better with large data sets and when the time of occurrence of an event cannot be measured precisely. It can lead to bias by assuming that withdrawals occur at the start of the interval when in reality they occur throughout the interval. The Kaplan-Meier method is best used for small data sets in which the time of event occurrence is measured precisely. It is an improvement on the life-table method in the handling of withdrawals because it does not fix the time intervals in advance, thereby avoiding the bias and imprecision created by that assumption.
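A minimal sketch of a Kaplan-Meier estimate and a log-rank comparison on invented survival times (the lifelines package is assumed):

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival times in months; event=1 means the event was observed, 0 means censored.
time_a  = [6, 7, 9, 10, 11, 13, 15, 16, 19, 20]
event_a = [1, 0, 1, 1,  0,  1,  1,  0,  1,  1]
time_b  = [5, 6, 6, 8,  9,  9, 10, 12, 13, 14]
event_b = [1, 1, 0, 1,  1,  1,  0,  1,  1,  1]

kmf = KaplanMeierFitter()
kmf.fit(time_a, event_observed=event_a, label="group A")
print(kmf.survival_function_)                          # Kaplan-Meier survival estimates
result = logrank_test(time_a, time_b, event_observed_A=event_a, event_observed_B=event_b)
print(result.p_value)                                  # log-rank comparison of the two curves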


1004 DATA INTERPRETATION 1: TESTS OF ASSOCIATION AND MEASURES OF EFFECT
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 GENERAL CONCEPTS
Data analysis affects practical decisions. It involves construction of hypotheses and testing them. Simple manual inspection of the data is needed before statistical procedures; it can help identify outliers, assess the normality of the data, identify commonsense relationships, and alert the investigator to errors in computer analysis.

Two procedures are employed in analytic epidemiology: test for association and measures of effect. The test for association is done first. The assessment of the effect measures is done after finding an association. Effect measures are useless in situations in which tests for association are negative.

The common tests for association are: t-test, F test, chi-square, the linear correlation coefficient, and the linear regression coefficient.

The effect measures commonly employed are: Odds Ratio, Risk Ratio, Rate difference. Measures of trend can discover relationships that are too small to be picked up by association and effect measures.

2.0 TESTS OF ASSOCIATION
The tests of association for continuous data are the t-test, the F-test, the correlation coefficient, and the regression coefficient. The t-test is used for two sample means. Analysis of variance, ANOVA (F test) is used for more than 2 sample means. Multiple analysis of variance is used to test for more than 2 factors. Linear regression is used in conjunction with the t test for data that requires modeling. Dummy variables in the regression model can be used to control for confounding factors like age and sex.

The common test of association for discrete data is the chi-square test. The chi-square test is used to test association of 2 or more proportions in contingency tables. The exact test is used to test proportions for small sample sizes. The Mantel-Haenszel chi-square statistic is used to test for association in stratified 2 x 2 tables.

The chi-square statistic is valid if at least 80% of the cells have expected counts of 5 or more and no cell has an expected count below 1. If the observations are not independent of one another, as in paired or matched studies, the McNemar chi-square test is used instead of the usual Pearson chi-square test.

3.0 MEASURES OF EFFECT
The Mantel-Haenszel Odds Ratio is used for 2 proportions in 2x2 tables. Logistic regression can be used as an alternative to the MH procedure. For paired proportions, a special form of the Mantel-Haenszel OR and a special form of logistic regression, called conditional logistic regression, are used. Excess disease risk is measured by the Attributable Risk, the Attributable Risk Proportion, and the Population Attributable Risk. Variation of an effect measure by levels of a third variable is called effect modification by epidemiologists and interaction by statisticians. Synergism or antagonism occurs when the interaction between two causative factors leads to an effect different from what is expected on the basis of additivity.

4.0 VALIDITY and PRECISION
An epidemiological study should be considered as a sort of measurement with parameters for validity, precision, and reliability. Validity is a measure of accuracy. Precision measures variation in the estimate. Reliability is reproducibility. Bias is defined technically as the situation in which the expectation of the parameter estimate differs from the true value of the parameter. Bias may move the effect estimate away from the null value or toward the null value. In negative bias the parameter estimate is below the true parameter; in positive bias it is above the true parameter. A biased study is not valid. Systematic errors lead to bias and therefore to invalid parameter estimates. Random errors lead to imprecise parameter estimates.

Internal validity is concerned with the results of each individual study. Internal validity is impaired by study bias. External validity is generalizability of results. Traditionally results are generalized if the sample is representative of the population. In practice generalizability is achieved by looking at the results of several studies, each of which is individually internally valid; it is therefore not the objective of each individual study to be generalizable, because that would require assembling a representative sample. Precision is a measure of the lack of random error. An effect measure with a narrow confidence interval is said to be precise; an effect measure with a wide confidence interval is imprecise. Precision is increased in three ways: increasing the study size, increasing study efficiency, and taking care in the measurement of variables to decrease mistakes.

5.0 META ANALYSIS
Meta analysis refers to methods used to combine data from more than one study to produce a quantitative summary statistic. Meta analysis enables computation of an effect estimate for a larger number of study subjects thus enabling picking up statistical significance that would be missed if analysis were based on small individual studies. Meta analysis also enables study of variation across several population subgroups since it involves several individual studies carried out in various countries and populations.

Criteria must be set for what articles to include or exclude. Information is abstracted from the articles on a standardized data abstract form with standard outcome, exposure, confounder, or effect modifying variables.

The first step is to display the effect measure of each article with its 95% confidence limits to get a general idea of the distribution before computing summary measures. The summary effect measure, OR or b, is computed from the effect measures of the individual studies using weighted logistic regression or by computing a MH weighted average in which the weight of each measure is the inverse of its variance, i.e. 1/(se)². In both the logistic and MH procedures, each study is treated as a stratum.

The combined effect measure is then statistically adjusted for confounding, selection, and misclassification biases. Tests of homogeneity can be carried out before computing the summary effect measure. Sensitivity analysis is undertaken to test the robustness of the combined effect measure.
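A minimal sketch of a fixed-effect, inverse-variance weighted summary OR from three invented studies; the MH weighting scheme differs in detail but follows the same idea:

import numpy as np

# Hypothetical ORs and standard errors of log(OR) from three studies (invented values).
log_or = np.log(np.array([1.8, 1.4, 2.1]))
se     = np.array([0.30, 0.25, 0.40])

weights = 1 / se**2                                   # inverse-variance weights
pooled_log_or = np.sum(weights * log_or) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
ci_low  = np.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = np.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Summary OR = {np.exp(pooled_log_or):.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")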



1004 DATA INTERPRETATION: SOURCES AND TREATMENT OF BIAS
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 MISCLASSIFICATION BIAS
Misclassification is inaccurate assignment of exposure or disease status. Random or non-differential misclassification biases the effect measure towards the null, underestimating it, but does not reverse the direction of the association. Non-random or differential misclassification is a systematic error that can bias the effect measure in either direction, exaggerating or underestimating it; a positive association may become negative and a negative association may become positive.

Misclassification bias is classified as information bias or detection bias. Information bias is systematically incorrect measurement of responses due to questionnaire defects, observer errors, respondent errors, instrument errors, diagnostic errors, and exposure mis-specification. Detection bias arises when disease or exposure is sought more vigorously in one comparison group than in the other.

Misclassification bias can be prevented by using double-blind techniques to decrease observer and respondent bias. Treatment of misclassification bias is by the probabilistic approach or measurement of inter-rater variation.

2.0 SELECTION BIAS
Selection bias arises when subjects included in the study differ in a systematic way from those not included. It is due to biological factors, disease ascertainment procedures, or data collection procedures.

Selection bias due to biological factors includes the Neyman fallacy and susceptibility bias. The Neyman fallacy arises when the risk factor is related to prognosis (survival) thus biasing prevalence studies. Susceptibility bias arises when susceptibility to disease is indirectly related to the risk factor.

Selection bias due to disease ascertainment procedures includes publicity, exposure, diagnostic, detection, referral, self-selection, and Berkson biases. Self-selection bias in occupational studies is also called the healthy worker effect, since sick people are not employed or are dismissed.

The Berkson fallacy arises due to differential admission of some cases to hospital in proportions such that studies based on the hospital give a wrong picture of disease-exposure relations in the community.

Selection bias during data collection is represented by non-response bias and follow-up bias. Prevention of selection bias is by avoiding its causes mentioned above. There is no treatment for selection bias once it has occurred, and there are no easy methods of adjusting for its effect.

3.0 CONFOUNDING BIAS
Confounding is mixing up of effects. Confounding bias arises when the disease-exposure relationship is disturbed by an extraneous factor called the confounding variable. The confounding variable is not actually involved in the exposure-disease relationship. It is however predictive of disease but is unequally distributed between exposure groups. Being related both to the disease and the risk factor, the confounding variable could lead to a spurious apparent relation between disease and exposure if it is a factor in the selection of subjects into the study.

A confounder must fulfil the following criteria: relation to both disease and exposure, not being part of the causal pathway, being a true risk factor for the disease, being associated to the exposure in the source population, and being not affected by either disease or exposure.

Prevention of confounding at the design stage by eliminating the effect of the confounding factor can be achieved using 4 strategies: pair-matching, stratification, randomisation, and restriction.

Confounding can be treated at the analysis stage by various adjustment methods (both non-multivariate and multi-variate). Non-multivariate treatment of confounding employs standardization and stratified Mantel-Haenszel analysis. Multivariate treatment of confounding employs multivariate adjustment procedures: multiple linear regression, linear discriminant function, and multiple logistic regression. Care must be taken to deal only with true confounders. Adjusting for non-confounders reduces the precision of the study.
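A minimal sketch of the stratified Mantel-Haenszel analysis on two invented 2 x 2 tables, one per stratum of the confounder (statsmodels assumed):

import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical 2 x 2 tables, one per stratum (e.g. two age groups);
# rows are exposed/unexposed, columns are diseased/not diseased.
tables = [np.array([[20, 80], [10, 90]]),    # stratum 1
          np.array([[35, 65], [25, 75]])]    # stratum 2
st = StratifiedTable(tables)
print(st.oddsratio_pooled)         # Mantel-Haenszel odds ratio adjusted for the stratifying factor
print(st.test_null_odds().pvalue)  # Mantel-Haenszel test of no association across strata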

4.0 MIS-SPECIFICATION BIAS
This type of bias arises when a wrong statistical model is used. For example use of parametric methods for non-parametric data biases the findings.

5.0 SURVEY ERROR and SAMPLING BIAS
Total survey error is the sum of the sampling error and three non-sampling errors (measurement error, non-response error, and coverage error).

Sampling errors are easier to estimate than non-sampling errors. Sampling error decreases with increasing sample size. Non-sampling errors may be systematic, like non-coverage of the whole sample, or they may be non-systematic. Systematic errors cause severe bias.

Sampling bias, positive or negative,  arises when results from the sample are consistently wrong (biased) away from the true population parameter. The sources of bias are: incomplete or inappropriate sampling frame, use of a wrong sampling unit, non-response bias, measurement bias, coverage bias, and sampling bias.


1004 DATA SOURCES and RETRIEVAL
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 INTRODUCTION
Data gives rise to information that in turn gives rise to knowledge. Knowledge leads to understanding, and understanding leads to wisdom. A document is stored data in any form: paper, book, letter, message, image, e-mail, voice, and sound. Some documents are ephemeral but can still be retrieved for the brief time that they exist and are recoverable. Data for public health decisions is drawn from the sources described in the sections that follow.

2.0 DATA SOURCES
Documents of medical importance are usually journal articles, books, technical reports, or theses. The sources of on-line documents are Medline/PubMed, on-line journals, on-line books, on-line technical reports, and on-line theses and dissertations.

3.0 DATA RETRIEVAL
Retrieval technology for formatted character documents is now quite sophisticated. It uses matching, mapping, or use of Boolean logic (AND, OR, NOT). In matching, the most common form of retrieval, the query is matched to the document being sought after determining what terms or expressions are important or significant. The search can be limited by subject matter, language, type of publication, and year of publication.

Document surrogates used in data retrieval are: identifiers, abstracts, extracts, reviews, indexes, and queries. Queries are short documents used to retrieve larger documents by matching, mapping, or use of Boolean logic (AND, OR, NOT). Queries may be in natural or probabilistic language. Fuzzy queries are deliberately not rigid in order to increase the probability of retrieval.

Other forms of data retrieval are term extraction (based on low frequency of important terms), term association (based on terms that normally occur together), lexical measures (using specialized formulas), trigger phrases (like figure, table, conclusion), synonyms (same meaning), antonyms (opposite meaning), homographs (same spelling but different meaning), and homonyms (same sound but different spelling). Stemming algorithms help in retrieval by removing ends of words leaving only the roots. Specialized mathematical techniques are used to assess the effectiveness of data retrieval.
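A toy sketch (not a production retrieval system) of naive suffix stripping combined with Boolean AND matching, using invented documents and query terms:

# Naive stemming: strip a few common suffixes to leave an approximate root.
def stem(word):
    for suffix in ("ation", "ing", "ed", "es", "s"):   # illustrative suffix list only
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

documents = {1: "randomized vaccination trials", 2: "vaccine storage guidelines"}
query_terms = {"randomized", "vaccination"}

for doc_id, text in documents.items():
    doc_stems = {stem(w) for w in text.lower().split()}
    if {stem(t) for t in query_terms} <= doc_stems:    # Boolean AND: all query stems present
        print("match:", doc_id)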

4.0 DATA WAREHOUSING
Data warehousing is a method of extraction of data from various sources, storing it as historical and integrated data for use in decision-support systems. Meta data is a term used for definition of data stored in the data warehouse (i.e. data about data). A data model is a graphic representation of the data either as diagrams or charts. The data model reflects the essential features of an organization. The purpose of a data model is to facilitate communication between the analyst and the user. It also helps create a logical discipline in database design.

5.0 DATA MINING
Data mining is the discovery part of knowledge discovery in data (KDD), involving knowledge engineering, classification, and problem solving. KDD starts with selection, cleaning, enrichment, and coding. The product of data mining is recognized patterns, which are then applied to new situations for prediction and profiling. Artificial intelligence (AI), based on machine learning, imbues computers with some creativity and decision-making capability using specific algorithms.


1004 CRITICAL READING OF A JOURNAL ARTICLE
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0  WHY CRITICAL READING?
In order for public health practitioners to use an article as a source of evidence for decision making, they must read it critically to assess its quality. For critical reading of scientific literature, the reader must be equipped with tools to be able to analyze the methodology and data analysis critically before accepting the conclusions.

2.0 COMMONEST PROBLEMS IN PUBLISHED PAPERS
Common problems in published studies are incomplete documentation, design deficiencies, improper significance testing and interpretation.

3.0 PROBLEMS OF THE TITLE, ABSTRACT, and INTRODUCTION
The main problem of the title is irrelevance to the body of the article. Problems of the abstract are failure to show the focus of the study and to provide sufficient information to assess the study (design, analysis, and conclusions). Problems of the introduction are failures of the following: stating the reason for the study, reviewing previous studies, indicating potential contribution of the present study, giving the background and historical perspective, stating the study population, and stating the study hypothesis.

4.0 PROBLEMS OF STUDY DESIGN
Problems of study design are the following: going on a fishing expedition without a prior hypothesis, study design not appropriate for the hypothesis tested, lack of a comparison group, use of an inappropriate comparison group, and sample size not big enough to answer the research questions.

5.0 PROBLEMS OF DATA COLLECTION
Problems in data collection are: missing data due to incomplete coverage, loss of information due to censoring and loss to follow-up, poor documentation of data collection, and methods of data collection inappropriate to the study design.

6.0 PROBLEMS OF DATA ANALYSIS
Problems of data analysis are failures in the following: stating type of hypothesis testing (p value or confidence interval), use of the wrong statistical tests, drawing inappropriate conclusions, use of parametric tests for non-normal data, multiple comparisons or multiple significance testing, assessment of errors, assessment of normality of data, using appropriate scales and tests, using the wrong statistical formula, and confusing continuous and discrete scales.

7.0 PROBLEMS OF THE RESULTS SECTION
Problems in reporting results are: selective reporting of favorable results, numerators without denominator, inappropriate denominators, numbers that do not add up, tables not labeled properly or completely, numerical inconsistency (rounding, decimals, and units), stating results as mean +/- 2SD for non-normal data, stating p values as inequalities instead of the exact values, missing degrees of freedom and confidence limits.

8.0 PROBLEMS OF THE CONCLUSION
Problems of the conclusion include: merely repeating the results section, failure to discuss the consistency of the conclusions with the data and the hypothesis, extrapolation beyond the data, failure to discuss the shortcomings and limitations of the study, failure to evaluate the statistical conclusions in view of testing errors, and failure to assess bias (misclassification, selection, and confounding), precision (lack of random error), and validity (lack of systematic error).

Internal validity is achieved when the study is internally consistent and the results and conclusions reflect the data. External validity is generalizability (i.e. how far the findings of the present study can be applied to other situations) and is supported by several independent studies showing the same result. Inability to detect the outcome of interest may be due to an insufficient period of follow-up, an inadequate sample size, or inadequate power.

9.0 ABUSE or MISUSE OF STATISTICS
Statistics can be abused by incomplete and inaccurate documentation of results as well as by selecting a favorable rate and ignoring unfavorable ones. This is done by 'playing' with either the numerator or the denominator. The scales of numerators and denominators can be made artificially wider or narrower, giving false and misleading impressions.
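A hypothetical numerical illustration of 'playing' with the denominator: the same count of events can be made to look small or large depending on which denominator is reported. The figures below are invented for illustration.

```python
# The same 20 infections look very different depending on the denominator chosen.
infections = 20

admitted_patients = 400   # all admissions (hypothetical)
at_risk_patients  = 100   # only patients who had the relevant procedure (hypothetical)

rate_all     = infections / admitted_patients * 100   # 5.0 per 100 admissions
rate_at_risk = infections / at_risk_patients  * 100   # 20.0 per 100 at-risk patients

print(f"{rate_all:.1f}% of all admissions vs {rate_at_risk:.1f}% of at-risk patients")
```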

Statistical results are misleading in the following situations: (a) violation of the principle of parsimony, (b) a study objective that is unclear and not reflected in the study hypothesis, (c) fuzzy, inconsistent, and subjective definitions (of cases, non-cases, the exposed, the non-exposed, comparison groups, exposure, and method of measurement), and (d) incomplete information on response rates and missing data.



1004 BACKGROUND TO MAIN PUBLIC HEALTH CONCERNS REQUIRING USE OF EPIDEMIOLOGICAL EVIDENCE FOR DECISION MAKING
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 INTRODUCTION
This paper provides a background on the major areas of decision making in public health as a basis for discussion on how to use epidemiological evidence in making decisions.

2.0 HEALTH ECONOMICS
Health economics, an integration of medicine and economics, is the application of micro-economic tools to health. Economic concepts used in health economics are scarcity, production, efficiency, effectiveness, efficacy, utility, need, want, demand & supply, elasticity, input-output, competition, marginal values (marginal costs and marginal benefits), diagnosis related groups (DRGs), service capacity, utilization, equity, the value of money (present and future), and compounding & discounting. Economic analysis uses models and hypothesis testing. The assumption of a free/competitive market is not always true in health care because of supplier-induced demand and government regulation. Measurements of production costs, health outcomes, the value of human life, and the quality of life are still controversial. The purpose of economic analysis is to evaluate projects using cost minimization, cost-benefit analysis, cost-effectiveness analysis, and cost-utility analysis. It also plays a role in decision analysis.
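As an illustration of discounting and cost-effectiveness (not taken from the lecture), the sketch below discounts a hypothetical stream of programme costs at an assumed 3% annual rate and divides by a hypothetical measure of health gain.

```python
# Hedged sketch of discounting and a simple cost-effectiveness ratio.
# All figures and the 3% discount rate are hypothetical.
def present_value(future_cost: float, rate: float, years: int) -> float:
    """Discount a future cost back to today's value: PV = FV / (1 + r)**t."""
    return future_cost / (1 + rate) ** years

programme_costs = [100_000, 50_000, 50_000]   # costs in years 0, 1, 2 (hypothetical)
discounted_cost = sum(present_value(c, 0.03, t) for t, c in enumerate(programme_costs))

life_years_gained = 120                       # hypothetical health outcome
print(f"total discounted cost: {discounted_cost:,.0f}")
print(f"cost per life-year gained: {discounted_cost / life_years_gained:,.0f}")
```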

3.0 HEALTH POLICY
Health policies are framed within a context of 4 contrasting alternatives that remain unresolved: prevention vs. cure, health promotion vs. disease prevention, primary care vs. specialty practice, and physician decision making vs. joint physician-patient decision making. Health policy is formulated to achieve specific objectives: ensuring an adequate supply of services, ensuring accessibility of services, assuring equity, assuring technical and economic efficiency, assuring quality, and controlling costs. The main outstanding issues in health policy are: justice, needs (needs assessment, unmet needs, and prioritization of needs), rationing of health care, centralization and decentralization, maximizing benefits, access and coverage, quality of care, and cost considerations. Health policy is regulated by laws and regulations. Public health laws have 5 functions: prohibition of injurious behaviour, authorization of services, allocation of resources, financing arrangements, and surveillance over the quality of care. Health policy varies by country, and there are variations within the same country.

4.0 HEALTH PLANNING
Planning is a circular process that includes: situation analysis, prioritization, goal and objective definition, choice of strategies, and evaluation. Objectives of health planning can be universal, national, or local. Rational planning is based on analysis of data, defining objectives, and formulating plans to achieve those objectives. In incremental planning, plans evolve as problems arise and solutions are found for them. Mixed scanning is a judicious mixture of rational and incremental planning. The methodology of planning is defined by answering questions about the planning: how (planning techniques), who (planning by specialists, by the community, or by both), when (long term or short term), and where (place and whether centralized or decentralized). Planning can be for manpower, facilities, or services. Planning for a new program proceeds by identifying a problem, defining the problem, understanding the problem, planning an intervention, and evaluating the intervention. Needs assessment and prioritization are necessary preliminary steps in planning. Indicators of need are morbidity, mortality, and social deprivation. Needs assessment proceeds through the following steps: determining present health status, assessing the environment, identifying and prioritizing existing programs, assessing service deficits in light of existing programs, dealing with the problems, and validating the needs. Prioritization considers: the extent and seriousness of the problem, availability of an effective cure or prevention, appropriateness and efficiency of the cure or prevention, and whether intervention will be at the level of the individual or the level of the community. The goals of public health intervention include: raising awareness of the health problem, increasing knowledge and health skills, changing attitudes and behavior, increasing access to health care, reducing risk, and finally improving health status. Public health interventions include: behavioral modification, environmental control, legislation, social engineering, biological measures, and screening for early detection and treatment of disease.
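One common way to operationalize such prioritization, though not specified in the text, is a weighted scoring exercise: each candidate problem is scored against the criteria listed above and ranked by a weighted sum. The weights, problems, and scores in the sketch below are entirely hypothetical.

```python
# Hypothetical weighted-scoring sketch for prioritizing public health problems.
criteria_weights = {"extent": 0.3, "seriousness": 0.3,
                    "effective_intervention": 0.2, "efficiency": 0.2}

problems = {
    "road injuries":   {"extent": 7, "seriousness": 9, "effective_intervention": 6, "efficiency": 5},
    "seasonal flu":    {"extent": 9, "seriousness": 4, "effective_intervention": 8, "efficiency": 8},
    "rare disorder X": {"extent": 2, "seriousness": 8, "effective_intervention": 3, "efficiency": 2},
}

def priority_score(scores: dict) -> float:
    """Weighted sum of the criterion scores for one problem."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in sorted(problems.items(), key=lambda item: priority_score(item[1]), reverse=True):
    print(f"{name}: priority score {priority_score(scores):.1f}")
```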

5.0 HEALTH CARE FINANCING
Expenditure on health is rising in all countries due to higher demand for care, higher wages of health care workers, and more sophisticated and expensive medical technology. Traditional cost containment strategies included insurance deductibles, co-payments, and exclusion of certain services from coverage. The newer cost containment procedures are prospective payments based on DRGs and managed care (pre-admission review and certification, emergency room review, concurrent reviews to reduce hospital stay, discharge planning, second opinions, and gatekeepers to check referrals to specialists). HMOs have built-in incentives to control costs by curtailing hospitalization. Reimbursements based on DRGs control what care providers can charge for given services. The zeal to control costs can lead to inadequate or inappropriate care and decreased access and equity. Health care finance can come from general taxation, social insurance, direct payments, and private insurance. Outstanding issues in health care financing are distributive justice (access, affordability, and quality) and allocation priorities (rural vs. urban, curative vs. preventive medicine, administrative costs vs. actual care).

6.0 HEALTH CARE DELIVERY
The health care system can be described in terms of resources, organization, and management. It is assessed according to availability, adequacy, accessibility, acceptability, appropriateness, assessability, accountability, completeness, comprehensiveness, and continuity. It consists of institutions, human resources, information systems, finance, management and organization, environmental support, and service delivery. Its nature is determined by demographic, cultural, political, social, and economic factors. Health care services include: preventive care, primary care, secondary care, tertiary care, restorative care, and continuing health care (for the elderly). Health care delivery systems can be classified by ownership (for-profit or not-for-profit, government or private), method of funding (public taxation, direct payment, or insurance), type of care (western, alternative, or traditional), and level of care (primary, secondary, or tertiary). Modes of health care delivery can be the physician office, a health maintenance organization (HMO), a preferred provider organization (PPO), or an ambulatory care center. Health care personnel are classified as independent providers (physicians), limited care providers (e.g. dentists), nurses, allied health professionals, and public health professionals. Health care facilities are physician offices, hospitals, nursing homes, out-patient (ambulatory) services, emergency room services, health maintenance organizations (HMOs), rehabilitation centers, and continuing care facilities.

Primary health care (PHC) was defined by the World Health Organization in 1978 as essential health services universally accessible to individuals and families in the community by means acceptable to them, through their full participation, and at a cost that the community and the country can afford. It is the frontline or point of entry of an individual into the health care system. It is centered on the individual and not the organ system or disease. It is provided at physician offices, clinics, and other patient facilities. It is comprehensive care for common diseases including prevention, screening, diagnosis, and treatment. It rests on 8 elements: health education, food supply and proper nutrition, safe water and basic sanitation, maternal and child health services including family planning, immunization against major infectious diseases, prevention and control of locally endemic diseases, appropriate treatment of common diseases and injuries, and provision of essential drugs.

Health promotion refers to activities that improve personal and public health such as health education, health protection, risk factor detection, health enhancement, and health maintenance. Health protection includes accident prevention, occupational safety and health, environmental health, food and drug safety, and oral health.

Overall health status is assessed using mortality statistics, life expectancy, years of potential life lost (YPLL), disability-adjusted life years (DALYs), and the results of nutritional and health surveys.
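As a small worked example of one of these measures (not from the lecture), years of potential life lost (YPLL) sums, over all deaths, the years remaining to a chosen reference age; the ages at death below are hypothetical and 75 is used as an assumed reference age.

```python
# Illustrative YPLL calculation with hypothetical data.
REFERENCE_AGE = 75                     # commonly used, but the choice varies

ages_at_death = [45, 62, 81, 30, 70]   # hypothetical deaths in a population
ypll = sum(max(0, REFERENCE_AGE - age) for age in ages_at_death)
print(f"YPLL before age {REFERENCE_AGE}: {ypll} years")
```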

Program evaluation is the study of effectiveness, outcomes, efficiency, goals, and impact. Process evaluation is evaluation of the processes involved in health care without reference to the output. Outcome evaluation focuses on results. The following are used as outcome measures: mortality, morbidity, patient satisfaction, quality of life, degree of disability or dependency, and any other specific end-points.

Quality assurance (QA) is the formal and systematic identification, monitoring, and overcoming of problems in health care delivery. Quality indicators are mortality, morbidity, patient satisfaction, and various rates. Consensus guidelines, Good Clinical Practice guidelines, clinical protocols, and nursing guidelines are a benchmark against which clinical performance can be evaluated. QA review may be concurrent or retrospective. The QA reviewers may be independent clinical auditors from outside or may be part of the health care team. QA in hospitals centers on review of patient charts. The aim of QA review is to ascertain compliance with the given guidelines. If a deviation is found, it and its surrounding circumstances are documented and discussed at the departmental QA committee. The committee suggests actions to be taken to alleviate the deficiency and maps out an implementation plan. The QA review process is cyclical.


1004 DECISION MAKING and PROBLEM-SOLVING
Presented at a workshop on evidence-based decision making organized by the Ministry of Health Kingdom of Saudi Arabia Riyadh 24-26 April 2010 by Professor Omar Hasan Kasule MB ChB (MUK), MPH (Harvard), DrPH (Harvard) Professor of Epidemiology and Bioethics Faculty of Medicine King Fahd Medical College. omarkasule@yahoo.com

1.0 PRINCIPLES OF DECISION MAKING
Leaders and managers make strategic, tactical, or operational decisions. Programmed decisions are routine procedures; un-programmed decisions are creative, innovative, and risky. Major decisions are best taken incrementally. Certainty is better than uncertainty. Easier alternatives are preferred. Discussion improves decision making. The best decisions address needs and not wants, and they require time. Scientific decisions are better than hasty or prevaricating decisions. Decision making can be rational and systematic, intuitive, or mathematical/statistical. Consensus is better than majority decisions. A decision by a competent individual is better than that of a majority of average individuals. The worst decision is that by an average individual.

2.0 PROCESS OF DECISION-MAKING
All possibilities are considered and the larger picture is visualized while putting the decision in its proper context. Review of previous related decisions helps. An assessment is made of whether a decision is necessary. A bad decision is stopped before a better one is made. The degree of risk and uncertainty must be known. The present decision must be related to others. Biases are acknowledged. Implementability must be considered. The issues must be classified by importance and urgency. Assumptions and forecasts are made. Available resources are considered. Decision alternatives are generated and the best alternative is selected. The future impact of the decision is analyzed. Then istikhara is carried out before the decision is implemented. A bad decision should be changed sooner rather than later.

3.0 PRINCIPLES OF PROBLEM-SOLVING
A problem exists if reality is different from what is expected. Problems should be identified early, in the lag time between cause and consequence. Problems are challenges and opportunities that should be approached with an open mind and viewed holistically. Problems are solved and not shifted around. It is better to leave a problem unsolved if the consequences of the 'best' solution are worse than the original problem. An optimal solution will produce maximum effect from minimum effort. Cumulative experience cannot solve all problems. Fixed, tested procedures solve routine and emergency problems but not creative new problems. Decision audit is educative. 'Best' is not synonymous with the simplest solution. Quality solutions can be arrived at by generating many alternatives and selecting the best, illustrating the rule that quality comes from quantity.

4.0 PROCESS OF PROBLEM-SOLVING
Problem solving requires realistic appraisal, seeing problems as challenges and opportunities, open-mindedness, toleration of alternatives, the realization that 'different' is not 'wrong', encouraging 'strange' ideas, combining and extending ideas, creativity, and persistence. Stages of rational systematic problem-solving are: analysis of the environment, recognition of the problem, identification of the problem, determination of the ownership of the problem, definition of the problem, classification of the problem, prioritizing the problem, collection of information, making assumptions and forecasts, generating decision alternatives, a pause during the incubation period that leads to illumination, selection of the best alternative, analysis of the impact of the chosen alternative, implementation, control of the implementation, and evaluation of the results. Barriers to effective problem solving are wrong concepts, attitudes, behaviors, questions, and methods. When you have an overwhelming problem, talk to someone who can listen. De-emotionalize the problem. Look at the problem from a wider perspective. Identify positives in the problem. Solve the problem systematically. Do not escape or avoid the problem, do nothing, scream, resort to self-anesthesia, or lament.

5.0 MANAGEMENT OF CRISES
A crisis is a situation of major change with potential risk. A crisis, whether preventable or not, is always waiting to happen. Crises are opportunities for creative problem solving. A crisis is a fluid, dynamic, and fast-moving condition that is associated with fear and interferes with normal life. It goes through stages: prodrome, acute crisis, chronic crisis, and crisis resolution, with ripple effects. Crisis management reveals organizational weaknesses and strengths. Strong organizations have mechanisms to forecast crises, contingency plans, and worked-out worst-case scenarios. They can detect prodromal signs before a crisis. Crisis management involves reversing prodromal signs and intervening to deal with after-effects. The crisis intervention strategy includes identification of the crisis, isolation of the crisis, and management of the crisis. Decision making in a crisis is stressful.


