Ding, X., Hu, P.J., Verma, R. and Wardell, D.G. (2010). "The Impact of Service System Design and Flow Experience on Customer Satisfaction in Online Financial Services," Journal of Service Research, forthcoming (scheduled to appear in the February 2010 issue).
Victorino, L., Verma, R. and Wardell, D.G. (2008). "Service Scripting: A Customer's Perspective of Quality and Performance," Cornell Center for Hospitality Research Managerial Report, 8, 20, 4-13.
Tsai, W. and Wardell, D.G. (2006). "Creating Individualized Data Sets for Student Exercises Using Microsoft Excel and Visual Basic," INFORMS Transactions on Education, 7, 1, http://archive.ite.journal.informs.org/Vol7No1/TsaiWardell2/.
Tsai, W. and Wardell, D.G. (2006). "An Interactive Excel VBA Example for Teaching Statistics," INFORMS Transactions on Education, 7, 1, http://archive.ite.journal.informs.org/Vol7No1/TsaiWardell/.
Ding, X., Wardell, D.G. and Verma, R. (2006). "An Assessment of Statistical Process Control-Based Approaches for Charting Student Evaluation Scores," Decision Sciences Journal of Innovative Education, 4, 2, 259-272.
Chesteen, S., Helgheim, B., Randall, T. and Wardell, D.G. (2005). "Comparing Quality of Care in Non-Profit and For-Profit Nursing Homes: A Process Perspective," Journal of Operations Management, 23, 2, 229-242.
Pullman, M., Moore, W. and Wardell, D.G. "A Comparison of Quality Function Deployment and Conjoint Analysis in New Product Design," Journal of Product Innovation Management, to appear.
Wardell, D.G. (1997). "Small Sample Interval Estimation of Bernoulli and Poisson Parameters," The American Statistician, 51, 4, 321-325.
Wardell, D.G. and Candia, M.R. (1996). "Statistical Process Monitoring of Customer Satisfaction Survey Data," Quality Management Journal, 3, 4, 36-50.
Wardell, D.G., Moskowitz, H. and Plante, R.D. (1994). "Run Length Distributions of Residual Control Charts for Autocorrelated Processes," Journal of Quality Technology, 26, 4, 308-317.
Wardell, D.G., Moskowitz, H. and Plante, R.D. (1994). "Run Length Distributions of Special-Cause Control Charts for Correlated Processes," Technometrics, 36, 1, 3-17.
Plante, R.D. and Wardell, D.G. (1994). "The Use of Run Length Distributions of Statistical Process Control Charts to Detect False Alarms," Production and Operations Management, 3, 3, 217-239.
Wardell, D.G., Moskowitz, H. and Plante, R.D. (1992). "Control Charts in the Presence of Data Correlation," Management Science, 38, 8, 1084-1105.
The Impact of Service System Design and Flow Experience on Customer Satisfaction in Online Financial Services
Prior research examines customer satisfaction in retailing and e-commerce settings, yet online financial services have received little research attention. To understand customer satisfaction with this fast-growing service, we investigate the role of flow experience, a sensation that occurs as a result of significant cognitive involvement. We examine how service system characteristics affect the cognitive states of the flow experience, which determines customer satisfaction. The flow construct and total experience design suggest a structural model that we empirically test using responses from a large sample of online investors. In support of the model and most of the hypotheses it suggests, our empirical results clarify the important antecedents and consequence of flow experience in online financial services and suggest the viability of using a dual-layer experience construct to investigate customer satisfaction. Our findings can help researchers and service providers understand when, where, and how flow experience is formulated in online financial services.
KEYWORDS: Satisfaction, service system, flow experience, online financial service
Creating Individualized Data Sets for Student Exercises Using Microsoft Excel and Visual Basic
In this paper we describe an approach that uses Excel macros to help OR/MS
instructors enhance and assess their students' learning. We begin by explaining
a specific macro, Data Generator, which helps statistics instructors create
assignment questions in which the numbers or data sets are randomly generated
according to specifications provided by the instructor. Because our particular
macro has a potentially narrow range of application, we also describe an approach
that can be used for more general questions. For the most part, this more general
approach requires simply recording a macro and then pasting a few additional
instructions into the resulting code to make the recorded macro more user-friendly.
We also suggest a procedure to help instructors grade the students' individualized exercises.
KEYWORDS: Spreadsheets, Student Assessment, Education Customization, Computer-based
Testing, Algorithmic-styled Exam Generation
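The paper's tool is implemented as an Excel/VBA macro. Purely as an illustration of the same seeding idea, here is a minimal Python sketch; the function name, distribution, and parameter values are all hypothetical and not taken from the paper:

```python
import random

def make_student_dataset(student_id, n=30, mean=100.0, sd=15.0):
    """Generate an individualized data set for one student.

    Seeding the generator with the student ID gives every student
    different numbers while letting the instructor regenerate the
    exact same data set later for grading."""
    rng = random.Random(student_id)  # deterministic per student
    return [round(rng.gauss(mean, sd), 2) for _ in range(n)]

# Two students receive different data; regeneration is exact.
data_a = make_student_dataset(12345)
data_b = make_student_dataset(67890)
```

Because the data are a deterministic function of the student ID, grading reduces to regenerating each student's data set and recomputing the answer key.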
An Interactive Excel VBA Example for Teaching Statistics
It is often challenging for business students to learn abstract statistical
concepts and apply these concepts to their work. Three concepts in particular
that we have found difficult to communicate effectively are the Central Limit
Theorem, interval estimation and hypothesis testing. To improve the effectiveness
of teaching these fundamental statistical concepts, we developed a Visual Basic
for Applications (VBA) driven Excel spreadsheet that is built around one simple
business scenario. The scenario involves setting the filling speed in a cereal
filling plant. The faster the filling speed, the larger the variation in cereal
box weights and the higher the chance of having an out-of-control filling process.
On the other hand, the lower the filling speed, the less efficient the plant
is at utilizing capacity. Through interactively finding the optimal filling
speed, students are exposed to these key statistics concepts as well as random
sampling techniques. Hence, we integrate the illustration of three important
statistical concepts in one simple yet practical business scenario. Moreover,
the Excel VBA-driven example demonstrates several Excel statistical formulae
that are useful to business students. We administered an in-class open-book quiz
to two sections of professional MBA students to assess the teaching effectiveness
of this interactive example. The results showed that the scores of students using
the interactive VBA demo were superior to those of students exposed to more traditional
techniques at the 10% significance level. A follow-up online feedback survey further
supported the use of the Excel VBA-driven example in enhancing student learning.
KEY WORDS: Excel VBA Macro, Visualize Statistics Concepts,
Central Limit Theorem, Confidence Interval, Hypothesis Testing, Random Sampling
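The paper's demo is an Excel VBA workbook. As a rough sketch of the underlying Central Limit Theorem point in Python — the target weight and the mapping from filling speed to weight variation are hypothetical choices, not values from the paper:

```python
import random
import statistics

def simulate_sample_means(fill_sd, n_boxes=25, n_samples=2000,
                          target=500.0, seed=1):
    """Simulate the cereal-filling scenario: a faster filling speed
    means a larger box-weight standard deviation (fill_sd). Each
    sample averages n_boxes weights; by the Central Limit Theorem
    the sample means cluster around the target with standard error
    fill_sd / sqrt(n_boxes)."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.gauss(target, fill_sd)
                             for _ in range(n_boxes))
            for _ in range(n_samples)]

means = simulate_sample_means(fill_sd=8.0)
se = statistics.stdev(means)  # close to 8 / sqrt(25) = 1.6
```

Plotting a histogram of `means` for increasing `fill_sd` makes the speed/variation tradeoff in the scenario visible.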
An Assessment of Statistical Process Control-Based Approaches for Charting Student Evaluation Scores
We compare three control charts for monitoring data from student
evaluations of teaching (SET) with the goal of improving student satisfaction
with teaching performance. The two charts that we propose are a modified p
chart and a z-score chart. We show that these charts overcome some of the
shortcomings of the more traditional charts for analyzing SET data. A comparison
of three charts (an individuals chart, the modified p chart and the
z-score chart) reveals that the modified p chart is the best approach for
analyzing SET data because it utilizes distributions that are appropriate
for categorical data, and its interpretation is more straightforward. We conclude
that administrators and faculty alike can benefit by using the modified p
chart to monitor and improve teaching performance as measured by student evaluations.
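As a baseline illustration only — the paper's modified p chart adapts this idea to categorical SET data, and the formulas below are textbook defaults rather than the authors' method — a standard p chart's three-sigma limits can be computed as:

```python
import math

def p_chart_limits(p_bar, n):
    """Three-sigma limits for a standard p chart monitoring the
    fraction of 'dissatisfied' ratings in a section of n responses.
    p_bar is the historical average fraction. (The paper's modified
    p chart builds on this; shown here only as the textbook baseline.)"""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)  # clip at the feasible range
    ucl = min(1.0, p_bar + 3 * sigma)
    return lcl, ucl

lcl, ucl = p_chart_limits(p_bar=0.10, n=50)
```

A section whose dissatisfied fraction exceeds `ucl` would be flagged as a signal worth investigating rather than dismissed as noise.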
Comparing quality of care in non-profit and for-profit nursing homes: A process perspective
Using data from a sample of nursing homes, this paper takes an
operations-centric approach to test the economic hypothesis asserting that
quality in non-profit healthcare entities will exceed quality in for-profit
counterparts (Arrow 1963). For-profit healthcare entities face an inherent
conflict between providing profits to investors and health welfare to patients.
Thus, non-profit entities exist as a signal and evidence of higher quality
services. To date, research examining differences in quality between for-profit
and non-profit nursing homes has focused on a direct link between outcome
quality and non-profit status. These studies have produced inconclusive or
mixed results. We argue that non-profit or for-profit status and outcome quality
are linked via two intermediate factors, namely process quality and input
quality. Consistent with many prior studies, we report no direct link between
non-profit status and outcome quality. However, we report that process quality
is indeed higher at non-profit nursing homes than for-profit nursing homes,
but that input quality is lower. We also examine the association of outcome
quality with process and input quality. We report that different aspects of
process quality are tied to better outcome quality, but report several notable
exceptions. This research provides support for Arrow's hypothesis at the process
level and gives insights into the link between process quality, input quality
and outcome quality in the nursing home environment.
KEYWORDS: Process quality, hierarchical linear models,
A Comparison of Quality Function Deployment and Conjoint Analysis in New Product Design
We compare two product design approaches, quality function
deployment (QFD) and conjoint analysis, by applying each to the design of
a new all-purpose climbing harness for the beginning/intermediate ability
climber that would complement a leading manufacturer's existing product line.
While many of the optimal design features were the same under both approaches,
the differences allow us to highlight the strengths of each approach. With
conjoint analysis, it was easier to compare the most preferred features (i.e.,
ones that maximized sales) to profit maximizing features and also to develop
designs that optimize product line sales or profits. On the other hand, QFD
was able to highlight the fact that certain engineering characteristics or
design features had both positive and negative aspects. This tradeoff could
point the way to "out of the box" solutions. QFD also highlighted
the importance of starting explicitly with customer needs, regardless of which
method is used.
Rather than viewing the two methods as competitors, we see them as complementary
approaches that should be conducted simultaneously, each providing feedback to the other.
When the two approaches differed on the optimal level or importance of a feature,
it appeared that conjoint analysis better captured customers' current preferences
for product features while QFD captured what product developers thought would
best satisfy customer needs. Looking at the problem through these different
lenses provides a useful dialogue that should not be missed. QFD's ability
to generate creative or novel solutions should be combined with conjoint analysis'
ability to forecast market reaction to design changes.
Small Sample Interval Estimation of Bernoulli and Poisson Parameters
To find an interval estimate for the parameter of the Bernoulli or the Poisson distribution usually requires the sample size to be large so that the normal approximation may be used. Small sample intervals have been proposed earlier, but the procedures have required tables and are inexact. In this note we give a simple procedure for finding a small sample confidence interval with minimum interval width. We also give a geometric interpretation to the minimization problem.
KEY WORDS: Confidence Interval, Optimization, Binomial
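The note's contribution is a minimum-width small-sample interval. The sketch below instead computes the classical equal-tailed (Clopper-Pearson) exact interval for the Bernoulli case as a point of comparison; it is not the paper's procedure, which reallocates tail probability to shrink the interval further:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05, tol=1e-8):
    """Equal-tailed exact interval for a Bernoulli parameter, found by
    bisection on the binomial tails (binom_cdf is decreasing in p)."""
    def solve(target, k):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if binom_cdf(k, n, mid) > target:
                lo = mid  # root lies to the right
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(1 - alpha / 2, x - 1)
    upper = 1.0 if x == n else solve(alpha / 2, x)
    return lower, upper

lower, upper = clopper_pearson(5, 20)  # exact 95% interval, 5 of 20
```

No normal approximation is used, so the interval is valid at any sample size, at the cost of some conservatism.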
Statistical Process Monitoring of Customer Satisfaction Survey Data
Understanding and monitoring customer needs and satisfaction are crucial requirements of quality management. Many organizations collect frequent customer satisfaction data over time using customer surveys. A logical yet infrequently used means of monitoring these data is with process control charts. Control charts provide managers with a mechanism for scanning the external and internal environments for changes in customer satisfaction levels. Traditional Shewhart charts need to be modified, however, to monitor categorical or ordinal customer survey data. Using data from a major hospital, we show how blindly applying traditional Xbar charts to ordinal survey data can lead managers to erroneous conclusions about the level of customer satisfaction. We also discuss two alternative charts that are more appropriate for survey data. The first is an extension of the p chart to the case where there are more than two possible outcomes. The second chart is based on the chi-squared statistic, which is commonly used to test hypotheses when data are categorical. We illustrate the use of each chart, and compare their properties using the hospital data. The comparison shows that the chi-squared chart detects shifts faster on average than does the extension of the p chart. The chi-squared chart is somewhat difficult to compute and interpret, however, and many managers may prefer the (simpler) extended p chart.
KEY WORDS: Control Charts, Categorical Data, Goodness-of-Fit, Multinomial Distribution, Customer Focus
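The chi-squared chart's per-period statistic is the standard goodness-of-fit comparison of observed category counts against the counts expected under historical proportions. A minimal sketch — the baseline proportions and counts are hypothetical, not the hospital data:

```python
def chi_squared_stat(observed, baseline_props):
    """Chi-squared monitoring statistic for one survey period:
    compares observed counts in each satisfaction category with the
    counts expected under historical (baseline) proportions."""
    n = sum(observed)
    expected = [p * n for p in baseline_props]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical baseline from historical surveys: poor / fair / good
baseline = [0.10, 0.20, 0.70]
stat = chi_squared_stat([15, 25, 60], baseline)
```

Plotting `stat` over time against an upper limit from the chi-squared distribution (with categories minus one degrees of freedom) yields the chart; the extended p chart tracks each category's fraction instead.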
Run Length Distributions of Residual Control Charts for Autocorrelated Processes
A FORTRAN program is given to calculate the run length distribution (RLD), the average run length (ARL), and the standard deviation of the run length (SRL), for residual control charts used to monitor autocorrelated process output. RLD, ARL and SRL values are calculated for processes which can be modeled by pure Autoregressive models of order p (AR(p)), pure Moving Average models of order 1 (MA(1)), and mixed Autoregressive Moving-Average Models of orders p and 1 (ARMA(p,1)), given that the assignable cause to be detected is a step shift in the process mean.
KEY WORDS: Time Series, Control Charts, Average Run Length
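The paper's program is written in FORTRAN and computes the distributions analytically. As a crude Monte Carlo cross-check in Python, one can simulate run lengths of a residual chart for an AR(1) process under a sustained step shift; this sketch uses steady-state residuals only (it ignores the transient at the moment of the shift), and all parameter values are illustrative:

```python
import random

def residual_run_length(phi, shift, limit=3.0, max_n=100_000, rng=None):
    """One simulated run length for a residual chart on an AR(1)
    process with a sustained step shift of `shift` (in sigma_e units)
    in the process mean. In steady state the one-step-ahead residual
    is r_t = shift*(1 - phi) + e_t; the chart signals when |r_t| > limit."""
    rng = rng or random.Random()
    for t in range(1, max_n + 1):
        if abs(shift * (1 - phi) + rng.gauss(0.0, 1.0)) > limit:
            return t
    return max_n

def estimate_arl(phi, shift, reps=2000, seed=7):
    """Monte Carlo estimate of the average run length (ARL)."""
    rng = random.Random(seed)
    return sum(residual_run_length(phi, shift, rng=rng)
               for _ in range(reps)) / reps
```

Consistent with the residual-chart literature, positive autocorrelation shrinks the residual's mean shift by the factor (1 - phi) and so lengthens detection of the same step shift.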
Run Length Distributions of Special-Cause Control Charts for Correlated Processes
Recently, considerable attention has been given to the effect of data correlation on statistical process control (SPC). Use of traditional SPC methods when observations are correlated often leads to misleading conclusions as to whether or not the process is under control. We derive the run length distributions for the special-cause control chart proposed by Alwan and Roberts (1988) for applying time-series modeling in SPC, given that the assignable cause to be detected is a shift in the process mean. Run length distributions, as well as the average run length (ARL) and the standard deviation of the run length (SRL), are derived for any AR(p) process, and approximate values are derived for the more general ARMA(p,q) process. The expressions derived do not depend on the type of shift in the process mean (e.g., step shift, ramping shift). Knowledge of the run length distribution, the ARL, and the SRL for correlated processes provides information on the responsiveness of the special-cause control chart to a shift in the process mean. For the general ARMA(p,q) model, both recursive and closed form solutions are derived. Numerical results are illustrated for the ARL and the SRL of the ARMA(1,1) model for various values of the autoregressive and the moving average parameters, given that the shift in the mean is a step shift. These results show that the special-cause control chart detects shifts in the process mean more quickly when the process is negatively rather than positively autocorrelated. Moreover, the SRL is usually smaller when the process is negatively autocorrelated. However, when the process is positively autocorrelated, the ARL for the special-cause chart is relatively large. Regardless of the sign of the autocorrelation, the shape of the probability mass function of the run length shows that the probability of detecting shifts very early is substantially higher for the special-cause chart than with more traditional control charts.
Early detection makes the cause of the signal easier to identify, resulting in a more rapid rate of continuous quality improvement. However, there are some cases when traditional charts, which are simpler to implement, should be considered even when the process is autocorrelated.
The Use of Run Length Distributions of Statistical Process Control Charts to Detect False Alarms
Run-length distributions for various statistical process-control charts and techniques for computing them have recently been reported in the literature. The real advantages of knowing the run-length distribution for a process-control chart versus knowing only the associated average-run length of the chart have not been exploited. Our purpose is to use knowledge of the run-length distribution as an aid in deciding if an out-of-control signal is a true signal or merely a false alarm. The ability to distinguish between true and false signals is important, especially in operations where it is costly to investigate the causes of out-of-control conditions. Knowledge of the run-length distribution allows us to compute likelihood ratios, which are simple to calculate and to interpret, and which are used to determine the odds of obtaining an out-of-control signal at a particular run-length when a shift in the process mean has actually occurred vis-a-vis no such shift. We extend our analysis in a Bayesian sense by incorporating prior information on the distribution of the shift size of the process mean, combined with the likelihood ratio obtained from the run-length distribution, to determine if a shift larger than a critical size has occurred. We give examples for the Shewhart chart, the exponentially weighted moving-average chart, and the special-cause control chart for processes with autocorrelated observations. The examples show that the current recommended usage of the average-run length alone as a guide for determining if a signal is a false alarm or otherwise can be misleading. We also show that the performance of the traditional charts, in terms of their average-run length, can be enhanced in many instances, by using the likelihood-ratio procedure.
KEY WORDS: Statistical Process Control, Average Run Length, False Alarms
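For the simplest case — a Shewhart individuals chart, whose run lengths are geometric — the likelihood-ratio idea can be sketched directly; this is our own illustrative reduction of the approach, not the paper's general procedure or its Bayesian extension:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def signal_prob(shift, limit=3.0):
    """Per-sample probability that a Shewhart individuals chart signals,
    given a sustained mean shift of `shift` sigma."""
    return normal_tail(limit - shift) + normal_tail(limit + shift)

def run_length_likelihood_ratio(t, shift, limit=3.0):
    """Likelihood ratio for a first signal at run length t: geometric
    pmf under the shifted process over the pmf under the in-control
    process. Large values favor a true shift; small values suggest
    the signal is a false alarm."""
    p1 = signal_prob(shift, limit)
    p0 = signal_prob(0.0, limit)
    pmf = lambda p: (1 - p) ** (t - 1) * p
    return pmf(p1) / pmf(p0)
```

An early signal (small t) yields a large ratio, while a very late signal yields a ratio below one, illustrating the paper's point that the run length at which a signal occurs — not just the ARL — carries evidence about whether a shift actually happened.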
Control Charts in the Presence of Data Correlation
Traditional statistical process control charts assume that observations are independent and normally distributed about some mean. We investigate the robustness of traditional charts to data correlation when the correlation can be described by an ARMA(1,1) model. We compare the performance of the Shewhart chart and the Exponentially Weighted Moving Average (EWMA) chart to the performance of the Special-Cause Control (SCC) chart and the Common-Cause Control (CCC) chart proposed by Alwan and Roberts (1988), which are designed to account for data correlation. We also explore the possibility of putting limits on the CCC chart, in order to predict quality abnormalities. The measure of performance used is the average run length (ARL). The results show that the ability of the EWMA chart to detect shifts in the process mean is quite robust to data correlation, while the corresponding individuals Shewhart chart rarely detects such shifts more quickly than the other charts. The SCC and CCC charts are shown to be preferred in most cases when a shift in the process mean exceeds 2 standard deviations. The experimental results can aid practitioners in deciding which chart would be most effective at detecting specified shifts in the process mean given the nature of their particular correlated environments. Two methodologies are utilized to explain the relative performance of the SPC charts compared: the dynamic step response function, and response surface methodology. Such methods not only facilitate a discussion of our results, but also make it possible to predict the relative performance of the charts when the process can be described by a model which is more complex than the ARMA(1,1) model.
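For reference, the EWMA chart examined here is the standard one. A minimal sketch with asymptotic control limits follows; the smoothing constant 0.2 is an illustrative choice, not a recommendation from the paper:

```python
import math

def ewma_first_signal(xs, lam=0.2, width=3.0):
    """Run an EWMA chart over observations xs (in sigma units, mean
    zero in control) using asymptotic control limits
    +/- width * sqrt(lam / (2 - lam)). Returns the 1-based index of
    the first signal, or None if the chart never signals."""
    limit = width * math.sqrt(lam / (2.0 - lam))
    z = 0.0
    for t, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z  # exponentially weighted average
        if abs(z) > limit:
            return t
    return None
```

With lam = 0.2 and width = 3 the limit works out to 1.0 in sigma units, so a sustained 2-sigma shift is flagged within a few observations, which is the kind of responsiveness the robustness comparison in the paper turns on.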
Last updated Jan. 14, 1998