Chapter2_Literature - [email protected]



Chapter Two

LITERATURE REVIEW

2.0 Introduction

The intent of the literature review is to identify and discuss a theoretical framework that can be used as a foundation for the development of a method for measuring web-based library service quality, and to explain the concepts surrounding the phenomenon. This study attempts to develop a scale to assess web-based library service quality in academic libraries using a mixed-method research design and a scale development methodology adapted from Churchill (1979) and DeVellis (2003). It further examines a conceptual model for web-based library service quality assessment incorporating customer satisfaction, service value and customer loyalty.

This chapter reviews the literature related to the current study. Conceptual definitions and background theories on service quality are obtained from textbooks on service quality and from research articles in various databases. Since the concept of service quality is rooted in the business and marketing literature, the researcher relied heavily on the rich literature in these service settings. Searching the terms ‘service quality’, ‘library service evaluation’, ‘e-service’, ‘SERVQUAL’ and ‘LibQUAL’ revealed a rich source of research articles, mainly in the following journals: Library & Information Science Research, portal: Libraries & The Academy, Journal of Academic Librarianship, Library Trends, Library Reviews, Reference Services Review, Information Research, Library Hi Tech, Journal of Services Research, Journal of Marketing, Journal of Services Marketing, Managing Service Quality and Journal of Marketing Research. The main databases used to extract these sources were Emerald Intelligence; Science Direct; Library, Information Science & Technology Abstracts; Expanded Academic ASAP Plus; Library Literature & Information Science Full Text; and Library and Information Science Abstracts (LISA). Digital Dissertations (UMI) contained about 179 doctoral dissertations on the subject of service quality from the last 10 years, of which only 8 were in Library and Information Science.

The chapter begins with a brief introduction to web-based library services in academic libraries. This is followed by a review of the literature on the conceptualization and measurement of the service quality construct and an overview of several distinct models applied in service quality research. These models are then critiqued to identify gaps in service quality research that justify and direct this study.

Subsequently, the third section presents a discussion of library service quality assessments adapted and adopted by LIS researchers and practitioners. The fourth section examines issues in the conceptualization of electronic service quality and the development of various quality models in both the marketing and LIS literature. The following section discusses empirical issues in the relationships between service quality, customer satisfaction, service value and customer loyalty.

Finally, the literature review is summarized in relation to the research questions and a proposed conceptual model for web-based library service quality is presented.

2.1 Web-based Library Services

Library services have mainly been described as services which facilitate the use of materials and information made available at a library, and which normally involve interaction between the user and the librarian (Edwards & Browne, 1995). Typical examples in the past have been the reference and information desk, reader education programs, interlibrary loan and bibliographic search services. Over the last two decades, however, advancements in information technology have had a great impact on library services. In the modern library, technology is being used to introduce many new services, either by delivering existing services via an electronic medium, or by developing and implementing entirely new services for the search, delivery and use of information (Poll, 2005). Examples of these modern library services include access to electronic or digital collections such as online databases, electronic journals, e-books and digitized collections, as well as other services including web portals, personalized services, online library instruction, online reference, helpdesks, online document delivery, and electronic publishing.

A common term used to differentiate these modern (new or hybrid) services from traditional library services is electronic services. The term electronic services in the library literature is defined as ‘network-based services’ (Bertot, 2003), ‘services through the Internet’ (Hernon & Calvert, 2005; Henderson, 2005), ‘web-based services’ (Li, 2006) or ‘technology mediated interaction’ (Shachaf & Oltmann, 2007). The networked environment may refer to the Intranet within a library system, or it can be expanded further to the boundless Internet. Most studies in digital library research also use the term electronic services to denote digital library services (Bertot, 2004; Goncalves, Fox, Watson, & Moreira, 2007). Table 2.1 lists some of these services as evident in the literature. A common attribute is that institutional libraries usually deliver these services through a web site accessible on the Internet, hence the description ‘web-based services’.

Table 2.1 : Electronic Library Services

Online/digital/electronic/web-based services | Authors
Access to online catalogue (regional or national) | O’Neill, Wright and Fitz (2001); Bertot (2003); Hernon & Calvert (2005); Poll (2005); Landrum, Prybutok, and Zhang (2007); Li (2006)
Access to online databases/e-journals | Bertot (2000); O’Neill, Wright and Fitz (2001); Ho (2004); Hernon & Calvert (2005); Li (2006)
Access to digitized materials | Bertot (2003); Ho (2004); Hernon & Calvert (2005); Poll (2005); Xenidou-Dervou (2006)
Reserve materials | O’Neill, Wright & Fitz (2001); Bertot (2003); Ho (2004); Hernon & Calvert (2005)
Extend dates on loaned materials | O’Neill, Wright & Fitz (2001)
Request for books and articles delivered | O’Neill, Wright & Fitz (2001)
Interlibrary loan/document delivery requests | Bertot (2003); Ho (2004); Hernon & Calvert (2005); Li (2006); Gardner, Juricek, and Zu (2008)
Links to non-library content | Bertot (2003); Hernon & Calvert (2005)
Digital reference service/virtual reference | Bertot (2000); Ho (2004); Hernon & Calvert (2005); Poll (2005); Xenidou-Dervou (2006); Kiran (2006); Li (2006); Landrum, Prybutok, and Zhang (2007)
Online query form | Hernon & Calvert (2005)
Online course materials | Hernon & Calvert (2005); Xenidou-Dervou (2006)
Online library tutorials/user training | Hernon & Calvert (2005); Xenidou-Dervou (2006); Kiran (2006)
Alert service | Hernon & Calvert (2005); Poll (2005)
Online communication/contact with librarian (email, chat, text messaging) | Hernon & Calvert (2005)
Online helpdesk (technical and topical support) | Ho (2004); Landrum, Prybutok, and Zhang (2007)
Personalized services (alerting, profile) | Ho (2004); Poll (2005)
Electronic publishing | Poll (2005)
Federated search | Xenidou-Dervou (2006)
Library portal | Xenidou-Dervou (2006)

According to Bertot and McClure (2003), though traditional and network-based library services are perhaps related and similar in function, they differ vastly because of:

i. the infrastructure required to deliver the services,
ii. the ways in which users access the services,
iii. the skills required by the users,
iv. the reach and range of the services,
v. the ways in which librarians manage the services, and
vi. the skills required within the library to deliver and access the services.

In any instance, it is important to consider that in many networked services access is sometimes based on user accessibility tools; therefore user interaction can differ quite vastly depending on user information technology (IT) skills (Bertot & McClure, 2003).

Bailin and Grafstein (2005) also argued that the basic structure of libraries has actually remained intact even in the networked environment. Libraries are still building collections, providing means of access to the collection and assisting users in accessing and using the collection. The electronic service is often part of a wider service delivery (Rowley, 2006) intended to enhance, support or bypass traditional channels. Often there is a continuum of service delivery mechanisms with different mixes of face-to-face and self-service, and different associated levels of intensity in the service relationship (Rowley, 2006). Hernon et al. (1999) believe that although the Internet and other technological applications may improve customer access to information, these alone may not help the customer use the information. Thus, reference services, document delivery and library instruction are part of the online service portfolio.

Consequently, library service assessment practices have evolved from the traditional collection-count input measures (Nitecki, 1996) to the customer-oriented evaluation that has become essential in the discipline of services marketing (Cook, 2001). Any attempt to measure web-based or electronic library service quality must be founded upon a strong understanding of the phenomenon of service quality and of what connotes service quality from the user perspective (Berry, Zeithaml, & Parasuraman, 1990; Cook, 2001).

In this study the term web-based library services refers to services accessible via the academic library’s web site, to differentiate them from purely digital library services that may be delivered by means of a digital library. A content analysis of the web sites participating in this study is presented in Chapter 4 to form the basis of the construct under study. To attempt a holistic conceptualization of web-based library service quality, the concept of service quality and its underlying theoretical foundations of assessment must first be examined.

2.2 Service Quality Literature

Service quality has its roots in the business and management field. Marketers realized that to retain customers and to support market growth, they must provide a high quality of service (Dabholkar, Shephard & Thorpe, 2000; Zeithaml, 2002). Service quality is said to be an important antecedent of consumer assessment of value, which in turn influences customer satisfaction, which then motivates loyalty (Babakus & Boller, 1992). There has been much debate as to what constitutes service quality and how its measures can be operationalized in various service industries, yet no consensus has been reached (Chowdary & Prakash, 2007). The following section traces the development of the definition of quality as it gears towards an understanding of service quality, to form the basis of the conceptualization of this construct.

2.2.1 Definitions of Quality

Various definitions of quality can be traced back to the early 1920s, when Walter Shewhart initiated the concept of statistical quality control. Two of his students, W. Edwards Deming and Joseph Juran, are perhaps better known as the gurus of the quality movement (Brophy and Coulling, 1996). Joseph Juran was the first to incorporate the humanistic environment of quality management, referred to as TQM (Total Quality Management), which he then introduced to the Japanese, who became the paradigm of the quality movement as early as the 1950s. Table 2.2 lists some of the definitions of quality and provides evidence of its conceptual evolution from being product-based to customer-centric.

Table 2.2 : Definitions of Quality

Juran (1989) | Quality is fitness for use: need-satisfying product features and freedom from deficiencies
Deming (1986) | Quality is meeting consumer needs by focusing on constant improvement in consistency and reduction in variation
Parasuraman, Zeithaml and Berry (1985) | Quality is a function of the difference between the expected and perceived performance, determined by several indicators
Brophy & Coulling (1996) | Quality is concerned with meeting the wants and needs of customers
ISO Standard 11620, Performance Indicators for Libraries | Quality is the totality of features and characteristics of a product or service that bear on the library’s ability to satisfy stated or implied needs

Scrutinizing these definitions, one is able to deduce that in the early years the focus was on product reliability and the inspection of products or goods, which gradually changed to a customer-centric quality control concept practiced through TQM. Once accepted and adopted by the Japanese, the concept of quality underwent a cultural change in management to provide a service-centric environment (Green, 2006).

Quality and its definitions can be approached and measured in a number of ways. The following are two definitions of quality that are a decade apart yet share some common elements.

Ghobadian, Speller, and Jones (1994) defined quality by classifying the concept into five broad categories:

i. Transcendent: the relationship between individual salience and perceived quality. It is not practically applicable because it does not allow determinants of quality to be defined.
ii. Product led: defined as units of goodness. Relies on the quantification of the service units of goodness or tangible attributes.
iii. Process or supply led: conformance to requirements. The focus is internal: management and control of the supply side.
iv. Customer led: fitness for purpose, satisfying customers’ requirements. Most appropriate for organizations offering high-contact, skill- and knowledge-based services, such as education.
v. Value led: the cost to produce and the price to the customer.

Ten years later, Schneider and White (2004) defined quality based on three different approaches:

a) Philosophical approach: quality is synonymous with innate excellence. People know quality when they see it but cannot define it further, meaning that quality is unmeasurable.
b) Technical approach: objective or conformance quality. Quality can be measured objectively through the investigation of defects or deviations from standards. More suitable for products that are mass produced.
c) User-based approach: the quality of the product or service is determined by the user. This definition takes the view that quality depends on the individual perceptions of customers.

Compared with Ghobadian’s, Schneider and White’s (2004) definition has a more conceptual approach, which can inform the methodology behind the operationalization of a service quality measure depending on the researcher’s approach. Since services are directly linked to the consumers of the service, the definitions of quality that best relate to service quality are those emphasizing the satisfaction of customers’ requirements as determined by the user.

An analysis of quality concepts led Brophy and Coulling (1996) to conclude that, generally, definitions of quality emphasize a link between the customer, the purpose and the product or service being received. One very important point made by them was that ‘quality can be achieved in any organization setting with any product or service…what is needed is a clear definition of what the service is intended to achieve, agreement with customers that this will meet their needs and consistent delivery’. There are two implications: first, quality does not necessarily mean ‘highest grade’; secondly, quality for one group of customers may not mean quality for another (Poll & Boekhorst, 2007).

The ISO Standard 11620 for Library Performance Indicators has defined quality as the ‘totality of features and characteristics of a product or services that bear on the library’s ability to satisfy stated or implied needs’. This definition includes all library processes, and requires the determination and definition of a set of quality criteria and performance indicators implied by the library’s goals and objectives (Derfert-Wolf, Gorski & Marcinek, 2005). These indicators are meant to assess the quality, effectiveness and usage of library resources. As for library services, only the library users, or customers, can determine the ‘ability to satisfy’. However, the level of satisfaction is not in itself a measure of the quality of the service or product; the next section, on the conceptualization of service quality, sheds some light on this.

2.2.2 Conceptualization of Service Quality

Over the years, no consensus has been reached on a universal definition of service quality, and perhaps none ever will be. This is mainly because the definition depends on the context of the service being provided: marketing, operations, industrial, education, health, and so on. Since service itself is a complex phenomenon, efforts to define service quality and its dimensions have been subject to academic debate. One of the most cited and applied concepts of service quality is that of Parasuraman, Zeithaml and Berry (1985), who put it simply as: the overall evaluation of a specific service firm that results from comparing the firm’s performance with the customer’s general expectations of how firms in that industry should perform.

To accept the above definition, one has to be clear about what is characterized as a service. Are all types of services to be judged for quality in terms of expectations and performance? In an attempt to provide a more thorough research perspective on services, Schneider and White (2004) summed up the most commonly found characteristics of services in the literature as:

i. Relative intangibility: pure services have no physical manifestation; they are essentially processes that are experienced.
ii. Relative inseparability: pure services are produced by the organization and consumed by the consumer at the same time.
iii. Relative heterogeneity: interactions between service personnel and customers can never be identical.

Schneider and White’s (2004) characteristics of services are applicable to library services, which consist of resources (information content), organization (service environment and resource delivery) and service delivered by staff (Hernon & Altman, 1996). Library users interact with the library system and the reference librarians during the delivery and consumption of information search and retrieval. Furthermore, each user may have different information needs and levels of information literacy, so the interaction with the library system or librarian will not be identical, even more so in an online environment. The inseparability characteristic of library service indicates that the quality of the service will be determined at the time the service is rendered, meaning that it will be determined by the consumers of the service and not the provider (Seay, Seaman & Cohen, 1996). Thus, the ‘customer becomes the sole judge of the quality of the service’ (Berry, et al., 1990).

In the management of library services, however, a didactic model of service is practiced. The professional librarian assumes the role of custodian of knowledge, and library collections and services are planned and developed based on these ‘professional’ choices. The librarian then assumes the role of an instructor (reference librarian) in assisting users to search, retrieve and select relevant resources. This distinction of library services from the business world, however, does not marginalize the importance of the library customer. In any circumstance, the library customer is the ultimate judge of the service, as much of the knowledge about quality comes after the service has been rendered (Seay, et al., 1996).

A distinct evolution in the conceptualization and measurement of library service quality was brought about by Danuta Nitecki. In her doctoral dissertation, she stressed that the traditional ways of examining academic library quality in terms of the size of the library’s holdings and counts of its use were becoming obsolete compared to the alternative approaches to measuring quality emerging in the business sector (cited in Nitecki, 1996). Although user studies, including studies of user information seeking behavior or needs and the identification of user information search process models, were being rigorously pursued, Nitecki revealed that these studies did not address the ‘user-based’ criteria for measuring service quality. The evaluation methods were library performance measures that librarians felt were important (Edwards & Browne, 1995) and did not fully explore user-based assessment.

The criteria applied by users in judging the quality of a service may differ from those the librarian considers to be important; thus much of the work assessing the quality of library services reflected objective quality and customer satisfaction rather than the conceptualization of service quality developed by Parasuraman, et al. (1988). Their conceptualization of service quality, which clearly hinges on the notion that ‘only customers judge quality; all other judgments are essentially irrelevant’, has been adopted in only a small number of LIS studies, evident in the works of Hernon and Altman (1996), Calvert and Hernon (1997), Quinn (1997), Cook and Thompson (2000a) and Kyrillidou and Giersch (2005).

Despite the acceptance of the service quality concept from the marketing literature, the indicators to be used to reflect the quality of library services are still not as well defined as in other industries. Hernon and Altman (1996) stress that for libraries, service quality applies to resources (information content), organization (service environment and resource delivery) and service delivered by staff. Evaluation of library service quality must rest upon a strong understanding of service quality assessment. The following section discusses the theoretical foundations upon which service quality assessments have been developed by researchers.

2.2.3 Theoretical Foundations for Service Quality Assessment

This section reviews several different perspectives on service quality, both old and new, from different conceptual and empirical approaches. There exist two main conceptualizations of service quality in the literature: one based on the disconfirmation approach (Gronroos, 1984; Parasuraman, et al., 1985, 1988) and the other on the performance-only approach (Cronin & Taylor, 1992).
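The practical difference between the two conceptualizations lies in how a quality score is computed from survey ratings. The following sketch contrasts them for a single respondent; the item labels and 7-point ratings are illustrative only, not drawn from any published instrument.

```python
# Illustrative ratings for one respondent on a 7-point scale.
# Items and values are invented for this sketch.
expectations = {"reliability": 7, "responsiveness": 6, "tangibles": 4}
perceptions  = {"reliability": 5, "responsiveness": 6, "tangibles": 5}

# Disconfirmation approach (Gronroos, 1984; Parasuraman et al., 1985, 1988):
# quality is perceived performance relative to expectations (P - E).
disconfirmation = {item: perceptions[item] - expectations[item]
                   for item in expectations}

# Performance-only approach (Cronin & Taylor, 1992, SERVPERF):
# quality is the perception score alone (P).
performance_only = dict(perceptions)

print(disconfirmation)   # negative values mean service fell short of expectations
print(performance_only)
```

Under the disconfirmation view, a high perception score can still yield a negative quality score if expectations were higher still, whereas the performance-only view ignores expectations entirely.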

(A) The Disconfirmation Approach

The Expectancy Disconfirmation Theory became the basis of earlier conceptualizations of service quality, as it had been successfully deployed in the physical goods literature. The theory stems from the satisfaction literature, which holds that consumers judge their satisfaction with a product based on their expectations about the product’s performance (Oliver, 1980). Researchers adopted the view that service quality was a measure of how well the service level delivered matched customer expectations (Parasuraman et al., 1988), as opposed to customer needs or wants. Since services are intangible, their quality is not directly observable; thus the construct is measured as perceived by the customer, strongly relating it to trust (Gummesson, 1979).

There are two main streams of research into the dimensions of service quality based on the disconfirmation model. The first is the Nordic perspective, or Gronroos’s model (1984), which uses global terms to define service quality in terms of functional quality and technical quality. The second is the American perspective of Parasuraman et al. (1988), which uses service encounter characteristics to describe service quality as an overall measure of its components or indicators.

(i). Gronroos’s Model (1982, 1984) – the Nordic Perspective

Gronroos (1979, cited in Gronroos 1984) defined the concept of perceived service quality as ‘the outcome of an evaluation process, where the consumer compares his expectations with the service he perceived he has received’. He is cited as the first author to contribute a service quality conceptual framework (Green, 2006). Gronroos (1982, 1984) based his definition on technical quality (the outcome, or ‘what’) and functional quality (the process, or ‘how’). Functional quality represents how the service is delivered; in other words, it focuses on the interaction that takes place during the service delivery. Technical quality, in contrast, refers to what the customer receives in the service encounter. Figure 2.1 depicts the Nordic Model.

Figure 2.1 : The Nordic Model (Gronroos, 1984)

According to Gronroos, corporate image is a moderating dimension for perceived and expected quality. An interesting note here is that the inclusion of corporate image is an indication of a higher-order construct, which is evident in later research. Gronroos (1988) also derived six criteria (dimensions) for experienced service quality, which are similar to the SERVQUAL typology:

i. Professionalism and skills: the knowledge and skills to solve customer problems in a professional way
ii. Attitudes and behavior: the extent to which service providers show concern and interest in solving problems in a friendly way
iii. Accessibility and flexibility: the service is easily accessible and adjustable to the demands of the customers
iv. Reliability and trustworthiness: the system keeps promises and performs in the best interest of the customer
v. Recovery: immediate steps are taken to keep the customer in control whenever something goes wrong or something unpredictable happens
vi. Reputation and credibility: operations can be trusted and give adequate value for money

It must be noted that all of these dimensions were derived from other available studies and were not empirically tested by Gronroos.

(ii). The SERVQUAL Model, (1985 – 1996) – American Perspective

The SERVQUAL model was first published in 1985 by A. Parasuraman, Valarie A. Zeithaml and Leonard L. Berry for measuring and managing service quality across a broad range of service categories. It was based on their definition of quality as the ‘difference between the expected and perceived performance’. Parasuraman et al. (1985) derived SERVQUAL from the fifth gap of the Gap Model of Service Quality, based on information from 12 focus groups of consumers in service and retailing organizations. Gap Five is described as the magnitude and direction of the gap between expected service and perceived service. The authors further conducted a qualitative study involving consumers’ views on service quality and elicited ten determinants of service quality (Table 2.3) that focused more on the ‘process’ of service delivery and not on the ‘output or technical’ quality as defined by Gronroos (1984). The scale consists of 22 pairs of statements. The first set measures the expectations of customers by asking each respondent to rate, on a 7-point scale, how essential each item is for an excellent service. The second set of 22 identical statements ascertains the respondent’s perception of the level of service given by the service provider. The difference between the rated perception and the rated expectation is calculated for each item, and the average of these scores is the SERVQUAL overall service quality score.
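The gap-score arithmetic just described can be sketched as follows. This is a minimal illustration with invented ratings for a handful of items rather than the instrument's full 22 paired statements.

```python
# One respondent's ratings on a 7-point scale for a few items.
# Values are invented; the real instrument uses 22 paired statements.
expectations = [7, 6, 7, 5, 6]   # "how essential is this for an excellent service?"
perceptions  = [5, 6, 6, 5, 7]   # "what level of service did this provider give?"

# Per-item gap score: perception minus expectation (the Gap Five direction).
gaps = [p - e for p, e in zip(perceptions, expectations)]

# Overall SERVQUAL score: the average gap across all items.
overall_sq = sum(gaps) / len(gaps)

print(gaps)        # [-2, 0, -1, 0, 1]
print(overall_sq)  # -0.4: on average, perceptions fall short of expectations
```

A negative overall score indicates a service quality shortfall even when several individual items meet or exceed expectations.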

This model has been vigorously tested and improved upon (Parasuraman et al., 1985, 1988, 1990, 1991, 1993, 1994, 2004; Zeithaml, et al., 1996; Zeithaml, Parasuraman & Malhotra, 2002; Parasuraman, et al., 2005). In 1988, the ten factors were collapsed into five dimensions: Reliability, Assurance (competence, courtesy, credibility, security), Tangibles, Empathy (access, communication, knowing the customer) and Responsiveness, better known as the R.A.T.E.R. dimensions (Table 2.4). Then in 1991, the authors refined much of the wording of the original items to focus more on customer expectations. The measure of expectations was further refined by using three side-by-side measures of adequate, desired and perceived quality. In 1994, the scale was extended to a 9-point Likert-type scale with the addition of a ‘no opinion’ option. McAlexander, Kaldenberg, and Koenig (1994) reported that the perception scores outperformed the gap scores in predictive power, as acknowledged by Parasuraman, Zeithaml, and Berry (1994). Table 2.3 summarizes the changes made to SERVQUAL in response to the criticisms raised by other researchers.

Table 2.3 : Evolution of SERVQUAL

Conceptual model of SQ: the Gap Theory Model (1985)
Dimensions: 10 determinants of service quality: Reliability, Responsiveness, Competence, Access, Courtesy, Communication, Credibility, Security, Knowing customer needs, Tangibles
Criticisms: Focused more on the process of service delivery, not the output

SERVQUAL (1988)
Dimensions: 5 R.A.T.E.R. dimensions developed through factor analysis (Reliability, Assurance, Tangibles, Empathy, Responsiveness); used a 7-point Likert-type scale
Criticisms: Use of difference scores and associated reliability not theoretically supported; wording and use of negative scores

SERVQUAL (1991)
Changes: Dropped negative wording; dropped the normative ‘should’ and replaced it with ‘would’; allocated 100 points among the 5 dimensions on a 10-point scale
Criticisms: Expectation component not fully conceptualized (Teas, 1994); weak convergent validity because factor loadings were not consistent across studies (Babakus & Boller, 1992)

SERVQUAL (1993)
Changes: Expectation component interpreted as adequate service, desired service and predicted service
Criticisms: Dimensions not retrievable; reliability and validity of difference scores questionable

SERVQUAL (1994)
Changes: Reformatted 22 items to 21 items on a 9-point Likert-type scale; included a ‘no opinion’ option; placed perception measures next to desired and adequate measures separately
Criticisms: Perception scores outperform gap scores in predicting overall evaluation (McAlexander et al., 1994); difference scores are as sound as their direct-measure counterparts, except in terms of predictive power (Parasuraman et al., 1994)

SERVQUAL (1996)
Changes: Developed a conceptual framework of both financial and behavioral consequences of service quality
Criticisms: Dimensions not retrievable; calls for the exclusion of expectations (Dabholkar, Shephard & Thorpe, 2000; Brady & Cronin, 2001)

E-S-QUAL (2005)
Changes: Modified SERVQUAL to measure e-service quality; identified four dimensions of e-service quality (Efficiency, Fulfillment, System availability, Privacy) and three dimensions for service recovery (Responsiveness, Compensation, Contact)
Criticisms: Calls for the inclusion of service recovery in the overall service quality perception (Collier & Bienstock, 2006)


The five proposed dimensions of service quality that are currently widely accepted and used across different service industries are: Reliability, Assurance, Tangibles, Empathy and Responsiveness.

Table 2.4 : SERVQUAL R.A.T.E.R. Dimensions

- Reliability: Delivering the promised performance dependably and accurately
- Assurance (combination of Competence, Courtesy, Credibility, Security): Ability of the organization’s employees to inspire trust and confidence in the organization through their knowledge and courtesy
- Tangibles: Appearances of the organization’s facilities, employees, equipment, and communication materials
- Empathy (combination of Access, Communication, Understanding the customer): Personalized attention given to customers
- Responsiveness: Willingness of the organization to provide prompt service and help customers

Reliability deals primarily with the outcome of service delivery, whilst the other four deal with the process of service delivery. The overall service quality score is calculated from the discrepancies between expectations and perceptions across the 22 attributes.
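The gap-score calculation can be sketched in a few lines (a minimal illustration with hypothetical ratings; an actual SERVQUAL administration uses all 22 paired expectation and perception items):

```python
# Illustrative SERVQUAL gap-score calculation (hypothetical ratings).
# Each list holds one respondent's 1-7 ratings for the attributes;
# only 5 of the 22 attributes are shown for brevity.

expectations = [7, 6, 7, 5, 6]   # E: what the customer expects
perceptions  = [5, 6, 4, 5, 7]   # P: what the customer perceives

# Gap score per attribute: P - E (negative gaps signal service shortfalls)
gaps = [p - e for p, e in zip(perceptions, expectations)]

# Overall service quality: mean gap across the attributes
overall_sq = sum(gaps) / len(gaps)

print(gaps)        # per-attribute gap scores
print(overall_sq)  # overall gap score
```

In practice the per-attribute gaps are averaged within each dimension first, and the perception-only (SERVPERF) variant discussed below simply drops the expectation terms.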

Figure 2.2 : SERVQUAL model [the five determinants of service quality - reliability, responsiveness, empathy, assurance, tangibles - feed into expected service and perceived service, whose comparison yields perceived service quality]

The SERVQUAL tool has since been widely accepted and used to assess service quality in retailing (Barnes & Vidgen, 2002), health care (Carman, 1990; Yang, Peterson & Cai, 2003; Kilbourne, Duffy, Duffy & Giarchi, 2005), banking (Zhou, Zhang, and Xu, 2002; Al-Hawari, Hartley & Ward, 2005), education (Ruby, 1998; Tan & Kek,


2004), information systems (Kettinger, Lee & Lee, 1995; Jiang, Klein, & Carr, 2002),

library (Edwards & Browne, 1995; Nitecki, 1996; Cook & Thompson, 2001; Landrum

& Prybutok, 2004) and other areas of service across many countries.

SERVQUAL in the Electronic Environment (E-S-QUAL)

Since SERVQUAL, to some extent, proved to be a successful instrument to

quantify customers’ global (as opposed to transaction-specific) assessment of a

company’s service quality, the founders extended it to assessment of electronic services.

Following the same conventional scale-development guidelines of Churchill (1979) and Gerbing and Anderson (1988), Parasuraman et al. (2005) developed another scale, E-S-QUAL, for measuring electronic service quality, mainly for web sites on which customers shop online. It was based on the premise that in the electronic medium there is minimal, if any, face-to-face contact. They defined electronic service quality (e-SQ) as ‘the extent to which a web site facilitates efficient and effective shopping, purchasing, and delivery.’ E-S-QUAL is a 22-item scale comprising four dimensions: Efficiency, Fulfillment, System availability and Privacy. In their study they decided that any recovery service was to be measured separately, and derived the E-RecS-QUAL scale consisting of three dimensions: Responsiveness, Compensation and Contact. The E-S-QUAL also includes a measure of perceived value (four items) and loyalty intentions (five items). This scale is discussed in detail in Section 2.4.2.


Summary of the Disconfirmation Models

The two disconfirmation models are quite similar in that they address both the service delivery process and what the customer receives. The Reliability (trustworthiness), Assurance (reputation and credibility) and Empathy (attitudes and behavior) dimensions of SERVQUAL are evident in Gronroos’s model too. However, Gronroos places more focus on accessibility and flexibility than SERVQUAL’s ‘convenient operating hours’ within the empathy dimension. Gronroos has also defined Recovery as a separate single dimension, a construct also supported by Bitner, Booms & Tetreault (1990) and Schneider & White (2001).

Though Gronroos focused on technical and functional aspects of service and

SERVQUAL mostly on functional, other researchers (Gummesson, 1992; White &

Schneider, 2000; Zeithaml, Bitner, and Gremler, 2006) continue to call for more

emphasis on the ‘Tangible’ dimension of service quality. Their contention is that the

physical facilities and surroundings in which services are delivered can impact people’s

perception of the service and feelings towards the organization.

When Parasuraman et al. (1988) developed SERVQUAL, they proposed that it be used as a ‘base’ for developing further service quality assessment tools in other service settings and industries. However, over time, issues of its dimensionality and measurement validity brought about alternative conceptualizations of service quality in different industries. The next section describes some of these alternative models for service quality assessment.


(B) Performance-only Model : Cronin and Taylor (1992)

Cronin and Taylor (1992) argued that service quality is a form of customer attitude and concluded that perception-only scores are better than the difference score between expectations and actual performance. This resulted in the development of SERVPERF, a tool that gained popularity because of its simplified measure of service quality. The tool contains the same 22 items as SERVQUAL, but uses the perception-only scores and excludes the 22 expectation scores. Studies have found SERVPERF able to explain more variance in overall service quality than SERVQUAL (Lee, Lee & Yoo, 2000) and capable of providing a more convergent and discriminant valid explanation of the service quality construct (Jain & Gupta, 2004). Even Parasuraman et al. (1994) observed that ‘…difference scores are by and large as sound as their direct-measure counterparts, except in terms of predictive power ...’. Numerous studies have adopted the performance-only measure (Dabholkar et al., 1996; Sureshchander, Chanrasekharan, and Anantharam, 2001; Janda, Trocchia & Gwinner, 2002; Gounaris, 2005; Parasuraman et al., 2005; Caro & Garcia, 2007; Wilkins, Merrilees, and Herington, 2007). These studies have empirically tested the measure using confirmatory factor analysis, supporting the use of the perception-only battery for perceived service quality. Cook’s (2000) study of service quality in academic research libraries likewise concluded that a perception-only measure is able to maintain the integrity of the perceived score. However, according to Jain and Gupta (2004), the SERVQUAL scale has superior diagnostic power to pinpoint areas for managerial intervention compared to SERVPERF.


2.2.4 Alternative Conceptualization of Service Quality

The fundamental understanding of the service quality concept is to identify its attributes based on consumer judgement (Berry, et al., 1990). Consumers or customers thus become the core input to the development of service quality models. According to Tih (2006), there are three perspectives to modeling traditional service quality: i) the single perspective, ii) the multi-level perspective and iii) the integrative perspective. The single perspective, postulated by the Nordic and SERVQUAL models, was examined in the previous section (Section 2.2.3). Following the failure of these models to consider the physical or environmental aspects in the measurement of service quality, several researchers offered alternative models.

(i). The Rust & Oliver Model, 1994

Rust and Oliver (1994) offered a three-component service quality model: the service product (technical quality), the service delivery (functional quality) and the service environment (physical ambience). The model, as shown in Figure 2.3, was not empirically tested, but support has been found for similar models in fast-food, photograph-developing, amusement-park and dry-cleaning services (Brady & Cronin, 2001) and electronic services (Fassnacht & Koese, 2006).

Figure 2.3 : Rust and Oliver’s Three-Component Model of Service Quality


This model compensates for the absence of environment quality in the previous disconfirmation models. It is also the basis of subsequent models: the multilevel and hierarchical models of service quality.

(ii). The Multilevel Model

Attempts to replicate or integrate the conceptual structure of SERVQUAL and SERVPERF in different industries led to a call for researchers not only to investigate how service quality should be measured, but also to focus specifically on examining the ‘dimensionality’ of the service quality construct (Parasuraman, et al., 1994). Among the researchers who attempted to identify new and integrated conceptualizations of service quality were Dabholkar, et al. (1996) and Brady and Cronin (2001).

Dabholkar, et al. (1996) suggested that service quality dimensions should be viewed as higher-order constructs that have various sub-dimensions. They identified and tested a hierarchical conceptualization of retail service quality that proposes three levels: i) customers’ overall perception of service quality, ii) primary service quality dimensions and iii) sub-dimensions of service quality. The model recognizes retail service as a higher-order factor that is defined by two additional levels of attributes (Brady & Cronin, 2001).

Figure 2.4 : Hierarchical Model of Service Quality [overall perception at the top, defined by primary dimensions, which are in turn defined by sub-dimensions]


Brady and Cronin (2001) identified and tested a three-factor model of service quality in eight service industries. They adopted the three-factor conceptualization of Rust and Oliver (1994), in which overall service quality is based on three dimensions: functional quality (interaction), service environment and technical quality (outcome). They then incorporated the five dimensions of service quality from SERVQUAL. Their contention was that if service quality perceptions represent a latent variable, then something specific must be reliable, responsive, empathetic, assured and tangible. They then went on to identify that ‘something’. Their study revealed that:

- Customers form service quality perceptions on the basis of their evaluation of three primary dimensions: environment, delivery and outcome

- The three primary dimensions are composed of multiple sub-dimensions

- Reliability, responsiveness and empathy are modifiers of the sub-dimensions, not direct determinants of service quality; that is, they represent how each sub-dimension should be evaluated

The proposed multi-level hierarchical model is as shown in Figure 2.5.

Figure 2.5 : Brady & Cronin’s Service Quality Model


The multilevel-hierarchical structure for service quality proposed by Brady and

Cronin has also been empirically tested in travel services (Ho, 2007); B2B services

(Gounaris, 2005); electronic services (Fassnacht & Koese, 2006) and hotel services

(Wilkins, et al., 2007).

An extensive review of 19 service quality models (1984-2003) by Nitin, Deshmukh and Vrat (2004), and Ladhari’s (2008) overview of service quality measures in 30 studies, conclude that although various other service quality models have appeared in the literature over the past twenty years, SERVQUAL continues to be used widely despite criticism of its applicability in various industries and issues with its psychometric properties. The next section discusses some of the criticisms of the above-mentioned models to justify the development of specific measurement scales in different service contexts.

2.2.5 Critique of Service Quality Models - Identifying Gaps

There has been much criticism in the literature of the theoretical and operational issues of service quality models and the corresponding measurement scales, especially the extensively applied SERVQUAL scale and its variants. Major objections relate to the use of disconfirmation theory (perception-minus-expectations gap scores), the predictive power of the instrument, the validity of the five-dimension structure, and the length of the questionnaire (Babakus & Boller, 1992; Cronin & Taylor, 1992; Teas, 1993, 1994; Buttle, 1996; Van Dyke, Prybutok & Kappelman, 1999; Dabholkar, et al., 2000; Lee, et al., 2000; Chi, Lewis and Park, 2003; Badri, Mohamed & Abdelwahab, 2005; Wilkins, et al., 2007).


(i). The Gap score (P-E) vs the Perception-only measure

Several problems have been identified pertaining to the gap score measure. The operationalization of the gap score, perception minus expectations, is a poor measure as a psychological score (Buttle, 1996; Ekinci & Riley, 1998; Van Dyke et al., 1999). Buttle (1996) claims that there is little evidence that customers assess service quality in terms of a P - E (perceptions minus expectations) gap. Furthermore, it is argued that there is ambiguity in the ‘expectation’ construct of the gaps theory underlying the SERVQUAL measure (Parasuraman et al., 1991; Babakus & Boller, 1992; Cronin & Taylor, 1992). To begin with, there is no definitive definition of ‘expectation’, and it is open to multiple interpretations that can result in measurement validity problems (Cronin & Taylor 1992; Teas, 1993, 1994; Buttle, 1996). Iacobucci et al. (1994, cited in Dabholkar et al., 2000) warn that expectations might not even exist, or be formed clearly enough to serve as a standard for evaluating a service experience, because they may form simultaneously with service consumption. There is also evidence that Cronbach’s alpha overestimates the reliability of difference scores compared to perception-only scores, especially when the component scores are highly correlated (Van Dyke et al., 1999). Furthermore, Green (2006) warns that it is imperative that the perception and expectation scores each be subjected to factor analysis, to determine whether the same factors exist and whether the measures are unidimensional. Failing to do this before subtracting expectation from perception scores may explain the failure to replicate the original five-factor structure of SERVQUAL.
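The reliability comparison at issue here can be sketched as follows: computing Cronbach’s alpha, k/(k-1) × (1 - Σ item variances / variance of the total score), for perception-only scores versus the corresponding difference scores. The data below are hypothetical and purely illustrative:

```python
# Cronbach's alpha for a set of item scores (hypothetical data).
# `items` is a list of items; each item is a list of one score per respondent.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    k = len(items)                     # number of items
    n = len(items[0])                  # number of respondents
    item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]  # total per respondent
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical perception (P) and expectation (E) ratings: 3 items x 5 respondents
P = [[5, 6, 4, 7, 5], [5, 7, 4, 6, 5], [6, 6, 5, 7, 4]]
E = [[7, 7, 6, 7, 6], [6, 7, 6, 7, 6], [7, 6, 6, 7, 5]]

# Difference (gap) scores per item
D = [[p - e for p, e in zip(pi, ei)] for pi, ei in zip(P, E)]

print(cronbach_alpha(P))  # alpha of perception-only scores
print(cronbach_alpha(D))  # alpha of difference scores
```

Whether the difference-score alpha appears inflated relative to the perception-only alpha depends on how strongly the perception and expectation components are correlated, which is the condition Van Dyke et al. (1999) identify.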

Subsequently, some researchers have argued that a performance-only measure (or direct-effect model) is superior to the gap score (Lee, et al., 2000; Page & Spreng, 2002; Cronin & Taylor, 1992; Roszkowskit, Baky & Jones, 2005; Caro & Garcia, 2007; Wilkins, 2007) because it is more reliable and explains more variance than the disconfirmation model (Teas & Wilton, 1988; Cronin & Taylor, 1992, 1994; Babakus & Boller, 1992; Brown, Churchill, and Peter, 1993; Teas, 1993; Parasuraman et al., 1994; Dabholkar et al., 2000; Landrum & Prybutok, 2004). Page & Spreng (2002) have further argued that performance is a much stronger indicator of service quality than expectations.

The common theme of these studies is that the disconfirmation approach is unnecessary; on the contrary, a perception-only measure is sufficient.

(ii). Predictive Power of the Instrument

Cronin and Taylor’s (1992) claim that ‘perceived quality’ is best conceptualized as an attitude, and that it is superior to the gap score, led the original SERVQUAL developers to examine this empirically. Similarly, Babakus and Boller (1992) suggest that ‘the difference scores do not provide any additional information beyond that already contained in the perceptions score’ because of a generalized response tendency to rate expectations high (in Buttle, 1996). Perception-only scores have been found to have better predictive value for overall service quality (Parasuraman, et al., 1988; Babakus & Boller, 1992; Cronin & Taylor, 1992, 1994; Boulding, Ajay, Staelin, and Zeithaml, 1993; Van Dyke, et al., 1999), for overall customer satisfaction used as the dependent variable (Van Dyke, et al., 1999; Gounaris, 2005; Landrum et al., 2007), and for behavioral intentions (Brown et al., 1993; Dabholkar et al., 2000). Gounaris (2005) also noted that perception could allow an understanding of service quality evaluations at the factor level, and that all dimensions are antecedents rather than components.


(iii). Components or Dimensionality of Service Quality

Another source of criticism arises from the unstable dimensionality of the

measurement scales, including SERVQUAL’s R.A.T.E.R. dimensions. Though

designed to be used as a base for service quality measure in various service settings and

industry (Parasuraman, Berry, and Zeithaml, 1993), the five R.A.T.E.R. dimensions are

often not recoverable and do not load on to factors as expected, probably due to the

scoring method (VanDyke et al., 1999). The RATER dimensions have failed to re-

emerge in library services (Nitecki, 1996; Cook 2001; Edwards & Browne, 1995;

Nitecki, 1995) and higher education (Badri, et al., 2005; Carman, 1990; Buttle, 1996,

Green, 2006; O’neill, Wright & Fitz, 2001; Gounaris & Dimitriadis, 2003).

Besides that, contrary to Gronroos’s (1984) view that both technical quality (the outcome of the service encounter) and functional quality (the process of service delivery) are factors of service quality, SERVQUAL addresses only functional quality (Cronin & Taylor, 1992; Nagata, Satoh, Gerrard & Kytomaki, 2004). Subsequently, studies based on SERVQUAL also focus on functional quality (Getty & Getty, 2003; Markovic, 2006). Santos (2003) believes that it is important to understand not only how consumers experience and evaluate service delivery, but also how the delivery contributes to the total service experience and its evaluation.

Contrary to the fundamental model underlying SERVQUAL’s five dimensions, several researchers have suggested that service quality is a hierarchical construct with primary and sub-dimensions (Dabholkar et al., 1996; Brady & Cronin, 2001; Gounaris, 2005; Fassnacht & Koese, 2006; Collier & Bienstock, 2006; Caro & Garcia, 2007; Wilkins et al., 2007; Ho, 2007). However, Ladhari (2008) found that there is little empirical evidence for this structure and that more research is needed in this area.


A common conclusion reached by researchers who have reviewed service quality studies (Seth et al., 2005; Ladhari, 2008) is that service quality outcomes and measurement depend on the type of service setting, the situation, time, and need factors. It is therefore very important that researchers describe the empirical context in which each scale is developed (Ladhari, 2008), so that when dimensions are discovered in any particular context, generalizations are made with caution.

(iv). Length of the Questionnaire

Though not a vital issue in itself, measuring both expectations and perceptions makes the instrument lengthy, particularly with the 100-point rating system used to determine the dimensions most important to customers. Lengthy questionnaires may reduce the response rate in a survey, which may affect the credibility of the findings (Hernon, 2002). The SERVPERF scale reduces the number of items to be measured by 50% and is considered more efficient (Jain & Gupta, 2004).

Summary

In response to the criticism faced by SERVQUAL, its authors have argued that one of the main reasons for the criticism was the failure of other researchers to adhere to the scale construction guidelines of Churchill (1979) and Anderson & Gerbing (1988). Most studies that adopted and adapted SERVQUAL did not follow the survey methodology completely, thus giving rise to many reliability and validity issues.

On the other hand, for practitioners, a perception-only measure means that detailed service quality studies can be made through simpler, more efficient, cross-sectional designs (Dabholkar et al., 2000). The argument that service quality is defined as an overall evaluation of service performance is similar to Parasuraman et al.’s (1985) definition of service quality: that quality is judged in terms of excellence and superiority.

As a result, though the use of SERVQUAL has been widespread, many researchers have suggested that an industry-specific measure of service quality might be more appropriate than a single generic scale (Dabholkar, 1996; Babakus & Boller, 1992; Gounaris, 2005; Caro & Garcia, 2007, cited in Ladhari, 2008).

2.3 Library Service Quality Assessment Literature

Library service assessment has largely been driven by early practices of irregular collection of ‘usage metrics’, which Lincoln (1998) criticized as a failure to foresee relationships between libraries and their users. As libraries began to adopt and connect their research to other bodies of research, more systematic methods based on marketing service quality models, mainly SERVQUAL and LibQUAL+TM (an adaptation of SERVQUAL to suit the library environment), took hold. Shi and Levy’s (2005) review of theoretical models applied in library assessment gives a valuable overview of the shift from early methods based on statistics and staff perceptions to the acceptance of user perceptions of service quality and user satisfaction as essential elements of service assessment. Surprisingly, Poll’s (2008) paper on the new edition of the IFLA handbook Measuring Quality makes no mention of service quality measurement based on user-based approaches, though she does mention that performance indicators that include electronic and web-based library services are difficult to use and do not show that users benefit from their interaction with the library.

Since the focus of this study is on service quality, the next section will begin with a discussion of the application of SERVQUAL in library assessment activities.


2.3.1 Replication of SERVQUAL in Library Assessment

It is important to review the development of library service quality assessment beginning with the adoption of the SERVQUAL model. The application of SERVQUAL in libraries began soon after its inception, in the early 1990s. Studies by Edwards and Browne (1995); Nitecki (1995, cited in Nitecki 1996); Seay, et al. (1996); Surithong (1997, in Narit & Nagata, 2003); Hernon and Nitecki (1998, 2001); Nitecki and Hernon (2000); and O’Neill, et al. (2001) are a few examples of SERVQUAL being adapted to measure service quality in a library setting.

Edwards & Browne (1995) used SERVQUAL’s five R.A.T.E.R. dimensions to explore the difference between academics and librarians in their perceptions of the quality of an information service. They suggested that the dimensions developed by Parasuraman et al. (1988) may not hold for information services in a university library. Communication and user education formed separate dimensions, and the ‘reliability’ dimension was not perceived to be as important by academics as the SERVQUAL scores predicted. Though they conclude that there is congruence between librarians and academics in what they view as characteristics of a quality information service, librarians tend to underestimate how important it is to academics that an information service performs the promised service dependably and accurately. Interestingly, they also implied that it was difficult for users to distinguish conceptually between the ‘products’ of the library and the ‘service’ attached to them.

One of the most cited studies using the SERVQUAL instrument was Nitecki’s (1995, cited in Nitecki 1996) doctoral dissertation. Reviewing the literature on SERVQUAL, Nitecki found that by 1994 it had been introduced explicitly to the library field through at least four empirical studies undertaken in public, special and academic libraries. The validity of the instrument was tested on three services in an academic library: interlibrary loan, reference and closed-reserve. Though her data supported the validity and reliability of the SERVQUAL scale, they suggested a three-factor relationship among the 22 SERVQUAL items rather than the five collapsed dimensions (R.A.T.E.R.) that Parasuraman et al. (1988) had revealed. The conceptualization of the tangibles dimension in libraries was similar to SERVQUAL, but there was overlap between reliability and responsiveness, and even between responsiveness, assurance and empathy. Nitecki did not attempt to explore further what model would better explain the dimensionality of library service quality. However, her suggestion to reconceive quality, from the traditional counts of collection size and materials use to a more psychometrically sound measure, marked the beginning of library practitioners’ and researchers’ uptake of developments in business-sector service quality assessment, and had a great impact on later research in library service quality assessment.

Another application of SERVQUAL was carried out by Seay, et al. (1996). Taking the original ten dimensions of SERVQUAL and rewording the items in each dimension, they decided that the following seven service determinants could be adapted to library services: reliability, responsiveness, assurance, access, communications, security and tangibles. They then asked library users to express their expectations for library services and coded these responses to match the seven determinants in an effort to evaluate service quality. This method of eliciting positive and negative comments via open-ended questions meant that the majority of comments concerned the ‘tangibles’ dimension, contradicting SERVQUAL’s finding that tangibles was the least important dimension. As Bitner (1990) had pointed out, physical surroundings and employee responses can influence customer reactions towards the service. The study also found that negative comments on ‘reliability’ dominated the rest of the responses, supporting Berry et al.’s (1990) findings. One drawback of this method was that it focused on customers’ negative comments and equated them with the perceived importance of a particular service attribute, whereas expectation and satisfaction have been framed as positive feelings about something. These seven determinants and their definitions were later used by Thapisa and Gamini (1999) and Ashok (2007) in evaluations of their respective university libraries’ service quality. However, neither study provided empirical data on the reliability and validity of the scale used.

In Thailand, SERVQUAL was used by Surithong (1997, in Narit & Nagata, 2003) in her doctoral dissertation to examine user expectations and perceptions of library service quality. She focused on three areas: circulation, reference and computer information services. The instrument was adapted from the one Nitecki used for academic library assessment. The study’s contribution was showing that the dimensions perceived as most important by Thai library users were similar to those of users in the United States, thus supporting the use of SERVQUAL across different cultures. However, there was no empirical testing of the validity and reliability scores of the SERVQUAL items in this study to support SERVQUAL’s suitability in academic libraries.

In the same year, Coleman, Xiao, Blair, and Chollett (1997) reported using the 22-item SERVQUAL survey at the Sterling C. Evans Library at Texas A&M University. The sample included faculty, staff, graduate students, undergraduate students and community users. They also weighted the scores using the 100-point allocation criteria. Even though the five R.A.T.E.R. dimensions were extracted, neither reliability tests nor factor analysis were carried out. The point scores identified reliability as the most important dimension, and tangibles as the least important.
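The 100-point weighting procedure mentioned above can be sketched as follows (a minimal illustration; the point allocations and mean ratings are hypothetical, not Coleman et al.’s data):

```python
# Illustrative importance-weighted dimension scoring (hypothetical data).
# Respondents allocate 100 points across the five R.A.T.E.R. dimensions;
# each dimension's mean 1-7 rating is weighted by its share of the points.

points = {"Reliability": 32, "Assurance": 19, "Tangibles": 11,
          "Empathy": 16, "Responsiveness": 22}          # must sum to 100
ratings = {"Reliability": 5.8, "Assurance": 5.2, "Tangibles": 4.9,
           "Empathy": 5.0, "Responsiveness": 5.5}       # mean perception ratings

# Weighted contribution of each dimension and the overall weighted score
weighted = {d: ratings[d] * points[d] / 100 for d in points}
overall = sum(weighted.values())

print(weighted)
print(round(overall, 3))
```

The same weights can be applied to gap scores instead of perception ratings; either way, the allocation makes dimensions such as reliability count for more than their share of items alone would.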


A series of different but related works by Hernon, Altman, Nitecki and Calvert examined library service quality by developing a basic framework for understanding and measuring service quality in academic libraries (Hernon & Altman, 1996). In an attempt to produce global dimensions for customer expectations of academic library service quality, this framework was then used as a basis for subsequent gap analysis research in different countries: New Zealand (Hernon & Calvert, 1996), Singapore (Calvert, 1998) and China (Calvert, 2001). Calvert (2001) claims that, based on the studies in America, New Zealand, Singapore and China, there is sufficient evidence that the concept of service quality may vary between countries, but that they share common core beliefs that do not change. Interestingly, in Singapore the participants did not think cultural sensitivity was an issue, because the three large ethnic groups all believed that their cultural differences were part of the way of life and did not affect staff attitudes during service delivery. This is a similar trait in Malaysia too.

Subsequently, Nitecki and Hernon (2000) studied the feasibility of developing and testing a conversion of SERVQUAL to reflect the expectations of library users and staff at Yale University Libraries. Their instrument had 40 statements (revised from a set of core service attributes developed by Hernon and Altman, 1998) to explore users’ expectations and delivery perceptions, followed by five statements reflecting the five dimensions of SERVQUAL to be rated for relative importance, and lastly an overall expectations question. As in other studies, Reliability was perceived as the most important attribute and Empathy the least important. They also concluded that the SERVQUAL dimensions failed to address the desire to be self-reliant or self-supporting, a very important characteristic of library users.


Hernon’s (2002) research on service quality expanded to include measures of customer satisfaction and outcome assessment. The inability to recover the five dimensions of SERVQUAL accurately in the library setting (Edwards & Browne, 1995; Nitecki, 1995; Andaleeb & Simmons, 1998; Hernon & Altman, 1998; Nitecki & Hernon, 2000) led to intensive research into service quality measurement by the Association of Research Libraries (ARL). A team of researchers, mainly Colleen Cook and Bruce Thompson, carried out a longitudinal study at the Texas A&M University Library for the years 1995, 1997 and 1999. Their main purpose was to determine whether SERVQUAL was a reliable and valid instrument when applied in the library context across different times and respondent groups. They used the 22-item survey as originally constructed by Parasuraman et al. (1994), with slight wording modifications to reflect the library environment. They found the reliability scores to be fairly reasonable across time and user-group variations. Their research also reported that the recovery of the R.A.T.E.R. dimensions was not supported. As in Nitecki’s (1995, cited in Nitecki 1996) study, only tangibles was distinctly identifiable. Another important finding was that most studies they examined had identified an overlap among responsiveness, assurance and empathy (named ‘demeanor’ by Andaleeb and Simmons, 1998; Landrum & Prybutok, 2003). Even Parasuraman et al. (1994) had revealed the possibility of a three-dimensional structure wherein responsiveness, assurance and empathy meld into a single factor. Cook and Thompson (2000) concluded that the underlying factor structure of SERVQUAL may not hold in the research library context, because difference scores between minimum, desired and perceived responses typically yield different factor structures from one application to another (Babakus & Boller, 1992), and stressed that direct measurement of perceptions may yield more reliable outcomes, as suggested by others (Cronin & Taylor, 1992; Van Dyke, Kappelman, and Prybutok, 1997; Zeithaml, et al., 1996).


Following up her work with ARL, Colleen Cook, in her PhD dissertation (2001),

developed a web-based total market survey tool for assessing academic library service

quality. She used the 1994 SERVQUAL instrument with a three-column side-by-side

format composed of adequate, desired and perceived quality. However, some

respondents were given only the perception-option questionnaire. Cook revealed that the

SERVQUAL constructs (R.A.T.E.R.) were affirmed in the research library context.

However, several new constructs emerged in her study: library as a place, ubiquity and

ease of access to collections, and self-reliance. This confirms that the SERVQUAL

dimensions were not adequate to measure academic library service quality. The

responsiveness, assurance and empathy factors collapsed as one factor: Affect of

Service. Reliability was the second strongest factor. Cook found that the long-form

questionnaire (3 scales: minimum, desired, perceived) and the short form (1 scale:

perceived only) are both able to maintain the integrity of the ‘perceived’ scores. Her

study resulted in producing a total-market survey for library service quality, the

LIBQUAL+TM instrument.

The development of a new scale, LIBQUAL+TM, by ARL researchers had a huge

impact on library assessment activities in the United States and Canada as libraries

began to experiment with the instrument and report successful implementation (details

in Section 2.3.2).

Meanwhile, in other parts of the world, researchers were still trying to replicate

the SERVQUAL methodology to design scales to suit library services. A group of

researchers, Nagata et al. (2004), examined dimensions of service quality in four

university libraries by building upon their previous experimental studies using the

SERVQUAL instrument. They too failed to extract the five dimensions of

SERVQUAL; in fact, they added technical quality items that were absent from


SERVQUAL (information delivery, information retrieval, service procedure, remote

service and quiet place). This was based on Bitner et al.’s (1990) and Buttle’s (1996)

observation that a majority of SERVQUAL items relate directly to the human

interaction element of service delivery (functional quality). In their subsequent study,

Nagata et al. (2004) incorporated nine items from the LIBQUAL+TM instrument and

used the three-column format – expectation, desired and perception.

Data from four universities in Europe and Japan confirmed four dimensions of service

quality: Effect of Service (Personal); Library as Ba; Collection and Access; and Effect

of Service (Organizational). These dimensions were similar to the additional dimensions

found in Cook’s dissertation study. Subsequently, Satoh, Nagata, Kytomaki, and

Gerrad (2005) conducted focus group interviews to recapture the four dimensions and

investigate if there were other dimensions. They confirmed the emergence of the four

dimensions, but with additional items relating mostly to electronic service provision –

usability of the online OPAC, databases, e-journals, and electronic access. Also, the

dimension ‘effect of service-personal’ was further expanded to include communication,

service response, and customer-first assistance.

O’Neill et al. (2001) conducted an exploratory study at an Australian university

library to examine the conceptualization of service quality in an online environment.

They designed an 18-item questionnaire based on input from two focus groups and

with reference to the SERVQUAL dimensions. Only four of the SERVQUAL dimensions

emerged, with assurance and empathy combined as a single dimension termed

‘contact’. The use of importance and performance scores allowed them to use

quadrant analysis to identify the strengths and weaknesses of the service dimensions.
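The importance-performance quadrant analysis mentioned above classifies each service item by its mean importance and mean performance relative to two cut points. The sketch below illustrates the general technique only; the item names, ratings and grand-mean cut points are hypothetical, not those used by O’Neill et al.:

```python
# Importance-performance analysis (IPA): each item is located by its mean
# importance and mean performance; the quadrant suggests a managerial action.
# Item names and scores below are hypothetical illustrations.

def ipa_quadrant(importance, performance, imp_cut, perf_cut):
    """Classify one item against the chosen importance/performance cut points."""
    if importance >= imp_cut and performance < perf_cut:
        return "Concentrate here"        # important but under-performing
    if importance >= imp_cut and performance >= perf_cut:
        return "Keep up the good work"
    if importance < imp_cut and performance < perf_cut:
        return "Low priority"
    return "Possible overkill"           # performing well on a low-priority item

items = {
    "prompt service": (6.2, 4.1),
    "staff courtesy": (5.8, 6.0),
    "visual appeal":  (3.1, 5.5),
}
# A common choice is to use the grand means as the quadrant cut points.
imp_cut = sum(i for i, _ in items.values()) / len(items)
perf_cut = sum(p for _, p in items.values()) / len(items)

for name, (imp, perf) in items.items():
    print(f"{name}: {ipa_quadrant(imp, perf, imp_cut, perf_cut)}")
```

Plotting the items on a two-by-two grid with these cut points yields the familiar IPA quadrant chart used to flag service strengths and weaknesses.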

There has been limited support for the applicability of the SERVQUAL

dimensions to library services; nevertheless, many studies continue to apply them (Narit &

Nagata, 2003; Ashok, 2007). A specific application for libraries, LIBQUAL+TM, was

developed by Thompson, Cook & Heath (2001) as a derivative of SERVQUAL. The

following section discusses the LIBQUAL+TM model in more detail.

2.3.2 Generic Measure of Library Service Quality: LibQUAL+TM

LibQUAL+TM was developed by the Association of Research Libraries (ARL) in

collaboration with several faculty members at the Texas A&M University as a web-

based tool for systematic assessment and measurement of library service quality, over

time and across institutions (Cook, Heath, Kyrillidou & Webster, 2002). It was

developed along the same conceptual and methodological framework as the

controversial but widely used SERVQUAL. According to Cook & Thompson (2000a,

2000b), SERVQUAL does not address all the important issues particularly relevant in

libraries. Methodologically, the disconfirmation theory, that customers’ perception of

quality is the difference between what they expect from a service and what they believe

they have received, guided the research design of LIBQUAL+TM. The scale asks

respondents to indicate the minimum level of acceptable service, the desired level of

service, and the perceived level of service. Gap scores are calculated between the minimum

and perceived ratings and between the desired and perceived ratings, or NA (not

applicable), for each of the 22 items. The zone of tolerance is the difference between

minimum and desired scores. Ideally, perception ratings will fall within the zone

(Thompson, Kyrillidou & Cook, 2008). The presupposition is that the service is good if

perceptions meet or exceed expectations and problematic if perceptions fall below

expectations. After several attempts to refine the scale to reduce the biases and

empirical problems, LIBQUAL+TM (2003) measures three dimensions of library service

quality:


- Affect of Service (Empathy, Responsiveness, Assurance, Reliability)

- Information Control (Scope, Timeliness, Convenience, Ease of Navigation,

Modern Equipment)

- The Library as Place (Utilitarian Space, Symbol, Refuge)
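The gap-score arithmetic described above can be sketched as follows. The item ratings are hypothetical, and the labels ‘adequacy’ and ‘superiority’ for the two gaps follow common LibQUAL+ usage rather than wording from this chapter:

```python
# Gap scores for one LibQUAL-style item, rated on a 1-9 scale.
# The ratings below are hypothetical illustrations.

def gap_scores(minimum, desired, perceived):
    """Return the adequacy gap, superiority gap, zone width, and zone flag."""
    adequacy = perceived - minimum      # positive: minimum expectation exceeded
    superiority = perceived - desired   # usually negative: desired level not reached
    tolerance = desired - minimum       # width of the zone of tolerance
    within_zone = minimum <= perceived <= desired
    return adequacy, superiority, tolerance, within_zone

# A respondent rates an item: minimum 5, desired 8, perceived 6.
adequacy, superiority, tolerance, within = gap_scores(5, 8, 6)
print(adequacy, superiority, tolerance, within)   # 1 -2 3 True
```

A perceived rating inside the zone of tolerance (as here) signals acceptable service; a negative adequacy gap would signal service below the minimum acceptable level.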

There are, respectively, nine, eight, and five items constituting these three

LIBQUAL+TM subscales. Its emphasis on user perceptions (inherent in SERVQUAL)

covers users’ experience of service delivery – its emotional impact (Affect) on them,

the extent to which service delivery is under users’ control, and the extent to which the

library as a place is comfortable for research and learning (Edgar, 2006). The scale also

includes:

- open-ended comments from users regarding library service quality

- option of selecting five additional items from a supplementary pool of 100+

items to augment the 22 core items to focus on issues of local interest

- data about the user’s frequency of use of various information resources

In the first three years since its inception in 2000, LIBQUAL+TM went through rigorous

empirical testing to improve and stabilize its dimensions. Table 2.5 shows the changes

in the dimensions and number of items in the scale.

Table 2.5: Dimensions of Library Service Quality in LIBQUAL+TM

2000 (41 items): Affect of Service; Library as a Place; Reliability; Provision of Physical Collections; Access to Information

2001 (56 items): Affect of Service; Library as a Place; Reliability; Self-Reliance; Access to Information

2002 (25 items): Affect of Service; Library as a Place; Personal Control; Information Access

2003 (22 items): Affect of Service; Library as Place; Information Control

From: Developing a National Science Digital Library (NSDL) LibQUAL Protocol, 2003 NSDL Evaluation Workshop


Since LIBQUAL+TM was first implemented in 2000, the survey has been

completed by nearly a million users at more than 1,000 libraries worldwide. Participants

include a broad range of library types, from college and university libraries to health

sciences libraries, law libraries, special libraries, and more (Hoseth, 2007). ARL

describes the LIBQUAL+TM survey as one tool in a kit of tools for performance measurement

(Saunders, 2007). The advantages are two-fold: i) individual libraries can compare their

results with those of peer institutions; and ii) libraries can use a proven and tested

survey instrument, thereby forgoing all the expense and work of developing their own

survey. Thompson et al. (2008) describe it as a ‘total market survey, because the

protocol (a) seeks perceptions of all potential customers, regardless of frequency of use,

including even nonusers and (b) uses benchmarking against peers’.

LibQUAL+TM scores have been repeatedly shown to have high reliability

coefficients and to be reasonably valid (Cook, Heath, Thompson & Thompson, 2001;

Thompson & Cook, 2002; Thompson, Cook & Heath, 2003; Thompson, Cook &

Thompson, 2002). Library staff have also found the scores useful in improving library

service quality (Cook et al., 2002; Jilovsky, 2006). Triangulation is enabled via the

option of the ‘Box’, an open-ended comment box for users to add comments. Each year

roughly 40% of participants provide comments on their ratings.

In the literature there are many general studies reporting on the use of

LIBQUAL+TM to measure individual library’s performance (Roszkowski, et al., 2005;

Creaser, 2006; Jankowska, Hertel, & Young, 2006; Whang & Ring, 2007; Johnson,

2007; Kayongo & Jones, 2008; Nadjla & Farideh, 2008; Garthwait & Richardson,

2008; Jaggars, Jaggars, & Duffy, 2009).


The development and use of LibQUAL+TM to improve service quality has been

documented in more than 50 articles published in the refereed, archival journal

literature. Approximately half of these articles document psychometric properties of

LibQUAL+TM (Cook, Heath & Thompson, 2002; Thompson & Cook, 2002; Wei,

Thompson & Cook, 2005; Thompson, 2006). The remaining half of the published

articles describe how libraries are using LibQUAL+TM results to improve services (Cook,

2002; Heath, Kyrillidou & Askew, 2004; Thompson, Kyrillidou, & Cook, 2007).

LibQUAL+TM Critique

Though LIBQUAL+TM has been gaining popularity and increased usage by

practitioners, several theoretical and empirical criticisms have been raised

by LIS researchers. These can be summarized as follows:

- LIBQUAL+TM addresses only the functional quality of library services and does not

include the technical component of services in its assessment (Edgar, 2006). Nagata

et al. (2004) argue that in a university library the content offered (materials) differs

from library to library, so a fixed definition of technical quality is hardly

feasible, thus supporting the extension of service quality measures to include technical quality.

- The operationalization of the gap score is questionable. In SERVQUAL the gap

score is the difference between the expectation score and the

perceived score. However, in LIBQUAL+TM the authors have used the ‘minimum’

and ‘desired’ levels to compare library users’ perception of service quality. Shi and

Levy (2005) contend that the mathematical interpretation of the gap scores is

considerably different from the original service quality conceptualization in

services marketing, though LIBQUAL+TM claims to have followed a similar

framework. Furthermore, Green (2006) asserts that the psychometric properties of


each battery (minimum, desired, perceived) be assessed before any subtraction of

scores is performed. The problem is further complicated by critics of

SERVQUAL who claim the ‘gap score’ is a poor choice as a measure of a psychological

construct (Van Dyke et al., 1999) because the scores are unlikely to be distinct from

their component scores (Brown et al., 1993). In response, Thompson, Cook

and Heath (2000) argue that this model enables them to check for inconsistencies in

the response data, since the ‘minimum’ rating of an item should not be higher than

the ‘desired’ rating on the same item. During analysis, any record containing more

than nine logical inconsistencies is deleted (Thompson, 2006).

- The construct ‘expectations’ is again not well defined. In the library service context,

user needs have always been a priority for librarians, but the use of these two terms

has not been clearly defined in LIBQUAL+TM. Expectations and needs are two

different constructs (Shi & Levy, 2005) and need to be conceptualized separately so

that the measure can be operationalized accurately.

- Interpretation of the LIBQUAL+TM scores is difficult for librarians. Most literature

on LIBQUAL+TM reports on a one-time use of the tool to assess library quality, and

data analysis is usually descriptive, relying on counts of frequencies and means,

which does not allow the library to identify actual problematic areas

and take action to overcome these shortcomings.

- There has been little effort to refine the scale, though much literature in marketing

research has supported the use of the perception-only score as an overall predictor of

service quality. Cronin and Taylor (1992) contend that performance-only scores

outperform gap scores in predicting overall service quality. Green (2006) examined


the reliability and validity of using a LIBQUAL+TM adapted scale in a public library

and found the data were neither reliable nor valid.
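The logical-inconsistency screen described earlier (a ‘minimum’ rating should never exceed the ‘desired’ rating on the same item, and records with more than nine such inconsistencies are deleted) can be sketched as follows; the record format is a hypothetical illustration:

```python
# Screening LibQUAL-style records for logical inconsistencies.
# A record is a list of (minimum, desired, perceived) ratings, one per item;
# a 'minimum' rating above the 'desired' rating is logically inconsistent.
# The cutoff of nine follows the rule described in the text.

def count_inconsistencies(record):
    """Count items where the minimum rating exceeds the desired rating."""
    return sum(1 for minimum, desired, _ in record if minimum > desired)

def screen(records, cutoff=9):
    """Keep only records with no more than `cutoff` inconsistencies."""
    return [r for r in records if count_inconsistencies(r) <= cutoff]

# Hypothetical example: one clean record and one with a single inconsistency.
clean = [(5, 8, 6), (4, 7, 5)]
flawed = [(8, 5, 6), (4, 7, 5)]          # first item: minimum 8 > desired 5
print(count_inconsistencies(flawed))      # 1
print(len(screen([clean, flawed])))       # 2 (one inconsistency is tolerated)
```

Only records exceeding the cutoff are discarded, so occasional slips survive while systematically careless responses are removed before analysis.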

Other issues involving the use of LIBQUAL+TM are:

- Patrons complain that it is too long (thirty-nine questions), or that all questions

have to be answered before the survey will be accepted (Kalb, 2007; Saunders,

2007)

- Many libraries would like to tailor the questionnaire to find out information that

is specific to their library clientele or local problems (Saunders, 2007; Kalb,

2007)

- Uses self-selected respondents: estimates are always going to be somewhat

biased (Saunders, 2007)

It has been argued that more research needs to be carried out to identify and refine

the determinants of library service quality perception, and to depend less on scales

developed in the business and marketing areas, as their services are noticeably different

from library services. Most service quality models from the marketing literature

emphasize service delivery and not the product. However, in library services, the

information product and the service component both have a distinct but inter-dependent

role in the overall service quality judgment. Another concern is the definition of tangible

and non-tangible. According to Shi and Levy (2005), library service quality is a

combination of the quality of the information provided (comprehensiveness,

appropriateness, format) and the services offered by the library (physical facilities,

helpfulness, attitude of staff, etc.). For example, service quality judgments about access

to online databases depend very much on the quality, coverage and currency of the

databases’ content.


2.3.3 Summary of SERVQUAL and LIBQUAL+TM Use in Library Service Quality

Assessment

The review of SERVQUAL and LIBQUAL+TM has raised some important

issues regarding the use of these scales as global measures of library service quality,

even more so in the electronic services environment. Two main concerns are:

i) Dimensionality of Traditional Service Quality Models (SERVQUAL vs

LIBQUAL+TM)

Traditional measures of library service quality for services such as reference desk,

interlibrary loan, bibliographic services, closed-reserves, document delivery, etc. have

revealed a number of common constructs or dimensions. An analysis of a number of

conceptual and theoretical papers helped identify these common dimensions of service

quality as presented in Table 2.6. The comparison is between service quality dimensions

as adapted from SERVQUAL for use in library service quality assessment and the three

dimensions of LIBQUAL+TM .


Table 2.6: Service Quality Dimensions in SERVQUAL and LIBQUAL+TM

Dimension SERVQUAL in LIS LIBQUAL, 2003

Reliability • Providing service as promised (Narit & Nagata, 2003)

• Performing service right the first time (Nitecki 1996; Narit & Nagata, 2003; Cook &

Thompson, 2000)

• Dependability in handling user’s service problems (Nitecki 1996; Narit & Nagata, 2003;

Nitecki & Hernon, 2000; Edwards & Browne, 1995; Cook & Thompson, 2000;

Vergueiro & Carvalho, 2000)

• Providing services at promised time (Nitecki 1996; Narit & Nagata, 2003; Cook &

Thompson, 2000; Calvert, 2001)

• Answer query efficiently and correctly (Shachaf & Oltmann, 2007; Ashok, 2007)

• Provide librarian identity (Shachaf & Oltmann, 2007)

• Provide service accurately (Nitecki & Hernon, 2000; Snoj & Petermanec, 2001;

Ashok, 2007; Vergueiro & Carvalho, 2000; Calvert, 2001)

• Frequency of updating (Hernon & Calvert, 2005)

• Proper technical functioning (Hernon & Calvert, 2005; Calvert, 2001)

• Materials are in proper places and well marked (Snoj & Petermanec, 2001)

• Making relevant information available (Ashok, 2007; Thapisa & Gamini, 1999)

• Keeping records consistent with actual holdings (Ashok, 2007; Thapisa & Gamini, 1999;

Calvert, 2001)

• Keeping computer database up and running (Ashok, 2007; Thapisa & Gamini, 1999)

• Error free records (Nitecki 1996; Cook & Thompson, 2000; Vergueiro & Carvalho, 2000)

Affect of Service

• Dependability in handling service problems


Assurance • Knowledge and courtesy of staff (Nitecki 1996; Nitecki & Hernon, 2000; Edwards &

Browne, 1995; Cook & Thompson, 2000; Vergueiro & Carvalho, 2000; Calvert, 2001)

• Able to inspire trust and confidence (Nitecki 1996; Nitecki & Hernon, 2000; Edwards &

Browne, 1995; Cook & Thompson, 2000; Vergueiro & Carvalho, 2000)

• Providing individual attention (Thapisa & Gamini, 1999; Ashok, 2007; Calvert, 2001)

• Familiarity with equipment and technology (Thapisa & Gamini, 1999; Ashok, 2007;

Calvert, 2001)

Affect of Service

• Employees who are consistently courteous

• Employees have knowledge to answer questions

• Employees who instill confidence in users

Tangibles • Visually appealing facilities (Nitecki 1996; Narit & Nagata, 2003; Nitecki & Hernon, 2000;

Edwards & Browne, 1995; Cook & Thompson, 2000)

• Visually appealing materials (Nitecki 1996; Narit & Nagata, 2003; Cook & Thompson, 2000)

• Modern equipment (Nitecki 1996; Narit & Nagata, 2003; Nitecki & Hernon, 2000; Edwards

& Browne, 1995; Cook & Thompson, 2000)

• Equipment in working condition (Snoj & Petermanec, 2001; Vergueiro & Carvalho, 2000;

Calvert, 2001)

• Appearance of personnel (Nitecki 1996; Nitecki & Hernon, 2000; Edwards & Browne, 1995;

Cook & Thompson, 2000; Vergueiro & Carvalho, 2000)

• Computer service (Snoj & Petermanec, 2001)

Information Control

• Modern equipment


Empathy • Library staff who understand needs of users (Nitecki 1996; Narit & Nagata, 2003; Cook &

Thompson, 2000; Calvert, 2001)

• Having users’ best interest at heart (Narit & Nagata, 2003; Cook & Thompson, 2000)

• Deal with users in a concerned or considerate fashion (Narit & Nagata, 2003; Nitecki &

Hernon, 2000; Edwards & Browne, 1995; Cook & Thompson, 2000)

• Giving users individual attention (Nitecki 1996, Narit & Nagata, 2003; Nitecki & Hernon,

2000; Edwards & Browne, 1995; Cook & Thompson, 2000)

• Convenient opening hours (Nitecki 1996; Narit & Nagata, 2003; Calvert, 2001)

• Giving equal importance to all users’ requests (Ashok, 2007; Thapisa & Gamini, 1999)

Affect of Service

• Employees deal with users in a caring fashion

• Giving users individual attention

• Employees understand needs of users

Responsiveness • Prompt service (Nitecki 1996, Narit & Nagata, 2003; Nitecki & Hernon, 2000; Edwards &

Browne, 1995; Ashok, 2007; Thapisa & Gamini, 1999; Cook & Thompson, 2000;

Vergueiro & Carvalho, 2000; Calvert, 2001)

• Willingness to help users (Nitecki 1996; Narit & Nagata, 2003; Nitecki & Hernon, 2000;

Edwards & Browne, 1995; Cook & Thompson, 2000; Calvert, 2001)

• Readiness to respond to user questions (Narit & Nagata, 2003; Cook & Thompson, 2000)

• Keeping users informed about when services will be performed (Nitecki 1996; Narit &

Nagata, 2003; Cook & Thompson, 2000)

• Virtual reference-acknowledgement of user email in a timely manner (Shachaf & Oltmann,

2007)

• Respond as quickly as possible (Nitecki 1996; Shachaf & Oltmann, 2007; Vergueiro &

Carvalho, 2000)

• Adherence to stated turnaround policy (Shachaf & Oltmann, 2007; Ashok, 2007; Thapisa &

Gamini, 1999)

• Making new information available (Ashok, 2007; Thapisa & Gamini, 1999)

Affect of Service

• Willingness to help users

• Readiness to respond to users’ questions


Information Control

Information Control

• Library web site enabling me to locate information on my own

• Making information easily accessible for independent use

• Easy-to-use access tools that allow me to find things on my own

• The electronic information resources I need

• Print and/or electronic journal collections required for my work

• Printed library material I need for my work

• Making electronic resources accessible from my home or office

Library as a Place

Library as a Place

• Comfortable and inviting location

• Community space for group learning and group study

• Inspires study and learning

• Quiet space for individual learning

• A gateway for study, learning or research


The items that measured reliability, assurance and empathy in the SERVQUAL-type

scales have been consolidated into the ‘Affect of Service’ dimension in LIBQUAL+TM.

It is evident that in LIBQUAL+TM the emphasis is more on the relationship between the

library employees and the users, specifically regarding their knowledge and how they

deal with users. The second dimension, ‘Information Control’, is much broader in its

coverage than tangibles. It includes issues of availability of modern equipment, but

unlike SERVQUAL, the emphasis is not solely on equipment and appearances. In

library services, the availability of the information resources becomes an important

indicator of quality. The services that provide the means to access these resources are

deemed important. The third dimension, ‘Library as Place’, is quite unique to a library

as a service organization. This dimension has not been captured by SERVQUAL, which

only conceptualizes tangibles as facilities and equipment, not as a physical place serving

as a comfortable and quiet space for users. Adaptations of SERVQUAL also failed to

address this service: the provision of a place for learning and socializing. Further details of

LIS studies using SERVQUAL or SERVPERF are presented in Appendix A.

In conclusion, application of SERVQUAL in the library setting has shown that

some items are not as relevant in the library context (Cook & Thompson, 2000a;

Thompson & Cook, 2002; Nitecki, 1996; Andaleeb & Simmons, 1998). Cook and

Heath (2001) found that students and faculty at various universities had concerns about

library services that were not addressed in SERVQUAL.

ii) Measure of Electronic Services in Libraries

Traditional library services delivered within the library walls have, over time,

been automated, or new services are being delivered in a networked environment. The

assessment of service quality has included items on electronic services. Examples are

‘frequency of updates’ (Hernon & Calvert, 2005), ‘computer databases up and running’


(Ashok, 2007), ‘virtual reference’ (Shachaf & Oltmann, 2007), ‘web site’ (Cook, 2001),

‘availability of electronic resources’ (Cook, 2001). Though the ‘Information Control’

dimension in LIBQUAL+TM includes some aspects of electronic services in libraries, the

overall scale was not developed to capture electronic service quality specifically. Due

to limitations in the length of the questionnaire, only a small number of items regarding

electronic services can be included at any one time, thus limiting the information

necessary to make informed decisions about improved services.

It is important to understand the theoretical and conceptual underpinning of

service quality evaluation in traditional face-to-face services in order to make the

transition to electronic service quality assessment. The next section is dedicated to the

developments in electronic services and the measure of service quality in the electronic

environment.

2.4 Electronic Service Quality Literature

As the development of web technology accelerated, service providers

subsequently began offering electronic services (e-services) via the web or Internet as

stand-alone services (the service provided is the main benefit to the user) or supporting

services (facilitating the use of a traditional service or the purchase of goods) (Fassnacht &

Koese, 2006).

The growth of e-tailing, e-services and digital libraries led many to attempt to

measure electronic service quality using traditional measures and adapting them to the

electronic medium. However, online services have unique characteristics that can affect

the perception of service quality (Collier & Bienstock, 2006). A generally accepted

definition of electronic services has not yet emerged in the literature (Santos, 2003;

Rowley, 2006). The only similarity in the existing definitions is the focus on provision


of service over electronic networks (Rust & Lemon, 2001; Tih, 2004) or the Internet

(Boyer, Hallowell, and Roth, 2002; Fassnacht & Koese, 2006), also referred to as

web-based services (Reynolds, 2000). There is also an emphasis on the interaction between

customers and the organization’s online system (Tih, 2004; Rowley, 2006) to retrieve

desired benefits (Fassnacht & Koese, 2006).

Studies so far can be categorized by types of services covered, for example

online retailing (Yoo & Donthu, 2001; Wolfinbarger & Gilly, 2002; Kim, Kim, and

Lennon, 2006; Zeithaml et al., 2006), electronic banking (Jun & Chai, 2001;

Jayawardhena, 2004; Waite, 2006), travel agencies (Yen, 2005; Ho, 2007), or studies that

focus on Website quality alone (e.g., Loiacono, Watson & Goodhue 2002; Yang, Chai,

Zhou, and Zhou, 2005). In LIS research the focus is on library web site quality (Chao,

2002), digital library quality (Summer, Khoo, Recker, and Marlino, 2003; Bertot, 2004;

Heath, et al., 2004; Kryllidou & Giersch, 2005; Goncalves, 2006) and library e-service

quality (Hernon & Calvert, 2005; Li, 2006).

2.4.1 Conceptualization of E-Service Quality

Zeithaml, Parasuraman and Malhotra (2000, cited in Zeithaml et al., 2002)

provided the first formal definition of electronic service quality as ‘the extent to which a

website facilitates efficient and effective shopping, purchasing, and delivery of products

and services’. This definition was considered too narrow because it focuses on online

shopping and is limited to website quality rather than service quality as a whole

(Gummerus, Liljander, Pura, and VanRiel, 2004), which may include technical

processes as well (Fassnacht & Koese, 2006). Subsequently, many authors described the

electronic service experience as a self-service experience (Dabholkar, 2000; Sara, 2000;

Meuter, Ostrom, & Roundtree, 2000; Zhu, Wymer, & Chen, 2002). An analysis of the e-

service literature by Rowley (2006) concludes that there are three main defining


characteristics in the literature: technology mediation, self-service and information

service. She adopted the notion that e-service is actually the e-service ‘experience that

results from purchase through or engagement with information technology mediated

service delivery’, including e-tailing, customer support and service and service delivery.

A number of studies in e-commerce have found information availability and

content to be key benefits of online activities (Zeithaml, 2002; Kim et al., 2006),

meaning that both search and retrieval of information play a role in service quality

evaluation. Rowley (2006) asserts that self-service is a relative term and can be adapted

to describe e-services where the customers must learn to navigate the web interface and

take control, because the customer’s interaction with the organization is through the

technology, such as a web site. This led her to claim that, due to the absence of face-to-

face interaction, electronic service is a relatively impoverished service. Some of the

definitions of electronic service are given in Table 2.7.

Table 2.7: Electronic Service Definitions

Service in cyberspace (Rust & Lemon, 2001) or virtual marketplace (Santos, 2003)

Delivered over the Internet (Boyer et al., 2002; Surjadjaja, Ghosh & Anthony, 2003; Fassnacht & Koese,

2006; Zeithaml et al., 2006)

Web-based services (Reynolds, 2000; Gounaris & Dimitriadis, 2003; Zeithaml, 2002; Parasuraman et al.,

2005)

Delivered via information and communication technology (Fassnacht & Koese, 2006); electronic

networks (Rust, 2001); technology mediated (Rowley, 2006)

Interactive information service (Rowley, 2006; Santos, 2003; Li & Suomi, 2007)

Stand-alone services (Fassnacht & Koese , 2006)

Self-service (Dabholkar, 2000; Sara, 2000; Meuter et al., 2000; Rust & Lemon, 2001; Zhu et al., 2002;

Rowley, 2006)


One important issue in the conceptualization of e-service quality is that any

contact with the service provider (via e-mail or other means) is considered a service

recovery construct. Electronic services are technology mediated, and only when there is

a problem does the customer need to communicate with the service provider to handle

questions, concerns and frustrations (Collier & Bienstock, 2006).

However, in the case of library services, recovery may not be an entirely separate

construct. Libraries not only provide access to collections (the product); facilitating

access to and retrieval of relevant information is an integral part of library services. Online

help, be it technical or topical, is a part of reference service, and the reference

interview cannot be put aside in the electronic environment. Helping users search and

answering queries are not part of the ‘recovery’ dimension of e-service quality, as

postulated in most e-service literature.

According to Rowley (2006), electronic service is often a part of a wider service

delivery. Libraries use the Internet as a channel to enhance, support or bypass their

traditional channels. Often there is a continuum of service delivery mechanisms with

different mixes of face-to-face and self-service and different associated levels of

intensity in the service relationship. There are no well-accepted conceptual definitions and

models of electronic service quality and its measurement (Seth et al., 2005). An

acceptable premise is that web-based services are delivered and accessed via a specific

web site, either incorporated within the organization’s homepage or on a completely

separate site.

When Hernon & Calvert (2005) examined library e-service quality at eight

universities in New Zealand, though not explicitly listed, they constructed the survey

instrument based on the following library services: online catalogue, online access to


course materials, e-resources, communication channels, document delivery,

personalized service and online alert services.

Li (2006) describes electronic library user information services as bibliographic

instructions, computerized library catalogs, digital libraries, distance learning services,

e-databases, government documents, instant messaging services, interlibrary loan and

document services, ready reference, virtual classrooms, virtual references.

Upon scrutinizing several review papers on electronic service quality literature

(Seth & Deshmukh and Vrat, 2005; Rowley, 2006; Ladhari, 2008) and research articles

(Fassnacht & Koese, 2006; Collier & Bienstock, 2006; Ho, 2007; Li & Suomi, 2007;

Cristobal, Flavian & Guinaliu, 2007; Ladhari, 2009), it is evident that several prominent models of e-

service quality are repeatedly used by researchers to measure e-service quality in

various contexts and industries. Thus, the following section discusses some of these

models, mainly WebQUAL (Loiacono et al., 2002); SITEQUAL (Yoo & Donthu, 2001);

eTailQ (Wolfinbarger & Gilly, 2003) and E-S-QUAL (Parasuraman et al., 2005). This is

followed by efforts in measuring library e-service: DigiQUAL (Kyrillidou & Giersch,

2005) and e-SERVQUAL for libraries (Hernon & Calvert, 2005).

2.4.2 E-service Quality Models

This section presents several models of e-service quality that have been

developed and used for the evaluation of web site service quality and Internet retailing. First,

the context in which these scales were developed is presented, followed by a description

and comparison of their dimensions in Table 2.8.


(i) WebQual

WebQUAL is an instrument for consumer evaluation of Web site quality. It was

developed by Loiacono, Watson, and Goodhue (2002) based on the Theory of Reasoned

Action and the Technology Acceptance Model. The conceptual question guiding the development of

WebQUAL was: What perceived characteristics of a Web site will affect a consumer’s

decision to reuse the site? It contains 36 questions on 12 characteristics of a Web site,

demonstrating strong measurement validity, and it predicts intention to buy from or

revisit a Web site. WebQual’s development was based on responses from undergraduate

business students to a selected group of Web sites. The search for distinct dimensions for

evaluating Web sites began with a framework of four categories: ease of use, usefulness

in gathering information, usefulness in carrying out transactions, and entertainment

value. Zeithaml et al. (2002) commented that WebQual is more pertinent to interface

design than to service quality measurement. Furthermore, it lacks qualitatively

emergent categories, as the respondents were given researcher-specified categories.

WebQUAL is widely used and adapted for various site evaluations, but it falls

short as a measuring tool for library service quality, as it does not address that construct in

its initial conceptualization.

(ii) SiteQual

Yoo and Donthu (2001) developed SITEQUAL to measure the perceived quality

of Internet shopping sites. It originally consisted of 38 items and nine factors in two

broad sets: vendor-related factors and site quality factors. However, the first set was

removed because the researchers wanted to focus on site quality. Confirmatory factor

analysis (CFA), apparently using the same data, indicated a poor fit and the model was

re-specified. After several iterations, the instrument was reduced to nine items to


measure the four factors (only two items for most factors). Loiacono, Watson and

Goodhue (2007) contend that SITEQUAL’s original set of items was too narrowly

based, and most of its final factors are measured by only two items.
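The reliability concern behind this criticism can be illustrated numerically: other things being equal, Cronbach’s alpha rises as parallel items are added to a factor, so two-item factors tend to yield lower internal consistency. The sketch below uses invented synthetic ratings (not data from any study reviewed here) to show the effect.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
n = 200
true_score = rng.normal(size=(n, 1))          # latent construct (hypothetical)
items = true_score + rng.normal(size=(n, 4))  # four noisy parallel items

alpha_2 = cronbach_alpha(items[:, :2])  # a factor measured by only two items
alpha_4 = cronbach_alpha(items)         # the same factor with four items
```

With item reliability of about 0.5, alpha for two items comes out near 0.67 versus roughly 0.8 for four, consistent with the Spearman-Brown prediction that lengthening a scale with parallel items improves reliability.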

(iii) eTailQ

Wolfinbarger and Gilly (2003) developed an instrument for the measurement of

Internet retailing quality. Their contention was that most studies focus only on

customers’ interface with the website, whereas they believed that online shoppers are

goal-directed and their behavior may differ vastly in the online purchase context. Their

multi-method iterative process combined findings from focus groups (across Canada

and U.S.) with SERVQUAL items and generated determinants of e-retailing quality,

namely: Website design; Customer service; Fulfillment/Reliability; Security/privacy.

The model generated showed eTail quality as a higher order factor. They

concluded that judgements of online purchase experiences are strongly related to website

design and fulfillment factors, and less so to security. An acknowledged limitation of this

study was that the participants were an online panel (not random) that may be more

‘technologically sophisticated’ than the average user.

(iv) E-S-Qual

It is evident that Zeithaml et al.’s (2002) conceptualization and Parasuraman et al.’s

(2005) measurement of e-service quality are widely used as a base (both for the

conceptualization of constructs and the methodology for scale development) by many

researchers of e-service quality. Zeithaml, Parasuraman and Malhotra (2001, in

Zeithaml et al., 2006) carried out a systematic study on how customers judge e-service

quality. In that study, they defined e-Service Quality (e-SQ) as the extent to which a

website facilitates efficient and effective shopping, purchasing and delivery. Through


exploratory focus groups and two phases of empirical data collection and analysis, they

identified seven dimensions of e-SQ:

i. Core dimensions

a. Efficiency: the ability of customers to get to the website, find what they want

and check out with minimal effort

b. Fulfillment: the accuracy of service promises, and delivering the products

in the promised time

c. Reliability: the technical functioning of the site, the extent to which it is available

and functioning properly

d. Privacy: assurance that customer data is not shared and that credit information

is secure

ii. Service recovery dimensions

a. Responsiveness: the ability of service providers to provide appropriate

information to customers when a problem occurs

b. Compensation: the degree to which customers receive money back or

are reimbursed

c. Contact: the availability of live customer service agents online or through the

phone
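To make the scoring of a multidimensional instrument of this kind concrete, the sketch below averages item ratings within each dimension and then across dimensions. The dimension names follow the core e-SQ dimensions listed above, but the ratings and the equal weighting of dimensions are illustrative assumptions, not part of Zeithaml et al.’s procedure.

```python
# Hypothetical responses: item ratings on a 1-5 scale, keyed by dimension.
# The dimension names mirror the core e-SQ dimensions; the values are invented.
responses = {
    "efficiency":  [4, 5, 4],
    "fulfillment": [3, 4],
    "reliability": [5, 4, 4],
    "privacy":     [5, 5],
}

# A dimension score is the mean of its item ratings; an overall score is
# taken here as the unweighted mean of the dimension scores.
dimension_scores = {d: sum(v) / len(v) for d, v in responses.items()}
overall = sum(dimension_scores.values()) / len(dimension_scores)
```

Unweighted averaging is only one choice; the importance-weighting approach discussed later in this chapter (allocating points among dimensions) would replace the simple mean with a weighted one.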

New dimensions emerge in e-services. Reliability and responsiveness are shared

dimensions. Efficiency and fulfillment are core dimensions in e-service quality, and both

share some elements of traditional reliability and responsiveness dimensions. The

personal dimension of empathy is not present in e-service quality except in a non-

routine or problem situation. The tangible, visual elements of the site will be critical to

efficiency. Zeithaml et al. (2002) concluded that some of the SERVQUAL dimensions

may be applied in e-service quality with the inclusion of several new dimensions


relating specifically to technology. Table 2.8 shows a comparison of all four e-service

quality models, detailing the respective dimensions and items representing them.


Table 2.8: E-Service Quality Models
(dimensions and representative items, grouped by broad dimension; the model each sub-dimension belongs to is shown in parentheses)

Usefulness
- Informational Fit-to-Task (WebQUAL): is what I need; adequately meets my needs; is effective
- Website design (eTailQ): provides in-depth information; doesn’t waste my time; level of personalization is about right; has good selection; quick and easy to complete a transaction
- Efficiency (E-S-QUAL): the site is well organized; information on this site is well organized
- Tailored Information (WebQUAL): allows me to interact with it to receive tailored information; has interactive features which help me accomplish my task; tailored to my specific needs
- Efficiency (E-S-QUAL): makes it easy to find what I need; makes it easy to get anywhere on the site

Trust/Security/Privacy
- Trust (WebQUAL): I feel safe in my transactions; I trust the Web site to keep my personal information safe and not misuse it
- Security (SITEQUAL): the site ensures me of security; I am confident with this site
- Security/Privacy (eTailQ): privacy is protected at this web site; transactions with this site are safe; the web site has adequate security features
- Privacy (E-S-QUAL): protects information about Web-shopping behaviour; does not share personal information with other sites; protects credit card information

Response Time
- Response Time (WebQUAL): very little waiting time; the site loads quickly / does not take long to load
- Processing Speed (SITEQUAL): it is quick to access the results; the site has quick processing
- Efficiency (E-S-QUAL): loads pages fast; enables me to complete a transaction quickly; enables me to get on to it quickly
- System Availability (E-S-QUAL): the site is always available; launches and runs right away; does not crash; pages do not freeze

Ease of Use
- Ease of Understanding (WebQUAL): display pages are easy to read; text is easy to read; labels are easy to understand
- Ease of Use (SITEQUAL): the site is convenient to use; it is easy to search for information
- Efficiency (E-S-QUAL): the site is simple to use
- Intuitive Operations (WebQUAL): learning to operate the Web site is easy for me; it would be easy for me to become skillful at using the Web site; I find the Web site easy to use

Entertainment
- Visual Appeal (WebQUAL): the site is visually pleasing; displays visually pleasing design
- Website design (SITEQUAL): the site is colorful; the site is creative; the site shows good pictures of the products
- Innovativeness (WebQUAL): the Web site is innovative; the Web site design is innovative; the Web site is creative
- Emotional Appeal (WebQUAL): I feel happy when I use the Web site; I feel cheerful when I use the Web site; I feel sociable when I use the Web site

Customer Service
- Customer service (eTailQ): willing and ready to respond to customer needs; when there is a problem, the web site shows sincere interest in solving it; inquiries are answered promptly
- Fulfillment (E-S-QUAL): sends out the items ordered; has in stock the items it claims to have; is truthful about its offerings; makes accurate promises about delivery of products
- Responsiveness (E-RecS-QUAL): convenient options for returning items; handles product returns well; offers a meaningful guarantee; tells what to do if a transaction is not processed; takes care of problems promptly
- Compensation (E-RecS-QUAL): the site compensates for the problems it creates; compensates when an order does not arrive on time; picks up items for return from home or business
- Contact (E-RecS-QUAL): provides a telephone number to reach the company; customer service representatives are available online; offers the ability to speak to a live person if there is a problem

Complementary Relationship
- Consistent Image (WebQUAL): the Web site projects an image consistent with the company’s image; the Web site fits with my image of the company; the Web site’s image matches that of the company
- On-Line Completeness (WebQUAL): the Web site allows transactions on-line; all my business with the company can be completed via the Web site; most all business processes can be completed via the Web site
- Fulfillment/Reliability (eTailQ): you get what you ordered from this site; the product is delivered by the time promised by the company; the product delivered was represented accurately by the web site
- Fulfillment/Reliability (E-S-QUAL): delivers orders when promised; makes items available for delivery within a suitable time; quickly delivers what is ordered
- Relative Advantage (WebQUAL): it is easier to use the Web site to complete my business with the company than to telephone, fax, or mail a representative; the Web site is easier to use than calling an organizational representative on the phone; an alternative to calling customer service or sales


All four scales compared above were developed for measuring online shopping

or electronic retailing. The focus is on the web site that facilitates the customers’ activity

of accessing, searching, selecting and purchasing a product by interacting through

the web site. E-S-QUAL goes a step further by including aspects of delivery and return

of the product too. Since the whole service is delivered via a web site, the main

indicators are the web site characteristics, including web site design, ease of use,

usefulness and processing speed. The terms used to describe these indicators may differ

in each scale, but generally they are operationalized as having convenient access to a

well-designed web site so that customers can find the right information easily and quickly.

Since online purchasing requires payment, security (privacy) is another important

indicator of e-service quality.

Another issue is customer support. Since the service provider is not physically

present, online customers may face problems dealing with the service. Usually

there are two types of assistance needed:

(i) help with making a decision about the purchase/transaction

(ii) help with handling technical problems

Support services help online customers overcome these hurdles. Poor website

design or lack of accurate information may hinder the customer’s ability to interact

successfully with the service. As a result there arises a need to communicate with the

service provider. Providing contact information or opportunity to communicate with

service provider is essential as customers always want to be heard and made to feel

important.


SITEQUAL does not address this issue at all. WebQUAL, on the other hand,

holds that online completeness is a quality indicator, meaning that all transactions

should be completed via the web site without any problems. ETailQ addresses it by

indicating that good customer service, which responds quickly and willingly to customer

needs, is a necessary service quality measure; this indicator is incorporated in its

service quality scale. As for E-S-QUAL, though it addresses this issue, a separate scale

is designed to measure indicators of what is defined as ‘recovery services’. The authors

created E-RecS-QUAL to measure responsiveness, compensation and contact. Customer

service is conceived as a recovery service, not a core service of online retailing.

The third issue of concern in e-retailing is the ability of the service to fulfill the

customers’ needs. Getting the exact item that is ordered and on time are service quality

indicators. e-TailQ and E-S-QUAL address this in the fulfillment/reliability dimension,

whereas WebQUAL approaches it quite differently. Its dimension ‘Complementary

relationship’ goes beyond fulfilling customer needs (online-completeness) to include

aspects of service outcome, such as company image and relative advantage of doing

business online.

In summary, there are three main issues surrounding the operationalization of

electronic service quality indicators: first, the environment and tools available to

access the service; second, the process during service delivery or interaction with the

service provider; and lastly, the outcome of using the service.

The next section examines electronic service quality models for measuring electronic or

web-based library services.


2.4.3 Library E-Service Models

The literature shows little effort by LIS researchers to develop tools specifically

for electronic or web-based library services. There have been two main efforts in

developing a scale for electronic services. The first is DigiQUAL®

, by ARL, a tool to

measure digital library service quality and the other is efforts by Hernon and Calvert

(2005) to develop the e-SERVQUAL tool for electronic library services.

(i) DigiQUAL®

DigiQUAL® is being developed by ARL, Texas A&M University and the

University of Texas to evaluate the National Science Digital Library (NSDL),

emphasizing issues related to the reliability and trustworthiness of a website. The

development of DigiQUAL® uses a mixed-methods approach, combining qualitative and

quantitative methods. Based on the focus groups held at Digital Library for Earth

System Information (DLESE) and Multimedia Educational Resource for Learning and

Online Teaching (MERLOT) a model was developed that describes two major

components in the digital library environment, the human/system interaction component

and the technical component (Kyrillidou, Heath, Cook, Thompson, Lincoln, & Webster,

2007).

UTOPIA, a digital library (DL) developed and supported by the University of

Texas, was one of the first DLs to implement DigiQUAL® together with other NSDL

collections. DigiQUAL® is based on the LibQUAL® protocol and collects feedback on

the site’s service, functionality and content (Kyrillidou & Cook, 2008). Twelve themes

related to digital library service quality have been identified: design features;

accessibility/navigability; interoperability; digital library as community for users,

developers and reviewers; collection building; role of federations; copyright; resource

use; evaluating collections; and digital library sustainability.


(ii) Library E-SERVQUAL (Hernon & Calvert, 2005)

So far the most comprehensive study on library electronic service quality has

been by Hernon and Calvert (2005). Building upon their previous studies of service quality

(Hernon & Altman, 1998; Nitecki & Hernon, 2000), which used early dimensions of SERVQUAL to

identify library service quality, Hernon & Calvert (2005) examined library e-service

quality at eight universities in New Zealand. They began with ten dimensions they

deduced from the literature review and focus groups. However, eleven factors emerged

after factor analysis, of which two factors were not identifiable as no pattern to the

statements could be discerned. The other nine, though not explicitly listed, were

identified as collections, empathy/responsiveness, linkage, equipment, flexibility,

interaction/communication, ease of use, efficiency and customization/personalization.

The researchers did not conduct confirmatory factor analysis but suggested further

research to refine the pool of statements and re-conceptualization of the dimensions.
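The number of factors retained in an exploratory analysis of this kind is commonly decided with the Kaiser criterion: keep factors whose correlation-matrix eigenvalue exceeds one. The sketch below applies that rule to invented synthetic survey data with two built-in dimensions; it is a generic illustration, not a re-analysis of Hernon and Calvert’s data.

```python
import numpy as np

# Synthetic survey data: two latent dimensions, three items each (invented).
rng = np.random.default_rng(7)
n = 500
f1 = rng.normal(size=(n, 1))
f2 = rng.normal(size=(n, 1))
data = np.hstack([
    f1 + 0.5 * rng.normal(size=(n, 3)),  # items loading on factor 1
    f2 + 0.5 * rng.normal(size=(n, 3)),  # items loading on factor 2
])

# Kaiser criterion: retain factors with correlation-matrix eigenvalue > 1.
corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)
n_factors = int((eigenvalues > 1.0).sum())
```

With clean loadings the criterion recovers the two built-in factors; on real survey data it often over- or under-extracts, which is one reason a confirmatory follow-up of the kind the researchers suggest is worthwhile.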

The instrument asked users to think of an ideal library with excellent services and then

judge the current library services on a 10-point Likert-type scale ranging from 1 (of no importance)

to 10 (of highest importance). Each question is to be answered twice,

once ‘in an ideal library’ and then ‘in library xxx’. There are 104 items in the pool from

which about 22 statements (corresponding to the number used in the original SERVQUAL

and in E-S-QUAL) are recommended for inclusion in the questionnaire.

The second section of the instrument requires users to rate how important each

of the 10 dimensions of library service was to the user when evaluating library service

by allocating a total of 100 points among the 10 dimensions. Analysis is reported using

mean scores of each item and quadrant charts that sort service attributes into four

quadrants. The authors used factor analysis to produce an eleven-factor solution. Table

2.9 depicts the original ten dimensions deduced from the literature and the subsequent

eleven dimensions extracted using factor analysis.
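The gap-scoring and quadrant-chart analysis described above can be sketched as follows. The item names and ratings are invented, and the quadrant labels follow the common importance-performance analysis convention rather than Hernon and Calvert’s own terminology.

```python
# Hypothetical item means: 'ideal' (desired) and 'current' (perceived) ratings
# on the 10-point scale described above. Names and values are invented.
items = {
    "online catalogue":  {"ideal": 9.1, "current": 7.8},
    "remote access":     {"ideal": 8.7, "current": 5.2},
    "online alerts":     {"ideal": 5.0, "current": 4.6},
    "virtual reference": {"ideal": 6.2, "current": 6.8},
}

# Gap score: perceived performance minus desired level (negative = shortfall).
gaps = {name: v["current"] - v["ideal"] for name, v in items.items()}

# Quadrant chart: split items on the grand means of importance and performance.
mean_ideal = sum(v["ideal"] for v in items.values()) / len(items)
mean_current = sum(v["current"] for v in items.values()) / len(items)

def quadrant(v):
    high_importance = v["ideal"] >= mean_ideal
    high_performance = v["current"] >= mean_current
    if high_importance and high_performance:
        return "keep up the good work"
    if high_importance:
        return "concentrate here"
    if high_performance:
        return "possible overkill"
    return "low priority"

quadrants = {name: quadrant(v) for name, v in items.items()}
```

Items landing in the high-importance/low-performance quadrant (here, the invented "remote access" item) are the ones a service-improvement effort would target first.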


Table 2.9: Library E-SERVQUAL Dimensions
(each row pairs a dimension deduced from the literature, with its description, against the corresponding dimension derived from factor analysis)

1. Ease of use (navigation, search, find, download, speed, remote access) → Ease of use: ease of use (*Santos, 2003); ease of navigation (*Zeithaml et al., 2002)
2. Collections (quality, relevance, and deep collections of electronic material to meet my immediate needs) → Collections: ease of access; reliability
3. Linkage (connectivity to relevant information, avoid broken links, regularly update the accuracy of links) → Linkage: links (*Santos, 2003)
4. Flexibility (different search procedures: basic and advanced, etc.) → Flexibility: save searches, make requests in different formats, site map
5. Support (help pages, section on frequently asked questions, technical help if there is a problem or question) → Customer feedback: interaction (*Loiacono, Watson & Goodhue, 2002); communication (*Yang et al., 2003; *Santos, 2003)
6. Customization/personalization (receive e-mail announcements about the arrival of new books on topics of personal interest, etc.) → Customization/personalization (*Zeithaml et al., 2002)
7. Security/privacy/trust (belief that the site is relatively safe from intrusion, personal information is protected, etc.) → Equipment: provision of equipment to use
8. Ease of access (logon/off quickly, etc.) → Empathy: responsiveness (*Zeithaml et al., 2002); support (*Santos, 2003); courtesy (*Yang, Peterson & Cai, 2003)
9. Reliability (frequency of updating, proper technical functioning of Web site or electronic product, etc.) → Efficiency (*Zeithaml et al., 2002; *Santos, 2003)
10. Web site aesthetics (colours, graphics, size, etc.) → not discernible
11. (none) → not discernible

Based on Hernon & Calvert (2005). * indicates other studies that have used the same dimension.


Calvert (2008) in his PhD dissertation concludes that the Library e-SERVQUAL may

not be a generic instrument applicable in all libraries and for all circumstances, but it does

provide a research-based pool of statements which can be used as a starting point and then

developed further.

2.4.4 Dimensions of Electronic Service Models

Besides online retailing and electronic library services, numerous e-service studies

have attempted to develop measurement scales to assess service quality in their

respective service context. Table 2.10 presents a summary of some of the service

quality dimensions identified in the electronic service industries: Internet services, e-

commerce, online retailing and web portals. It is found that some of these dimensions

are similar, while others differ significantly. These dimensions help to serve as a basis

for the construction of quality indicators when exploring web-based library service

quality.


Table 2.10: Summary of E-Service Quality Dimensions
(each dimension is followed by its description, the general e-service sources, and the LIS sources)

Web site features/functionality — *colours, graphics, image, animation, etc. (aesthetics; appearance); *linkage/well-organized hyperlinks; *links to relevant information; product and its features are correctly presented; information is regularly updated; pages load quickly; contents are easily found; well arranged/clear structure; full information provided on products and services; attractive appearance; appropriate design; *helpful search function; *different search procedures; in-depth information for customer decision making; comprehensive information; quick and easy to complete a transaction; good selection.
Sources: Dabholkar, 1996; Yoo & Donthu, 2001; Wolfinbarger & Gilly, 2002, 2003; Zeithaml et al., 2002; Loiacono et al., 2002; Madu & Madu, 2002; Santos, 2003; Yang et al., 2004; Gounaris et al., 2005; Fassnacht & Koese, 2006; Cristobal et al., 2007; Kim et al., 2006; Heim & Field, 2006; Collier & Bienstock, 2006; Ho, 2007; Tate et al., 2007. LIS sources: O’Neill, Wright & Fitz, 2001; Hernon & Calvert, 2005.

Usability — easy to learn to operate; clear and understandable; *easy to navigate; easy to use; flow; sense of competency; positive experience; search facilities.
Sources: Loiacono et al., 2002; Yang et al., 2005; Tate et al., 2007. LIS sources: Hernon & Calvert, 2005.

Ease of use/Efficiency — *easy navigation; easy access/easy connection; effective navigation; functionality; speed of remote access; minimize technical difficulties; easy-to-remember URL; content concise and easy to understand; *easy to search and download; easy transactions.
Sources: Dabholkar, 1996; Yoo & Donthu, 2001; Santos, 2003; Yang et al., 2004; Fassnacht & Koese, 2006; Ho, 2007; Zeithaml et al., 2002; Parasuraman et al., 2005 (E-S-QUAL). LIS sources: Hernon & Calvert, 2005.

Security/privacy — reputable company; *protect customer information; secure online payment; feel safe; adequate security features; trustworthy.
Sources: Yoo & Donthu, 2001; Wolfinbarger & Gilly, 2002, 2003; Loiacono et al., 2002; Zeithaml et al., 2002; Madu & Madu, 2002; Santos, 2003; Yang et al., 2004; Parasuraman et al., 2005; Gounaris et al., 2005; Ho, 2007; Heim & Field, 2006; Kim et al., 2006; Tate et al., 2007. LIS sources: Hernon & Calvert, 2005.

Reliability/fulfillment — perform service right the first time; perform service in a timely manner; accuracy in performing services; accurate records; prompt delivery; accurate order fulfillment; promise fulfillment; keep service promises; product same as ordered.
Sources: Dabholkar, 1996; Wolfinbarger & Gilly, 2002, 2003; Zeithaml et al., 2002; Madu & Madu, 2002; Santos, 2003; Yang et al., 2004; Fassnacht & Koese, 2006; Heim & Field, 2007; Kim et al., 2006; Collier & Bienstock, 2006; Parasuraman et al., 2005 (E-S-QUAL). LIS sources: O’Neill, Wright & Fitz, 2001.
- Ease of returns and refunds, compensation. Sources: Zeithaml et al., 2002; Parasuraman et al., 2005 (E-RecS-QUAL); Heim & Field, 2007.
- *Frequency of updating information; *proper technical functioning of the web site; answering the query efficiently and correctly and providing a signature that contains the librarian’s name or initials, title, and institution. LIS sources: Hernon & Calvert, 2005; Shachaf, Oltmann & Horowitz, 2008.

System reliability — compatibility with other systems; answer reference query efficiently and correctly. LIS sources: Shachaf & Oltmann, 2007.

Responsiveness — prompt response to queries; quick order execution; FAQs; help available when there is a problem; channel for user comments; help resolve problems in a timely manner; find information quickly; find information accurately; download speed; processing speed.
Sources: Yoo & Donthu, 2001; Zeithaml et al., 2002; Madu & Madu, 2002; Yang et al., 2004; Gounaris et al., 2005; Parasuraman et al., 2005; Ho, 2007. LIS sources: O’Neill, Wright & Fitz, 2001.
- Answer e-mail query in a timely manner; respond quickly; adhere to stated turnaround policy. LIS sources: Shachaf & Oltmann, 2007; Shachaf, Oltmann & Horowitz, 2008.

Accessibility — easily contact customer service; easily find contact information; multiple ordering options; retailer chat room available.
Sources: Dabholkar, 1996; Santos, 2003; Parasuraman et al., 2005; Kim et al., 2006; Ho, 2007.

Availability — site always available, launches quickly; *logon/log off quickly; high-speed page loading; site does not crash; pages do not freeze.
Sources: Yang, Zhou & Zhou, 2004; Parasuraman et al., 2005; Fassnacht & Koese, 2006; Yang et al., 2007. LIS sources: Hernon & Calvert, 2005.

Customization/Personalization — real-time interaction; personal support; personalized attention; relevant information for customers’ specific needs; *receive e-mail announcements about new materials.
Sources: Madu & Madu, 2002; Yang et al., 2004; Kim et al., 2006; Ho, 2007. LIS sources: Hernon & Calvert, 2005.

Communication — prompt warnings; prompt notification; old records; contact information; keep informed.
Sources: Loiacono et al., 2002; Zeithaml et al., 2002; Parasuraman et al., 2005; Tate et al., 2007; Ho, 2007.

Customer service/support (customer relationship) — do what is promised within a certain time; rapidly deal with complaints and problems; service performed properly the first time; provide tailor-made service; delivery period adhered to; willing to respond to customer needs; sincere in solving problems; *answer inquiries promptly; *help pages; *FAQs; *technical help; easy to track delivery/purchase; personalized features; understand specific needs.
Sources: Wolfinbarger & Gilly, 2002, 2003; Santos, 2003; Heim & Field, 2006; Collier & Bienstock, 2006; Ho, 2007; Cristobal et al., 2007. LIS sources: O’Neill, Wright & Fitz, 2001; Hernon & Calvert, 2005.

Interactivity — good reputation; security of personal information; sense of community; easy to communicate with the organization; confident goods will be delivered as promised; enjoyability/entertainment; incentive.
Sources: Dabholkar, 1996; Loiacono et al., 2002; Santos, 2003; Madu & Madu, 2002; Gounaris et al., 2005; Tate et al., 2007.
- Interactive feedback between customer and company; follow-up services; message board from c-2-c/company. Sources: Yang et al., 2004.

Information — *accurate, updated, concise information, timeliness; product portfolio; *comprehensive information relative to others; complete content; sufficient information for potential and existing customers; complete product description; detailed contact information; unique content; relevant information to the customer; valuable tips on products; reliable professional opinions; up-to-date information.
Sources: Loiacono et al., 2002; Yang et al., 2004; Gounaris et al., 2005; Fassnacht & Koese, 2006; Ho, 2007.

Content/Collections — selection; information availability; information accuracy; information clarity; information believability; appropriate level of detail; appropriate format; *ease of understanding; relevance; *collections meet information need.
Sources: Santos, 2003; Yang et al., 2004; Tate et al., 2007. LIS sources: Hernon & Calvert, 2005.

Assurance (Security) — transmit an image of reliability and trustworthiness; confidentiality of customer data; confirmation of transaction; clear information on how to purchase; customer aware of security incorporated.
Sources: Madu & Madu, 2002; Cristobal et al., 2007.
- [Courtesy] by approachability, friendliness, politeness and professional courtesy. LIS sources: Shachaf & Oltmann, 2007; Shachaf, Oltmann & Horowitz, 2008.

Delivery fulfillment — successful delivery of products; willingness to correct mistakes during transactions.
Sources: Dabholkar, 1996; Zeithaml et al., 2002; Ho, 2007.

Efficiency — Sources: Santos, 2003; Parasuraman et al., 2005.

Policy — Sources: Madu & Madu, 2002.

Empathy — Sources: Madu & Madu, 2002.

Flexibility — different search procedures; make requests in different formats; search through the site map. LIS sources: Hernon & Calvert, 2005.

Competence — ability to solve problems; knowledge to answer questions; research capacity; quick to solve problems. Sources: Yang et al., 2004.

Functional benefit — the service serves its purpose very well. Sources: Fassnacht & Koese, 2006.

Emotional benefit — using the service is fun; invites the customer to stay. Sources: Fassnacht & Koese, 2006.


The first common aspect is Website features and functionality, which includes

site appearance, similar to the aesthetics of the physical servicescape. On the web, colored

graphics (Hernon & Calvert, 2005; Fassnacht & Koese, 2006; Santos, 2003), well

organized with a clear structure (Yang, Zhou & Zhou, 2005) and working links (Hernon

& Calvert, 2005; Santos, 2003; Ho, 2007), and attractive design (Tate et al., 2007) are

some of the quality features which according to Yoo and Donthu (2001) may encourage

overall loyalty to the organization. Functionality in terms of content organization for

easy search and navigation (Yang et al., 2005; Fassnacht & Koese, 2006) is a concern

in other e-services. When it comes to library e-services, the focus is more on helpful

search functions and links to relevant resources (O’Neill, et al., 2001). In Hernon &

Calvert’s (2005) study, ‘linkage’ was a separate dimension for e-service quality.

Generally, however, most studies subsume this as a web site functionality indicator

(Santos, 2003; Ho, 2007).

Usability and ease of use are two other attributes relating to the technological

aspects of web services. Some researchers combine both attributes, while others treat

them as two distinct constructs. Usability, according to some researchers, refers to easy

navigation (Tate, Evermann, Hope, & Banes, 2007; Loiacono et al., 2002 ; Yang et al.,

2005) or flow of the site that allows easy operation and encourages positive experience.

Ease of use as conceptualized by Hernon & Calvert (2005) also includes ease of

navigation (Parasuraman et al., 2005 (E-S-Qual); Santos, 2003), searching,

downloading information (Santos, 2003; Yang, et al., 2004) and the speed of remote

access. Both these constructs seem to be used interchangeable in the literature without

clear definition by researchers, especially in the use of indicators such as easy

navigation.


There are many dimensions that overlap in the website design, functionality,

ease of use and usability. Basically these dimensions are concerned with the appearance

of the site, the organization of the information, how easy it is to access the web site and

how easy it is to interact with the site/service.

The next dimension is the security or privacy of the service. This is an important

dimension in many e-service studies (Yoo & Donthu, 2001, 2003; Loiacono et al.,

2002; Zeithaml et al., 2002; Madu & Madu, 2002; Yang et al., 2004; Ho, 2007; Kim et

al., 2006; Gounaris et al., 2005). Basically the indicators include trustworthiness

(Hernon & Calvert, 2005), which may include protecting customer information (Parasuraman

et al., 2005; Wolfinbarger & Gilly, 2002; Santos, 2003; Heim & Field, 2006; Tate, et

al., 2007; Yang, et al., 2004) and secure online payment (Parasuraman et al., 2005;

Santos, 2003). Secure payment may only be relevant in library services if the system

allows online fine payment or payment for document delivery. Hernon and Calvert

(2005) defined it as the belief that the site is relatively safe from intrusion and that

personal data is protected.

An important dimension that surfaces in most literature is reliability/fulfillment.

This dimension covers a broad conceptualization from its early definition of performing

the service right the first time in a timely manner (Zeithaml et al., 2002) to include

accurate records, ease of returns and compensation (Heim & Field, 2006; Parasuraman

et al., 2005). For electronic library services it refers to the frequency of updating

information and ensuring the proper technical functioning of the web site (Hernon & Calvert,

2005; Shachaf, et al., 2008) and efficient answering of online queries (Shachaf &

Oltmann, 2007). This construct may also include system reliability in terms of

compatibility with other systems.


Another common dimension of electronic service quality is responsiveness. This

dimension was conceptualized as prompt response, either in answering queries or order

execution (Ho, 2007). In the electronic environment, download speed and processing speed

become important indicators (Zeithaml et al., 2002; Madu & Madu, 2002; Yang et al.,

2004; Gounaris et al., 2005; Parasuraman et al., 2005; Yoo & Donthu, 2001). For

library e-services, this dimension may be specific to answering e-mail queries in a

timely manner and adhering to turnaround policies (Shachaf & Oltmann, 2007; Shachaf,

et al., 2008).

Accessibility concerns technical issues, such as availability of the site, ease

of login/log off (Hernon & Calvert, 2005; Yang, et al., 2004) and high speed of page

loading (Fassnacht & Koese, 2006; Parasuraman et al., 2005; Ho, 2007; Yang, et al.,

2004). This is similar to some of the indicators included in the web functionality

dimension. However, some studies have a slightly different approach when they focus

on issues of contact, customer service, multiple ordering options and availability of chat

rooms (Santos, 2003). This may overlap with the customer service and communication

dimensions, which are considered separate dimensions of e-service quality. Basically,

accessibility includes access to the web site and access to the service provider.

There are several dimensions which include some aspects of customer and

service provider interaction. The first is personalization or customization. In e-services,

this dimension refers to personalized attention to customers (Yang et al., 2004)

including provision of relevant information for specific needs (Ho, 2007; Kim et al.,

2006; Madu & Madu, 2002). Hernon & Calvert (2005) defined it as customers

receiving e-mail announcements about new arrivals, personalized problem solving and

access to online library tutorials. The second dimension is Communication. Some

researchers include prompt warnings or notifications (Loiacono et al., 2002; Ho, 2007;

Zeithaml et al., 2002; Tate et al., 2007; Parasuraman et al., 2005), while others refer

only to the provision of contact information (Santos, 2003). Hernon & Calvert (2005)

subsumed this dimension into their sixth factor together with indicators of customer

service and service interaction. The third dimension is customer support or customer

service. This dimension includes dealing with customer complaints (Santos, 2003) in

terms of promptness; sincerity (Wolfinbarger & Gilly, 2003); providing FAQs (Hernon

& Calvert, 2005; Santos, 2003); technical help (Hernon & Calvert, 2005; Santos, 2003).

Ho (2007) defined it slightly differently, using indicators such as ease of tracking

delivery and understanding specific user needs to offer personalized features. These

could actually belong to the personalization dimension. Basically it is ‘interaction’ between

service providers and customers of the service.

Some researchers have used the term interactivity to include indicators of

communication with the organization, feedback (Yang et al., 2004), good reputation and

enjoyment or entertainment incentives in using the e-service (Loiacono et al., 2002; Tate,

et al., 2007; Dabholkar, 1996; Santos, 2003; Madu & Madu, 2002; Gounaris et al.,

2005).

Overall it can be said that the basic relationship between service providers and

customers can be subsumed as one dimension in certain service contexts, or form separate

dimensions in other contexts. So far there has been no clear conceptualization of

this construct as much overlap exists between the operationalization of interaction,

communication, support and personalization.

The next two closely related dimensions in the literature are Information and

Content/collection. In an electronic service, the web site acts as the intermediary

between the service and the customer. The customer accesses the web site to obtain

information and/or make a transaction. The type of transaction that occurs depends on


the type of service being accessed. So there are basically two issues here. The first is the

quality of the information and the second is the quality of the content. Some researchers

(Yang, et al., 2004) refer to Information as availability of accurate, updated and concise

information (Fassnacht & Koese, 2006); complete product description; and

reliable professional opinion (Fassnacht & Koese, 2006). Others who have examined

web sites refer to it as availability of information (Hernon & Calvert, 2005; Fassnacht

& Koese, 2006), accuracy and clarity of information (Santos, 2003), or appropriate level

of detail and format of collections (Tate, et al., 2007).

Assurance is a dimension that concerns a sense of confidentiality, reliability and

trustworthiness (Cristobal et al., 2007; Madu & Madu, 2002), which again could

overlap with security/trust. Shachaf and Oltmann (2007) defined it as approachability,

friendliness and professional courtesy. This dimension is less obvious because e-

services rarely include interaction with polite and friendly staff. However, it is different

for web-based library services, especially reference service. Hernon and Calvert’s (2005)

study also did not reveal this dimension in their examination of library e-services.

Two dimensions that emerge from Fassnacht and Koese’s (2006) study are

functional benefit and emotional benefit. Both dimensions relate to the outcome of the

service interaction. Functional benefit is described as the extent to which the service

serves its actual purpose, whereas emotional benefit refers to the degree to which using

the service arouses positive feelings.

In summary, it can be said that service quality dimensions in the networked

environment are dynamic and they depend on the type of electronic service being

studied. Even then, the same type of service may have different dimensions due to

different interpretations by researchers. Thus, each servicescape should be studied

individually, while trying to fit it within the existing body of knowledge.


2.5 Conceptual Models of Service Quality Incorporating Attitudinal-Behavioral

Constructs

The measurement model of service quality addresses only the dimensions that form

objective measures of the service quality construct. In a broader sense, service quality

is an important construct because it influences consumer behavioral intentions (Cronin,

et al., 2000). Consumer behavior and consumer satisfaction can lead to favorable

service encounters. An examination of the relationships between service quality,

satisfaction and behavioral intentions can lead to a better understanding of the phenomena

and assist libraries in the development of customer retention strategies. The following

section reviews the literature to examine the relationship between service quality,

customer satisfaction and behavioral intentions.

2.5.1 Perceived Service Quality

The term perception is borrowed from cognitive psychology, where it is defined as ‘the

means by which information acquired from the environment via the sense organs is

transformed into experience of objects, events, sounds, tastes, etc.’ (Roth, 1986 –

Dictionary of Cognitive Psychology). In marketing, the term perception has been used

to describe consumers’ opinions, beliefs or judgemental thoughts of products or

services. Service quality is said to be more subjective than product quality (Berry et al.,

1989); therefore, Parasuraman et al. (1988) defined the subjectivity of service quality as

perceived quality, meaning ‘the consumer’s judgement about a product’s overall

excellence or superiority’. Perceived quality is:

a. different from objective or actual quality

b. a higher level abstraction rather than a specific attribute of a product

c. a global assessment that in some cases resembles attitude

d. a judgement usually made within a consumer’s evoked set


According to several researchers, quality must be addressed from someone’s viewpoint;

it cannot be attained objectively (Curry & Faulds, 1986). This is similar to the user-

based approach defined by Garvin (1984), which rests on the premise that ‘quality

lies in the eyes of the beholder’. Perceived service quality derives from the individual

service encounter between the customer and the service provider, during which the

customer evaluates quality and develops a judgement. Each service experience is made

up of a series of individual discrete service encounters during which the customer will

make these evaluations (Bitner, 1990). Thus, service quality measures are actually

measures of ‘perceived service quality’.

2.5.2 Customer Satisfaction

In Oliver’s chapter on Conceptual Model of Service Quality and Service

Satisfaction (1993, as cited in Hernon, 2002), he explains that the word satisfaction is

derived from the Latin words satis (enough) and facere (to do or make). The word alone

implies a degree of fulfillment, meaning that it can be interpreted as the customer’s feeling

that his or her needs have been fulfilled (Hernon, 2002). Generally user surveys use the term

satisfaction to find out customers’ experiences in using the library. They either use

several different questions to gauge their experience with varying collections, services,

facilities, etc. or just a single question to gauge the general satisfaction level with the

library.

Parasuraman et al. (1985) conceptualized perceived service quality as the

difference between service expectations and perceived performance; however,

Lancaster (1993) equated user satisfaction with the difference between service

expectations and perceived performance. In LIS there was no clear distinction between

the two constructs until research began to focus on service quality measures as a more

serious evaluation of library services.


The relationship between service quality and customer satisfaction is twofold.

Multiple researchers have posited that satisfaction is an antecedent to

service quality, which then directly affects the buyer’s behavioral intentions (Oliver, 1981;

Bitner, 1990; Bolton & Drew, 1991; Mohr & Bitner, 1995). Another set of researchers,

on the other hand, have found that service quality is an antecedent of satisfaction

(Cronin & Taylor, 1992; Parasuraman et al., 1988, 1994; Anderson & Sullivan, 1993;

Rust & Oliver, 1994; Anderson, Fornell, and Lehmann, 1994; Teas, 1994; Caruana,

2002; Hernon, 2002; Zeithaml, et al., 2006; Wilkins, et al., 2007).

This debate may be explained by reflecting upon Dabholkar’s (1995) claim that

the relationship is situation-specific: it depends on the context of the service encounter

because the nature of the customers’ cognitive orientation and emotions may determine

the overall perception (service quality) and affective reaction (satisfaction) to the

service encounter.

Hernon and Whitman (2001) viewed service quality as dealing with users’

expectations of the service and satisfaction as an emotional reaction to the cumulative

experiences a customer has with the service provider. In an attempt to further

differentiate the two concepts, Hernon & Nitecki (2001) stress that service quality and

satisfaction are not synonymous concepts. According to them, service quality

judgement is cognitive, whereas satisfaction may focus on affective or emotional

(Hernon, 2002) reactions to a specific transaction or a cumulative judgement based on

collective encounters (overall satisfaction). Although the two concepts have certain

things in common, satisfaction is generally viewed as a broader concept, whereas

service quality focuses specifically on dimensions of service. Table 2.11 lists the

conceptualization of customer satisfaction.


Table 2.11 : Conceptualization of Customer Satisfaction

Author(s) | Customer Satisfaction

Bitner & Hubbert, 1994 | Service encounter satisfaction: satisfaction/dissatisfaction with a particular service encounter. Overall service satisfaction: satisfaction/dissatisfaction based on organizational or multiple encounters

Hernon & Whitman, 2001 | A sense of contentment that arises from an actual experience in relation to an expected experience

Cronin et al., 2000 | Satisfaction with a service is both an evaluative and an emotion-based response to a service encounter

Hernon, 2002 | An affective or emotional reaction to a service encounter or to a series of encounters

Operationalization of the Customer Satisfaction construct

In terms of the measurement of customer satisfaction, it is agreed that

customers’ predictive expectations influence the satisfaction level. An examination of

some of the questions used to measure satisfaction in library research reveals that many

are based on fulfillment of expectations, comparison with an ideal library, and/or

feelings about the experience.

Table 2.12 : Operationalization of the Customer Satisfaction Construct

Seay, Seaman and Cohen, 1996 (Library):
- Overall how satisfied are you with today’s library visit?

Cook, 2000 (Library):
- In general I am satisfied with the way I am treated at the library
- In general I am satisfied with the library support for my teaching, research and learning needs

Cronin, Brady and Hult, 2000 (Retailing):
- My choice of purchase was a right one
- I think I did the right thing when I purchased this service
- This is exactly what is needed for this service

Landrum and Prybutok, 2004 (Library):
- Overall are you satisfied with the library?

Martensen and Gronholdt, 2003 (Library):
- Considering all your experience of [library name], how satisfied are you in general?
- To what degree do you consider the library fulfils your expectations?
- Imagine a library which is perfect in all aspects. How close to this ideal library do you consider this library to be?

Landrum, Prybutok and Zhang, 2007 (Research library):
- The service at this facility provides value
- The service is effective
- The service is efficient
- I am satisfied…

Dabholkar et al., 2000; Taylor and Baker, 1994 (Health care, recreation, airlines, long distance telephone):
- I believe I would be satisfied
- Overall in purchasing this service, I believe I would be pleased
- Satisfying experience
- My feelings towards this service can best be characterized as …

Ho, 2007 (Tourism):
- Made the right choice
- Will use again
- Truly enjoyed it
- Choice was wise
- Satisfied with most recent experience
- Happy with this

Ladhari, 2009 (Hotel industry):
- Emotional satisfaction: happiness/pleasant/joyful

This study will attempt to investigate if service quality is an antecedent to

customer satisfaction. It shall work upon the definition of Oliver (1997):

‘Satisfaction is the consumer’s fulfillment response. It is a judgement that a

product or service feature, or the product or service itself, provides a pleasurable level

of consumption-related fulfillment.’

The present study postulates that the following items shall measure the customer

satisfaction with web-based library services, encompassing both emotional and

cognitive components:

i. Using the web-based library services has been a good experience

ii. Web-based library services adequately meet my information needs

iii. Web-based library services are efficient


2.5.3 Service Value

Zeithaml (1988) suggested that ‘perceived value is the customer’s overall

assessment of the utility of a product based on perceptions of what is received and what

is given’. Operationalization of this construct is closely related to the usefulness of the

service. In retail literature, value is dependent on monetary costs, basically what the

customer has to ‘sacrifice’ when utilizing the service. However, in information services,

including academic library services, where there is no direct cost incurred, the

indicators relate more to the ‘usefulness’ of the service. Table 2.13 lists some of

the items used by researchers to operationalize this construct.

Table 2.13 : Operationalization of the Service Value Construct

Cronin, Brady and Hult, 2000 (Retail):
- Overall the value is…
- Overall ability of this facility to satisfy my wants and needs is…

Parasuraman et al., 2005 (Web site quality):
- Prices
- Overall convenience of using
- Feeling of being in control
- Overall value for money and effort

Martensen & Gronholdt, 2003 (Academic library):
- I use the library to keep up to date
- The library’s services satisfy my needs for knowledge, learning and development
- The library plays a crucial role for me

Landrum, Prybutok & Zhang, 2007 (Research library; used the term ‘usefulness’):
- Accomplish my tasks faster
- Improves ability to do research
- Enhance effectiveness
- Be more productive
- Easier to do research
- Is useful

Since this study focuses on library services for the research community, similar to

Martensen & Gronholdt (2003) and Landrum et al. (2007), the present study postulates

that the following items shall measure library service value:

i. Web-based library services give me a feeling of being in control

ii. Web-based library services improve my ability to do research

iii. Web-based library services enable me to be more productive


2.5.4 Customer Loyalty

Customer loyalty is the ultimate goal of any service organization. Loyalty is

translated into certain behavioral intentions of customers, such as repeated use (Oliver,

1997; Cronin et al., 2000), expressing a preference for the service and recommending it to

others (Zeithaml et al., 1996; Cronin et al., 2000). Though in profit organizations

loyalty is important for increased revenues and is measured in terms of profit, in non-

profit organizations, increased return rate and increase in number of users may be used

to justify budget and accountability to the parent organization. In an academic

institution of higher learning, it is an indication of ‘increased use of scholarly

information by researchers and moneys spent is justified’. So it is important to include

loyalty in the conceptual model of service quality. Hernon & Altman (2010) suggest

that serving loyal customers is important because it ensures repeat use and greater

use of the library and its services.

The importance of loyalty has also caused organizations to build long-term

relationships with customers (Schneider & White, 2004), known as relationship

marketing, a concept gaining merit in LIS research. Table 2.14 lists how some studies

have operationalized the customer loyalty construct.

Table 2.14 : Operationalization of the Customer Loyalty Construct

Martensen and Gronholdt, 2003 (Library):
- Will you be using more of the library’s services in the future?
- Is it important for you to be able to use the library in the future too?
- Would you recommend the library to other users?

Parasuraman et al., 2005 (Web site quality):
- Say positive things to others
- Recommend to others
- Encourage others to use
- Consider as first choice
- Use more in future

Ho, 2007 (e-travel):
- Encourage others to use
- Say positive things
- Use more in future
- Recommend to others
- Consider as a first choice


Thus, the present study postulates that the following items shall measure customer

loyalty to web-based library services:

i. I will be using more of the web-based library services in the future

ii. I would recommend the web-based library services to others

iii. I will say positive things about the web-based library services to others.
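As an aside on how three-item scales such as those proposed above are commonly screened during scale development, internal consistency is often checked with Cronbach's alpha. The sketch below computes alpha on simulated Likert-type responses; the data, sample size and seed are entirely hypothetical and serve only to illustrate the computation, not the study's actual results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses to a three-item scale: a common
# latent score plus item-specific noise, rounded and clipped to 1..5.
rng = np.random.default_rng(42)
latent = rng.normal(4, 0.6, size=(200, 1))
responses = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(200, 3))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency for a new scale.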

2.6 Research Hypotheses

In academic institutions, the phenomenon of students and researchers preferring

to use other Internet service providers as opposed to library resources is becoming more

prevalent (Griffiths & Brophy, 2005; Ross & Sennyey, 2008). Academic librarians need

to understand the relationship between service quality and customer loyalty to better

manage their services. Dabholkar, et al. (2000) found that the literature reports

contradictory findings relating to the causal relationship between service quality,

satisfaction and loyalty. They recommended that more research is needed to investigate

the possible mediating role of customer satisfaction in the relationship between service

quality and behavioral intentions.

Cronin, et al. (2000) claim to be the first to simultaneously compare the relative

influence of the three constructs (satisfaction, value and quality) on the service

encounter outcomes or behavioral intentions. They operationalized behavioral

intentions as consumers’ intentions to use the service again and to recommend it to

others. If one examines Zeithaml et al.’s (1996) dimensions for behavioral

intentions, then these items are characteristics of the loyalty dimension. Their

findings supported their proposed model, in which all three constructs have a direct effect on

customer behavioral intentions (saying positive things, recommending to others, remaining

loyal). However, it must be noted that their measure of service quality was based on 10

items, of which eight were on employee characteristics and the ability to provide reliable,

dependable and consistent service, while one was on a risk-free environment and the

other on appealing physical facilities and employees.

Figure 2.6 graphically illustrates the relationship between customer

satisfaction and loyalty (Zeithaml, et al., 2006, p. 106).

Figure 2.6 : Relationship between Customer Satisfaction and Loyalty

In the context of library service, Martensen & Gronholdt (2003) examined the

effects of six dimensions of users’ perceived quality on user value, satisfaction and

loyalty. They found value to have a direct positive effect on satisfaction and loyalty.

User satisfaction too had a direct positive effect on loyalty, whereas the indirect effect

of value on loyalty via satisfaction was smaller than its direct effect.

The drive for research investigating the relationship between these constructs

has been to develop an improved understanding of not only how they relate to each

other but also how they subsequently influence behavior (Cronin, et al., 2000) in terms of

loyalty towards the service. Table 2.15 lists some of the research models relating to the

relationship between these constructs as concluded by various researchers.


Table 2.15 : Studies on the Relationship Between Service Quality, Satisfaction, Value and Loyalty

Service Quality directly influences Customer Satisfaction:
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003
- Dabholkar & Overby, 2004
- Prybutok & Landrum, 2004
- Parasuraman et al., 2005
- Zhang & Prybutok, 2005
- Birgelen, Ghijsen, and Janjaap, 2005
- Landrum et al., 2007
- Collier & Bienstock, 2006
- Lin, 2006; Ho, 2007
- Ladhari, 2009

Service Quality directly influences Service Value:
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003
- Prybutok & Landrum, 2004
- Parasuraman et al., 2005
- Landrum et al., 2007
- Lin, 2006

Service Quality directly influences Customer Loyalty:
- Cronin & Taylor, 1992
- Wolfinbarger & Gilly, 2003
- Parasuraman et al., 2005
- Zhang & Prybutok, 2005
- Collier & Bienstock, 2006
- Ho, 2007

Customer Satisfaction directly influences Customer Loyalty:
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003
- Birgelen, Ghijsen, and Janjaap, 2005
- Zhang & Prybutok, 2005
- Ho, 2007
- Collier & Bienstock, 2006

Service Quality influences Loyalty through Customer Satisfaction:
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003
- Zhang & Prybutok, 2005
- Birgelen, Ghijsen, and Janjaap, 2005
- Collier & Bienstock, 2006
- Ho, 2007
- Heinrichs et al., 2007

Service Value directly influences Customer Satisfaction:
- Cronin & Taylor, 1992
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003
- Lin, 2006

Service Value directly influences Customer Loyalty:
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003

Service Quality influences Loyalty through Service Value:
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003

Service Value influences Loyalty through Customer Satisfaction:
- Cronin, Brady & Hult, 2000
- Martensen & Gronholdt, 2003
- Lin, 2006


This research relies on Bagozzi’s (1992) theoretical justification that initial

service evaluation leads to emotional reaction that in turn drives behavior, meaning that

service quality and value appraisals precede satisfaction (Cronin, et al., 2000). Not

many studies in LIS have examined these relationships. There is a need to add to the

understanding of the interrelationships between these constructs, especially since the

literature has still not reached a consensus on the nature of these relationships.

2.6.1 Generating Hypotheses

Based on Table 2.15, several hypotheses are generated to explore the

relationship between these four constructs.

There is a substantial body of evidence about the direct and significant effects of

perceived service quality on customer satisfaction in various industries, including e-

commerce, e-travel, e-retailing and catering, among others. Many have found empirical

support for service quality to have a positive effect on customer satisfaction (Cronin, et

al., 2000; Martensen & Gronholdt, 2003; Dabholkar & Overby, 2004; Prybutok &

Landrum, 2004; Parasuraman et al., 2005; Zhang & Prybutok, 2005; Birgelen, Ghijsen,

& Janjaap, 2005; Collier & Bienstock, 2006; Lin, 2006; Landrum et al., 2007; Ho, 2007;

Heinrichs et al., 2007; Ladhari, 2009). Similarly, it is expected that web-based library

service quality will positively affect customer satisfaction:

Hypothesis 1A : Web-based library service quality is positively related to customer

satisfaction


Perceived service quality, as defined by Zeithaml (1988), is actually the

assessment of the overall excellence of the service. An excellent service is expected to

be a service that is useful to the customer in fulfilling the customers’ needs. Within the

business literature, value is related to cost or price of utilizing the service, however in

academic libraries, there is no direct cost incurred by the user. Value perception would

be based strongly on the utility of the service. Studies have shown that there is a direct

impact of service quality on service value (Cronin, et al., 2000; Martensen &

Gronholdt, 2003; Landrum & Prybutok, 2004; Parasuraman et al., 2005; Landrum et al.,

2007; Lin & Hsieh, 2006). Thus, it is postulated that service quality positively affects

service value.

Hypothesis 1B: Web-based library service quality is positively related to service

value

Relationships between service quality and customer outcomes such as

loyalty are important for retaining customers for increased profit impact. In a library,

increased use of library resources is an indicator for justifying the library budget to the

parent institution. There is a strong need to study the relationship between service

quality and loyalty because studies have shown that there is a positive impact of service

quality on customer loyalty (Cronin & Taylor, 1992; Wolfinbarger & Gilly, 2003;

Parasuraman et al., 2005; Zhang & Prybutok, 2005; Collier & Bienstock, 2006; Ho, 2007).

This study examines if there is a direct relationship between service quality and customer

loyalty.

Hypothesis 1C: Web-based library service quality is positively related to customer loyalty

A few studies have indicated a positive relationship between satisfaction and

intention to revisit. Dabholkar, et al. (2000) found that customer satisfaction strongly

mediates the effect of service quality on behavioral intentions (loyalty). A satisfied


customer is more likely to stay with the organization. However, they recommended that

more research is needed to investigate the possible mediating role of customer

satisfaction in the relationship between service quality and behavioral intentions. The

following two hypotheses will address this issue:

Hypothesis 2A: Customer satisfaction is positively related to customer loyalty

Hypothesis 2B: Customer satisfaction has a mediating effect on the relationship

between service quality and customer loyalty
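To make the mediation logic of Hypothesis 2B concrete, one common analytical approach is to estimate the indirect effect (the product of the quality-to-satisfaction and satisfaction-to-loyalty path coefficients) and bootstrap its confidence interval. The sketch below illustrates this on simulated data; all variable names, coefficients and sample sizes are hypothetical, and this is an illustration of the technique rather than the study's actual analysis.

```python
import numpy as np

# Simulated, purely illustrative data: standardized construct scores with
# hypothetical path coefficients (quality -> satisfaction -> loyalty).
rng = np.random.default_rng(0)
n = 500
quality = rng.normal(size=n)
satisfaction = 0.6 * quality + rng.normal(scale=0.8, size=n)
loyalty = 0.3 * quality + 0.5 * satisfaction + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Least-squares coefficients [intercept, b1, b2, ...] of y on xs."""
    X = np.column_stack((np.ones(len(y)),) + xs)
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(q, s, l):
    a = ols(s, q)[1]      # path a: quality -> satisfaction
    b = ols(l, q, s)[2]   # path b: satisfaction -> loyalty, controlling for quality
    return a * b

# Percentile bootstrap of the indirect effect (a * b).
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(quality[idx], satisfaction[idx], loyalty[idx]))
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo_ci:.2f}, {hi_ci:.2f}]")
# Mediation would be supported if this interval excludes zero.
```

A confidence interval that excludes zero is the usual evidence that satisfaction carries part of the effect of service quality on loyalty.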

Based on previous research, service value is suggested as a measure of the

customer’s overall assessment of utility (Zeithaml, 1988). When utility for a researcher

is analogous to the ability to increase his or her research productivity by having his or her

information needs adequately met, then it can be hypothesized that service value

influences satisfaction. Thus it is proposed:

Hypothesis 3A: Service value is positively related to customer satisfaction

Furthermore, it has also been shown that a positive perception of service value

encourages customers not only to use the service repeatedly but also to recommend it to

others (Cronin, et al., 2000; Martensen & Gronholdt, 2003). This is expressed in the

following hypothesis:

Hypothesis 3B: Service value is positively related to customer loyalty

Hypothesis 3C: Service value has a mediating effect on the relationship between

service quality and customer loyalty.

The hypotheses are summarized in the conceptual framework shown in Figure 2.7. The

model also shows the various studies that support the direction of the relationships. The

study examines the direct and indirect relationships between the constructs, as depicted

in the proposed structural model.
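For readers familiar with structural equation modeling software, the hypothesized paths of the proposed model could be written in lavaan/semopy-style regression syntax roughly as follows. The construct names are placeholders for the measured constructs, and this is an illustrative sketch rather than the study's actual estimation code.

```python
# Hypothesized structural paths of the proposed model, expressed in
# lavaan/semopy-style regression syntax (construct names are placeholders).
MODEL_DESC = """
value ~ quality                            # H1B: service quality -> service value
satisfaction ~ quality + value             # H1A and H3A
loyalty ~ quality + satisfaction + value   # H1C, H2A and H3B
"""
# The mediated effects (H2B, H3C) correspond to products of these paths,
# e.g. (quality -> satisfaction) x (satisfaction -> loyalty).
print(MODEL_DESC.strip())
```

Passing such a description to an SEM package would estimate all direct paths simultaneously, with the indirect effects derived from the path products.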


Figure 2.7 : The Proposed Structural Model

[Path diagram linking Web-based Library SQ, Service Value, Customer Satisfaction and

Customer Loyalty; each path is annotated with the studies supporting that relationship.]


2.7 Summary

Based on the review of the service quality literature in the marketing and LIS fields, the researcher has made the following decisions on the approach of this study:

i. Perception-only measure

Oliver’s (1980) disconfirmation paradigm was adequate for the measurement of satisfaction, which gauges how well the service level delivered matched customer expectations. Parasuraman et al. (1985) adopted this notion, defining perceived service quality as ‘the comparison between customer expectations and perceptions of service’. But Cronin and Taylor (1992, 1994) argue that this does not hold for service quality, since service quality is conceptualized as an attitude and is judged in terms of excellence and superiority (Zeithaml et al., 1985). Furthermore, Buttle (1996) argues that customer expectations are a consequence of previous contact with the service; the ‘gap’, or difference between expectations and perceptions, therefore depends on experience, which in the web environment may be low because of a lack of interactions (Cristobal et al., 2007). The debate in the literature over the inclusion of expectations in the measurement of service quality has resulted in general agreement that performance-only measures are superior (Cronin and Taylor, 1994; Parasuraman et al., 1994; Teas, 1994; Landrum & Prybutok, 2003; Wilkins, 2007), so this study shall use the performance-only measure. This also responds to Hernon’s (2002) recommendation for further review of the performance-only method.

ii. Based on the summary of e-service dimensionality generated from various studies, it is concluded that there are some common dimensions and some industry-specific dimensions. Calvert (2001) claims that there is sufficient evidence that the concept of service quality may vary between countries, but that countries share common core beliefs that do not change. Table 2.10 will assist the researcher in formulating themes during the analysis of qualitative data from focus group interviews.

iii. A service quality model adopted from the commercial sector is not directly applicable in the not-for-profit library service environment in higher education (Quinn, 1997). Thus the researcher shall develop a scale from grounded data, following Churchill’s (1979) scale development methodology.

iv. Based on the research on the relationships between customer satisfaction, service quality, value and behavioral intentions, the researcher proposes the following model for investigation in this study (Figure 2.7). This study would be the first to examine the relationships between all four constructs simultaneously in an academic library’s web-based service environment.

The next chapter will present the methodology and research design adopted to answer the research questions and fulfill the aims of this study.