Submitted in part fulfilment of the requirements for the degree of Master of Business Administration
An empirical study of the user acceptance of fee-based online content
by
Manas Datta
School of Management, University of Surrey
January 2010
Contents

Abstract
Declaration of Originality
Ethical Issues in Research
Chapter 1: Introduction
  1.1 About this study
  1.2 Defining a fee-based online content service
  1.3 A difference in context
  1.4 Building a base for the study
  1.5 Objectives of the study
Chapter 2: Literature Review
  2.1 Objectives of the literature review
  2.2 The Theory of Reasoned Action/Theory of Planned Behaviour
    2.2.1 Attitude towards the behaviour
    2.2.2 Social factors or subjective norms
    2.2.3 Perceived behavioural control
  2.3 The Technology Acceptance Model
    2.3.1 Perceived usefulness
    2.3.2 Ease of Use
  2.4 Alternatives
  2.5 Satisfaction
  2.6 Building our hypotheses
  2.7 Entertainment vs Information Content
  2.8 Proposed Research Framework
Chapter 3: Research Methodology
  3.1 Introduction
  3.2 Research design
  3.3 Sampling
  3.4 Method and data collection
  3.5 Questionnaire design
Chapter 4: Analysis of the data
  4.1 Data Descriptives
  4.2 Data Reliability and Validity
  4.3 Preliminary Analyses
  4.4 Test of hypotheses
Chapter 5: Conclusion
  5.1 Results
  5.2 Research Implications and contributions
  5.3 Research limitations
  5.4 Further research
  5.5 Management implications
References
Appendices
  Appendix 1: Constructs
    Table a: Questionnaire showing constructs and items
    Table b: Correlation between totals of constructs
    Figure a: Charts showing normality for each statistic
  Appendix 2: T-Test Tables
    Table a: Group Statistics (split by past purchase experience)
    Table b: Independent Samples Test (split by past purchase experience)
    Table c: Group Statistics (split by country of residence)
    Table d: Independent Samples Test (split by country of residence)
  Appendix 3: Multiple Regression Tables
    Table a: Correlations
    Table b: Residuals Statistics
  Appendix 4: ANOVA Tables (Comparison of four groups)
    Table a: ANOVA test output
    Table b: Multiple Comparisons table for ANOVA test
Abstract
This research project was carried out in order to study the factors that influence the
user-acceptance of fee-based online content. The study also looked at the factors that
affect the continued use of a service and aimed to see if there is any difference in user
attitude based on their past experience with paid online content.
The subject of paid online content is an increasingly relevant one today, not only
because of the way the newspaper and magazine publishing industry is affected, but
also because of the increasing number of ways that various kinds of content are being
delivered using the internet.
The results of our research project showed that the main factors affecting a user’s
intention to pay for online content are satisfaction with the content service, the
perceived consequences as a result of accessing an online content service, the user’s
normative beliefs or social factors and the value that the user places on the online
content service as compared to free alternatives.
Perceived ease of use, however, was not found to be a determinant of a user’s
intention to pay for content online.
Further, by splitting the sample into groups based on their past experience with online
paid content and then comparing results between them, we have seen that even though
the kind of content consumed does not affect their agreement on various constructs and
their intention to pay, their past experience in dealing with an online content service
does.
The research and management implications of our results, the relevance of the results to
providers of online content in today’s market place, research limitations and
recommendations for future research are also discussed in this paper.
Declaration of Originality
I declare that my work entitled "An empirical study of the factors affecting the user-
acceptance of fee-based online content" for the Masters in Business Administration
degree embodies the results of an original research programme and consists of an
ordered and critical exposition of existing knowledge in a well-defined field.
I have included explicit references to the citation of the work of others or to my own
work which is not part of the submission for this degree.
Manas Kumar Datta
December 24, 2009
Ethical Issues in Research
Some types of research need ethical approval. The form below is designed to allow you and your supervisor to establish very quickly whether your study will need ethical approval, and if so from whom. It will also allow you to discuss with your supervisor alternative approaches that do not require ethical approval. If you find that ethical approval is required, please follow the instructions on the reverse of this form. You are advised to do this as soon as you can. Approval may take as little as 2 weeks, but could take longer if issues arise.
ALL STUDENTS MUST APPEND THE COMPLETED FORM TO THEIR DISSERTATION
Name of student: MR MANAS KUMAR DATTA
Course: MBA
Supervisor: DR DANAIL IVANOV
Dissertation topic: AN EMPIRICAL STUDY OF THE FACTORS AFFECTING THE USER ACCEPTANCE OF FEE-BASED CONTENT ONLINE
Please answer Yes or No to the following questions. If you answer Yes to any question, ethical approval will be required for your study either from the Faculty of Management and Law Ethics’ Committee or the University Ethics’ Committee (UEC).
Does the study, or may the study, involve FML students?
YES Seek FML ethical approval
NO Anonymous participants over the WWW
Does, or may the study, involve UG students across the University of Surrey?
YES Seek approval from UEC
NO Business professionals
Does, or may, the study, involve FML staff as subjects?
YES Seek FML ethical approval
NO
Does, or may, the study involve staff across The University of Surrey?
YES Seek approval from UEC
NO
Does the study involve vulnerable groups (e.g. children)?
YES Seek FML ethical approval
NO
Will the respondents receive payment (including in kind or involvement in prize draws)?
YES Seek FML ethical approval
NO No incentives offered to subjects
Could questioning – in questionnaire or in interview – or other methods used, cause offence or be deemed as sensitive?
YES Seek FML ethical approval
NO Respondents have complete freedom to refuse to respond to any of the survey questions
Does the study involve invasive procedures (e.g. blood tests) or feeding trials?
YES Seek approval from UEC
NO Data collected through Web-based survey
Does your research study involve staff or patients from the NHS?
YES Seek approval from NHS Research Ethics’ Committee AND UEC
NO
Supervisor comments: I have submitted my comments within the table above.
Supervisor’s signature __________________________________ Date: 26th December, 2009
Chapter 1: Introduction
1.1 About this study
This research project is inspired by a paper recently published by Choi, Soriano and
Ribeiro (2009), itself based on a study carried out a couple of years earlier, and examines
the factors that influence the user-acceptance of fee-based online content. It also looks
at the factors that affect the continued use of such a service.
This research project first looks at Choi et al’s study in South Korea. In the current
project, we have carried out an extensive literature review of the theories relevant to this
subject, reviewing existing literature on similar studies and trying to build a picture of
the relationship between a user and an online content service when presented with this
service. The study touches on the areas of user acceptance of new technology and on
studies of user attitude and behaviour.
Using Choi et al’s study as a base, this study aims to critically view the literature
reviewed, propose arguments where necessary, to try and find potential gaps or
developments in the literature that have already been studied, and then apply the
hypotheses and assumptions to a different context – i.e. users of fee-based online
content in 2009.
The literature cited in the preceding study was reviewed first, and further research was
then carried out on the main themes identified as being of crucial importance to this
subject. Needless to say, the interpretation of all literature is context-sensitive, but no
difference was found that was significant enough to affect the objectives of our research.
Our hypotheses will be in line with those of the authors of the previous study.
Definitions of some terms have been clarified to suit the current context.
1.2 Defining a fee-based online content service
Parthasarathy and Bhattacherjee (1998) have defined an “online service” as one that
offers a combination of proprietary and open internet-based content (e.g.: news,
weather, sports), features (e.g.: software downloads, financial research data) and
services (e.g.: email, bulletin boards, web access).
By this definition, we can also include as “online content services” offerings such as
iTunes, the online music download service from Apple; online gaming communities such
as World of Warcraft, Sony’s Playstation Network or Microsoft’s Xbox Live, which
require a subscription; paid streaming video sites such as LoveFilm; and credit file
management services like Experian. Some of the above had already been identified in
Choi et al’s study.
Despite the variety of revenue models used to charge for an online content service, at a
fundamental level they remain either subscription models or pay-per-purchase models:
iTunes charges per download, while World of Warcraft charges per week or month, etc.
We will be using this definition when referring to an online content service or “service”
throughout this study. Considering this definition, and not restricting our observations to
paid magazine and newspaper subscription models, we have already seen a number of
success stories around fee-based content access: multi-player video gaming
communities and music, movie and software downloads.
1.3 A difference in context
The two studies take place almost three years apart, which by itself leads to some
differences in context. In this section, we take a brief look at internet connectivity and
access in the UK, where most of the respondents to our survey were based.
The last couple of years have seen a strong growth in broadband proliferation in the
country. As of December 2008, more than 95% of internet connections in UK homes
are high-speed broadband connections, up from about 69% in 2006. In terms of
connectivity, about 70% of UK homes are now connected to the internet (Office for
National Statistics, 2009).
Further, there have been other technological advances and developments in the last
couple of years which have changed the way people use the internet. The launch of
Apple’s iPhone in 2007 was one such development (BBC News, 2007); the concept of
mobile-phone based applications caught on almost instantly, and in 2008 Apple
launched the iPhone application or “app” store where users could purchase applications
for their iPhone (many of which require access to the internet through the cell phone
network service provider). There are currently over 1 million iPhones in use in the UK
(Holton, 2009). Estimates of mobile internet users in the UK are as high as 17.4 million
(Reuters, 2009).
Internet access over mobile phones falls into both the area of mobile phone activity and
that of internet access. In this study, however, we refer to it only as an example of how
internet access has grown and of the new ways in which people are accessing content
online. We are not examining mobile phone technology as an application delivery
platform, nor the role that mobile phone communication or technology plays in
electronic commerce; these are subjects that deserve separate studies by themselves.
The growth of Wi-Fi (wireless local area network) hotspots, which allow people to
connect to the internet wirelessly, has also played a role in changing the way people
interact with the internet and the take-up of online content services. Cafes, airports and
hotels in most major cities in the UK provide wireless internet services to their
customers, either as a paid service or as a complimentary one, depending on the
establishment.
While in 2007, just under 700,000 people accessed the internet through public wireless
hotspots, by 2009, that number had reached close to 2.5 million (Office for National
Statistics, 2009).
Though these changes in context do not change the foundation of the studies or the
nature of the enquiries that we are making, they do have a bearing on some of the
statements made in Choi et al’s paper, where it is claimed that online content services
have struggled as a revenue model (Choi et al, 2009).

With the exception of magazines and newspapers (essentially the periodical publishing
sector), which have truly struggled to make online paid content work, there are many
other examples of online content propositions working quite successfully, as we are
about to see.
In the newspaper/periodical sector, successes are limited, with the Financial Times and
the Wall Street Journal the two models of success most often cited as examples.
These are, however, but two successes against many more failures, such as the New
York Times, the LA Times and Slate magazine (Shafer, 2009).
Most of the content that is available in a print magazine can be obtained from the
accompanying website completely free of cost. Moreover, the website gives the user the
advantage of looking up related archived articles and of saving articles or images if they
so wish. This by itself makes the concept of paying for a magazine redundant for many
readers.
There used to be the argument that buying a magazine or newspaper gives you the
portability of being able to access your news anywhere, especially in places where there
is limited or no internet access.
The growth and advancement in wireless internet technology and mobile internet
infrastructure, combined with the rapid fall in prices of complementary hardware such
as internet-enabled mobile phones and mini-laptops or netbooks makes this argument
weaker by the day.
The proliferation of free content online (for periodicals) has also been exacerbated by
user-generated content – people who may or may not be professional journalists, but
who maintain websites or blogs disseminating opinions and news to the general
readership base for free. There are also online communities of professionals or interest
groups who share information and news among themselves.
These factors have all had an effect on the way magazines and newspapers make a
living.
The issues surrounding the periodical publishing industry need to be researched
separately and will not be part of the remit of this study. We have already defined what
we mean by a fee-based online content service and we will work according to this
definition for this study.
So, sticking with our definition of “online content services” as one group, we can see
that in spite of apparent advances in some areas, there is a distinct struggle in others.
Our study and attempt to identify the factors that influence the customers’ adoption of
fee-based online content services is still valid, and considering the growth of activity
online and the increasingly diverse ways in which users now access the internet, this
subject is more topical and relevant than ever before.
1.4 Building a base for the study
What we are studying here is what a user does when presented with a fee-based online
content service. We are trying to predict their behaviour, to understand what those
factors are that influence them to either pay or to not pay for this service.
Conceptually, we can see broadly the issues that are involved here. At a very
fundamental level, we’re trying to think of what is going on inside the user’s mind as
they are sitting in front of a computer screen evaluating whether they want to go ahead
and pay for and use this service.
They are evaluating the content and the way in which they have to go about accessing it
– they are making some judgements and building up a perception and an opinion about
their situation. They will respond based on their attitude towards this situation, and at
some point, they will have been influenced enough by all of their observations and
perceptions to make a decision and perform an action, or behave in a particular way.
Once they have used the service for the first time, their perceptions may or may not
change, based on what their experience was after the initial use. Based on their
experience, the user will decide whether or not they’d like to continue using that
service.
A review of literature and theories revolving around attitude and behaviour leads us in
the direction of Ajzen and Fishbein’s Theory of Reasoned Action or the TRA (Ajzen
and Fishbein, 1980) (and subsequently the Theory of Planned Behaviour or TPB, an
evolution of the TRA) which deals with the antecedents to an individual’s behaviour,
with respect to that individual’s attitudinal beliefs and the elements responsible for the
development of these attitudes. This theory is discussed in further detail in the following
chapters.
Continuing this study of attitude and beliefs then leads us on to the role that the actual
system and the content in question play in influencing the user’s decision to pay for the
online content service. Extensive research in similar fields has been carried out based
on Davis’ Technology Acceptance Model or TAM (Davis, 1989), which is in fact an
adaptation of Ajzen’s TRA (Davis et al, 1989). The TAM deals mainly with the factors
affecting the adoption of new technology. We will be discussing the relevance of the
TAM in a subsequent chapter. Though there have been a number of extensions of the
TAM to suit the model for various specific purposes, for this study, we will stick with
Davis’ original model since it is not a customisation of the TAM that we are attempting,
nor an analysis of the TAM itself.
Studying the TAM, we have moved into the Information Systems realm of research.
Continuing to search and review past studies using the TAM, we find that the vast
majority of studies revolve around adoption and user satisfaction and the role that these
play in influencing system usage (Bailey and Pearson, 1983; Melone, 1990; Ives et al,
1983). User satisfaction per se is not described in the TAM, though researchers have
pointed out the importance of this construct in influencing usage through system design
and implementation (Taylor and Todd, 1995; Venkatesh et al, 2003).
The user’s perceptions of the actual content and their attitude towards the content and
their thoughts on the after-effect of paying for this content are covered as well when we
look at the Theory of Reasoned Action and the TAM in more detail.
The above two theories form the bedrock of our literature review, and the basis for the
development of our constructs and hypotheses.
1.5 Objectives of the study
The objective of this study is to build a set of hypotheses about the factors influencing a
user’s behaviour when presented with a fee-based online content service from a study of
existing literature on the subject, guided by Choi et al’s research, and then carry out
suitable field research to either confirm or disconfirm our hypotheses.
We will be looking not only at first-time use, but will also be considering the factors
influencing continued use of a service.
Additionally, we will be comparing users of two different kinds of content –
information (or professional/academic) and entertainment (or leisure) – and we will see
if user behaviour differs at all based on the kind of content being consumed in our context.
Our research questions are:
1. What are the factors affecting the user-acceptance of fee-based online content?
2. Is there a difference in user behaviour based on the type of content?
Chapter 2: Literature Review
2.1 Objectives of the literature review
The objective of this literature review is twofold. Our first goal is to review the existing
literature from Choi et al’s study and to digest and critique what they have found.
Secondly, our objective is to identify any possible developments or improvements to the
existing research in order to improve the quality of the study or to suit it to the current
context.
We have already described above the objective of the study. The literature review has
been done to provide a justification for our assumptions and actions – why we have
chosen certain constructs and decided to build the hypotheses that we did.
We start off with a discussion of our initial findings. We then settle on the main theories
based on our initial survey of existing literature and research, and drill deeper from
there to help us get more clarity on the constructs to choose and the hypotheses to build.
At the end of the literature review, we will have provided a foundation to base our
construct and hypothesis development on, after which we present our hypotheses.
2.2 The Theory of Reasoned Action/Theory of Planned Behaviour
As we have already seen above, the research question revolves around studying user
behaviour, in our case the behaviour of the user when interacting with a fee-based
online content service. We have thus chosen user behaviour to be the initial concept to
help guide us in the right direction.
As already discussed, a review of research projects studying user behaviour points us in
the direction of the Theory of Reasoned Action and the Theory of Planned Behaviour.
These two theories form the bases for a large number of studies revolving around the
study of user behaviour, especially in the field of online activity (Khalifa and Ning
Shen, 2008; De Cannière et al, 2009; Lin et al, 2006; Pavlou and Fygenson, 2006).
Icek Ajzen and Martin Fishbein put forward the Theory of Reasoned Action in 1980
(Ajzen and Fishbein, 1980). This theory attempted to identify the factors that most
influenced a user’s behaviour, and is one of the most commonly used theories when
studying user attitude and behaviour. It assumed that a user’s intention to perform a
certain action is closely linked to the actual performance of the action itself, and posited
that intention is an immediate antecedent to the actual behaviour. The stronger the
user’s intention, the greater the likelihood that the behaviour will be performed (Ajzen
and Madden, 1986). Thus, if we can determine the factors influencing intention, we can
measure the likelihood that the action or behaviour will be performed.
The two constructs laid out by the TRA as being influencers of user intention were the
user’s attitude towards the act or the behaviour, and subjective norms or the social
pressure that the user felt to perform the given action (Ajzen and Madden, 1986).
In the case of this study, we’re dealing with the use of an online content service, which
now becomes our action. Working backwards, we want to find out what will influence
the user’s intention to use a specified online content service.
The TRA thus fits in quite well in our search for a framework for the measurement of
intention. We will now look at these constructs in further detail.
2.2.1 Attitude towards the behaviour
The user’s attitude towards a given behaviour is related to their beliefs about the
consequences of performing that behaviour and their evaluation of those consequences.
Similarly, what they think individuals close to them feel about the behaviour is related
to their perception of those individuals’ beliefs about the consequences of performing
the action and of their evaluation of those consequences (Ajzen and Fishbein, 1972).
The attitude of a user towards behaviour is influenced by behavioural beliefs. Each
behavioural belief links the behaviour to a certain outcome as a result of performing the
behaviour. The value of this outcome then contributes to the attitude toward the
behaviour (Ajzen and Madden, 1986).
The TRA posited that intention can be predicted by measuring the attitude of the user
towards the behaviour and the subjective norms or social pressures on the user to
perform the said behaviour. If we can measure these two parameters, we can obtain a
measure of the user’s intention to perform the behaviour, and thus of the likelihood that
the behaviour will be performed.
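The expectancy-value formulation behind these two determinants is often summarised algebraically. The sketch below is an illustrative form of the standard TRA equations, not a formula taken from Choi et al’s paper or from the present study; the weights w1 and w2 are empirical parameters, typically estimated by regression:

```latex
% Behavioural intention (BI) as a weighted sum of attitude and subjective norm
BI = w_1 A_B + w_2 SN
% Attitude: sum of belief strengths times outcome evaluations
A_B = \sum_i b_i e_i
% Subjective norm: sum of normative beliefs times motivation to comply
SN = \sum_j n_j m_j
```

Here b_i is the strength of the belief that the behaviour leads to outcome i, e_i the evaluation of that outcome, n_j the perceived expectation of referent j, and m_j the user’s motivation to comply with that referent.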
2.2.2 Social factors or subjective norms
The other determinant of intention as proposed by the Theory of Reasoned Action is
“subjective norms” (Ajzen and Fishbein, 1980) – this is described as the social pressure
to engage in a particular kind of behaviour.
Subjective norms are determined by normative beliefs, which are concerned with the
likelihood that the individuals or groups important to the user will approve or
disapprove of the behaviour. There may be more than one normative belief, and
collectively these normative beliefs determine the amount of pressure an individual
feels to perform or not perform the behaviour (Ajzen and Madden, 1986).
Contrary to what Ajzen prescribed, though, Mathieson’s research (Mathieson, 1991)
did not find a significant contribution of social factors to the behavioural intention of
the user. Venkatesh and Davis theorised that this was perhaps because subjective norms
have a direct effect only on mandatory usage, when the user has no choice in the use,
and not on voluntary usage, i.e. when the user has an open choice as to whether or not
to perform a certain behaviour (Venkatesh and Davis, 2000).
The notion that a user feels pressure to behave in a certain way because of other
individuals leads us to question the user’s motivation to choose a particular system.
Further research into this area leads us to a discussion on intrinsic and extrinsic
motivational factors.
Intrinsically motivated content is content whose importance is not necessarily driven by
an external factor such as work, decision-making, or even peers; it is enjoyable because
of what it is. Extrinsically motivated content, by contrast, is consumed because of some
extrinsic factor (e.g. the knowledge will help the user do their job better, improve their
image at work, or gain them social status) (Lopes and Galletta, 2006). The user may
thus be extrinsically or intrinsically motivated (Ryan and Deci, 2000).
In the section dealing with different types of content, we will discuss the classification
of content and present our views in more detail.
2.2.3 Perceived behavioural control
The TRA predicts behaviour by measuring intention based on the two constructs
described above. Though the theory has found support in a number of studies
(Ajzen et al, 1982; Smetana and Adler, 1980; Bentler and Speckart, 1979; Fredricks and
Dossett, 1983), one area in which it did not succeed in providing satisfactory
explanations relates to the theory’s boundary conditions, mainly to do with the
transition from verbal responses to actual behaviour (Ajzen, 1991).
One of the conditions for the TRA to predict behaviour is that the user must be in
complete volitional control of the behaviour, i.e. the user must be able to decide at will
whether or not to perform the behaviour in question. If the user did not have
complete control over the behaviour, then the TRA failed (Ajzen and Madden, 1986). A
user’s behavioural control could be limited by such factors as money, time, resources or
the co-operation of others. Ajzen addressed this issue of control by proposing the
Theory of Planned Behaviour, in which behaviour was predicted based not only on
intention but on behavioural control as well, i.e. the amount of control that a user has
over the behaviour.
It is, however, difficult to measure the actual behavioural control that a user has,
because of the practical difficulties in assessing skill and because of accidental,
unanticipated changes in circumstances that alter the degree of behavioural control the
user possesses. Based on studies revolving around self-efficacy theory, it was found
that people’s behaviour is strongly influenced by their confidence in their ability to
perform it (Bandura et al, 1977 and Bandura et al, 1980, in Ajzen, 1991).
Thus, according to the Theory of Planned Behaviour, perceived behavioural control
along with behavioural intention, can be used to directly predict behavioural
achievement (Ajzen, 1991).
However, perceived behavioural control may not be realistic when a person has little
information about the behaviour, when requirements or resources have changed, or
when unfamiliar elements have entered the situation (Ajzen, 1991).
Also, a strong effect of perceived behavioural control is expected only when the
behaviour in question is not under complete volitional control; when the behaviour is
fully volitional, the TPB reduces to the TRA (Ajzen and Madden, 1986). Further, when
there are no serious problems of control, intentions alone are sufficient to accurately
predict behaviour (Ajzen, 1991).
Perceived behavioural control is thus the third determinant of intention prescribed by
the Theory of Planned Behaviour (Ajzen, 1991), and it is the construct that
differentiates the TPB from the TRA (Ajzen and Madden, 1986).
In the case of our study of fee-based online content, in practical terms, the
respondents are already familiar with using the internet, are comfortable with email
and with answering online surveys (the sole data collection method), have access to
the internet and are already accessing content online, thus removing the skill and
resource barriers. Money, however, can be considered a factor limiting behavioural
control, since its availability can be an issue; it appears to be the only such
limiting factor. Moreover, considering that for this study "fee-based online content"
is restricted mainly to entertainment and information-type content, which is of
relatively little cost anyway, it can safely be assumed that for this audience the
small amount of money involved is not really a limiting factor.
In the absence of any serious threats to performing the behaviour, we can assume that
perceived behavioural control will not play an important role in predicting user
behaviour, and that intention alone will be sufficient.
In the event that fee-based online content includes high-priced items, the user's
volitional control does decrease: the user may not have the money, or, for B2B content,
may need a senior manager to approve expenditure for access to that content service. In
such a situation, perceived behavioural control would play a more important role in
predicting the user-acceptance of fee-based online content.
2.3 The Technology Acceptance Model
The Technology Acceptance Model or TAM has evolved into one of the most widely used
theories in the study of human-computer interaction. Fred D. Davis put the Technology
Acceptance Model forward in 1989 as an adaptation of the TRA, repurposed specifically
to predict computer usage; it was later extended by Venkatesh and Davis (2000).
The TAM proposes that application or system usage is predicted by ease of use and
perceived usefulness. It is based on findings that these two factors influence the attitude
of the user, which in turn is a determinant of the behaviour (i.e. usage) of the system.
The TAM has been extended and revised by many other researchers to suit their
respective research purposes, and the model itself has formed the basis of many other
studies.
2.3.1 Perceived usefulness
Similar to the “Attitudes” construct in the TRA, the perceived usefulness in the TAM
also deals with the attitude of the user towards the system, and focuses on the perceived
consequences or outcomes as a result of using the system.
The concept of perceived consequence is developed from perceived usefulness (Davis,
1989). In the case of paying for content online, the system being introduced is one in
which the user must register, or put payment details through, before gaining access to
the content: a barrier that prevents further access until that action is completed.
According to the TAM, the user will only go through with this if they perceive the
rewards of the action to be greater than the effort required to obtain them (Davis,
1989).
Other studies involving the TAM have shown that perceived usefulness is more
influential than ease of use in determining usage (Davis, 1993); this finding has been
shown to hold in other studies as well (Davis et al, 1989; Lin, 2006).
Studies on the customers of a Greek bank (Rigopoulos and Askounis, 2007) have shown
that the user-acceptance of electronic payment systems was directly and positively
affected by the perceived usefulness of the process and the perceived ease of use of the
system.
In the case of our study, we are trying to put a measure on what the user feels that they
are going to get out of the action of using the paid content service. We will be
measuring this by looking at what benefits or rewards the user perceives that they will
achieve as a result of adopting this content. These benefits may be an improvement in
the way they work, more money or enhanced status/credibility.
2.3.2 Ease of Use
Another determinant of the acceptance of technology, according to Davis' Technology
Acceptance Model, is perceived ease of use, described as the degree to which a person
believes that using a particular system will be free of effort (Davis, 1989).
In the case of fee-based online content, this includes the process of registering and
going through the necessary steps to access the content in question. For our study, this
implies that the entire process of paying for content online needs to be as simple as
possible in order to reduce barriers and instil a perception of ease of use in the user.
Numerous studies have been cited where the ease of use of a system positively and
directly affects the adoption of online services (Choi et al, 2009). Ease of use has
also been shown to have a significant influence on the attitude of the user (Davis et
al, 1989).
In the online environment, we can think of “ease of use” as being a combination of the
ease of understanding the proposition, the ease of navigation and the ease with which
key information about the proposition can be recognised (Rederer et al, 2000). Other
studies in e-commerce have found that ease of use affects participation: the ease with
which a transaction can be carried out has both a direct and an indirect influence on
electronic commerce usage, and ease of use has a direct effect on whether a user will
make an online purchase (McCloskey, 2003-4).
Conceptually, we can see how the interaction between the user and the interface of the
system will have an effect on the user’s intention to proceed with a transaction. If the
amount of effort required to understand and to proceed is deemed to be too high (i.e. the
system is perceived as being too difficult to use), the user may change their intention –
especially if there are other alternatives available or if the perceived usefulness of the
content is not that high.
We will be discussing how the presence of suitable alternatives affects the user’s
intention and behaviour in a subsequent chapter, based on what we have gathered from
the TRA and the TAM.
2.4 Alternatives
So what happens when there are alternatives to the fee-based content in question?
The TRA in its original form only considers a situation where a user is performing a
single possible action. A study by Ajzen and Fishbein found that a user’s intention to
perform a particular action could be more accurately predicted if we considered all of
the possible actions rather than focusing on just that one particular action. Further, in
support of the original TRA, their study also found that the original constructs – attitude
and normative beliefs – had a direct effect on intentions where there were two or more
alternatives. The study concluded that when predicting intention (and thus behaviour)
we must take into account the alternatives available to the user (Ajzen and Fishbein,
1969).
The presence of alternatives also becomes important if one is to study the continued use
of a system. Research has shown that the longer a user maintains the use of a system,
the lower the perceived attractiveness of the alternatives (Johnson and Rusbult, 1989).
That study deals with the psychology of human relationships, but it can be reasoned
that it still offers direction for studying the role alternatives play in other areas,
in this case the choice of a fee-based online content service versus an alternative.
2.5 Satisfaction
The perceived usefulness experienced once the user has used a system directly
influences their satisfaction with the system, and this satisfaction in turn influences
their likelihood of continuing to use it (Bhattacherjee, 2001). It is important to
dwell a little longer on the satisfaction levels of a user after they have used a
service because, in reality, a service can only be sustained commercially if the user
continues to use it.
Once we have determined the factors influencing first-time use, it is useful to study the
user’s behaviour when it comes to continued use.
The Expectation Confirmation Theory (Oliver, 1980) states that “post-usage ratings of
satisfaction appear to be a function of a linear combination of an adaptation level
component (expectations or prior attitude) and disconfirmation”.
The study essentially shows that satisfaction is based on the expectation held before
the use of the system, combined with the perceived disconfirmation of that expectation,
i.e. whether the user felt that the system exceeded their expectation or not. If it did
(positive disconfirmation), the user is said to be satisfied. Further, the study also
showed that the satisfaction measure directly impacted the attitude and the intention
of the user, which is more relevant to our study.
Building a bridge between the theories surrounding user satisfaction and the technology
acceptance model as Wixom and Todd have done (Wixom and Todd, 2005) allows us to
see the theoretical logic that links user satisfaction and technology acceptance. Their
study has shown that user satisfaction (information satisfaction and system satisfaction)
is a strong predictor of intention to use. Results from other studies have shown that
users’ intention to continue using a system is determined by their satisfaction with IS
use (Bhattacherjee, 2001).
In a study examining post-adoption behaviour, Parthasarathy and Bhattacherjee used
information technology adoption as a basis for their analysis, combined with other
theories such as the Diffusion of Innovation theory by Everett Rogers (Parthasarathy
and Bhattacherjee, 1998).
Studies have shown that over 60% of subscribers to online services discontinue their
services because of dissatisfaction with the service (Keaveney, 1995 in Parthasarathy
and Bhattacherjee, 1998). Post-adoption satisfaction is thus an important determinant of
continued usage.
This construct plays an important role in helping us take the study a step forward and
thus understand not only the factors determining the initial adoption, but also the
continued use or recurring use of fee-based online content.
A further study of attitudes when faced with alternatives leads us to The Investment
Model, which analyses the tendency of an individual to persist in a relationship (Rusbult
et al, 1998).
Going by this model, an individual's persistence in a relationship can be determined by
analysing three factors: the individual's satisfaction with that relationship, the
quality of the alternatives available, and the investment size, i.e. the magnitude or
importance of the resources attached to that relationship (Rusbult et al, 1998).
2.6 Building our hypotheses
Based on our detailed review of the TRA and the TAM, we are now able to pull together
the various constructs reviewed and focus on a few which will enable us to predict user
behaviour when a user is presented with a fee-based online content service.
As we have seen, Ajzen defines “intention” as “an indication of a person's readiness
to perform a given behaviour”, and considers intention to be a direct antecedent to
behaviour (Ajzen, 1991).
Based on our review of theories, we are going to measure the user’s “intention to use”,
and use this as an indicator of the user’s likelihood to use the given fee-based online
content service. The “intention to use” thus becomes the dependent variable for our
research exercise.
The TRA has been described as a general model (Davis et al, 1989), one which does not
specify the beliefs specific to a particular behaviour. For this, we need to identify the
beliefs relevant to the behaviour that we are studying, in this case the use of
fee-based online content.
Before the user actually uses the content, we are looking at the determinants of first-
time use. From the TRA and the TAM, we can see how the attitude of the user towards
the system, their normative beliefs and their perceived usefulness all influence their
perceived consequences.
Hypothesis HA1: The greater the value of the perceived consequences of using
the fee-based content, the more likely the customer’s intention to adopt it.
The other important construct that comes out as an important determinant is the ease of
use of the system which has a relatively lower direct significance on the user’s
perceived consequences, but which significantly affects the user’s perceived usefulness
of the system (Davis, 1989).
Hypothesis HA2: The greater the perceived ease of use of the fee-based online
content, the more likely the customer’s intention to adopt it.
The third important construct in predicting first-time usage behaviour that comes out
of our literature review is the subjective norm, or social factors, which we get from
the TRA.
Hypothesis HA3: The higher a user perceives social influence in using the fee-
based online content, the more likely their intention to use it.
The next construct that we can consider as a result of our findings is the availability of
comparable alternatives. We have already seen how alternatives influence a user’s
behaviour in an earlier chapter. The evaluation of alternatives by the user is not only
important to predict first-time usage, but also to predict continued usage of the system.
Hypothesis HA4: The greater the perceived value of using the fee-based online
content as compared to available alternatives, the more likely the customer’s
intention to use it.
The last construct we consider is user-satisfaction. Again, this factor is of significance
to determine continued usage as the satisfaction the user experiences after the first use
of the system can influence their decision to continue using the system.
Hypothesis HA5: The higher the level of satisfaction felt by the user after using
the fee-based online content, the more likely their intention to adopt it.
2.7 Entertainment vs Information Content
We have already defined a fee-based online content service.
In their research paper, Choi et al have divided online content services into two
categories – entertainment and information. Their study was intended to investigate the
difference among customer groups based on the type of online content (Choi et al,
2009). It is not clear from the research paper what the basis of classification of online
content services was. Choi et al’s research classifies “entertainment” content services as
games, movies and music, and “information” services as newspapers, magazines,
academic papers and professional databases.
It would appear though that users behave in a particular way towards an online content
service not because of the kind of content, but because of what motivates them to use
the content service.
The same piece of content can be used by different users for different reasons. An
aviation magazine can be extrinsically motivated content for a user who uses it as a
resource for work and depends on it for professional success, whereas it is intrinsically
motivated content for someone who is just interested in events in the aviation field.
The current study, in line with Choi et al’s study, has split online content services into
two categories: entertainment and information. This has been done with a view to see if
there are any differences in behaviour between those users who consume entertainment
content and those who consume information content.
Based on this discussion, we propose the following hypothesis:
Hypothesis HB: A user’s behaviour towards the online content service will differ,
based on the type of online content.
2.8 Proposed Research Framework
Based on our literature review and the hypotheses that we have posited, we propose the
following research framework.
Figure 2.1: Research Framework
Source: Adapted from Choi et al, 2009
Chapter 3: Research Methodology
3.1 Introduction
In Chapter 2, we have presented the build-up to our hypotheses, and explained why we
have chosen these specific constructs.
In this chapter, there will be explanations provided for the various steps taken in the
research project – including the choice of sampling methodology, methods of data
collection and the theoretical concepts behind the research actions. In order to choose
the methods for this project, due consideration was given to information gathered from
existing research already done in this field obtained from the literature review as seen
above, as well as technical and practical constraints such as limited time and manpower.
3.2 Research design
In this study, we have tried to find the determinants of the intention of a user to pay for
content online. Our literature review has led us to close in on a few indicators, based on
which we have built some hypotheses. Through our quantitative research, we have
attempted to see if there indeed is a link between these factors and the intention to pay
for content online. In order to establish the presence (or lack) of a relationship between
the dependent and independent variables, correlation tests were carried out to determine
the presence of a relationship, and then the strength of that relationship if present.
Further, in order to test whether the independent variables actually had a causal effect
on the dependent variable, a multiple regression test was carried out to identify the
strongest determinants from the ones chosen.
Once the influencing factors were determined, respondents were split into groups based
on their past experience with paid online content, and tests were carried out to identify
any statistically significant differences in the intention of these groups to pay for online
content.
3.3 Sampling
A sample has been defined as a subgroup or a subset of a population (Sekaran, 2003,
p.266). In the case of our study, we are looking at internet users, treating them as
current or potential users of online paid content.
Non-probability sampling was utilised for this research project because of the time and
resource constraints, and also because of the population in question. As far as this
study is concerned, the population extends to anyone using a computer, and it would be
beyond the scope of this project to ensure that every member of that population has a
known probability of being selected. As a result, individuals who are active on the
various boards and online services where the survey was promoted had a much higher
chance of being included in the survey than those who are not. This constitutes a case
of convenience sampling.
Recognising that it would be difficult to stop people from other countries taking the
survey, a control question was placed in the survey in order to enable us to separate out
UK and non-UK respondents. Further, using the geo-tracking mechanism in
SurveyGizmo (the online survey tool used in this research), only responses from the US
were selected among non-UK respondents and included for analysis as they constituted
the only other single country with a substantial number of respondents. US respondents
were included as part of our sample only after tests showing that there were no
statistically significant differences in the mean values of the constructs between UK and
US respondents.
All questions were compulsory, and at the end of two weeks there were 216 usable,
complete responses. Because the researcher works at a publishing company which is
currently grappling with the issues of online paid content, there might be some bias
from respondents at that company. To check this, an additional control question was
inserted into the survey to enable us to separate out individuals who work in
publishing if required. Further, again using the geo-tracking information provided by
SurveyGizmo, it was possible to separate out the respondents who worked at the
researcher's organisation based on their IP address
and other network identification information. However, on analysis, it was found that
very few individuals responded from the company, and after further tests on these data
comparing respondents working in publishing companies against those who do not, no
statistically significant difference was found either in the measure of intention or in
any of the other constructs. These responses were therefore retained in the sample.
3.4 Method and data collection
A review of the existing literature (Choi et al, 2009; Venkatesh and Davis, 2000;
Vasquez and Xu, 2009) showed that a quantitative study testing the various constructs is
the best way to proceed with gathering data to test our hypotheses. Further, a qualitative
study would have proven too time-consuming if we wanted to get a large enough
sample even remotely representative of the average internet user.
SPSS was used to carry out statistical tests on the data.
An internet-based survey was published and responses were collected for about two
weeks. Invitations to take part in the survey were sent out through email, Twitter, a
Facebook application, LinkedIn groups and forums, as well as a number of other online
forums (both specialized as well as general) and the researcher’s company intranet – in
an effort to get as wide a spread of respondents as possible.
3.5 Questionnaire design
Good practices for designing the questionnaire were taken into consideration from the
researcher’s own practical experience as well as from the literature reviewed. Closed
questions were used to help respondents make a choice of answer as soon as possible
and also to aid coding at the end of the data collection (Sekaran, 2003).
Because of the convenience sampling utilised, and our inability to control who would
take the survey – a number of control questions were asked in Section 1 of the
questionnaire in order to allow us to better understand the behaviour of different kinds
of respondents. The control questions in Section 1 asked users to answer Yes/No to
whether they worked or studied in the UK, whether they had ever paid for any kind of
content online and whether they worked in the publishing industry.
The online survey consisted of 20 questions split into two sections, with a further four
control questions asked only to those individuals who responded that they had paid for
online content before. The survey took no more than a few minutes to complete and
submit. Please see Table (a) in Appendix 1 for details of the constructs and items used.
The questions for the questionnaire were decided upon based again on the literature
reviewed, in particular Choi et al's study, along with other validated instruments
(Choi et al, 2009; Moore and Benbasat, 1991; Rusbult and Farrell, 1983; Taylor and
Todd, 1995; Venkatesh and Davis, 2000). A
couple of questions had to be reworded so that the questionnaire sounded better suited
to our respondent base i.e. respondents in the UK and the US, the majority of whom we
assume have English as their first language. These sources have already shown these
items to be valid.
All questions were discussed with peers and compared against existing literature
mentioned above to ensure that the original sense of the items was kept intact.
The data analysis was planned to measure the respondent's agreement with various
statements, such as the availability of free alternatives, or whether they were willing
to pay for something that improved the way they worked. For this purpose, a Likert
scale was deemed most suitable. Multiple items were agreed upon to test each construct,
each presented on a scale from 1 to 5 (1 = Strongly disagree, 5 = Strongly agree). The
respondent was asked to choose the level which represented their agreement with the
statement presented. The scores for each item in a construct were then summated to get
a total measure of agreement for that construct.
This method is in agreement with Choi et al’s study as well as other texts which have
guided us on this project such as Sekaran and Saunders et al (Sekaran, 2003, Bryman
and Bell, 2007, Saunders et al, 2007).
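The scoring procedure described above can be sketched as follows. This is a minimal illustration only: the item column names (con1 to con4) are hypothetical stand-ins, as the actual questionnaire items appear in Table (a) of Appendix 1.

```python
# Summating 1-5 Likert item scores into a construct total.
# Item names con1..con4 are hypothetical; see Appendix 1 for real items.
import pandas as pd

def score_construct(df: pd.DataFrame, items: list) -> pd.Series:
    """Return the summated Likert score for one construct."""
    return df[items].sum(axis=1)

# Two illustrative respondents answering the four "perceived consequences" items.
responses = pd.DataFrame({
    "con1": [4, 2], "con2": [5, 3], "con3": [4, 2], "con4": [3, 1],
})
responses["t_con"] = score_construct(responses, ["con1", "con2", "con3", "con4"])
```

Each summated column (here `t_con`) then serves as the construct measure used in the subsequent correlation and regression analyses.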
Chapter 4: Analysis of the data
We have seen in Chapter 3 how we went about organising our project and collecting the
data for our analysis. In this chapter, we will be taking a closer look at the kind of data
that we collected, and the statistical tests that we conducted in order to test our
hypotheses about the determinants of a customer’s intention to pay for an online content
service, and to test whether the kind of paid content consumed in the past affects the
user’s intention.
4.1 Descriptive Statistics
The table below gives us a quick snapshot of the data collected through our survey:
Table 4.1: Grouping of respondents based on control questions
Variable                                  Category   Frequency   Percentage
Do you work in publishing?                Yes        52          24.2
                                          No         163         75.8
Do you work/study in the UK?              Yes        124         57.7
                                          No         91          42.3
Have you ever paid for content online?    Yes        173         80.0
                                          No         43          20.0
Source: SPSS Output
4.2 Data Reliability and Validity
The Cronbach's alpha coefficient was used as an indicator of the reliability of the
scales, and the results obtained were as follows:
Table 4.2: Values of Cronbach’s α
Construct                                     Item code   Number of items   Cronbach's α
Perceived consequences                        Con         4                 .765
Perceived ease of use                         Eou         2                 .633
Social factors                                Soc         2                 .647
Satisfaction                                  Sat         3                 .636
Alternatives                                  Alt         3                 .642
Intention to use fee-based content services   Int         2                 .691
Source: SPSS Output
The values of Cronbach's α are all above .6: perceived consequences exceeds the
conventional .7 threshold (.765), and the remaining constructs come close to it. We
will accept these scales as reliable given this proximity to .7.
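Cronbach's α can be computed directly from the standard formula, α = k/(k−1) × (1 − Σ item variances / variance of totals). The sketch below is a minimal implementation for illustration, not the SPSS routine used in the study.

```python
# Minimal Cronbach's alpha from the standard formula.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summated totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

As a sanity check, perfectly correlated items yield α = 1, while uncorrelated items drive α towards zero.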
In this project, we are trying to find a correlation between the various independent
variables (perceived consequences, perceived ease of use, satisfaction, value placed
compared to alternatives and social factors) and the dependent variable (intention to
pay). These are our inferential statistics (Sekaran, 2003 p.314). Carrying out a
normality check on our data, we see from the normal Q-Q plots (Figure (a) in Appendix
1) that all of the constructs are reasonably, if not perfectly, normally distributed.
We will thus consider our data to be normal and will use parametric tests suitable for
normally distributed data for our statistical analyses.
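As a rough programmatic analogue of this visual Q-Q inspection (run here on simulated stand-in scores, not the survey data), scipy's `probplot` fits a line through the quantile pairs and returns its correlation r, which approaches 1 for normally distributed data.

```python
# Programmatic stand-in for the Q-Q plot check: probplot's fitted-line
# correlation r is close to 1 when the data are approximately normal.
# The scores below are simulated, not the survey construct totals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(loc=12, scale=2.5, size=216)   # 216 stand-in totals
(osm, osr), (slope, intercept, r) = stats.probplot(scores, dist="norm")
```

A value of r close to 1 supports the decision to proceed with parametric tests.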
4.3 Preliminary Analyses
First, in order to establish whether or not there is a correlation between the
intention to pay and the hypothesised determinants, a correlation test was carried out.
This gave us a measure of the relationship between the constructs and the dependent
variable, along with a direction, i.e. whether the relationship is positive or
negative. The table below gives the Pearson correlation coefficient, which indicates
the strength of the relationship between each independent variable and the total
intention. The Sig. (2-tailed) value tells us whether the observed correlation is
statistically significant.
Table 4.3: Pearson correlation coefficients with Total Intention

Construct                       Pearson Correlation   Sig. (2-tailed)
Total Perceived Consequences    .656**                .000
Total Perceived ease of use     .197**                .004
Total Social factors            .436**                .000
Total Alternatives              .592**                .000
Total Satisfaction              .702**                .000

**. Correlation is significant at the 0.01 level (2-tailed).
Source: SPSS Output
From the table above, we can see that there is a significant positive correlation of the
intention with all of the determinant constructs.
The factors which come out as having the strongest link to intention are the satisfaction
as a result of consuming a piece of content (r=.702), the perceived consequences
(r=.656) and the perceived value of paid content compared to free alternatives (r=.592).
There is a slightly weaker (though still good) correlation between social factors and
intention (r=.436). As for perceived ease of use, the correlation is weak (r=.197), even
though it is statistically significant. Table (b) in Appendix 1 shows the correlation
coefficients between each pair of constructs. However, this still does not tell us
whether the independent variables have a causal effect on the dependent variable.
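The style of test reported in Table 4.3 can be reproduced with scipy, which returns Pearson's r alongside its two-tailed p-value. The construct totals below are made up for illustration; they are not the survey data.

```python
# Pearson correlation with two-tailed significance, on invented
# satisfaction/intention totals (not the survey data).
from scipy import stats

t_sat = [14, 9, 12, 7, 15, 10, 13, 8]   # hypothetical satisfaction totals
t_int = [9, 6, 8, 5, 10, 6, 9, 5]       # hypothetical intention totals
r, p_two_tailed = stats.pearsonr(t_sat, t_int)
```

A large positive r with a small p-value mirrors the pattern in Table 4.3, where satisfaction correlates most strongly with intention.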
Next, we carried out an independent samples t-test (please see Appendix 2 for more
detail) to see if the mean intention differed between respondents who had previously
paid for online content and those who had never done so. There is a significant
difference in the measure of total intention between those who have paid for online
content before (Mean: 5.46, SD: 1.50) and those who have not (Mean: 6.34, SD: 1.57);
the Sig. (2-tailed) value of .001 indicates a statistically significant difference in
the total measure of intention between the two groups being studied. We will see in the
multiple regression test results a further breakdown of this statistic to get a clearer
idea of where exactly this difference lies.
An independent samples t-test was also carried out to gauge the difference in mean
intention between respondents from the UK and the US – however, from the value of
Sig (2-tailed), which is .833, we see that there is no statistically significant difference in
the mean intention between the two groups.
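The independent samples t-test used above can be sketched as follows. The intention scores are invented for illustration (with the never-paid group given the higher mean, echoing the direction reported above); they are not the survey responses.

```python
# Independent-samples t-test comparing mean intention between two groups.
# Scores are invented; the thesis reports group means of 5.46 vs 6.34.
from scipy import stats

paid_before = [5, 4, 6, 5, 7, 4, 6, 5]   # hypothetical intention totals
never_paid = [7, 6, 8, 7, 6, 8, 7, 9]
t_stat, p_two_tailed = stats.ttest_ind(paid_before, never_paid)
```

A p-value below .05 indicates a statistically significant difference in group means, as found for the paid/never-paid split (but not for the UK/US split).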
4.4 Test of hypotheses
We have already established that there are significant correlations between our
selected determinants and our dependent variable. In order to test our hypotheses, we
need to establish that these determinants actually play a role in influencing the
user's intention to pay for online content.
To do this, a multiple regression was carried out to see which determinants played a
role in influencing the user's intention, and to what extent.
Tolerance figures for all constructs are above .10 and VIF values are well below 10, and
we can be confident that multiple correlations with other constructs are very low and
that multicollinearity is not a problem (Pallant, 2007 p.155).
In the table below, the value of Adjusted R Square tells us how much of the variance in
the total intention is explained by our constructs. In percentage terms, this means that
our model can account for 61.6% of the variance in total intention.
Table 4.4: Results of Multiple Regression Test

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .791a   .626       .616                .98775

a. Predictors: (Constant), t_alt, t_eou, t_soc, t_con, t_sat
b. Dependent Variable: t_int

Source: SPSS Output
Source: SPSS Output
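The adjusted figure can be reproduced from the reported R², the number of cases and the number of predictors (n = 183 and p = 5, taken from the regression degrees of freedom in the ANOVA table). A minimal Python check:

```python
# Adjusted R² penalises R² for model size:
#   adj R² = 1 - (1 - R²) * (n - 1) / (n - p - 1)
r2, n, p = 0.626, 183, 5   # values from Table 4.4 and the ANOVA df
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(round(adj_r2, 3))  # 0.615 -- matches the reported .616 up to
                         # rounding of the published R²
```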
Looking at the ANOVA table below, we can get an understanding of the statistical
significance of the above result.
Table 4.5: ANOVA Results for the Multiple Regression test
Model Sum of Squares df Mean Square F Sig.
1 Regression 289.235 5 57.847 59.291 .000a
Residual 172.688 177 .976
Total 461.924 182
a. Predictors: (Constant), t_alt, t_eou, t_soc, t_con, t_sat
b. Dependent Variable: t_int
Source: SPSS Output
The significance value of .000 (i.e. p < .0005, which SPSS truncates to three decimal places) indicates that this result is statistically significant. We thus conclude that our determinants – in combination at least – influence a user's intention to pay for content online.
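The F statistic in Table 4.5 is simply the ratio of the regression and residual mean squares; the sketch below recovers it, with scipy used only to confirm how small the p-value actually is:

```python
# F = (SS_regression / df_regression) / (SS_residual / df_residual),
# using the sums of squares and df from Table 4.5.
from scipy.stats import f as f_dist

ss_reg, df_reg = 289.235, 5
ss_res, df_res = 172.688, 177
F = (ss_reg / df_reg) / (ss_res / df_res)
p = f_dist.sf(F, df_reg, df_res)   # upper-tail probability
print(round(F, 2))  # 59.29, as reported
assert p < 0.0005   # SPSS displays this as .000
```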
In order to see how each individual determinant contributes to this influence, we look at
the table of coefficients below.
Table 4.6: Coefficient values from Multiple Regression test

Model        Beta    Sig.    Partial   Part    Tolerance   VIF
(Constant)           .008
t_con        .327    .000    .391      .260    .633        1.579
t_eou        .070    .136    .112      .069    .958        1.044
t_soc        .137    .008    .198      .123    .816        1.225
t_sat        .361    .000    .377      .249    .475        2.106
t_alt        .144    .021    .172      .107    .550        1.819

Beta: standardized coefficient; Partial/Part: partial and part correlations; Tolerance/VIF: collinearity statistics.

Source: SPSS Output
The Beta value gives an indication of each determinant's contribution. The largest unique contribution comes from satisfaction, followed very closely by perceived consequences. Alternatives and social factors contribute much less, and perceived ease of use contributes very little. For every construct except perceived ease of use, the significance level is below 0.05, indicating a statistically significant unique contribution towards determining the user's intention.
Squaring the "Part" value for each construct gives its unique percentage contribution to R Square, with any contribution shared with the other constructs removed. On this basis, perceived consequences alone contributes 6.8%, social factors 1.5%, satisfaction 6.2% and alternatives 1.1% of the R Square value.
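This arithmetic is easy to verify from the Part column of Table 4.6:

```python
# Unique contribution of each predictor to R², as the squared part
# (semipartial) correlation, expressed as a percentage.
parts = {"t_con": 0.260, "t_eou": 0.069, "t_soc": 0.123,
         "t_sat": 0.249, "t_alt": 0.107}
unique_pct = {k: round(v ** 2 * 100, 1) for k, v in parts.items()}
print(unique_pct)
# t_con 6.8%, t_soc 1.5%, t_sat 6.2%, t_alt 1.1% -- matching the text;
# t_eou's 0.5% is not separately quoted above.
```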
Going back to our hypotheses, we can conclude the following:
Hypothesis HA1: The greater the value of the perceived consequences of using
the fee-based content, the more likely the customer’s intention to adopt it.
From Table 4.6 (β = .327, significance .000), we accept this hypothesis.
Hypothesis HA2: The greater the perceived ease of use of the fee-based online
content, the more likely the customer’s intention to adopt it.
From Table 4.6 (β = .070, significance .136), we reject this hypothesis: perceived ease of use does not make a statistically significant unique contribution to the customer's intention to pay for an online content service.
Hypothesis HA3: The higher a user perceives social influence in using the fee-
based online content, the more likely their intention to use it.
From Table 4.6 (β = .137, significance .008), we accept this hypothesis, keeping in
mind the relatively small contribution of this determinant.
Hypothesis HA4: The greater the perceived value of using the fee-based online
content as compared to available alternatives, the more likely the customer’s
intention to use it.
From Table 4.6 (β = .144, significance .021), we accept this hypothesis.
Hypothesis HA5: The higher the level of satisfaction felt by the user after using
the fee-based online content, the more likely their intention to adopt it.
From Table 4.6 (β = .361, significance .000), we accept this hypothesis.
The last hypothesis still to be investigated concerns whether a user's intention to pay for online content depends on their past experience of online content consumption.
We have categorised our sample into four groups:

- those who have never paid for content online;
- those who have paid for content for personal/entertainment purposes as well as for professional/academic purposes;
- those who have only paid for content for entertainment purposes;
- those who have only paid for content for work/academic purposes.
This separation into groups was possible because of the control questions, which asked respondents directly whether they had ever paid for a particular kind of content.
What we are trying to establish here is whether a user's past experience of online content consumption has an effect on their intention to pay for online content in the future.
To test this, we ran a one-way ANOVA comparing all of our item scores across the four groups. We find a statistically significant difference only between those who have never paid for any kind of content online and those who have paid for online content for both entertainment and professional/academic purposes.
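For readers unfamiliar with the procedure, the shape of the test can be sketched in Python; the score lists below are illustrative placeholders only, not the study's data:

```python
# One-way ANOVA comparing a 7-point item score across four groups.
# The scores below are illustrative placeholders, NOT the survey data.
from scipy.stats import f_oneway

never = [4, 5, 5, 4, 5]  # never paid for online content
both  = [6, 7, 6, 7, 6]  # paid for both kinds of content
ent   = [5, 6, 6, 5, 6]  # entertainment/leisure content only
prof  = [5, 6, 5, 6, 6]  # work/academic content only
F, p = f_oneway(never, both, ent, prof)
# A significant p (< .05) says only that *some* groups differ; post-hoc
# pairwise comparisons are then needed to locate which pairs of groups
# account for the difference.
```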
The hypothesis in question stands as below:
Hypothesis HB: A user’s behaviour towards the online content service will
differ, based on the type of online content.
The ANOVA test across the groups reveals a pattern. The full tables from the test can be found in Appendix 4; a brief interpretation of the results is presented below.
In the ANOVA table, the significance value is below .05 for four of the items, indicating a difference in measure between our four groups on those items. For all four, however, the difference lies only between respondents who have never paid for content before and those who have paid for both entertainment/leisure and professional/information content. There is no significant difference based on whether users have consumed entertainment/leisure content or professional/information content.
We therefore reject our hypothesis that the measure of intention differs based on the kind of content. It is nonetheless an interesting outcome that the measures of perceived consequences, satisfaction and intention are affected by a user's past experience with paid online content.
The results seen in this chapter raise a number of further implications and avenues for exploration around paid online content. Some of these issues are discussed in the following chapter.
Chapter 5: Conclusion
5.1 Results
As we have discussed, the factors that affect a user's intention to pay for content online are: the perceived consequences of accessing the online content service; the user's normative beliefs, or social factors; the value the user places on the paid content service compared with similar free alternatives; and, for continued usage, how satisfied the user is with the online content service they have paid to access. Perceived ease of use, however, does not play a significant role in determining a user's intention to pay for an online content service.
Further, by comparing groups, we have seen that while the kind of content consumed does not affect respondents' agreement on the various constructs or their intention to pay, their past experience of paying for an online content service does.
We can graphically represent our modified model as below:
Figure 5.1: Amended research model
5.2 Research Implications and contributions
Our hypotheses were based broadly on elements of the Theory of Reasoned Action and the Technology Acceptance Model. Our model accounts for about 62% of the variance in total intention to pay for online content, which is a strong result for a study of this kind.
Our result for perceived ease of use is the opposite of what Choi et al. (2009) found in a previous study. One possible explanation is that, in the time that has passed, the general populace has grown ever more comfortable with the internet, even if they do not regularly consume content online. Combined with the fact that the survey was carried out mainly among an internet-savvy respondent base, this could explain why perceived ease of use did not sway opinion in either direction.
This result raises some interesting questions about the currency and relevance of past research on similar or even identical topics. It can be argued that technology, and human interaction with technology, is evolving so fast that even a study done a couple of years ago can start to lose relevance. In addition, the role of cultural differences should not be underestimated. These two factors limit the generalisability of studies of this kind, and care should be taken when making decisions based on such results.
Also, as opposed to what Choi et al. found, we find that social factors play a smaller but still notable role in determining a user's intention to pay for online content. Possible explanations include differences in culture, a different time-frame, or a combination of the two.
The last difference between Choi et al.'s study and ours lies in the differences in intention between groups based on the kind of content previously consumed. Choi et al. found significant differences in the measures of perceived consequences, satisfaction and social factors among the four groups (based on whether respondents had consumed entertainment or information content). In our study the difference was much less pronounced: differences were found in only two of the constructs – perceived consequences and satisfaction – and in both cases the differences were between those who had never paid for online content before and those who had paid for both entertainment and information content. This implies that for our sample, the differentiating factor is not what kind of content was consumed, but whether the respondent had paid for content before at all.
5.3 Research limitations
This research project was conducted entirely online, using a small number of channels. It is reasonably safe to assume that the individuals who responded to the survey are open-minded about technology and comfortable with using online forums and professional or social networking sites; how representative this is of the actual population is open to debate.
Due to time constraints, a pilot study was not carried out, which reduced the reliability of the data captured. This problem could almost certainly have been mitigated had time allowed for a pilot survey, after which the questionnaire could have been modified for improved reliability.
Further, the respondents come from only two countries. Even without cited references, we can assume that attitudes towards something like paid online content would differ in other markets, and it would not be advisable to generalise these results to another country's market without due field research.
5.4 Further research
We have already noted above a couple of areas where further investigation is warranted. Further research could take several directions: exploring whether factors beyond those researched here affect the take-up of paid content – for example in a business-to-business context, where the end-user has less control over the act of paying for content – or repeating the same study with a different sampling mechanism to obtain a more accurate representation of the actual population.
With adequate resources, a qualitative study could provide a more detailed understanding of what exactly the user is thinking when presented with the option of paying for an online content service. It would also allow time to explain precisely what is meant by an "online content service", as a detailed explanation would be much better than the brief example possible in a quantitative survey questionnaire like ours.
It would also be interesting to see if demographic factors such as age, income, gender or
profession have any effect on attitudes towards online paid content.
Lastly, there is always room to replicate the study in another market, to see whether the same factors play the same role in, say, India or Hong Kong. Cultural elements could also be interwoven into such a study to see whether any particular cultural factors influence users' intentions in these markets.
5.5 Management implications
The results from this study shed light on user behaviour and attitudes towards paid online content at a time when this is a very important topic of discussion in the publishing industry – more specifically, for magazines and newspapers. 2009 saw the closure of several magazines and newspapers in the UK and the US, and an understanding of the factors that affect a user's decision to pay for content online would be very valuable to those involved in this industry. As mentioned earlier, there are many instances of successful paid online content services, but for the periodical publishing industry this seems to be an uphill task. The results of this research are directly relevant to this situation, and indicate the critical success factors that the affected players in the industry should focus on.
The results of this study suggest that, to generate revenue from online sources, providers of online content should focus on communicating clearly to their audience the benefits of consuming their content: how would a user benefit as a direct result of consuming it? This message, however, is probably most apt for the acquisition of new customers.
In order for an online content service to be commercially viable though, returning
customers are also important – and here, customer satisfaction and social factors come
into play. Listening to customers in order to ensure that they are happy with the
experience and the proposition presented to them has to be a continual process, and due
effort must be put in to discover what “customer satisfaction” consists of.
Marketing exercises to build brand goodwill would go some way towards convincing users to pay for content. If a user receives a strong enough message about a content service from their peers, friends or family, they are – going by the results of our research – more likely to pay for online content.
Though this needs further investigation, one can now argue that the technology and processes involved in accessing a paid online content service have, generally speaking, become so simplified and standardised that users take it for granted that such a service will be easy enough to use, and thus no longer place much emphasis on ease of use as a differentiating criterion. Marketers would do well to heed this and tread with caution when focusing on user-friendliness and website navigation, because the effort may be in vain. At the same time, the opposite probably does not hold: a system that is difficult to use will still have a negative influence on intention.
Free alternatives also clearly pose a threat to online content services and influence a user's intention to pay. In fact, the combination of perceived consequences and free alternatives makes perfect sense: the user needs to be shown that the value they gain from consuming the paid content (in terms of consequences – improvements to their life or work, for instance) is far greater than what a free alternative could offer.
In terms of market research, an important lesson here is the risk involved in relying on research that is not up-to-date and tailored to one's target audience. The results can be irrelevant and misleading, which can of course have serious consequences for the business in question.
In terms of marketing strategy, there are two lessons. The first is that segmenting customers helps a business to target its audience better and send messages that are valuable to that audience. The second is that the segmentation has to be right, or the effort will be in vain. As we have seen with our respondents, what differentiates them is not whether they have consumed a particular kind of content, but their past experience of paying for online content. Segmenting the customer base into those who have paid for content before and those who have not would allow marketers to send the right messages to the right people based on their experience, rather than sending a generic marketing message that ignores this split.
These results can also be applied to product development and improvement, to identify where a failing product is lacking: perhaps the benefits offered are not valuable enough to persuade the user; perhaps bad reviews or a lack of awareness is creating issues; or perhaps the satisfaction levels of existing customers need to be revisited to establish why the product is not succeeding.
A number of important determinants have been identified in this project, paving the way for further academic research, and for marketing or product development decisions based on these results.
For those in the business of providing content online, the hunt for the perfect revenue model will continue for a while. Along the way, some will find a solution, while others will not. Those who manage to understand the consumer, and figure out what it is that consumers find valuable, will likely flourish in the new world of content delivery, where free and paid content providers fight over the same pool of users; those who do not will lose out to those who do. With every piece of research done on this subject, we come one step closer to understanding what the factors that affect the user acceptance of fee-based online content really are.
References
Ajzen, I. (1991) 'The Theory of Planned Behaviour', Organizational Behavior and Human Decision Processes, 50, pp.179-211.

Ajzen, I., and Fishbein, M. (1972) 'Attitudes and normative beliefs as factors influencing behavioural intentions', Journal of Personality and Social Psychology, 21 (1), pp.1-9.

Ajzen, I., and Fishbein, M. (1969) 'The prediction of behavioural intentions in a choice situation', Journal of Experimental Social Psychology, 5, pp.400-416.

Ajzen, I., and Fishbein, M. (1980). Understanding attitudes and predicting social behaviour. Englewood Cliffs, New Jersey: Prentice-Hall.

Ajzen, I., and Madden, T. J. (1986) 'Prediction of goal-directed behaviour: Attitudes, intentions and perceived behavioural control', Journal of Experimental Social Psychology, 22, pp.453-474.

Ajzen, I., Timko, C., and White, J. B. (1982) 'Self-Monitoring and the Attitude-Behavior Relation', Journal of Personality and Social Psychology, 42 (3), pp.426-435.

Bailey, J. E., and Pearson, S. W. (1983) 'Development of a tool for measuring and analysing computer user satisfaction', Management Science, 29 (5), pp.530-545.

Bandura, A., Adams, N. E., and Beyer, J. (1977) 'Cognitive Processes Mediating Behavioral Change', Journal of Personality and Social Psychology, 35 (3), pp.125-139.

Bandura, A., Adams, N., Hardy, A. B., and Howells, G. N. (1980) 'Tests of generality of self-efficacy theory', Cognitive Therapy and Research, 4, pp.39-66.

BBC News. (2007, June 4) Launch date for iPhone revealed. Available at http://news.bbc.co.uk/1/hi/technology/6717865.stm (Accessed October 27, 2009)

Bentler, P. M., and Speckart, G. (1979) 'Models of Attitude-Behavior Relations', Psychological Review, 86 (5), pp.452-464.

Bhattacherjee, A. (2001) 'Understanding Information Systems Continuance: An Expectation-Confirmation Model', MIS Quarterly, 25 (3), pp.351-370.

Bryman, A., and Bell, E. (2007). Business Research Methods (Second ed.). Oxford: Oxford University Press.

Choi, J., Lee, S. M., and Soriano, D. R. (2009) 'An empirical study of user acceptance of fee-based online content', Journal of Computer Information Systems, 49 (3), pp.60-70.

Davis, F. D. (1989, September) 'Perceived usefulness, perceived ease of use, and user acceptance of information technology', MIS Quarterly, pp.319-340.
Davis, F. D. (1993) 'User acceptance of information technology: system characteristics, user perceptions and behavioural impacts', International Journal of Man-Machine Studies, 38 (3), pp.475-487.

Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1989) 'User acceptance of computer technology: A comparison of two theoretical models', Management Science, 35 (8), pp.982-1003.

De Cannière, M. H., De Pelsmacker, P., and Geuens, M. (2009) 'Relationship Quality and the Theory of Planned Behavior models of behavioral intentions and purchase behaviour', Journal of Business Research, 62 (1), pp.82-92.

Dou, W. (2004) 'Will Internet Users Pay for Online Content?', Journal of Advertising Research, 44 (4), pp.349-359.

Fredricks, A. J., and Dossett, D. L. (1983) 'Attitude-Behavior Relations: A Comparison of the Fishbein-Ajzen and the Bentler-Speckart Models', Journal of Personality and Social Psychology, 45 (3), pp.501-512.

George, J. F. (2004) 'The theory of planned behaviour and internet purchasing', Internet Research, 14 (3), pp.198-212.

Holton, K. (2009, March 26). UK iPhone users lead way in Web, email use: survey. Available at http://www.reuters.com/article/technologyNews/idUSTRE52P2PE20090326 (Accessed October 24, 2009)

Ives, B., Olson, M. H., and Baroudi, J. J. (1983) 'The measurement of user information satisfaction', Communications of the ACM, 26 (10), pp.785-793.

Johnson, D. J., and Rusbult, C. E. (1989) 'Resisting Temptation: Devaluation of Alternative Partners as a Means of Maintaining Commitment in Close Relationships', Journal of Personality and Social Psychology, 57 (6), pp.967-980.

Khalifa, M., and Ning Shen, K. (2008) 'Drivers for transactional B2C m-commerce adoption: extended theory of planned behaviour', Journal of Computer Information Systems, 48 (3), pp.111-117.

Lim, H., and Dubinsky, A. (2005) 'The theory of planned behavior in e-commerce: Making a case for interdependencies between salient beliefs', Psychology and Marketing, 22 (10), pp.833-855.

Lin, J., Hock Chuan, C., and Kwok Kee, W. (2006) 'Understanding competing application usage with the theory of planned behaviour', Journal of the American Society for Information Science and Technology, 57 (10), pp.1338-1349.

Lopes, A. B., and Galletta, D. F. (2006) 'Consumer Perceptions and Willingness to Pay for Intrinsically Motivated Online Content', Journal of Management Information Systems, 23 (2), pp.203-231.
Mathieson, K. (1991) 'Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behaviour', Information Systems Research, 2 (3), pp.173-191.

McCloskey, D. (2003-4, Winter) 'Evaluating electronic commerce acceptance with the Technology Acceptance Model', Journal of Computer Information Systems, pp.49-56.

Melone, N. (1990) 'A theoretical assessment of the user-satisfaction construct in information systems research', Management Science, 36 (1), pp.76-91.

Moore, G. C., and Benbasat, I. (1991) 'Development of an instrument to measure the perceptions of adopting an information technology innovation', Information Systems Research, 2 (3), pp.192-222.

Office for National Statistics. (2009). Internet Access: Households and Individuals.

Oliver, R. L. (1980) 'A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions', Journal of Marketing Research, 17, pp.460-469.

Pallant, J. (2007). SPSS Survival Manual (Third ed.). Maidenhead: Open University Press.

Parthasarathy, M., and Bhattacherjee, A. (1998) 'Understanding Post-Adoption Behaviour in the Context of Online Services', Information Systems Research, 9 (4), pp.362-379.

Pavlou, P., and Fygenson, M. (2006) 'Understanding and predicting electronic commerce adoption: an extension of the theory of planned behaviour', MIS Quarterly, 30 (1), pp.115-143.

Lederer, A. L., Maupin, D. J., Sena, M. P., and Zhuang, Y. (2000) 'The technology acceptance model and the World Wide Web', Decision Support Systems, 29, pp.269-282.

Reuters. (2009, July 31). Research and Markets: Estimates of UK Mobile Internet Users Range between a Low of 7.2 Million and a High of 17.4 Million, According to This 2009 Study. Available at http://www.reuters.com/article/pressRelease/idUS91623+31-Jul-2009+BW20090731 (Accessed October 27, 2009)

Rigopoulos, G., and Askounis, D. (2007) 'A TAM Framework to Evaluate Users' Perception towards Online Electronic Payments', Journal of Internet Banking and Commerce, 12 (3), pp.1-6.

Rusbult, C. E., and Farrell, D. (1983) 'A longitudinal test of the investment model: the impact on job satisfaction, job commitment, and turnover of variations in rewards, costs, alternatives, and investments', Journal of Applied Psychology, 68 (3), pp.429-438.

Rusbult, C. E., Martz, J. M., and Agnew, C. R. (1998) 'The Investment Model Scale: Measuring commitment level, satisfaction level, quality of alternatives, and investment size', Personal Relationships, pp.357-391.
Ryan, R. M., and Deci, E. L. (2000) 'Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions', Contemporary Educational Psychology, 25 (1), pp.54-67.

Saunders, M., Lewis, P., and Thornhill, A. (2007). Research methods for business students (Fourth ed.). Harlow: Pearson Education Limited.

Sekaran, U. (2003). Research methods for business: A skill building approach (Fourth ed.). New York, New York, USA: John Wiley and Sons.

Shafer, J. (2009, February 18). Not All Information Wants To Be Free. Available at http://www.slate.com/id/2211486/pagenum/all (Accessed October 10, 2009)

Smetana, J. G., and Adler, N. E. (1980) 'Fishbein's Value x Expectancy Model: An Examination of Some Assumptions', Personality and Social Psychology Bulletin, 6 (1), pp.89-96.

Taylor, S., and Todd, P. A. (1995) 'Understanding Information Technology Usage: A test of competing models', Information Systems Research, 6 (2), pp.144-176.

Vasquez, D., and Xu, X. (2009) 'Investigating linkages between online purchase behaviour variables', International Journal of Retail and Distribution Management, 37 (5), pp.408-419.

Venkatesh, V., and Davis, F. D. (2000) 'A theoretical extension of the Technology Acceptance Model: Four Longitudinal Field Studies', Management Science, 46 (2), pp.186-204.

Venkatesh, V., Morris, M., Davis, G., and Davis, F. (2003) 'User acceptance of information technology: Toward a unified view', MIS Quarterly, 27 (3), pp.425-478.

Wixom, B. H., and Todd, P. A. (2005) 'A Theoretical Integration of User Satisfaction and Technology Acceptance', Information Systems Research, 16 (1), pp.85-102.
Appendices
Appendix 1: Constructs
Table a: Questionnaire showing constructs and items
Construct: Perceived consequences
  con1: I'd pay for online content that will result in an improvement in the way I accomplish my objectives (personal or professional).
  con2: I would pay for online content that resulted in an improved experience of my activities.
  con3: I'd pay for access to an online content source that would result in saving time, money and/or effort.
  con4: I'd pay to access an online content source if the benefits obtained were worth more than the costs.

Construct: Perceived ease of use
  eou1: I find accessing content online is more convenient than it is offline.
  eou2: Searching for the information I want is easier online than it is offline.

Construct: Social factors
  soc1: Many people around me pay to access content online.
  soc2: Many people I know have recommended various paid online content services.

Construct: Satisfaction
  sat1: From my experience with paid content services in general, I am satisfied in terms of quality.
  sat2: I will continue to access content online from my preferred source even if they started charging me for it.
  sat3: I believe that paying for good online content is appropriate.

Construct: Alternatives
  alt1: I believe that online paid content services are better suited to my needs than free ones.
  alt2: In my experience, paid content is of significantly higher value than freely available content.
Table b: Correlation between totals of constructs
t_int t_con t_eou t_soc t_sat t_alt
Pearson Correlation t_int 1.000
t_con .656 1.000
t_eou .197 .201 1.000
t_soc .436 .294 .091 1.000
t_sat .702 .567 .102 .404 1.000
t_alt .592 .485 .084 .359 .649 1.000
Source: SPSS Output
Figure a: Charts showing normality for each statistic (panels: total perceived consequences; total perceived ease of use; total satisfaction; total alternatives; total social factors; total intention)

Source: SPSS Output
Appendix 2: T-Test Tables
Have you ever paid to either read something online or download something off the
internet such as music or software for either work or entertainment purposes?
Table a: Group Statistics (split by past purchase experience)

Total intention      N     Mean     Std. Deviation   Std. Error Mean
  No                 41    5.4634   1.50163          .23451
  Yes                173   6.3410   1.57169          .11949

Source: SPSS Output
Table b: Independent Samples Test (split by past purchase experience)

Total intention, equal variances assumed:
  Levene's Test for Equality of Variances, Sig.: .838
  t: -3.242    Sig. (2-tailed): .001    Mean Difference: -.87763
  95% Confidence Interval of the Difference: -1.41132 (lower) to -.34393 (upper)

Source: SPSS Output
Do you work or study in the United Kingdom?
Table c: Group Statistics (split by country of residence)

Total intention      N     Mean     Std. Deviation   Std. Error Mean
  No                 90    6.2000   1.76228          .18576
  Yes                124   6.1532   1.46529          .13159

Source: SPSS Output
Table d: Independent Samples Test (split by country of residence)

Total intention, equal variances assumed:
  Levene's Test for Equality of Variances, Sig.: .042
  t: .212    Sig. (2-tailed): .833    Mean Difference: .04677
  95% Confidence Interval of the Difference: -.38907 (lower) to .48262 (upper)

Source: SPSS Output
Appendix 3: Multiple Regression Tables
Table a: Correlations
                      t_int   t_con   t_eou   t_soc   t_sat   t_alt
Pearson Correlation
  t_int               1.000
  t_con               .656    1.000
  t_eou               .197    .201    1.000
  t_soc               .436    .294    .091    1.000
  t_sat               .702    .567    .102    .404    1.000
  t_alt               .592    .485    .084    .359    .649    1.000
Sig. (1-tailed)
  t_int               .
  t_con               .000    .
  t_eou               .002    .002    .
  t_soc               .000    .000    .098    .
  t_sat               .000    .000    .080    .000    .
  t_alt               .000    .000    .113    .000    .000    .
N
  t_int               214
  t_con               212     214
  t_eou               214     214     216
  t_soc               204     204     205     205
  t_sat               191     191     192     183     192
  t_alt               206     205     207     197     189     207
Source: SPSS Output
Table b: Residuals Statistics
Minimum Maximum Mean Std. Deviation N
Predicted Value .7291 9.4355 6.2499 1.22977 180
Std. Predicted Value -4.318 2.588 .061 .976 180
Standard Error of Predicted Value   .081   .463   .170   .050   180
Adjusted Predicted Value .3706 9.4610 6.2442 1.23875 179
Residual -2.43315 3.14893 .02257 .97613 179
Std. Residual -2.463 3.188 .023 .988 179
Stud. Residual -2.529 3.237 .023 1.008 179
Deleted Residual -2.56534 3.24583 .02394 1.01591 179
Stud. Deleted Residual -2.569 3.327 .024 1.016 179
Mahal. Distance .222 39.050 4.851 4.152 180
Cook's Distance .000 .123 .007 .016 179
Centered Leverage Value .001 .215 .027 .023 180
a. Dependent Variable: t_int
Source: SPSS Output
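Table (b) summarises the predicted values and (standardised) residuals of the regression on t_int. As a minimal sketch of where such quantities come from, the following fits a one-predictor least-squares line to made-up data and standardises the residuals by the standard error of the estimate; it is illustrative only, not a reproduction of the study's multiple regression:

```python
from math import sqrt
from statistics import mean

# Invented data for a single predictor
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 4.2, 4.8, 6.3, 6.9, 8.1, 8.8]

# Ordinary least-squares slope and intercept
mx, my = mean(x), mean(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

predicted = [a + b * xi for xi in x]
residuals = [yi - pi for yi, pi in zip(y, predicted)]

# Standard error of the estimate (k = 1 predictor), used to standardise residuals
k = 1
see = sqrt(sum(r * r for r in residuals) / (len(x) - k - 1))
std_residuals = [r / see for r in residuals]

# The kind of summary SPSS tabulates: minimum, maximum, mean of each column
pred_stats = (min(predicted), max(predicted), mean(predicted))
resid_stats = (min(std_residuals), max(std_residuals))
```

The more specialised rows of Table (b) (studentised and deleted residuals, Mahalanobis and Cook's distances) follow the same idea but additionally weight each case by its leverage.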
Appendix 4: ANOVA Tables (Comparison of four groups)
Table a: ANOVA test output
Item                          Sum of Squares    df   Mean Square      F    Sig.
Consequences
con1    Between Groups               7.483       3        2.494     4.71   0.00
        Within Groups              112.351     212        0.530
        Total                      119.833     215
con2    Between Groups               3.325       3        1.108     1.83   0.14
        Within Groups              128.633     212        0.607
        Total                      131.958     215
con3    Between Groups               1.117       3        0.372     0.56   0.65
        Within Groups              142.364     212        0.672
        Total                      143.481     215
con4    Between Groups               3.214       3        1.071     1.78   0.15
        Within Groups              126.735     210        0.603
        Total                      129.949     213
Perceived ease of use
eou1    Between Groups               1.934       3        0.645     0.85   0.47
        Within Groups              160.839     212        0.759
        Total                      162.773     215
eou2    Between Groups               0.549       3        0.183     0.35   0.79
        Within Groups              110.544     212        0.521
        Total                      111.093     215
Social Factors
soc1    Between Groups               2.938       3        0.979     1.11   0.35
        Within Groups              181.675     205        0.886
        Total                      184.612     208
soc2    Between Groups               0.425       3        0.142     0.14   0.93
        Within Groups              204.551     207        0.988
        Total                      204.976     210
Satisfaction
sat1    Between Groups               5.038       3        1.679     2.05   0.11
        Within Groups              155.704     190        0.819
        Total                      160.742     193
sat2    Between Groups               4.891       3        1.630     1.73   0.16
        Within Groups              199.323     211        0.945
        Total                      204.214     214
sat3    Between Groups               8.853       3        2.951     3.06   0.03
        Within Groups              201.579     209        0.964
        Total                      210.432     212
Alternatives
alt1    Between Groups               5.242       3        1.747     2.28   0.08
        Within Groups              160.420     209        0.768
        Total                      165.662     212
alt2    Between Groups               5.326       3        1.775     1.73   0.16
        Within Groups              210.684     205        1.028
        Total                      216.010     208
Intention
int1    Between Groups              10.349       3        3.450     4.77   0.00
        Within Groups              153.411     212        0.724
        Total                      163.759     215
int2    Between Groups              10.866       3        3.622     4.19   0.01
        Within Groups              181.438     210        0.864
        Total                      192.304     213
Source: SPSS Output
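Each row of Table (a) is a one-way ANOVA decomposing an item's variance into between-group and within-group sums of squares across the four payment groups. A minimal sketch of that decomposition, with invented group data for illustration:

```python
from statistics import mean

def one_way_anova(groups):
    """Return (SS_between, SS_within, df_between, df_within, F) for a one-way ANOVA."""
    values = [v for g in groups for v in g]
    grand = mean(values)
    # Between-group SS: each group's mean against the grand mean, weighted by size
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group SS: each observation against its own group mean
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1            # k - 1 groups
    df_within = len(values) - len(groups)   # N - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, df_between, df_within, f

# Four invented groups standing in for the four payment groups
groups = [[5, 6, 7], [6, 6, 7], [4, 5, 5], [7, 7, 6]]
ssb, ssw, dfb, dfw, f = one_way_anova(groups)
```

A large F (e.g. con1 and int1 in Table a) means group means differ by more than within-group noise alone would suggest, which is what the significant Sig. values flag.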
Table b: Multiple Comparisons table for ANOVA test
Tukey HSD
Dep. Variable   (I) content_group                     (J) content_group                     Mean Difference (I-J)    Sig.
con1            Never paid for online content         Paid for entertainment only                         -0.225   0.564
                                                      Paid for information only                           -0.558   0.107
                                                      Have paid for both kinds of content                 -0.444   0.003
                Have paid for both kinds of content   Never paid for online content                        0.444   0.003
                                                      Paid for entertainment only                          0.218   0.462
                                                      Paid for information only                           -0.114   0.959
sat3            Never paid for online content         Paid for entertainment only                         -0.326   0.511
                                                      Paid for information only                           -0.032   1.000
                                                      Have paid for both kinds of content                 -0.488   0.027
                Have paid for both kinds of content   Never paid for online content                        0.488   0.027
                                                      Paid for entertainment only                          0.161   0.855
                                                      Paid for information only                            0.456   0.452
int1            Never paid for online content         Paid for entertainment only                         -0.261   0.573
                                                      Paid for information only                           -0.323   0.672
                                                      Have paid for both kinds of content                 -0.541   0.002
                Have paid for both kinds of content   Never paid for online content                        0.541   0.002
                                                      Paid for entertainment only                          0.281   0.377
                                                      Paid for information only                            0.218   0.847
int2            Never paid for online content         Paid for entertainment only                         -0.020   1.000
                                                      Paid for information only                           -0.653   0.163
                                                      Have paid for both kinds of content                 -0.462   0.026
                Have paid for both kinds of content   Never paid for online content                        0.462   0.026
                                                      Paid for entertainment only                          0.442   0.098
                                                      Paid for information only                           -0.192   0.913
Source: SPSS Output
Note: For ease of presentation, items for which no significant differences were found have been omitted from Table (b).