
The relationships between automobile head-up display presentation images and drivers' Kansei

Shana Smith*, Shih-Hang Fu
Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan

Displays 32 (2011) 58–68

Article info

Article history: Received 14 May 2010; Received in revised form 5 October 2010; Accepted 7 December 2010; Available online 13 December 2010.

Keywords: HUD; Presentation image design; Kansei engineering; Cluster analysis; QT1


* Corresponding author. E-mail address: [email protected] (S. Smith).

Abstract

This study explored the relationships between automobile head-up display (HUD) presentation image designs and drivers' Kansei, using quantitative and qualitative analysis. There were two major stages in this study. The objective of the first stage was to find representative Kansei factors from a large semantic space, using factor analysis and cluster analysis. In the second stage, a prediction model for the relationships between the representative Kansei factors and HUD physical image design properties was created, using Quantification Theory Type 1. Results were discussed based on the whole subject population, age differences, and gender differences, respectively. Finally, two existing HUD presentation images on the market were used to test the validity and feasibility of the prediction model, using a one-sample t-test. The results show that our model can successfully predict drivers' Kansei for a given HUD presentation image. The results can also be used to customize a HUD presentation image which caters to drivers' feelings and emotions.


1. Introduction

HUD technology was originally developed to help pilots operate aircraft, especially military aircraft, more safely. In 1988, HUD technology was first introduced in the automobile industry by General Motors (GM) in Oldsmobile and Pontiac models. In recent years, the HUD has become an essential device in most luxury vehicles. For example, BMW's M series and X5 are both equipped with HUDs. Automobile HUDs present important information to drivers, such as speed, warnings, gas level, gear position, radio settings, temperature, and navigation. HUD presentation images are projected on the windshield, and drivers can view the information along their line of sight. As a result, using a HUD saves time for drivers, since they do not need to look down to check driving information.

Wierwille [1] showed that driving control is related to driving safety. Prior research has presented many benefits of HUD use. As a result, HUD use may improve driving control and, therefore, driving safety. Drivers' response time to an urgent event is faster with a HUD than with a HDD (head-down display), and speed control is also more consistent with a HUD [2–4]. Moreover, using a HUD causes less mental stress for drivers, and a HUD is easier for first-time users to operate [5]. In addition, most drivers feel safer when driving with a HUD [6,7]; therefore, HUDs are expected to become an indispensable device for most drivers.


On the other hand, Tufano presented two potential negative impacts of HUD use on driving safety [8]. First, HUD focal distance may affect drivers' accommodation and perception of actual objects while driving. Second, HUD images may clutter or block drivers' view and affect visual attention. According to Tufano, both are overlooked HUD-related safety issues.

Prior research also shows that the decision-making process on driving responses is distributed equally between the machine and human elements for all types of vehicle interface design [9]. Therefore, interface design plays an important role in the effectiveness of a HUD, which could also impact driving control and safety. There has been a significant amount of research focusing on the hardware design of HUDs, but little research focusing on HUD presentation image design. To design a HUD presentation image, several aspects should be taken into consideration. For example, Yoo et al. [10] studied the effect of HUD warning location on drivers' responses and performance enhancement. Charissis and Papanastasiou [4] developed a full-windshield HUD interface to improve drivers' spatial awareness and response time under low-visibility conditions. Tonnis et al. [6] built a virtual bar and projected it in front of a car using a HUD, to assist drivers in longitudinal and lateral control.

Since driving is a visually demanding task, drivers need a visual interface that helps them focus their attention on the road ahead. HUDs have been used to reduce drivers' visual and cognitive workload, without any physical interaction. However, in the modern automobile industry, beyond functional requirements, HUD presentation image design is also an important factor which affects customers' overall impression of and sensation toward a vehicle.

Fig. 1. Study flowchart. (HUD presentation image domain → Stage 1: span the semantic space and the space of physical properties, then extract representative Kansei factors → Stage 2, model building: Quantification Theory Type 1 (QT1) → verification.)

customers’ overall impression and sensation on a vehicle. In addi-tion, most drivers prefer user-friendly and user-centered automo-bile devices which are sensitive to their personal taste andemotions. Thus, methods to customize a HUD presentation imagewhich cater to the drivers’ psychological feelings and emotions be-come important. In this research, Kansei engineering is used tofacilitate a user-centered HUD presentation image design.

Kansei is a Japanese word which means customers' psychological feelings about, or image of, a product. It addresses users' subjective emotions, affection, and perceptions toward a product while using it. Thus, Kansei engineering aims to produce a new product which is more acceptable based upon consumers' feelings and demands [11–14]. Kansei engineering has been used in many fields, e.g., mechanical design, ergonomic design, and industrial design. With Kansei engineering, statistical methods are used to analyze consumers' subjective feelings and translate the results into a physical design domain [15]. Some automobile companies have introduced Kansei engineering into their vehicle designs. For example, Mazda used Kansei engineering to develop a sports car, and Nissan utilized a hybrid Kansei engineering system to design a new steering wheel for passenger cars [16].

The purpose of this study is to determine user preferences and feelings related to existing types and characteristics of HUD presentation image designs, rather than to address possible safety issues. However, improving presentation image design may improve HUD effectiveness and, therefore, driving safety. In addition, the study does not consider presentation image design factors which require changes in HUD optical hardware designs, such as focal length, brightness, and resolution. In order to make HUDs more effective and user-friendly, this study aims to explore the relationships between HUD presentation image designs and drivers' Kansei responses, and to build a prediction model for future user-centered HUD presentation image design. Kansei differences between genders and age groups will also be discussed. The organization of this paper is as follows. Section 2 gives an overview of the study procedure. Section 3 extracts representative Kansei factors. Section 4 uses Quantification Theory Type 1 (QT1) to build a prediction model which describes the relationships between the representative Kansei factors and HUD physical image design properties. Section 5 verifies the validity of the prediction model using two existing HUD images on the market. Finally, Section 6 offers conclusions and recommendations for future work.

2. Study structure

Based on Schütte [17], a chosen domain of study can be described from two different perspectives: product physical properties and emotional semantic properties. These two sets of properties each span a vector space. In this study, the HUD presentation image was the chosen domain. In stage one, the physical properties and the emotional semantic properties were expanded as much as possible to include all possible elements, and then the semantic space was reduced so that only a few representative Kansei factors were considered.

In stage two, the representative Kansei factors and the product physical properties were synthesized and analyzed in relation to each other, using QT1, to discover which of the product physical properties evoke which semantic emotion. Then, a model was built to describe how the space of physical properties and the semantic space were associated. Finally, a validity test was conducted. The Kansei engineering structure of the study is shown in Fig. 1.

3. First stage

The purpose of the first stage was to span the space of the HUD physical image properties and the emotional semantic space. Since the initial semantic space included as many Kansei words as possible, it might be very large and diverse. Thus, it was necessary to reduce the semantic space and find principal, or representative, Kansei words.

3.1. Span the space of properties

The aim of this step was to list all the physical design elements in existing HUD images. HUD presentation images from existing products on the market, video games, magazines, and websites were collected in this stage. Most automobile HUD presentation images, such as those of BMW, the Chevrolet Corvette, Nissan, the GMC Acadia, and the Pontiac Grand Prix, include speed, gear position, outside air temperature, and oil level. For most existing HUDs, presentation images are green, blue, or orange in color and three inches by five inches in size. The images are projected such that they appear to be in front of the windshield at a distance of about one meter.

The collected HUD images were difficult to use directly in the subsequent survey because of image quality and portability issues. Therefore, in this research, some new HUD images were reproduced or created. Based on the collected HUD images, six new HUD presentation images were created for the first survey, as shown in Fig. 2. In this stage, the number of HUD presentation images used to extract representative Kansei factors does not need to be large; however, the images need to be as diverse as possible.

3.2. Span the semantic space

In this stage, as many Kansei words as possible were collected from customers, automobile magazines, and websites. Each Kansei word was arranged as a pair with opposite meanings for semantic evaluation, using Osgood's original Semantic Differential (SD) scale or Nagamachi's SD scale [18–21]. The SD scale is the most common type of rating scale. Osgood's SD scale uses synonyms and antonyms to span the range of ratings. With the method used by Nagamachi, an extreme adjective is put on the left side of the scale and "not at all" is added to the same adjective on the right side of the scale.

Fig. 2. Six representative HUD images.

For the given study, 86 pairs of Kansei words were first collected, and every pair of Kansei words had opposite meanings to span the range of ratings.

Six experts worked together to manually select the proper Kansei words: three with engineering backgrounds and three with design backgrounds. Kansei words with more than three checks were retained, and the others were dropped. From the first-round evaluation, 45 pairs had more than three checks. After the second-round evaluation, only 32 pairs of Kansei words remained, as shown in Table 1.

3.3. First survey

Thirty subjects participated in the first survey; their ages ranged between 18 and 65, and there were 20 males and 10 females. All of them had a driver's license and driving experience. Since there was some Kansei overlap among the 32 pairs of Kansei words, the purpose of the first survey was to extract representative Kansei words. The questionnaire was designed based on the 6 HUD image samples and the 32 pairs of Kansei words. A 22-inch computer screen was used to show the 6 HUD image samples, one by one, to the subjects. Each subject was asked to score their Kansei for each HUD image on the 32 pairs of Kansei words, using an SD scale from 1 to 7, in which "1" means matching the left-side Kansei the most and "7" means matching the right-side Kansei the most; "4" is neutral, and any number in between represents the degree toward each Kansei word. Guilford [22] showed that randomizing the order of the Kansei words could lead to better survey results. Therefore, in this study, both the image samples and the Kansei words were presented to the subjects in random order.

3.4. Extract representative Kansei factors

The survey data was first analyzed by factor analysis. Factor analysis was conducted using an extraction method based on principal component analysis.

Table 1
Thirty-two pairs of Kansei words.

Fashionable–obsolete | Outgoing–reserved | Modern–ancient | Young–mature
Vogue–conservative | Passionate–cold | Amiable–hostile | Comfortable–uncomfortable
Beautiful–ugly | Vigorous–lethargic | Technical–non-technical | Of high quality–of low quality
Sporty–not sporty | Innovative–non-innovative | Brand-named–not brand-named | Soft–harsh
Masculine–feminine | Tender–strong | Relaxing–anxious | Noble–common
Vivid–gloomy | Pleasant–unpleasant | Organized–unorganized | Spacious–crowded
Tidy–messy | Concrete–abstract | Easy to understand–difficult to understand | Explicit–ambiguous
Human–inhuman | Considerate–inconsiderate | Safe–dangerous | Tired–rested

From Fig. 3, it is clear that there are five factors whose eigenvalues are greater than 1, and these were extracted to explain the total variance. The resulting data was rotated using the varimax rotation method. The cumulative contribution of the five factors is 100%, as shown in Table 2. In other words, the 32 pairs of Kansei words can be represented by only five factors.

Fig. 3. Five factors are extracted.

Table 2
Cumulative contribution of the total variance.

Factor | Initial eigenvalues (Total, % of variance, Cumulative %) | Rotation sums of squared loadings (Total, % of variance, Cumulative %)
1 | 15.545, 48.577, 48.577 | 13.574, 42.418, 42.418
2 | 9.463, 29.571, 78.148 | 10.565, 33.014, 75.433
3 | 3.567, 11.148, 89.296 | 3.983, 12.445, 87.878
4 | 1.998, 6.244, 95.540 | 1.944, 6.074, 93.952
5 | 1.427, 4.460, 100.000 | 1.935, 6.048, 100.000
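The extraction step just described can be reproduced with standard numerical tools. The following Python sketch is illustrative rather than the authors' code: the rating matrix is a hypothetical stand-in for the first-survey data (observations x 32 Kansei scales), and the varimax routine is the textbook rotation algorithm.

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # Classic varimax rotation of a (variables x factors) loading matrix
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        grad = loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):
            break
        var = new_var
    return loadings @ R

# ratings: observations x 32 matrix of 7-point SD scores (hypothetical placeholder data)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(180, 32)).astype(float)

corr = np.corrcoef(ratings, rowvar=False)         # 32 x 32 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                 # sort factors by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.sum(eigvals > 1.0))                    # keep factors with eigenvalue > 1
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # principal-component loadings
rotated = varimax(loadings)                       # varimax-rotated loadings (32 x k)
print(f"{k} factors retained")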

In order to further identify the representative Kansei factors and understand the similarity between pairs of Kansei words, cluster analysis was used to find the location of every pair of Kansei words in the five-dimensional factor space. Hierarchical cluster analysis with Ward's method was used to examine the coordinates of each pair of Kansei words. Five clusters and their centers were found, and the Euclidean distance between each pair of Kansei words and its cluster center was calculated. This distance can be used to estimate the importance of a pair of Kansei words within its cluster. Finally, 5 pairs of representative Kansei words were extracted: "Modern–Ancient", "Masculine–Feminine", "Relaxing–Anxious", "Soft–Harsh", and "Explicit–Ambiguous".
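A matching sketch of the clustering step, again hypothetical: it reuses the rotated loading matrix from the previous sketch as the coordinates of the 32 word pairs in factor space, cuts a Ward-linkage dendrogram into five clusters, and picks the pair closest to each cluster center as the representative.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

coords = rotated  # each of the 32 word pairs located in the k-dimensional factor space

Z = linkage(coords, method="ward")               # hierarchical clustering, Ward's method
labels = fcluster(Z, t=5, criterion="maxclust")  # cut the dendrogram into 5 clusters

representatives = []
for c in range(1, 6):
    members = np.where(labels == c)[0]
    center = coords[members].mean(axis=0)        # cluster center
    dists = np.linalg.norm(coords[members] - center, axis=1)
    representatives.append(int(members[np.argmin(dists)]))  # pair closest to the center
print("representative Kansei pair indices:", representatives)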

4. Second stage

In order to find the relationships between the five representative Kansei words and the HUD physical image design elements, QT1 was used in the second stage.

4.1. Create new images

Since this study focused on the relationships between different HUD physical image design elements and drivers' feelings and emotions, rather than on the aesthetic aspects of the design patterns, artistic issues were not considered. In addition, HUD optical hardware design, which affects image brightness, focal distance, and resolution, was not considered. After the first survey, the HUD physical image design space was modified and only the important elements were selected for further study.


Finally, six design elements were extracted: form of the major content, form of the secondary content, amount of information, image location, font, and color. The six design elements and the associated design levels are shown in Table 3.

The major content here is the driving speed. The secondary contents include gear position, fuel gauge, time, and navigation. Each design element has several levels. For example, color has three levels: orange, green, and blue. The location property describes whether the HUD image is projected in front of the steering wheel (center), on the left side of the steering wheel (left), or on the right side of the steering wheel (right). Since it was impossible to test all design combinations, an orthogonal array was used to conduct a sufficient number of experimental evaluations. In this study, an L18 orthogonal array was used and 18 new design images were created, as shown in Fig. 4 (see the sketch after Table 3).

Table 3
HUD physical image properties in the second survey.

Design element | Design levels
Form of the major content | Digital; Meter; Mixed
Form of the secondary content | Digital; Meter
Amount of information | One item (speedometer); Three items (speedometer, gear position, fuel gauge); Five items (speedometer, gear position, fuel gauge, time, navigation)
Location | Left; Center; Right
Font | Arial; Electronic
Color | Orange; Green; Blue

Fig. 4. Eighteen HUD presentation images.

4.2. Equipment setup

In the second stage, a driving simulator was built. A BMW HUD device was remodeled for survey use. Its original display panel was replaced by a 2.5-inch LCD panel so that the presentation image could be controlled by a laptop computer. Taking advantage of the optical capability of the BMW HUD, the new HUD presentation images were able to be projected onto a windshield. Fig. 5a shows an isometric view of the BMW HUD, and Fig. 5b shows the LCD panel and its electronic board installed on the HUD. The remodeled BMW HUD was then installed in a car, as shown in Fig. 6.

Fig. 5. A remodeled BMW HUD: (a) isometric view; (b) rear view showing the 2.5-inch LCD panel.

Fig. 6. Remodeled BMW HUD installed in a car.

A virtual city was built using Autodesk Maya. The human–computer interaction was implemented using the Quest3D game engine and a Logitech gaming steering wheel. Collision detection was enabled to resemble the real physical environment. Users can accelerate, decelerate, and make turns while driving through the virtual city. Because of hardware limitations, the images projected on the windshield did not change while driving. Fig. 7 shows the scenery of the virtual city. Fig. 8 shows an example of a HUD presentation image projected onto a windshield.

Fig. 7. Driving simulator.

Fig. 8. HUD presentation image projected onto a windshield.

4.3. Quantification Theory Type 1 (QT1)

QT1 is a multiple regression model which uses a least squares method to find a solution [23]. In this study, a Kansei prediction model can be built by QT1 based on the following equation:

Y = a_{a1}A_1 + a_{a2}A_2 + a_{a3}A_3 + a_{b1}B_1 + a_{b2}B_2 + a_{c1}C_1 + a_{c2}C_2 + a_{c3}C_3 + a_{d1}D_1 + a_{d2}D_2 + a_{d3}D_3 + a_{e1}E_1 + a_{e2}E_2 + a_{e3}E_3 + a_{f1}F_1 + a_{f2}F_2 + K    (1)

where Y is the Kansei prediction value; A_i is the form of the major content (A_i = 1 for the chosen level, A_i = 0 otherwise); B_i is the form of the secondary content (B_i = 1 for the chosen level, B_i = 0 otherwise); C_i is the amount of information (C_i = 1 for the chosen level, C_i = 0 otherwise); D_i is the color (D_i = 1 for the chosen level, D_i = 0 otherwise); E_i is the location (E_i = 1 for the chosen level, E_i = 0 otherwise); F_i is the font (F_i = 1 for the chosen level, F_i = 0 otherwise); a_{ij} is the weighting coefficient for each design level; and K is a constant.


The weighting coefficients and the constant can be found using multiple regression analysis. The final Kansei value of a HUD design can then be found by substituting the corresponding weighting coefficients and the constant into the equation.
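In other words, QT1 here is ordinary least squares on dummy-coded (0/1) categorical variables. The sketch below is a minimal illustration, not the authors' implementation: the design assignments and scores are randomly generated placeholders, and because the full one-hot coding plus constant is rank-deficient, it relies on the minimum-norm lstsq solution where textbook QT1 would impose sum-to-zero constraints within each item.

import numpy as np

# Levels per design element A..F (Table 3): major, secondary, amount, location, font, color
n_levels = [3, 2, 3, 3, 3, 2]
rng = np.random.default_rng(1)

# designs[i, j] = level index of element j chosen in image i (placeholder for the L18 rows)
designs = np.column_stack([rng.integers(0, n, size=18) for n in n_levels])
# scores[i] = mean SD rating of image i for one Kansei word pair (placeholder data)
scores = rng.uniform(1, 7, size=18)

# Dummy-coded design matrix: one 0/1 column per design level, plus a constant column
cols = [(designs[:, j] == lvl).astype(float)
        for j, n in enumerate(n_levels) for lvl in range(n)]
X = np.column_stack(cols + [np.ones(18)])

# Least squares fit; coef holds the weighting coefficients a_ij and the constant K of Eq. (1)
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
weights, K = coef[:-1], coef[-1]
predicted = X @ coef  # fitted Kansei values for the 18 images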

4.4. Second survey

The questionnaire in the second survey was designed based on the 18 new HUD image samples and the five pairs of representative Kansei words. Each pair of Kansei words was scored from 1 to 7, the same as in the first survey. Forty subjects, none of whom had taken part in the first survey, participated in the second survey. Twenty subjects were above 45 years old, and twenty were between 18 and 45 years old. In each age group, there were 10 males and 10 females. All of them had a driver's license and driving experience.

Before the formal survey, participants were first given a briefing concerning the survey purpose and process. They also practiced using the driving simulator to drive through the virtual city with a HUD presentation image projected on the windshield. After they got used to the driving simulator, the formal survey started. During the formal survey, one HUD presentation image was randomly chosen and projected on the windshield, and the subject drove through the virtual city. Afterwards, the subject evaluated the projected HUD presentation image on the five representative Kansei word pairs. Then, another HUD presentation image was chosen randomly for another test drive. The same procedure was repeated until all 18 new HUD presentation images had been evaluated. It took about 1.5 h for each subject to finish the survey.

4.5. Data analysis

The survey data was analyzed using QT1. The results can be categorized into three parts: the whole population, the older-age and younger-age groups, and the male and female groups. Each group is discussed in terms of the five representative Kansei word pairs.

4.5.1. Whole population

In this section, the results of the whole population for each pair of the representative Kansei words are discussed. Table 4 shows the results of "Modern–Ancient" for the forty subjects. The multiple correlation coefficient (R) is 0.945, and the coefficient of determination (R²) is 0.893. From the partial correlation coefficients (PCC), we can see that the amount of information shown on the HUD affected Kansei "Modern–Ancient" the most (PCC = 0.895). From the design levels and the associated weightings, we can see that most people felt that if there was only one item shown on the HUD, the image was more "Ancient", but if there were five items shown on the HUD, the image was more "Modern".

The next design element which affected Kansei "Modern–Ancient" was the form of the major content (PCC = 0.842). If the major content was in the form of a meter, most people felt the image was more "Ancient", but if the major content mixed meter and digital forms, they felt the image was more "Modern".

The other four pairs of Kansei words were analyzed in the same way as Kansei "Modern–Ancient". Table 5 lists the highest controlling design element, i.e., the design element with the highest PCC, the associated dominant design levels, and their weighting coefficients for each pair of Kansei words. Here, positive and negative weightings do not represent the superiority or inferiority of a design. A positive weighting coefficient represents the influence of the corresponding design level toward the right-side Kansei; a negative coefficient represents its influence toward the left-side Kansei. As a result, if designers would like to strengthen the right-side Kansei in their design, they can use the design levels which have positive weightings. On the other hand, if designers would like to strengthen the left-side Kansei, they can use the design levels which have negative weightings.


Table 4
QT1 analysis of the whole population for Kansei "Modern–Ancient".

Design element | Design levels | PCC
Form of the major content | A1: Digital; A2: Meter; A3: Mixed | 0.842
Form of the secondary content | B1: Digital; B2: Meter | 0.380
Amount of information | C1: 1 item; C2: 3 items; C3: 5 items | 0.895
Color | D1: Orange; D2: Green; D3: Blue | 0.395
Location | E1: Left; E2: Middle; E3: Right | 0.717
Font | F1: Arial; F2: Electronic | 0.594

Constant K = 3.790; R = 0.945; R² = 0.893. [Per-level weighting coefficients were plotted as bars on a Modern ← → Ancient axis; the bar values are not reproduced here.]

Table 5
Highest controlling design elements for the whole population.

Kansei (R²) | Highest controlling design element (PCC) | Design level (weighting coefficients)
Modern–Ancient (0.893) | Amount of information (0.895) | Five items (−0.699)–one item (0.760)
Feminine–Masculine (0.86) | Color (0.898) | Orange (−0.524)–Blue (0.406)
Soft–Harsh (0.712) | Location (0.635) | Center (−0.139)–Left (0.140)
Relaxing–Anxious (0.85) | Amount of information (0.882) | One item (−0.318)–five items (0.436)
Explicit–Ambiguous (0.82) | Form of the major content (0.811) | Digital (−0.344)–Meter (0.485)


4.5.2. Age groups

The results of the older and younger age groups are shown in Table 6. For Kansei "Modern–Ancient", both age groups had the same feelings; that is, the amount of information affected Kansei "Modern–Ancient" the most. Five items of information brought a "Modern" feeling, while one item brought an "Ancient" feeling. For Kansei "Feminine–Masculine", the two age groups also had the same feelings; that is, color affected Kansei "Feminine–Masculine" the most. Orange brought a "Feminine" feeling, but blue brought a "Masculine" feeling.

For Kansei "Soft–Harsh", for younger people the amount of information had the highest PCC: three items were considered "Soft" and five items "Harsh". The older age group had a very low R² for this Kansei. However, we can still see that major content in digital form was considered "Soft", while the meter form was considered "Harsh". One reason might be that older people have weaker eyesight and have difficulty reading a meter pointer.

For Kansei "Relaxing–Anxious", both age groups had the same feelings; that is, the amount of information affected it the most. Five items of information brought an "Anxious" feeling, but one item brought a "Relaxing" feeling. Finally, for Kansei "Explicit–Ambiguous", younger people considered that the amount of information affected it the most: one item of information was considered "Explicit" and five items "Ambiguous". However, older people considered that the form of the major content affected Kansei "Explicit–Ambiguous" the most. Digital was considered "Explicit", and meter "Ambiguous". The reason might also be older people's weaker eyesight.

Table 6
Highest controlling design elements for the age groups.

Younger age group (18–45)
Kansei (R²) | Highest controlling design element (PCC) | Design level (weighting coefficients)
Modern–Ancient (0.945) | Amount of information (0.931) | Five items (−0.561)–one item (0.631)
Feminine–Masculine (0.826) | Color (0.854) | Orange (−0.508)–Blue (0.400)
Soft–Harsh (0.926) | Amount of information (0.873) | Three items (−0.153)–five items (0.281)
Relaxing–Anxious (0.845) | Amount of information (0.875) | One item (−0.317)–five items (0.542)
Explicit–Ambiguous (0.872) | Amount of information (0.858) | One item (−0.611)–five items (0.472)

Older age group (45+)
Kansei (R²) | Highest controlling design element (PCC) | Design level (weighting coefficients)
Modern–Ancient (0.827) | Amount of information (0.85) | Five items (−0.836)–one item (0.889)
Feminine–Masculine (0.742) | Color (0.795) | Orange (−0.539)–Blue (0.411)
Soft–Harsh (0.457) | Form of the major content (0.553) | Digital (−0.200)–Meter (0.175)
Relaxing–Anxious (0.783) | Amount of information (0.796) | One item (−0.319)–five items (0.331)
Explicit–Ambiguous (0.618) | Form of the major content (0.689) | Digital (−0.394)–Meter (0.456)



4.5.3. Gender groups

The results of the male and female groups are shown in Table 7. For Kansei "Modern–Ancient", both groups had the same feelings; that is, the amount of information affected it the most. Five items of information brought a "Modern" feeling, but one item brought an "Ancient" feeling. For Kansei "Feminine–Masculine", the male group considered that font affected it the most: the Arial font gave a "Feminine" feeling, but the electronic font gave a "Masculine" feeling. However, the female group considered that color affected Kansei "Feminine–Masculine" the most: orange gave a "Feminine" feeling, but blue gave a "Masculine" feeling.

For Kansei "Soft–Harsh", the male group considered that the form of the major content affected it the most: digital gave a "Soft" feeling, but the mixed form gave a "Harsh" feeling. For the female group, color affected it the most: orange gave a "Soft" feeling, but blue gave a "Harsh" feeling.

For Kansei "Relaxing–Anxious", the male group considered that the amount of information affected it the most: one item brought a "Relaxing" feeling, but five items brought an "Anxious" feeling. However, for the female group, the location of the HUD presentation image affected it the most: if the image was projected on the right side of the steering wheel, they felt "Relaxing", but if it was projected on the left side, they felt "Anxious". Finally, for Kansei "Explicit–Ambiguous", both groups considered that the form of the major content affected it the most: digital was considered "Explicit", but meter "Ambiguous".

Table 7
Highest controlling design elements for the gender groups.

Male
Kansei (R²) | Highest controlling design element (PCC) | Design level (weighting coefficients)
Modern–Ancient (0.904) | Amount of information (0.901) | Five items (−0.717)–one item (0.750)
Feminine–Masculine (0.785) | Font (0.892) | Arial (−0.246)–Electronic (0.246)
Soft–Harsh (0.809) | Form of the major content (0.746) | Digital (−0.211)–Mixed (0.189)
Relaxing–Anxious (0.910) | Amount of information (0.927) | One item (−0.394)–five items (0.506)
Explicit–Ambiguous (0.857) | Form of the major content (0.847) | Digital (−0.397)–Meter (0.503)

Female
Kansei (R²) | Highest controlling design element (PCC) | Design level (weighting coefficients)
Modern–Ancient (0.863) | Amount of information (0.868) | Five items (−0.681)–one item (0.769)
Feminine–Masculine (0.883) | Color (0.854) | Orange (−0.856)–Blue (0.653)
Soft–Harsh (0.808) | Color (0.836) | Orange (−0.308)–Blue (0.425)
Relaxing–Anxious (0.797) | Location (0.747) | Right (−0.308)–Left (0.342)
Explicit–Ambiguous (0.765) | Form of the major content (0.740) | Digital (−0.292)–Meter (0.467)

5. Verification

From the results of the second stage, a prediction model was built using QT1. In this section, the validity of the model is verified. Two distinct commercial HUD presentation images on the market were tested, and a one-sample t-test was conducted to compare the predicted values with the survey values. The same forty subjects from the second survey participated in the verification test. The process was the same as in the second stage, and the SD scale from 1 to 7 was used.

5.1. Sample 1 – GM HUD image design

A GM HUD presentation image was tested. According to Eq. (1), the Kansei value for each pair of Kansei words can be found. For example, the design levels and the corresponding weighting coefficients of Kansei "Modern–Ancient" for the GM HUD image were: form of the major content, Mixed (−0.540); form of the secondary content, Digital (−0.123); amount of information, three items (−0.061); color, green (−0.090); location, center (0.201); font, Electronic (−0.220); and the constant K was 3.790. Thus, the total predicted value for Kansei "Modern–Ancient" was:

Y = (−0.540) + (−0.123) + (−0.061) + (−0.090) + (0.201) + (−0.220) + 3.790 = 2.957

Since the neutral value for each pair of Kansei words is 4, any value greater than 4 means the result leaned toward the right-side Kansei; any value less than 4 means it leaned toward the left-side Kansei. The same method was used to calculate the predicted values for the other Kansei words. The results are shown in Table 8. In this example, QT1 predicted that the Sample 1 HUD image designed by GM was "Modern", "Masculine", "Soft", "Relaxing", and "Explicit".
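For a concrete image, Eq. (1) thus reduces to summing the weighting coefficients of the chosen levels and adding K. A small sketch reproducing the "Modern–Ancient" calculation above, with the coefficients taken from Table 8:

# Weighting coefficients of the GM image's chosen levels for "Modern-Ancient" (Table 8)
gm_weights = {
    "form of the major content: Mixed":       -0.540,
    "form of the secondary content: Digital": -0.123,
    "amount of information: three items":     -0.061,
    "color: green":                           -0.090,
    "location: center":                        0.201,
    "font: Electronic":                       -0.220,
}
K = 3.790  # constant for "Modern-Ancient" (Table 8)
Y = sum(gm_weights.values()) + K
print(round(Y, 3))  # 2.957 -> below the neutral 4, i.e., toward "Modern"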


Table 8
QT1 predicted values for the GM HUD.

Design element (chosen level) | Modern–Ancient | Feminine–Masculine | Soft–Harsh | Relaxing–Anxious | Explicit–Ambiguous
Form of the major content (Mixed) | −0.540 | −0.053 | 0.128 | 0.090 | −0.140
Form of the secondary content (Digital) | −0.123 | 0.014 | 0.014 | −0.137 | −0.218
Amount of information (Three items) | −0.061 | −0.086 | −0.039 | −0.118 | 0.072
Color (Green) | −0.090 | 0.118 | 0.015 | 0.082 | 0.031
Location (Center) | 0.201 | −0.019 | −0.139 | 0.003 | 0.085
Font (Electronic) | −0.220 | 0.239 | 0.111 | 0.103 | −0.038
Constant K | 3.790 | 4.432 | 3.310 | 3.451 | 2.961
Total value | 2.957 | 4.645 | 3.400 | 3.474 | 2.753

Table 9
Comparison between predicted values and survey values (GM HUD).

Kansei | Predicted value Y | Survey avg. | Survey SD | DOF | p-value | 95% CI (lower, upper)
Modern–Ancient | 2.957 | 2.675 | 1.439 | 39 | 0.223* | (2.215, 3.135)
Feminine–Masculine | 4.645 | 5.275 | 1.281 | 39 | 0.003 | (4.865, 5.685)
Soft–Harsh | 3.400 | 3.850 | 1.442 | 39 | 0.056* | (3.389, 4.311)
Relaxing–Anxious | 3.474 | 3.725 | 1.617 | 39 | 0.332* | (3.208, 4.242)
Explicit–Ambiguous | 2.753 | 3.075 | 1.575 | 39 | 0.204* | (2.571, 3.579)

An asterisk (*) marks p-values greater than 0.05, i.e., no statistically significant difference between the survey value and the predicted value.


The predicted values and the survey values were compared using a one-sample t-test with α = 0.05 (95% confidence interval). The results in Table 9 show that QT1 successfully predicted 4 out of 5 pairs of Kansei words. The p-value for Kansei "Feminine–Masculine" was less than 0.05, which means the survey value differed statistically from the predicted value. However, both values were greater than 4; that is, the survey value and the predicted value agreed that Sample 1 was more "Masculine" than "Feminine", but the degree of "Masculine" in the survey value was much stronger than in the predicted value. Thus, statistically, they were different.
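The comparison itself is a standard one-sample t-test of the 40 survey ratings against the QT1 prediction. A sketch with SciPy; the survey ratings here are simulated stand-ins matching the mean and SD reported in Table 9:

import numpy as np
from scipy import stats

predicted = 2.957  # QT1 prediction for "Modern-Ancient" (Table 9)
rng = np.random.default_rng(2)
# Simulated stand-in for the 40 SD ratings (mean 2.675, SD 1.439 as in Table 9)
survey = rng.normal(loc=2.675, scale=1.439, size=40)

t_stat, p_value = stats.ttest_1samp(survey, popmean=predicted)
# p >= 0.05: no evidence that the mean survey rating differs from the prediction
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")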



5.2. Sample 2 – BMW HUD image design

In this example, a BMW HUD presentation image was tested. The same method was used to predict the Kansei value for each pair of Kansei words. The results are shown in Table 10. In this example, QT1 predicted that the BMW HUD image was "Modern", "Feminine", "Soft", "Relaxing", and "Explicit".

Table 10
QT1 predicted values for the BMW HUD.

Design element (chosen level) | Modern–Ancient | Feminine–Masculine | Soft–Harsh | Relaxing–Anxious | Explicit–Ambiguous
Form of the major content (Digital) | −0.053 | −0.015 | −0.089 | −0.097 | −0.344
Form of the secondary content (Digital) | −0.123 | 0.014 | 0.014 | −0.137 | −0.218
Amount of information (Five items) | −0.699 | −0.065 | 0.078 | 0.436 | 0.293
Color (Orange) | −0.090 | −0.524 | −0.135 | −0.139 | −0.253
Location (Right) | −0.432 | −0.036 | −0.001 | −0.156 | −0.136
Font (Arial) | 0.221 | −0.239 | −0.111 | −0.103 | 0.038
Constant K | 3.790 | 4.432 | 3.310 | 3.451 | 2.961
Total value | 2.614 | 3.567 | 3.066 | 3.255 | 2.341

The predicted values and the survey values were compared using a one-sample t-test with α = 0.05 (95% confidence interval). The results in Table 11 show that QT1 successfully predicted 3 out of 5 pairs of Kansei words. The p-values for Kansei "Modern–Ancient" and "Explicit–Ambiguous" were less than 0.05, which means those survey values differed statistically from the predicted values. As with Sample 1, although the predicted values of these two pairs of Kansei words were statistically different from the survey values, they pointed in the same Kansei directions. That means both the predicted values and the survey values agreed that the BMW HUD image was "Modern" and "Explicit", but the degrees differed.

Table 11
Comparison between predicted values and survey values (BMW HUD).

Kansei | Predicted value Y | Survey avg. | Survey SD | DOF | p-value | 95% CI (lower, upper)
Modern–Ancient | 2.614 | 1.875 | 0.939 | 39 | 0.000 | (1.575, 2.175)
Feminine–Masculine | 3.567 | 4.000 | 1.359 | 39 | 0.051* | (3.565, 4.435)
Soft–Harsh | 3.066 | 3.225 | 1.493 | 39 | 0.505* | (2.747, 3.703)
Relaxing–Anxious | 3.255 | 3.525 | 1.768 | 39 | 0.340* | (2.959, 4.091)
Explicit–Ambiguous | 2.341 | 3.625 | 2.047 | 39 | 0.000 | (2.970, 4.280)

An asterisk (*) marks p-values greater than 0.05, i.e., no statistically significant difference between the survey value and the predicted value.

6. Conclusions

Prior research has shown that HUDs can enhance drivers' awareness and responses, and most automobile manufacturers are integrating HUDs into their vehicle designs. However, in the modern market, HUDs should satisfy not only functional requirements but also users' feelings and emotions while driving. This study analyzed the relationships between drivers' feelings and HUD physical image design elements using Kansei engineering. The study shows that drivers' Kansei responses consist of five principal factors. Each Kansei factor was quantitatively related to the HUD physical image design elements using QT1. A model was built to describe how the semantic space and the space of physical properties are associated. Two existing HUDs on the market were used to test the validity of the model. The case studies showed that QT1 can successfully predict the Kansei values for a given HUD presentation image. The results of the study can also provide designers with guidelines for personalized or customized HUD presentation image design, to enhance a car's uniqueness.

In this research, all the subjects were from the same geographical location; thus, the QT1 model is most applicable to that population, and there is no evidence of the model's applicability to different ethnic groups. However, in the global market, automobile manufacturers often export their products to different countries. In the future, research concerning Kansei differences between ethnic groups for HUD image designs will be carried out.

Acknowledgements

Support from the Hua-Chuang Automobile Information Technical Center and the Yen Tjing Ling Industrial Research Institute is gratefully acknowledged.

References

[1] W.W. Wierwille, Development of an initial model relating driver in-vehicle visual demands to accident rate, in: Third Annual Mid-Atlantic Human Factors Conference Proceedings, Virginia Polytechnic Institute and State University, Blacksburg, VA, 1995.


[2] Y.C. Liu, M.H. Wen, Comparison of head-up display (HUD) vs. head-down display (HDD): driving performance of commercial vehicle operators in Taiwan, International Journal of Human–Computer Studies 61 (2004) 679–697.

[3] V. Charissis, S. Arafat, W. Chan, C. Christomanos, Driving simulator for head up display evaluation: driver's response time on accident simulation cases, in: Driving Simulation Conference, DSC'06 Asia/Pacific, Tsukuba, Tokyo, Japan, 2006.

[4] V. Charissis, S. Papanastasiou, Human–machine collaboration through vehicle head up display interface, Cognition, Technology and Work 12 (2010) 41–50.

[5] Y.C. Liu, Effect of using head-up display in automobile context on attention demand and driving performance, Displays 24 (2003) 157–165.

[6] M. Tonnis, C. Lange, G. Klinker, Visual longitudinal and lateral driving assistance in the head-up display of cars, in: Proceedings of the Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 2007, pp. 128–131.

[7] J. Lincoln, How a laser HUD can make driving safer? 2007. <http://www.microvision.com/pdfs/safer_driving.pdf>.

[8] D.R. Tufano, Automotive HUDs: the overlooked safety issues, Human Factors 39 (1997) 303–311.

[9] R.S. McCann, J.W. McCandeless, Human–machine teaming for dynamic fault management in next-generation launch vehicles, in: Proceedings of the Joint Army–Navy–NASA–Air Force (JANNAF) 3rd Modeling and Simulation Subcommittee Meeting, Colorado Springs, 2003.

[10] H. Yoo, O. Tsimhoni, P. Green, The effect of HUD warning location on driver responses, in: International Transportation Systems World Congress, 1999, pp. 1–10.

[11] M. Nagamachi, Kansei engineering: a new ergonomic consumer-oriented technology for product development, International Journal of Industrial Ergonomics 15 (1995) 3–11.

[12] M. Nagamachi, in: M. Nagamachi (Ed.), Kansei Engineering: The Framework and Methods, Kansei Engineering 1, Kaibundo Publishing Co. Ltd., Kure, 1997, pp. 1–9.

[13] K. Chen, S.C. Chiu, F.C. Lin, Kansei design with cross cultural perspectives, in: N. Aykin (Ed.), Usability and Internationalization, Part I, HCII 2007, Lecture Notes in Computer Science 4559 (2007) 47–56.

[14] H.Y. Chen, Y.M. Chang, Extraction of product form features critical to determining consumers' perceptions of product image using a numerical definition-based systematic approach, International Journal of Industrial Ergonomics 39 (2009) 133–145.

[15] M. Nagamachi, Perspectives and the new trend of Kansei/affective engineering, The TQM Journal 20 (2008) 290–298.

[16] M. Nagamachi, Kansei engineering as a powerful consumer-oriented technology for product development, Applied Ergonomics 33 (2002) 289–294.

[17] S. Schütte, Engineering Emotional Values in Product Design, Linköping University, Linköping, 2005.

[18] S. Schütte, J. Eklund, Product Development for Heart and Soul, Linköping University, Department for Quality and Human Systems Engineering, Linköping, 2003.

[19] C.E. Osgood, G.J. Suci, P.H. Tannenbaum, The Measurement of Meaning, University of Illinois Press, Champaign, 1957.

[20] S. Ishihara, Kansei engineering procedure and statistical analysis, in: Workshop at International Conference on Affective Human Factors Design, Singapore, 2001.

[21] M. Nagamachi, Workshop 2 on Kansei engineering, in: Proceedings of International Conference on Affective Human Factors Design, Singapore, 2001.

[22] J.P. Guilford, Psychology Methods, McGraw-Hill Publishing Company, New York, 1971.

[23] M. Nagamachi, Kansei engineering and rough sets model, Lecture Notes in Computer Science 4259 (2006) 27–37.