Blindness and Assistive Systems for Blind Navigation

Post on 24-Jan-2015


Description

An overview of the Human Eye, its function and its uniqueness, along with a brief roundup of assistive technologies developed for blind navigation.


Life is INSPIRATION

Life is KINDNESS

Life is BEAUTIFUL

BUT WHAT IF YOU CANNOT SEE IT ANYMORE?

BLINDNESS

A PRESENTATION BY ASHISH DINESH BABU

UNDER GUIDANCE OF MR. VIJU SHANKAR

CHAPTER 1

THE HUMAN EYE

• Cornea : A thin membrane having a refractive index of approx. 1.38. Protects the eye and refracts light as it enters.
• Pupil : An opening in the middle of the eyeball. Appears black because the incident light is absorbed by the retina and does not exit the eye. The size of the pupil opening is adjusted by the dilation of the iris.
• Iris : A coloured diaphragm capable of adjusting the size of the pupil opening. In bright-light situations it reduces the opening to limit the amount of light admitted; in dim-light situations it maximizes the opening to increase the amount of light admitted.
• Crystalline lens : Made of fibrous layers having a refractive index of roughly 1.40. Fine-tunes the vision process by changing its shape. The lens is attached to the ciliary muscles.
• Ciliary muscles : Relax and contract to change the shape of the lens, assisting the eye in producing an image on the back of the eyeball.
• Retina : The inner surface of the eye. Contains up to 120 million rods and 6 million cones that detect the intensity and frequency of incident light and send nerve impulses to the brain.
• Nerve cells : Nerve impulses travel through a network of nerve cells. About one million neural pathways run from the rods and cones to the brain, bundled together to form the optic nerve at the back of the eyeball.

The ultimate goal of this anatomy is to allow humans to focus images on the retina.

CHAPTER 2

IMAGE FORMATION

Cornea - lens - ciliary muscles - retina

• The lens of the eye is not where all the refraction of incoming light rays takes place; most of the refraction occurs at the cornea.
• The refractive index of the cornea is significantly greater than that of the surrounding air.
• This difference in optical density, combined with the curved shape of the cornea, is what lets the cornea do most of the refraction.
• The shape of the crystalline lens is changed by the ciliary muscles, inducing small alterations in the amount of lens bulge that fine-tune the additional refraction.

• For an object located more than two focal lengths from the "lens," the image forms between the F and 2F points.
• The image is inverted, reduced in size, and real.
• The cornea-lens system produces an image of the object on the retinal surface. The image is:
  • Real - vision depends on the stimulation of nerve impulses by incident light rays, and only real images are capable of producing such stimulation.
  • Reduced in size - allows the entire image to "fit" on the retina.
  • Inverted - the brain properly interprets the signal as originating from a right-side-up object.
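The image characteristics above can be checked against the thin-lens equation, 1/f = 1/d_o + 1/d_i. A minimal numerical sketch (a single ideal lens in arbitrary units, standing in for the full cornea-lens system; the values are illustrative, not measured eye parameters):

```python
def image_distance(f, d_o):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

f = 10.0    # focal length (arbitrary units), so 2F = 20
d_o = 25.0  # object placed beyond two focal lengths
d_i = image_distance(f, d_o)
m = -d_i / d_o  # magnification: negative means inverted, |m| < 1 means reduced

print(round(d_i, 2))  # 16.67 -- between F (10) and 2F (20)
print(round(m, 2))    # -0.67 -- inverted and reduced in size
```

As the slide states, the image lands between F and 2F, and the negative fractional magnification confirms it is inverted and reduced.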

• The Power of Accommodation: Ciliary muscles of the eye serve to contract and relax, thus changing the shape of the lens. This allows the eye to change its focal length and thus focus images of objects that are both close up and far away.
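Accommodation can be illustrated with the same thin-lens relation by holding the image distance fixed (roughly 1.7 cm is a commonly quoted cornea-to-retina distance) and solving for the focal length the eye must provide. A simplified single-lens sketch, not a physiological model:

```python
def required_focal_length(d_o, d_i=1.7):
    """Focal length (cm) needed to focus an object at distance d_o onto a fixed image distance d_i."""
    return 1.0 / (1.0 / d_o + 1.0 / d_i)

far = required_focal_length(float('inf'))  # distant object: f equals the image distance
near = required_focal_length(25.0)         # object at a typical 25 cm near point

print(round(far, 2))   # 1.7
print(round(near, 2))  # 1.59
```

The small difference between the two focal lengths is exactly what the ciliary muscles supply by changing the lens shape.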

CHAPTER 3

HUMAN EYE VS CAMERA

“WHAT YOU SEE IS NOT WHAT YOU GET”

What makes the human eye unique?

• Our eyes are able to look around a scene and dynamically adjust based on subject matter.
• Our eyes can compensate as we focus on regions of varying brightness, can look around to encompass a broader angle of view, or can alternately focus on objects at a variety of distances.
• The end result is akin to a video camera that compiles relevant snapshots to form a mental image.
• What we really see is our mind’s reconstruction of objects based on input provided by the eyes, not the actual light received by our eyes.

Angle of View

Resolution and Detail

Sensitivity and Dynamic Range

Angle of View

[Figure: angle of view at the focal plane for short vs. long focal lengths]

• Each eye individually has a 120-200 degree angle of view, depending on how strictly "seen" is defined.
• The dual-eye overlap region is around 130 degrees.
• Our central angle of view (around 40-60 degrees) is what most impacts our perception.
• Too wide an angle of view: the relative sizes of objects are exaggerated.
• Too narrow an angle: objects are all nearly the same relative size and the sense of depth is lost.
• Extremely wide angles tend to make objects near the edges of the frame appear stretched.
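The dependence of angle of view on focal length can be sketched with the usual formula AOV = 2·arctan(d / 2f). The 36 mm frame width below is an illustrative full-frame camera assumption, not a property of the eye:

```python
import math

def angle_of_view(frame_width, focal_length):
    """Horizontal angle of view in degrees: 2 * arctan(d / 2f)."""
    return math.degrees(2 * math.atan(frame_width / (2 * focal_length)))

# A 50 mm lens on a 36 mm-wide frame covers about 40 degrees,
# comparable to the 40-60 degree central field mentioned above.
print(round(angle_of_view(36, 50), 1))  # 39.6
print(round(angle_of_view(36, 16), 1))  # a 16 mm wide-angle lens covers far more
```

Shorter focal lengths give wider angles, which is why wide-angle shots exaggerate relative sizes and stretch objects near the frame edges.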

Resolution and Detail

• Our mind does not remember images pixel by pixel; it records memorable textures, colour and contrast on an image by image basis.

• To assemble a detailed mental image, our eyes focus on several regions of interest in rapid succession.

• Asymmetry : Each eye is more capable of perceiving detail below our line of sight than above, and peripheral vision is much more sensitive in directions away from the nose than towards it. Cameras record images almost perfectly symmetrically.
• Low-Light Viewing : In extremely low light, our eyes begin to see in monochrome, and our central vision begins to depict less detail than just off-center.
• Subtle Gradations : With a camera, enlarged detail is always easier to resolve; to our eyes, enlarged detail becomes less visible.

Sensitivity and Dynamic Range

• Our eye dynamically adjusts like a video camera.
• After adjusting to low light, our eyes can see anywhere from 10-14 f-stops of dynamic range.
• Dynamic range also depends on brightness and subject contrast.
• Cameras can take longer exposures to bring out even fainter objects, whereas our eyes do not see additional detail after staring at something for more than about 10-15 seconds.

Our mind is able to intelligently interpret the information from our eyes, whereas with a camera all we have is the raw image. Even so, current digital cameras fare surprisingly well, and surpass our own eyes in several visual capabilities.

CHAPTER 4

NAVIGATION ASSISTANCE SOLUTIONS

SOFTWARE SOLUTIONS

The Loadstone Project

• Open-source mobile navigation software for blind and visually impaired users of the S60 Symbian operating system.
• A GPS receiver is connected to the cell phone by Bluetooth. The software lets users create and store their own waypoints for navigation and share them with others.
• Many blind people around the world use Nokia cell phones because of the availability of “Talks” from Nuance Communications and “Mobile Speak” from Code Factory, which enable text-to-speech and also allow the use of Loadstone GPS.
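Waypoint navigation of this kind ultimately reduces to computing great-circle distances between GPS fixes. A minimal sketch of that calculation using the standard haversine formula (illustrative coordinates; this is not Loadstone's actual code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (haversine formula)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical waypoints: one degree of longitude at the equator is ~111 km.
print(round(haversine_m(0.0, 0.0, 0.0, 1.0)))  # ~111195
```

Given a set of stored waypoints, the nearest one to the current fix can then be announced to the user via text-to-speech.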

Mobile Geo

• Mobile Geo is Code Factory’s GPS navigation software for Windows Mobile-based smartphones.
• It is the first solution specifically designed as a navigation aid for people with a visual impairment that works with a wide range of mainstream mobile devices.
• Mobile Geo is seamlessly integrated with Code Factory’s popular screen readers, “Mobile Speak” for Pocket PCs and “Mobile Speak” for Windows Mobile smartphones.

BlindSquare

• BlindSquare is MIPsoft’s GPS navigation software for iPhone and iPad.
• It differs from other navigation applications by using crowd-sourced data.
• It uses Foursquare for points of interest and OpenStreetMap for street information.

RELATED WORK

Trinetra

• Aims to develop cost-effective, independence-enhancing technologies to benefit blind people.
• Addresses accessibility concerns of blind people using public transportation systems.
• Using GPS receivers and infrared sensors, information is relayed to a centralized fleet-management server via a cellular modem.
• Blind people using common text-to-speech-enabled cell phones can query estimated time of arrival, locality, and current bus capacity using a web browser.
• Spearheaded by Professor Priya Narasimhan, Trinetra is an ongoing project at the Electrical and Computer Engineering department of Carnegie Mellon University.

Drishti

• Drishti is a wireless pedestrian navigation system.
• Integrates several technologies including wearable computers, voice recognition and synthesis, wireless networks, GIS and GPS.
• Provides contextual information to the visually impaired and computes optimized routes based on user preference, temporal constraints (e.g. traffic congestion), and dynamic obstacles (e.g. ongoing ground work).
• Environmental conditions and landmark information queried from a spatial database along the route are provided on the fly through detailed explanatory voice cues.
• The blind user can also add intelligence, as he or she perceives it, to the central server.
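Route optimization of the kind Drishti describes can be sketched as a shortest-path search over a street graph whose edge weights encode travel time, with congested or obstructed segments penalized. This illustrates the general technique (Dijkstra's algorithm), not Drishti's actual implementation; the graph and weights are hypothetical:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over edges weighted by travel time."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Hypothetical street graph: the direct edge A-C is congested (weight 10),
# so the detour through B (2 + 3 = 5) wins.
g = {'A': [('B', 2), ('C', 10)], 'B': [('C', 3)], 'C': []}
path, cost = shortest_route(g, 'A', 'C')
print(path, cost)  # ['A', 'B', 'C'] 5
```

Dynamic obstacles reported by users would simply raise the weights of the affected edges before the next route query.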

NAVIG

• NAVIG is an innovative multidisciplinary project with both fundamental and applied aspects.
• The main objective is to increase the autonomy of blind people in their navigation capabilities.
• Reaching a destination while avoiding obstacles is one of the most difficult issues that blind individuals have to face. Autonomous navigation will be pursued indoors and outdoors, in known and unknown environments.
• The project brings together research centres specialized in human-machine interaction for handicapped people (IRIT), in auditory perception, spatial cognition, sound design and augmented reality (LIMSI), and in human and computer vision (CERCO).