GSM Basics Tutorial and Overview


GSM basics tutorial and overview [1]

- a tutorial, description, overview about the basics of GSM - Global System for Mobile communications with details of its radio interface, infrastructure technology, network and operation.

This GSM tutorial is split into several pages:

[1] GSM basics tutorial and overview
[2] GSM history
[3] GSM network architecture
[4] GSM interfaces
[5] GSM radio air interface / access network
[6] GSM frames, superframes and hyperframes
[7] GSM frequency bands and allocations
[8] GSM power class, control and amplifiers
[9] GSM physical and logical channels
[10] GSM codecs / vocoders
[11] GSM handover or handoff

The GSM system is the most widely used cellular technology in the world today. It has been a particularly successful cellular phone technology for a variety of reasons, including the ability to roam worldwide with the certainty of being able to operate on GSM networks in exactly the same way - provided billing agreements are in place.

The letters GSM originally stood for Groupe Speciale Mobile, but as it became clear this cellular technology was being used worldwide, the meaning of GSM was changed to Global System for Mobile Communications. Since this cellular technology was first deployed in 1991, the use of GSM has grown steadily, and it is now the most widely used cell phone system in the world. GSM reached the 1 billion subscriber point in February 2004, and is now well over the 3 billion subscriber mark and still steadily increasing.

GSM system overview

The GSM system was designed as a second generation (2G) cellular phone technology. One of the basic aims was to provide a system that would enable greater capacity to be achieved than the previous first generation analogue systems. GSM achieved this by using a digital TDMA (time division multiple access) approach. By adopting this technique more users could be accommodated within the available bandwidth. In addition to this, ciphering of the digitally encoded speech was adopted to retain privacy. Using the earlier analogue cellular technologies it was possible for anyone with a scanner receiver to listen to calls, and a number of famous personalities had been "eavesdropped" with embarrassing consequences.

GSM services


Speech or voice calls are obviously the primary function for the GSM cellular system. To achieve this the speech is digitally encoded and later decoded using a vocoder. A variety of vocoders are available for use, being aimed at different scenarios.

In addition to the voice services, GSM cellular technology supports a variety of other data services. Although their performance is nowhere near the level of those provided by 3G, they are nevertheless still important and useful. A variety of data services are supported with user data rates up to 9.6 kbps. Services including Group 3 facsimile, videotext and teletex can be supported.

One service that has grown enormously is the short message service (SMS). Developed as part of the GSM specification, it has also been incorporated into other cellular technologies. It can be thought of as being similar to the paging service but is far more comprehensive, allowing bi-directional messaging and store-and-forward delivery, and it also allows alphanumeric messages of a reasonable length. This service became particularly popular, initially with the young, as it provided a simple, low fixed-cost form of messaging.

GSM basics

The GSM cellular technology had a number of design aims when the development started:

- It should offer good subjective speech quality
- It should have a low phone or terminal cost
- Terminals should be able to be handheld
- The system should support international roaming
- It should offer good spectral efficiency
- The system should offer ISDN compatibility

The resulting GSM cellular technology that was developed provided for all of these. The overall system definition for GSM describes not only the air interface but also the network or infrastructure technology. By adopting this approach it is possible to define the operation of the whole network, enabling international roaming as well as enabling network elements from different manufacturers to operate alongside each other, although this interoperability is not complete, especially with older equipment.

GSM cellular technology uses 200 kHz RF channels. These are time division multiplexed to enable up to eight users to access each carrier. In this way it is a TDMA / FDMA system.

The base transceiver stations (BTS) are organised into small groups, controlled by a base station controller (BSC) which is typically co-located with one of the BTSs. The BSC with its associated BTSs is termed the base station subsystem (BSS).

Further into the core network is the main switching area. This is known as the mobile switching centre (MSC). Associated with it are the location registers, namely the home location register (HLR) and the visitor location register (VLR), which track the location of mobiles and enable calls to be routed to them. Additionally there are the Authentication Centre (AuC) and the Equipment Identity Register (EIR), which are used in authenticating the mobile before it is allowed onto the network and for billing. The operation of these is explained in the following pages.


Last but not least is the mobile itself. Often termed the ME or mobile equipment, this is the item that the end user sees. One important feature that was first implemented on GSM was the use of a Subscriber Identity Module (SIM). This card carried with it the user's identity and other information, allowing the user to upgrade a phone very easily while retaining the same identity on the network. It was also used to store other information such as the "phone book". This has allowed people to change phones very easily, and this has fuelled the phone manufacturing industry and enabled new phones with additional features to be launched. It has also allowed mobile operators to increase their average revenue per user (ARPU) by ensuring that users are able to access any new features that may be launched on the network requiring more sophisticated phones.

GSM system overview

The table below summarises the main points of the GSM system specification, showing some of the highlight features of technical interest.

Specification Summary for GSM Cellular System

Multiple access technology: FDMA / TDMA
Duplex technique: FDD
Uplink frequency band: 890 - 915 MHz (basic 900 MHz band only)
Downlink frequency band: 935 - 960 MHz (basic 900 MHz band only)
Channel spacing: 200 kHz
Modulation: GMSK
Speech coding: Various - original was RPE-LTP/13
Speech channels per RF channel: 8
Channel data rate: 270.833 kbps
Frame duration: 4.615 ms
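The 200 kHz channel spacing and the paired bands in the table follow a simple arithmetic relationship. As a sketch, using the standard primary GSM 900 channel numbering (ARFCNs 1-124, an assumption not spelled out in the table), each absolute radio frequency channel number maps to an uplink/downlink carrier pair:

```python
def gsm900_frequencies(arfcn: int) -> tuple:
    """Return (uplink_MHz, downlink_MHz) for a primary GSM 900 ARFCN (1-124)."""
    if not 1 <= arfcn <= 124:
        raise ValueError("primary GSM 900 uses ARFCNs 1 to 124")
    uplink = 890.0 + 0.2 * arfcn     # carriers in 200 kHz steps from 890 MHz
    downlink = uplink + 45.0         # fixed 45 MHz duplex spacing
    return uplink, downlink

up, down = gsm900_frequencies(62)    # a mid-band channel
print(f"ARFCN 62: uplink {up:.1f} MHz, downlink {down:.1f} MHz")
# ARFCN 62: uplink 902.4 MHz, downlink 947.4 MHz
```

The 45 MHz duplex separation is what allows the FDD operation listed in the table: each user transmits and receives on a fixed pair of carriers.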

GSM summary

The GSM system is the most successful cellular telecommunications system to date. With subscriber numbers running into billions and still increasing, it has been proved to have met its requirements. Further pages of this GSM tutorial or overview detail many of the GSM basics from the air interface, frame and slot structures to the logical and physical channels as well as details about the GSM network.

GSM History [2]

Today the GSM cell or mobile phone system is the most popular in the world. GSM handsets are widely available at good prices and the networks are robust and reliable. The GSM system is also feature-rich with applications such as SMS text messaging, international roaming, SIM cards and the like. It is also being


enhanced with technologies including GPRS and EDGE. To achieve this level of success has taken many years and is the result of both technical development and international cooperation. The GSM history can be seen to be a story of cooperation across Europe, and one that nobody thought would lead to the success that GSM is today.

The first cell phone systems that were developed were analogue systems. Typically they used frequency-modulated carriers for the voice channels and data was carried on a separate shared control channel. When compared to the systems employed today these systems were comparatively straightforward and as a result a vast number of systems appeared. Two of the major systems that were in existence were the AMPS (Advanced Mobile Phone System) that was used in the USA and many other countries and TACS (Total Access Communications System) that was used in the UK as well as many other countries around the world.

Another system that was employed, and was in fact the first to be commercially deployed, was the Nordic Mobile Telephone system (NMT). This was developed by a consortium of companies in Scandinavia and proved that international cooperation was possible.

The success of these systems proved to be their downfall. The use of all the systems installed around the globe increased dramatically and the effects of the limited frequency allocations were soon noticed. To overcome these a number of actions were taken. A system known as E-TACS or Extended-TACS was introduced giving the TACS system further channels. In the USA another system known as Narrowband AMPS (NAMPS) was developed.

New approaches

Neither of these approaches proved to be the long-term solution as cellular technology needed to be more efficient. With the experience gained from the NMT system showing that it was possible to develop a system across national boundaries, and with the political situation in Europe lending itself to international cooperation, it was decided to develop a new pan-European system. Furthermore it was realized that economies of scale would bring significant benefits. This was the beginning of the GSM system.

To achieve the basic definition of a new system a meeting was held in 1982 under the auspices of the Conference of European Posts and Telegraphs (CEPT). They formed a study group called the Groupe Special Mobile (GSM) to study and develop a pan-European public land mobile system. Several basic criteria that the new GSM system would have to meet were set down. These included: good subjective speech quality, low terminal and service cost, support for international roaming, ability to support handheld terminals, support for a range of new services and facilities, spectral efficiency, and finally ISDN compatibility.

With the levels of under-capacity being projected for the analogue systems, this gave a real sense of urgency to the GSM development. Although decisions about the exact nature of the cellular technology were not taken at an early stage, all parties involved had been working toward a digital system. This decision was finally made in February 1987. This gave a variety of advantages. Greater levels of spectral efficiency could be gained, and in addition to this the use of digital circuitry would allow for higher levels of integration in the circuitry. This in turn would result in cheaper handsets with more features. Nevertheless significant hurdles still needed to be overcome. For example, many of the methods for encoding the speech within a sufficiently narrow bandwidth needed to be developed, and this posed a significant risk to the project. Nevertheless the GSM system had been started.


GSM launch dates

Work continued, and 1991 was set as the launch date for an initial service using the new cellular technology with limited coverage and capability, to be followed by a complete roll-out of the service in major European cities by 1993 and the linking of these areas by 1995.

Meanwhile technical development was taking place. Initial trials had shown that time division multiple access techniques offered the best performance with the technology that would be available. This approach had the support of the major manufacturing companies, which ensured that sufficient equipment, in terms of handsets, base stations and network infrastructure, would be available for GSM.

Further impetus was given to the GSM project when in 1989 the responsibility was passed to the newly formed European Telecommunications Standards Institute (ETSI). Under the auspices of ETSI the specification work took place. It provided functional and interface descriptions for each of the functional entities defined in the system. The aim was to provide sufficient guidance to manufacturers so that equipment from different manufacturers would be interoperable, while not stopping innovation. The result of the specification work was a set of documents extending to more than 6000 pages. Nevertheless the resultant phone system provided a robust, feature-rich system. The first roaming agreement was signed between Telecom Finland and Vodafone in the UK, before any networks went live. Thus the vision of a pan-European network was fast becoming a reality.

The aim to launch GSM by 1991 proved to be a target that was too tough to meet. Terminals started to become available in mid 1992 and the real launch took place in the latter part of that year. With such a new service many were sceptical as the analogue systems were still in widespread use. Nevertheless by the end of 1993 GSM had attracted over a million subscribers and there were 25 roaming agreements in place. The growth continued and the next million subscribers were soon attracted.

Global GSM usage

Originally GSM had been planned as a European system. However the first indication that the success of GSM was spreading further afield occurred when the Australian network provider Telstra signed the GSM Memorandum of Understanding.

Frequencies

Originally it had been intended that GSM would operate on frequencies in the 900 MHz cellular band. In September 1993, the British operator Mercury One-to-One launched a network. Termed DCS 1800, it operated at frequencies in a new 1800 MHz band. The new frequencies brought new operators and further competition into the market, as well as allowing additional spectrum to be used, further increasing the overall capacity. This trend was followed in many countries, and soon the term DCS 1800 was dropped in favour of calling it GSM, as it was the same cellular technology operating on a different frequency band. In view of the higher frequency used, the distances the signals travelled were slightly shorter, but this was compensated for by additional base stations.


In the USA as well, a portion of spectrum at 1900 MHz was allocated for cellular usage in 1994. The licensing body, the FCC, did not legislate which technology should be used, and accordingly this enabled GSM to gain a foothold in the US market. This system was known as PCS 1900 (Personal Communication System).

GSM success

With GSM in use in many countries outside Europe, the name now reflected its change from Groupe Special Mobile to Global System for Mobile communications. The number of subscribers grew rapidly and by the beginning of 2004 the total number of GSM subscribers reached 1 billion. Attaining this figure was celebrated at the Cannes 3GSM conference held that year. Figures continued to rise, reaching and then well exceeding the 3 billion mark. In this way the history of GSM has shown it to be a great success.

GSM Network Architecture [3]

The GSM technical specifications define the different elements within the GSM network architecture and the ways in which they interact to enable overall network operation to be maintained.

The GSM network architecture is now well established. Later cellular systems have since been established and newer ones are being deployed, and the basic GSM network architecture has been updated to interface to the network elements these systems require. Despite these developments, the basic GSM network architecture has been maintained, and the elements described below perform the same functions as they did when the original GSM system was launched in the early 1990s.

GSM network architecture elements

The GSM network architecture as defined in the GSM specifications can be grouped into four main areas:

- Mobile station (MS)
- Base-station subsystem (BSS)
- Network and Switching Subsystem (NSS)
- Operation and Support Subsystem (OSS)


Simplified GSM Network Architecture

Mobile station

Mobile stations (MS), mobile equipment (ME) or, as they are most widely known, cell or mobile phones are the section of a GSM cellular network that the user sees and operates. In recent years their size has fallen dramatically while the level of functionality has greatly increased. A further advantage is that the time between battery charges has significantly increased.

There are a number of elements to the cell phone, although the two main elements are the main hardware and the SIM.

The hardware itself contains the main elements of the mobile phone including the display, case, battery, and the electronics used to generate the signal and to process the data received and to be transmitted. It also contains a number known as the International Mobile Equipment Identity (IMEI). This is installed in the phone at manufacture and "cannot" be changed. It is accessed by the network during registration to check whether the equipment has been reported as stolen.
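The IMEI has a defined structure: its final digit is a Luhn check digit computed over the preceding digits, so an IMEI can be sanity-checked in software. A minimal sketch follows; the 15-digit number used below is a commonly cited textbook example, not a real device identity:

```python
def luhn_valid(digits: str) -> bool:
    """Check a digit string (e.g. a 15-digit IMEI) with the Luhn algorithm:
    double every second digit from the right, subtract 9 from any result
    over 9, and require the total to be divisible by 10."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # every second digit from the right
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("490154203237518"))   # True  (textbook example IMEI)
print(luhn_valid("490154203237519"))   # False (corrupted check digit)
```

The same check catches any single-digit error, which is the point of the check digit when an IMEI is keyed in or transmitted.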

The SIM or Subscriber Identity Module contains the information that provides the identity of the user to the network. It contains a variety of information including a number known as the International Mobile Subscriber Identity (IMSI).

Base Station Subsystem (BSS)


The Base Station Subsystem (BSS) is the section of the GSM network architecture that is fundamentally associated with communicating with the mobiles on the network. It consists of two elements:

Base Transceiver Station (BTS):   The BTS used in a GSM network comprises the radio transmitter-receivers, and their associated antennas, that communicate directly with the mobiles. The BTS is the defining element for each cell. The BTS communicates with the mobiles, and the interface between the two is known as the Um interface, with its associated protocols.

Base Station Controller (BSC):   The BSC forms the next stage back into the GSM network. It controls a group of BTSs, and is often co-located with one of the BTSs in its group. It manages the radio resources and controls items such as handover within the group of BTSs, allocates channels and the like. It communicates with the BTSs over what is termed the Abis interface.

Network Switching Subsystem (NSS)

The GSM network subsystem contains a variety of different elements, and is often termed the core network. It provides the main control and interfacing for the whole mobile network. The major elements within the core network include:

Mobile Switching Centre (MSC):   The main element within the core network area of the overall GSM network architecture is the Mobile Switching Centre (MSC). The MSC acts like a normal switching node within a PSTN or ISDN, but also provides additional functionality to enable the requirements of a mobile user to be supported. These include registration, authentication, call location, inter-MSC handovers and call routing to a mobile subscriber. It also provides an interface to the PSTN so that calls can be routed from the mobile network to a phone connected to a landline. Interfaces to other MSCs are provided to enable calls to be made to mobiles on different networks.

Home Location Register (HLR):   This database contains all the administrative information about each subscriber along with their last known location. In this way, the GSM network is able to route calls to the relevant base station for the MS. When a user switches on their phone, the phone registers with the network and from this it is possible to determine which BTS it communicates with so that incoming calls can be routed appropriately. Even when the phone is not active (but switched on) it re-registers periodically to ensure that the network (HLR) is aware of its latest position. There is one HLR per network, although it may be distributed across various sub-centres for operational reasons.

Visitor Location Register (VLR):   This contains selected information from the HLR that enables the selected services for the individual subscriber to be provided. The VLR can be implemented as a separate entity, but it is commonly realised as an integral part of the MSC. In this way access is made faster and more convenient.

Equipment Identity Register (EIR):   The EIR is the entity that decides whether a given mobile equipment may be allowed onto the network. Each mobile equipment has a number known as the International Mobile Equipment Identity (IMEI). This number, as mentioned above, is installed in the equipment and is checked by the network during registration. Dependent upon the information held in the EIR, the mobile may be allocated one of three states: allowed onto the network, barred access, or monitored in case of problems.

Authentication Centre (AuC):   The AuC is a protected database that contains the secret key also contained in the user's SIM card. It is used for authentication and for ciphering on the radio channel.

Gateway Mobile Switching Centre (GMSC):   The GMSC is the point to which an ME-terminating call is initially routed, without any knowledge of the MS's location. The GMSC is thus in charge of obtaining the MSRN (Mobile Station Roaming Number) from the HLR based on the MSISDN (Mobile Station


ISDN number, the "directory number" of an MS) and routing the call to the correct visited MSC. The "MSC" part of the term GMSC is misleading, since the gateway operation does not require any linking to an MSC.
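The routing step just described can be sketched with plain data structures: the GMSC queries the HLR with the dialled MSISDN, the HLR obtains a temporary MSRN from the visited MSC/VLR, and the GMSC routes the call on that number. This is purely illustrative, not a MAP implementation; every number and identifier below is invented:

```python
HLR = {
    # MSISDN -> identifier of the MSC/VLR currently serving the subscriber
    "+441632960001": "msc-london-01",
}

MSRN_POOL = {
    # each visited MSC hands out temporary roaming numbers from its own pool
    "msc-london-01": iter(["+441632970001", "+441632970002"]),
}

def route_mt_call(msisdn: str) -> str:
    """GMSC view of a mobile-terminated call: ask the HLR, get an MSRN."""
    serving_msc = HLR[msisdn]             # HLR lookup (MAP/C in GSM terms)
    msrn = next(MSRN_POOL[serving_msc])   # visited MSC allocates an MSRN
    return msrn                           # the GMSC now routes on this number

print(route_mt_call("+441632960001"))   # +441632970001
```

The key idea the sketch captures is that the MSRN is temporary and network-internal: the caller only ever dials the MSISDN.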

SMS Gateway (SMS-G):   The SMS-G or SMS gateway is the term used to collectively describe the two Short Message Service gateways defined in the GSM standards. The two gateways handle messages directed in different directions. The SMS-GMSC (Short Message Service Gateway Mobile Switching Centre) is for short messages being sent to an ME. The SMS-IWMSC (Short Message Service Inter-Working Mobile Switching Centre) is used for short messages originated by a mobile on that network. The SMS-GMSC role is similar to that of the GMSC, whereas the SMS-IWMSC provides a fixed access point to the Short Message Service Centre.

Operation and Support Subsystem (OSS)

The OSS or operation support subsystem is an element within the overall GSM network architecture that is connected to components of the NSS and the BSC. It is used to control and monitor the overall GSM network and it is also used to control the traffic load of the BSS. It must be noted that as the number of BTSs increases with the scaling of the subscriber population, some of the maintenance tasks are transferred to the BTS, allowing savings in the cost of ownership of the system.

GSM Network Interfaces [4]

The network structure is defined within the GSM standards. Additionally, each interface between the different elements of the GSM network is also defined. This ensures that the required information interchanges can take place. It also enables, to a large degree, network elements from different manufacturers to be used together. However, as many of these interfaces were not fully defined until after many networks had been deployed, the level of standardisation may not be quite as high as many people might like.

1. Um interface   The "air" or radio interface standard that is used for exchanges between a mobile (ME) and a base station (BTS / BSC). For signalling, a modified version of the ISDN LAPD, known as LAPDm is used.

2. Abis interface   This is a BSS internal interface linking the BSC and a BTS, and it has not been totally standardised. The Abis interface allows control of the radio equipment and radio frequency allocation in the BTS.

3. A interface   The A interface is used to provide communication between the BSS and the MSC. The interface carries information to enable the channels, timeslots and the like to be allocated to the mobile equipment being serviced by the BSSs. The messaging required within the network to enable handover etc to be undertaken is carried over the interface.

4. B interface   The B interface exists between the MSC and the VLR. It uses a protocol known as the MAP/B protocol. As most VLRs are collocated with an MSC, this makes the interface purely an "internal" interface. The interface is used whenever the MSC needs access to data regarding an MS located in its area.

5. C interface   The C interface is located between the HLR and a GMSC or an SMS-G. When a call originates from outside the network, i.e. from the PSTN or another mobile network, it has to pass through the gateway so that the routing information required to complete the call may be gained. The protocol used for communication is MAP/C, the letter "C" indicating that the protocol is used for the "C" interface. In


addition to this, the MSC may optionally forward billing information to the HLR after the call is completed and cleared down.

6. D interface   The D interface is situated between the VLR and HLR. It uses the MAP/D protocol to exchange the data related to the location of the ME and to the management of the subscriber.

7. E interface   The E interface provides communication between two MSCs. The E interface exchanges data related to handover between the anchor and relay MSCs using the MAP/E protocol.

8. F interface   The F interface is used between an MSC and EIR. It uses the MAP/F protocol. The communications along this interface are used to confirm the status of the IMEI of the ME gaining access to the network.

9. G interface   The G interface interconnects two VLRs of different MSCs and uses the MAP/G protocol to transfer subscriber information, during e.g. a location update procedure.

10. H interface   The H interface exists between the MSC and the SMS-G. It transfers short messages and uses the MAP/H protocol.

11. I interface   The I interface can be found between the MSC and the ME. Messages exchanged over the I interface are relayed transparently through the BSS.
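The eleven interfaces above can be condensed into a small lookup table. The sketch below records the endpoints and signalling protocols exactly as given in the descriptions; interfaces whose protocol is not named in the text are marked None:

```python
# (endpoint A, endpoint B, named signalling protocol or None)
GSM_INTERFACES = {
    "Um":   ("ME",  "BTS/BSC",    "LAPDm"),
    "Abis": ("BSC", "BTS",        None),    # not fully standardised
    "A":    ("BSS", "MSC",        None),
    "B":    ("MSC", "VLR",        "MAP/B"),
    "C":    ("HLR", "GMSC/SMS-G", "MAP/C"),
    "D":    ("VLR", "HLR",        "MAP/D"),
    "E":    ("MSC", "MSC",        "MAP/E"),
    "F":    ("MSC", "EIR",        "MAP/F"),
    "G":    ("VLR", "VLR",        "MAP/G"),
    "H":    ("MSC", "SMS-G",      "MAP/H"),
    "I":    ("MSC", "ME",         None),
}

def protocol_for(interface: str):
    """Return the signalling protocol named for a given interface, if any."""
    return GSM_INTERFACES[interface][2]

print(protocol_for("D"))   # MAP/D
```

The regularity is visible at a glance: the MAP protocol variants are simply lettered after the interface they run over.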

Although the interfaces for the GSM cellular system may not be as rigorously defined as many might like, they do at least provide a large element of the definition required, enabling the functionality of GSM network entities to be defined sufficiently.

GSM Radio Air Interface, GSM Slot and Burst [5]

One of the key elements of the development of the GSM, Global System for Mobile Communications was the development of the GSM air interface. There were many requirements that were placed on the system, and many of these had a direct impact on the air interface. Elements including the modulation, GSM slot structure, burst structure and the like were all devised to provide the optimum performance.

During the development of the GSM standard very careful attention was paid to aspects including the modulation format and the way in which the system is time division multiplexed, all of which had a considerable impact on the performance of the system as a whole. For example, the modulation format for the GSM air interface had a direct impact on battery life, and the time division format adopted enabled cellphone handset costs to be considerably reduced, as detailed later.

GSM signal and GMSK modulation characteristics

The core of any radio based system is the format of the radio signal itself. The carrier is modulated using a form of phase shift keying known as Gaussian Minimum Shift Keying (GMSK). GMSK was used for the GSM system for a variety of reasons:

- It is resilient to noise when compared to many other forms of modulation.
- Radiation outside the accepted bandwidth is lower than for other forms of phase shift keying.
- It has a constant power level, which allows higher efficiency RF power amplifiers to be used in the handset, thereby reducing current consumption and conserving battery life.


Note on GMSK:

GMSK, Gaussian Minimum Shift Keying is a form of phase modulation that is used in a number of portable radio and wireless applications. It has advantages in terms of spectral efficiency as well as having an almost constant amplitude which allows for the use of more efficient transmitter power amplifiers, thereby saving on current consumption, a critical issue for battery power equipment.


The nominal bandwidth for the GSM signal using GMSK is 200 kHz, i.e. the channel bandwidth and spacing is 200 kHz. As GMSK modulation has been used, the unwanted or spurious emissions outside the nominal bandwidth are sufficiently low to enable adjacent channels to be used from the same base station. Typically each base station will be allocated a number of carriers to enable it to achieve the required capacity.

The data transported by the carrier serves up to eight different users under the basic system by splitting the carrier into eight time slots. The basic carrier is able to support a data throughput of approximately 270 kbps, but as some of this supports the management overhead, the data rate allotted to each time slot is only 24.7 kbps. In addition to this, error correction is required to overcome the problems of interference, fading and general data errors that may occur. This means that the available data rate for transporting the digitally encoded speech is 13 kbps for the basic vocoders.
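These figures can be cross-checked with a little arithmetic. The sketch assumes the standard normal-burst payload of 2 × 57 = 114 user bits per slot and the 120/26 ms frame duration, details covered a little further on in this tutorial:

```python
from fractions import Fraction

CARRIER_KBPS = Fraction(270833, 1000)    # gross carrier rate, 270.833 kbps
FRAME_MS = Fraction(120, 26)             # TDMA frame duration in ms

gross_per_slot = CARRIER_KBPS / 8        # raw share of the carrier per user
net_per_slot = Fraction(114) / FRAME_MS  # 114 user bits carried per frame

print(float(gross_per_slot))   # 33.854125 (before burst overhead)
print(float(net_per_slot))     # 24.7      (after tail/training overhead)
```

So roughly a quarter of each user's raw share of the carrier goes on tail bits, training sequences and guard time, before channel coding takes the usable speech rate down further still.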

GSM slot structure and multiple access scheme

GSM uses a combination of both TDMA and FDMA techniques. The FDMA element involves the division by frequency of the (maximum) 25 MHz bandwidth into 124 carrier frequencies spaced 200 kHz apart as already described.

The carriers are then divided in time, using a TDMA scheme. This enables the different users of a single radio frequency channel to be allocated different time slots. They are then able to use the same RF channel without mutual interference. The slot is the time that is allocated to a particular user, and the GSM burst is the transmission that is made in this time.

Each GSM slot, and hence each GSM burst, lasts for 0.577 ms (15/26 ms). Eight of these burst periods are grouped into what is known as a TDMA frame. This lasts for approximately 4.615 ms (i.e. 120/26 ms) and it forms the basic unit for the definition of logical channels. One physical channel is one burst period allocated in each TDMA frame.

There are different types of frame that are transmitted to carry different data, and also the frames are organised into what are termed multiframes and superframes to provide overall synchronisation.

GSM slot structure

The GSM slot is the smallest individual time period that is available to each mobile. It has a defined format because a variety of different types of data are required to be transmitted.

Although there are shortened transmission bursts, the slot is normally used for transmitting 148 bits of information. This data can be used for carrying voice data, or control and synchronisation data.

GSM slots showing offset between transmit and receive

It can be seen from the GSM slot structure that the timing of the slots in the uplink and the downlink is not simultaneous: there is a time offset between transmit and receive. This offset in the GSM slot timing is deliberate and it means that a mobile that is allocated the same slot in both directions does not transmit and receive at the same time. This considerably reduces the need for expensive filters to isolate the transmitter from the receiver, and it also provides a space saving within the handset.
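To put a number on this offset: in GSM the uplink is delayed by three timeslots relative to the downlink (the three-slot figure comes from the GSM specification rather than the text above), which gives the mobile roughly 1.7 ms between receiving and transmitting:

```python
from fractions import Fraction

SLOT_MS = Fraction(15, 26)   # one time slot in milliseconds
OFFSET_SLOTS = 3             # uplink lags downlink by three slot periods

def uplink_start_ms(downlink_start_ms):
    """Start time of a mobile's uplink burst given its downlink burst start."""
    return downlink_start_ms + OFFSET_SLOTS * SLOT_MS

gap = uplink_start_ms(Fraction(0))
# Roughly 1.73 ms between the end of receiving and the start of transmitting
assert abs(float(gap) - 45 / 26) < 1e-12
```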

GSM burst

The GSM burst, or transmission can fulfil a variety of functions. Some GSM bursts are used for carrying data while others are used for control information. As a result of this a number of different types of GSM burst are defined.

o Normal burst: uplink and downlink
o Synchronisation burst: downlink
o Frequency correction burst: downlink
o Random access (shortened) burst: uplink

GSM normal burst

This GSM burst is used for the standard communications between the basestation and the mobile, and typically transfers the digitised voice data.

The structure of the normal GSM burst is exactly defined and follows a common format. It contains data that provides a number of different functions:


1. 3 tail bits: These tail bits at the start of the GSM burst give time for the transmitter to ramp up its power.
2. 57 data bits: This block of data is used to carry information, and most often contains the digitised voice data, although on occasions it may be replaced with signalling information in the form of the Fast Associated Control CHannel (FACCH). The type of data is indicated by the flag that follows the data field.
3. 1 bit flag: This bit within the GSM burst indicates the type of data in the previous field.
4. 26 bits training sequence: This training sequence is used as a timing reference and for equalisation. There is a total of eight different bit sequences that may be used, each 26 bits long. The same sequence is used in each GSM slot, but nearby base stations using the same radio frequency channels will use different ones, and this enables the mobile to differentiate between the various cells using the same frequency.
5. 1 bit flag: Again this flag indicates the type of data in the data field.
6. 57 data bits: Again, this block of data within the GSM burst is used for carrying data.
7. 3 tail bits: These final bits within the GSM burst are used to enable the transmitter power to ramp down. They are often called final tail bits, or just tail bits.
8. 8.25 bits guard time: At the end of the GSM burst there is a guard period. This is introduced to prevent transmitted bursts from different mobiles overlapping as a result of their differing distances from the base station.

GSM Normal Burst
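A quick check that the fields listed above fill the slot exactly: summed, they come to 156.25 bit periods, which at the standard 48/13 microsecond bit period gives precisely the 15/26 ms slot duration quoted earlier:

```python
from fractions import Fraction

# Field lengths of the GSM normal burst, in bit periods, as listed above
NORMAL_BURST = {
    "tail (start)": 3,
    "data 1": 57,
    "stealing flag 1": 1,
    "training sequence": 26,
    "stealing flag 2": 1,
    "data 2": 57,
    "tail (end)": 3,
    "guard": Fraction(33, 4),   # 8.25 bit periods
}

total_bits = sum(NORMAL_BURST.values())
assert total_bits == Fraction(625, 4)        # 156.25 bit periods per slot

# At 48/13 us per bit this reproduces the 15/26 ms slot duration
slot_us = total_bits * Fraction(48, 13)
assert slot_us == Fraction(15, 26) * 1000
```

Note that the 148 "transmitted" bits mentioned earlier are the fields excluding the 8.25-bit guard period.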

GSM synchronisation burst

The purpose of this form of GSM burst is to provide synchronisation for the mobiles on the network.

1. 3 tail bits: Again, these tail bits at the start of the GSM burst give time for the transmitter to ramp up its power.
2. 39 bits of information.
3. 64 bits of a long training sequence.
4. 39 bits of information.
5. 3 tail bits: Again these enable the transmitter power to ramp down.
6. 8.25 bits guard time: to act as a guard interval.

GSM Synchronisation Burst

GSM frequency correction burst


With the information in the burst all set to zeros, the burst essentially consists of a constant frequency carrier with no phase alteration.

1. 3 tail bits:   Again, these tail bits at the start of the GSM burst give time for the transmitter to ramp up its power.

2. 142 bits all set to zero.
3. 3 tail bits: Again these enable the transmitter power to ramp down.
4. 8.25 bits guard time: to act as a guard interval.

GSM Frequency Correction Burst

GSM random access burst

This form of GSM burst is used when accessing the network; it is shortened in terms of the data carried and has a much longer guard period. This GSM burst structure is used to ensure that the burst fits in the time slot regardless of any severe timing problems that may exist. Once the mobile has accessed the network and the timing has been aligned, there is no requirement for the long guard period.

1. 7 tail bits: The increased number of tail bits is included to provide additional margin when accessing the network.
2. 41 training bits.
3. 36 data bits.
4. 3 tail bits: Again these enable the transmitter power to ramp down.
5. 69.25 bits guard time: The additional guard time, filling the remaining time of the GSM burst, provides for large timing differences.

GSM Random Access Burst
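The long guard period can be related to cell size. A rough back-of-envelope estimate, assuming the extra guard time relative to a normal burst absorbs the round-trip propagation delay to an unsynchronised mobile, gives a range of the same order as GSM's well-known 35 km cell radius limit:

```python
BIT_US = 48 / 13          # bit period in microseconds (standard GSM figure)
C_KM_PER_US = 0.3         # speed of light, roughly 0.3 km per microsecond

extra_guard_bits = 69.25 - 8.25   # guard time beyond that of a normal burst
round_trip_us = extra_guard_bits * BIT_US
one_way_km = round_trip_us / 2 * C_KM_PER_US

# Roughly 34 km -- of the same order as GSM's ~35 km cell radius limit
assert 33 < one_way_km < 36
```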

GSM discontinuous transmission (DTx)

A further power saving and interference reducing facility is the discontinuous transmission (DTx) capability that is incorporated within the specification. It is particularly useful because there are long pauses in speech, for example when the person using the mobile is listening, and during these periods there is no need to transmit a signal. In fact it is found that a person speaks for less than 40% of the time during normal telephone conversations. The most important element of DTx is the Voice Activity Detector. It must correctly distinguish between voice and noise inputs, a task that is not trivial. If a voice signal is misinterpreted as noise, the transmitter is turned off and an effect known as clipping results; this is particularly annoying to the person listening to the speech. However, if noise is misinterpreted as a voice signal too often, the efficiency of DTx is dramatically decreased.

It is also necessary for the system to add background or comfort noise when the transmitter is turned off, because complete silence can be very disconcerting for the listener. Accordingly this is added as appropriate. The noise is controlled by the SID (silence indication descriptor).
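The voice activity detection step can be illustrated with a deliberately simplified, energy-based sketch. Real GSM VADs are far more sophisticated, and the threshold margin below is purely illustrative, not a value from the specification:

```python
def frame_energy(samples):
    """Mean squared amplitude of one speech frame."""
    return sum(s * s for s in samples) / len(samples)

def is_speech(samples, noise_floor, margin=4.0):
    """Crude energy-based VAD: flag a frame as speech when its energy
    sits well above the running noise-floor estimate.
    The margin of 4.0 is an illustrative value only."""
    return frame_energy(samples) > margin * noise_floor

# A loud frame versus a quiet one, against a noise-floor estimate of 0.01
assert is_speech([0.5, -0.4, 0.6, -0.5], noise_floor=0.01)
assert not is_speech([0.01, -0.02, 0.01, 0.0], noise_floor=0.01)
```

The trade-off described above is visible here: set the margin too high and quiet speech is clipped; set it too low and noise keeps the transmitter on, defeating DTx.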

GSM Frame Structure [6]

The GSM system has a defined frame structure to enable the orderly passage of information. The GSM frame structure establishes schedules for the predetermined use of timeslots.

By establishing these schedules through the use of a frame structure, both the mobile and the base station are able to communicate not only the voice data but also signalling information, without the various types of data becoming intermixed and with both ends of the transmission knowing exactly what types of information are being transmitted.

The GSM frame structure provides the basis for the various physical channels used within GSM, and accordingly it is at the heart of the overall system.

Basic GSM frame structure

The basic element in the GSM frame structure is the frame itself. This comprises the eight slots, each used for different users within the TDMA system. As mentioned in another page of the tutorial, the slots for transmission and reception for a given mobile are offset in time so that the mobile does not transmit and receive at the same time.

GSM frame consisting of eight slots

The basic GSM frame defines the structure upon which all the timing and structure of the GSM messaging and signalling is based. The fundamental unit of time is called a burst period and it lasts for approximately 0.577 ms (15/26 ms). Eight of these burst periods are grouped into what is known as a TDMA frame. This lasts for approximately 4.615 ms (i.e. 120/26 ms) and it forms the basic unit for the definition of logical channels. One physical channel is one burst period allocated in each TDMA frame.

In simplified terms the base station transmits two types of channel, namely traffic and control. Accordingly the channel structure is organised into two different types of frame: one for the traffic on the main traffic carrier frequency, and the other for the control on the beacon frequency.

GSM multiframe

The GSM frames are grouped together to form multiframes and in this way it is possible to establish a time schedule for their operation and the network can be synchronised.

There are several GSM multiframe structures:

Traffic multiframe:   The traffic channel frames are organised into multiframes consisting of 26 bursts and taking 120 ms. In a traffic multiframe, 24 bursts are used for traffic. These are numbered 0 to 11 and 13 to 24. One of the two remaining bursts is used to accommodate the SACCH and the other remains free; the actual position used alternates between positions 12 and 25.

Control multiframe:   The control channel multiframe comprises 51 bursts and occupies 235.4 ms. It always occurs on the beacon frequency in time slot zero, and it may also occur within slots 2, 4 and 6 of the beacon frequency as well. This multiframe is subdivided into logical channels which are time-scheduled. These logical channels and functions include the following:

o Frequency correction burst
o Synchronisation burst
o Broadcast channel (BCH)
o Paging and access grant channels (PCH, AGCH)
o Stand-alone dedicated control channel (SDCCH)

GSM Superframe

Multiframes are then constructed into superframes taking 6.12 seconds. These consist of 51 traffic multiframes or 26 control multiframes. As the traffic multiframes are 26 bursts long and the control multiframes are 51 bursts long, these different numbers bring the two schedules back into line, with both types of multiframe sequence taking exactly the same interval.

GSM Hyperframe

Above this, 2048 superframes (i.e. 2^11) are grouped to form one hyperframe, which repeats every 3 hours 28 minutes 53.76 seconds. It is the largest time interval within the GSM frame structure.
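The whole timing hierarchy follows from the 120/26 ms TDMA frame; the short calculation below reproduces each of the durations quoted in this section:

```python
from fractions import Fraction

FRAME_MS = Fraction(120, 26)                 # one TDMA frame (8 slots)

traffic_multiframe = 26 * FRAME_MS           # 26 frames
control_multiframe = 51 * FRAME_MS           # 51 frames
superframe = 26 * 51 * FRAME_MS              # 51 traffic or 26 control multiframes
hyperframe = 2048 * superframe               # 2**11 superframes

assert traffic_multiframe == 120                         # ms
assert round(float(control_multiframe), 1) == 235.4      # ms
assert superframe == 6120                                # ms, i.e. 6.12 s

# The hyperframe works out at 3 h 28 min 53.76 s
h, rem = divmod(float(hyperframe) / 1000, 3600)
m, s = divmod(rem, 60)
assert (int(h), int(m), round(s, 2)) == (3, 28, 53.76)
```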

Within the GSM hyperframe there is a counter and every time slot has a unique sequential number comprising the frame number and time slot number. This is used to maintain synchronisation of the different scheduled operations with the GSM frame structure. These include functions such as:

Frequency hopping:   Frequency hopping is a feature that is optional within the GSM system. It can help reduce interference and fading issues, but for it to work, the transmitter and receiver must be synchronised so they hop to the same frequencies at the same time.

Encryption:   The encryption process is synchronised over the GSM hyperframe period, where a counter is used, and the encryption process repeats with each hyperframe. However, it is unlikely that a cellphone conversation will last over 3 hours, and accordingly it is unlikely that security will be compromised as a result.

GSM frame structure summary

By structuring the GSM signalling into frames, multiframes, superframes and hyperframes, the timing and organisation is set into an orderly format that enables both the GSM mobile and base station to communicate in a reliable and efficient manner. The GSM frame structure forms the basis onto which the other forms of frame and hence the various GSM channels are built.

GSM Frequencies and Frequency Bands [7]

Although it is possible for the GSM cellular system to work on a variety of frequencies, the GSM standard defines GSM frequency bands and frequencies for the different spectrum allocations that are in use around the globe. For most applications the GSM frequency allocations fall into three or four bands, and therefore it is possible for phones to be used for global roaming.

While the majority of GSM activity falls into just a few bands, for some specialist applications, or in countries where spectrum allocation requirements mean that the standard bands cannot be used, different allocations may be required. Accordingly, for global roaming, dual-band, tri-band or quad-band phones will operate in most countries, although in some instances phones using other frequencies may be required.

GSM band allocations

There is a total of fourteen different recognised GSM frequency bands. These are defined in 3GPP TS 45.005.

Band   Uplink (MHz)      Downlink (MHz)    Comments
380    380.2 - 389.8     390.2 - 399.8
410    410.2 - 419.8     420.2 - 429.8
450    450.4 - 457.6     460.4 - 467.6
480    478.8 - 486.0     488.8 - 496.0
710    698.0 - 716.0     728.0 - 746.0
750    747.0 - 762.0     777.0 - 792.0
810    806.0 - 821.0     851.0 - 866.0
850    824.0 - 849.0     869.0 - 894.0
900    890.0 - 915.0     935.0 - 960.0     P-GSM, i.e. primary or standard GSM allocation
900    880.0 - 915.0     925.0 - 960.0     E-GSM, i.e. extended GSM allocation
900    876.0 - 915.0     921.0 - 960.0     R-GSM, i.e. railway GSM allocation
900    870.4 - 876.0     915.4 - 921.0     T-GSM
1800   1710.0 - 1785.0   1805.0 - 1880.0
1900   1850.0 - 1910.0   1930.0 - 1990.0
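Each 200 kHz channel within these bands is identified by an ARFCN (Absolute Radio Frequency Channel Number). As a sketch of how ARFCNs map onto the band edges in the table, using the commonly quoted 3GPP TS 45.005 relations for three of the major bands (the other bands follow similar patterns not shown here):

```python
from fractions import Fraction

STEP = Fraction(2, 10)   # 200 kHz channel spacing, in MHz

def gsm_frequencies(arfcn, band):
    """Uplink/downlink carrier frequencies (MHz) for an ARFCN in a given band.
    Duplex spacing: 45 MHz (900), 95 MHz (1800), 80 MHz (1900)."""
    if band == "P-GSM-900" and 1 <= arfcn <= 124:
        ul = 890 + STEP * arfcn
        return float(ul), float(ul + 45)
    if band == "DCS-1800" and 512 <= arfcn <= 885:
        ul = Fraction(17102, 10) + STEP * (arfcn - 512)
        return float(ul), float(ul + 95)
    if band == "PCS-1900" and 512 <= arfcn <= 810:
        ul = Fraction(18502, 10) + STEP * (arfcn - 512)
        return float(ul), float(ul + 80)
    raise ValueError("ARFCN out of range for band")

assert gsm_frequencies(1, "P-GSM-900") == (890.2, 935.2)
assert gsm_frequencies(512, "DCS-1800") == (1710.2, 1805.2)
```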

 

GSM frequency band usage

The usage of the different frequency bands varies around the globe although there is a large degree of standardisation. The GSM frequencies available depend upon the regulatory requirements for the particular country and the ITU (International Telecommunications Union) region in which the country is located.

As a rough guide Europe tends to use the GSM 900 and 1800 bands as standard. These bands are also generally used in the Middle East, Africa, Asia and Oceania.

For North America the USA uses both 850 and 1900 MHz bands, the actual band used is determined by the regulatory authorities and is dependent upon the area. For Canada the 1900 MHz band is the primary one used, particularly for urban areas with 850 MHz used as a backup in rural areas.

For Central and South America, the GSM 850 and 1900 MHz frequency bands are the most widely used although there are some areas where other frequencies are used.

GSM multiband phones

In order that cell phone users are able to take advantage of the roaming facilities offered by GSM, it is necessary that the cellphones are able to cover the bands of the countries which are visited.

Today most phones support operation on multiple bands and are known as multi-band phones. Typically most standard phones are dual-band phones. For Europe, the Middle East, Asia and Oceania these would operate on the GSM 900 and 1800 bands, while for North America and similar areas dual-band phones would operate on the GSM 850 and 1900 frequency bands.

To provide better roaming coverage, tri-band and quad-band phones are also available. European tri-band phones typically cover the GSM 900, 1800 and 1900 bands, giving good coverage in Europe as well as moderate coverage in North America. Similarly, North American tri-band phones use the 850, 1800 and 1900 GSM frequencies. Quad-band phones are also available covering the 850, 900, 1800 and 1900 MHz GSM frequency bands, i.e. the four major bands, thereby allowing global use.

GSM Power Control and Power Class [8]

The power levels and power control of GSM mobiles are of great importance because of the effect of power on battery life. Also, to group mobiles by capability, GSM power class designations have been allocated to indicate the power capability of the various mobiles.

In addition to this the power of the GSM mobiles is closely controlled so that the battery of the mobile is conserved, and also the levels of interference are reduced and performance of the basestation is not compromised by high power local mobiles.

GSM power levels

The base station controls the power output of the mobile, keeping the GSM power level sufficient to maintain a good signal-to-noise ratio, while not so high as to cause interference or overloading, and also so as to preserve battery life.

A table of GSM power levels is defined, and the base station controls the power of the mobile by sending a GSM "power level" number. The mobile then adjusts its power accordingly. In virtually all cases the increment between the different power level numbers is 2 dB.

The accuracies required for GSM power control are relatively stringent. At the maximum power levels they are typically required to be controlled to within +/- 2 dB, whereas this relaxes to +/- 5 dB at the lower levels.

The power level numbers vary according to the GSM band in use. Figures for the three main bands in use are given below:

Power level number   Power output level (dBm)
2                    39
3                    37
4                    35
5                    33
6                    31
7                    29
8                    27
9                    25
10                   23
11                   21
12                   19
13                   17
14                   15
15                   13
16                   11
17                   9
18                   7
19                   5

GSM power level table for GSM 900
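The GSM 900 table follows a simple linear rule of 2 dB per step, which can be expressed directly:

```python
def gsm900_power_dbm(level):
    """Mobile output power (dBm) for a GSM 900 power level number.
    Levels 2..19 step down in 2 dB increments, as in the table above."""
    if not 2 <= level <= 19:
        raise ValueError("GSM 900 power levels run from 2 to 19")
    return 43 - 2 * level

assert gsm900_power_dbm(2) == 39    # top of the table
assert gsm900_power_dbm(10) == 23
assert gsm900_power_dbm(19) == 5    # bottom of the table
```

The GSM 1800 and GSM 1900 tables below follow similar 2 dB ladders but with different level numbering, so a separate mapping would be needed for each band.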

Power level number   Power output level (dBm)
29                   36
30                   34
31                   32
0                    30
1                    28
2                    26
3                    24
4                    22
5                    20
6                    18
7                    16
8                    14
9                    12
10                   10
11                   8
12                   6
13                   4
14                   2
15                   0

GSM power level table for GSM 1800

Power level number   Power output level (dBm)
30                   33
31                   32
0                    30
1                    28
2                    26
3                    24
4                    22
5                    20
6                    18
7                    16
8                    14
9                    12
10                   10
11                   8
12                   6
13                   4
14                   2
15                   0

GSM power level table for GSM 1900

GSM Power class

Not all mobiles have the same maximum power output level. In order that the base station knows the maximum power level number it can send to a mobile, it needs to know the maximum power that the mobile can transmit. This is achieved by allocating a GSM power class number to the mobile: the power class indicates to the base station the maximum power the mobile can transmit, and hence the maximum power level number the base station can instruct it to use.

Again the GSM power classes vary according to the band in use.

GSM power   GSM 900                  GSM 1800                 GSM 1900
class       Level  Max power         Level  Max power         Level  Max power
1           -      -                 PL0    30 dBm / 1 W      PL0    30 dBm / 1 W
2           PL2    39 dBm / 8 W      PL3    24 dBm / 250 mW   PL3    24 dBm / 250 mW
3           PL3    37 dBm / 5 W      PL29   36 dBm / 4 W      PL30   33 dBm / 2 W
4           PL4    33 dBm / 2 W      -      -                 -      -
5           PL5    29 dBm / 800 mW   -      -                 -      -

GSM power amplifier design considerations

One of the main considerations for the RF power amplifier design in any mobile phone is its efficiency. The RF power amplifier is one of the major current consumption areas. Accordingly, to ensure long battery life it should be as efficient as possible.

It is also worth remembering that as mobiles may only transmit for one eighth of the time, i.e. for their allocated slot which is one of eight, the average power is an eighth of the maximum.

GSM logical and physical channels [9]

GSM uses a variety of channels in which the data is carried. In GSM, these channels are separated into physical channels and logical channels. The physical channels are determined by the timeslot, whereas the logical channels are determined by the information carried within the physical channel. It can be further summarised by saying that several recurring timeslots on a carrier constitute a physical channel. These are then used by different logical channels to transfer information. These channels may either be used for user data (payload) or for signalling to enable the system to operate correctly.

Common and dedicated channels

The channels may also be divided into common and dedicated channels. The forward common channels are used for paging to inform a mobile of an incoming call, responding to channel requests, and broadcasting bulletin board information. The return common channel is a random access channel used by the mobile to request channel resources before timing information is conveyed by the BSS.

The dedicated channels are of two main types: those used for signalling, and those used for traffic. The signalling channels are used for maintenance of the call and for enabling call set up, providing facilities such as handover when the call is in progress, and finally terminating the call. The traffic channels handle the actual payload.

The following logical channels are defined in GSM:

TCHf - Full rate traffic channel.

TCHh - Half rate traffic channel.

BCCH - Broadcast Network information, e.g. for describing the current control channel structure. The BCCH is a point-to-multipoint channel (BSS-to-MS).

SCH - Synchronisation of the MSs.

FCH - MS frequency correction.

AGCH - Acknowledge channel requests from MS and allocate an SDCCH.

PCH - MS terminating call announcement.

RACH - MS access requests, response to call announcement, location update, etc.

FACCHt - For time-critical signalling over the TCH (e.g. for handover signalling). A traffic burst is stolen for a full signalling burst.

SACCHt - TCH in-band signalling, e.g. for link monitoring.

SDCCH - For signalling exchanges, e.g. during call setup, registration / location updates.

FACCHs - FACCH for the SDCCH. The SDCCH burst is stolen for a full signalling burst. Its function is not clear in the present version of GSM (it could be used, for example, for handover of an eighth-rate channel, i.e. using an "SDCCH-like" channel for purposes other than signalling).

SACCHs - SDCCH in-band signalling, e.g. for link monitoring.

GSM Audio Codec / Vocoder [10]

Audio codecs or vocoders are universally used within the GSM system. They reduce the bit rate of speech that has been converted from its analogue form into a digital format, to enable it to be carried within the available bandwidth for the channel. Without the use of a speech codec, the digitised speech would occupy a much wider bandwidth than would be available. Accordingly GSM codecs are a particularly important element in the overall system.

A variety of different forms of audio codec or vocoder are available for general use, and the GSM system supports a number of specific audio codecs. These include the RPE-LPC, half rate, and AMR codecs. The performance of each voice codec is different and they may be used under different conditions, although the AMR codec is now the most widely used. Also, the newer AMR wideband (AMR-WB) codec is being introduced into many areas, including GSM.

Voice codec technology has advanced by considerable degrees in recent years as a result of the increasing processing power available. This has meant that the voice codecs used in the GSM system have seen large improvements since the first GSM phones were introduced.

Vocoder / codec basics

Vocoders or speech codecs are used within many areas of voice communications. Obviously the focus here is on GSM audio codecs or vocoders, but the same principles apply to any form of codec.

If speech were digitised in a linear fashion it would require a high data rate that would occupy a very wide bandwidth. As bandwidth is normally limited in any communications system, it is necessary to compress the data to send it through the available channel. Once through the channel it can then be expanded to regenerate the audio in a fashion that is as close to the original as possible.

To meet the requirements of the codec system, the speech must be captured at a high enough sample rate and resolution to allow clear reproduction of the original sound. It must then be compressed in such a way as to maintain the fidelity of the audio over a limited bit rate, error-prone wireless transmission channel.
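To see why compression is essential, compare linear PCM at the codec input with the 13 kbps full rate output. The 8 kHz sampling rate and 13-bit resolution used here are the standard figures for the GSM full rate codec input, though they are not stated in the text above:

```python
SAMPLE_RATE_HZ = 8000     # narrowband telephony sampling rate
BITS_PER_SAMPLE = 13      # linear PCM resolution at the GSM codec input

linear_kbps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1000
codec_kbps = 13           # GSM full rate codec output

assert linear_kbps == 104                # uncompressed speech: 104 kbps
assert linear_kbps / codec_kbps == 8     # an 8:1 compression ratio
```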

Audio codecs or vocoders can use a variety of techniques, but many modern audio codecs use a technique known as linear prediction. In many ways this can be likened to a mathematical modelling of the human vocal tract. To achieve this the spectral envelope of the signal is estimated using a filter technique. Even where signals with many non-harmonically related components are present, it is possible for voice codecs to give very large levels of compression.

A variety of different codec methodologies are used for GSM codecs:

CELP:   The CELP or Code Excited Linear Prediction codec is a vocoder algorithm that was originally proposed in 1985 and gave a significant improvement over other voice codecs of the day. The basic principle of the CELP codec has been developed and used as the basis of other voice codecs including ACELP, RCELP, VSELP, etc. As such the CELP codec methodology is now the most widely used speech coding algorithm. Accordingly CELP is now used as a generic term for a particular class of vocoders or speech codecs and not a particular codec.

The main principle behind the CELP codec is that it uses a principle known as "analysis by synthesis". In this process, the encoding is performed by perceptually optimising the decoded signal in a closed loop system. One way in which this could be achieved is to compare a variety of generated bit streams and choose the one that produces the best-sounding signal.

ACELP codec:   The ACELP or Algebraic Code Excited Linear Prediction codec. The ACELP codec or vocoder algorithm is a development of the CELP model. However the ACELP codec codebooks have a specific algebraic structure as indicated by the name.

VSELP codec:   The VSELP or Vector Sum Excited Linear Prediction codec. One of the major drawbacks of the VSELP codec is its limited ability to code non-speech sounds, which means that it performs poorly in the presence of noise. As a result this voice codec is now less widely used, newer speech codecs offering far superior performance being preferred.

GSM audio codecs / vocoders

A variety of GSM audio codecs / vocoders are supported. These have been introduced at different times and have different levels of performance. Although some of the early audio codecs are not as widely used these days, they are still described here as they form part of the GSM system.

Codec name   Bit rate (kbps)   Compression technology
Full rate    13                RPE-LPC
EFR          12.2              ACELP
Half rate    5.6               VSELP
AMR          12.2 - 4.75       ACELP
AMR-WB       23.85 - 6.60      ACELP

GSM Full Rate / RPE-LPC codec

The RPE-LPC or Regular Pulse Excited - Linear Predictive Coder was the first speech codec used with GSM, and it was chosen after tests were undertaken to compare it with other codec schemes of the day. The speech codec is based upon regular pulse excitation LPC with long term prediction. The basic scheme is related to two previous speech codecs, namely RELP (Residual Excited Linear Prediction) and MPE-LPC (Multi Pulse Excited LPC). The advantage of RELP is the relatively low complexity resulting from the use of baseband coding, but its performance is limited by the tonal noise produced by the system. The MPE-LPC is more complex but provides a better level of performance. The RPE-LPC codec provided a compromise between the two, balancing performance and complexity for the technology of the time.

Despite the work that was undertaken to provide the optimum performance, as technology developed further, the RPE-LPC codec was viewed as offering a poor level of voice quality. As other full rate audio codecs became available, these were incorporated into the system.

GSM EFR - Enhanced Full Rate codec

Later another vocoder, called the Enhanced Full Rate (EFR) vocoder, was added in response to the poor quality perceived by users of the original RPE-LPC codec. This new codec gave much better sound quality and was adopted by GSM. Using ACELP compression technology, it gave a significant improvement in quality over the original RPE-LPC encoder. It became possible as the processing power available in mobile phones increased, combined with lower current consumption.

GSM Half Rate codec

The GSM standard allows the splitting of a single full rate voice channel into two sub-channels that can maintain separate calls. By doing this, network operators can double the number of voice calls that can be handled by the network with very little additional investment.

To enable this facility to be used a half rate codec must be used. The half rate codec was introduced in the early years of GSM but gave a much inferior voice quality when compared to other speech codecs. However it gave advantages when demand was high and network capacity was at a premium.

The GSM Half Rate codec uses a VSELP codec algorithm. It codes the data in 20 ms frames, each carrying 112 bits, to give a data rate of 5.6 kbps. This includes a 100 bps data rate for a mode indicator, which details whether the system believes the frame contains voice data or not. This allows the speech codec to operate in a manner that provides the optimum quality.
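The half rate figures above are easy to verify with a couple of lines of arithmetic:

```python
FRAME_MS = 20             # half rate codec frame duration
BITS_PER_FRAME = 112      # bits carried per frame

rate_kbps = BITS_PER_FRAME / FRAME_MS          # bits per ms equals kbps
assert rate_kbps == 5.6

# The 100 bps mode indicator corresponds to 2 bits in each 20 ms frame
mode_indicator_bits = 100 * FRAME_MS / 1000
assert mode_indicator_bits == 2
```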

The Half Rate codec system was introduced in the 1990s, but in view of the perceived poor quality, it was not widely used.

GSM AMR Codec

The AMR, Adaptive Multi-Rate codec is now the most widely used GSM codec. The AMR codec was adopted by 3GPP in October 1998 and it is used for both GSM and circuit-switched UMTS / WCDMA voice calls.

The AMR codec provides one of eight different bit rates, as described in the table below. The bit rates are based on frames that are 20 milliseconds long and contain 160 samples. The AMR codec uses a variety of different techniques to provide the data compression. The ACELP codec is used as the basis of the overall speech codec, but other techniques are used in addition to this. Discontinuous transmission is employed so that when there is no speech activity the transmission is cut. Additionally, Voice Activity Detection (VAD) is used to indicate when there is only background noise and no speech. Further, to provide feedback to the user that the connection is still present, a Comfort Noise Generator (CNG) is used to provide some background noise even when no speech data is being transmitted. This is added locally at the receiver.

The use of the AMR codec also requires that optimised link adaptation is used, so that the optimum data rate is selected to meet the requirements of the current radio channel conditions, including its signal-to-noise ratio and capacity. This is achieved by reducing the source coding and increasing the channel coding: although there is a reduction in voice clarity, the network connection is more robust and the link is maintained without dropout. Improvement levels of between 4 and 6 dB may be experienced. Network operators are able to prioritise each base station for either quality or capacity.

The AMR codec has a total of eight bit rates: all eight are available at full rate (FR), while six are also available at half rate (HR). This gives a total of fourteen different modes.

Mode        Bit rate (kbps)   Full Rate (FR) / Half Rate (HR)
AMR 12.2    12.2              FR
AMR 10.2    10.2              FR
AMR 7.95    7.95              FR / HR
AMR 7.40    7.40              FR / HR
AMR 6.70    6.70              FR / HR
AMR 5.90    5.90              FR / HR
AMR 5.15    5.15              FR / HR
AMR 4.75    4.75              FR / HR

AMR codec data rates
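As a quick sanity check on the numbers above, the bits carried in each frame follow directly from the bit rate and the 20 ms frame length. The short sketch below is illustrative only; it simply works through the arithmetic for each AMR mode:

```python
# Each AMR frame is 20 ms long, so bits per frame = bit rate (kbps) * 20.
AMR_MODES_KBPS = [12.2, 10.2, 7.95, 7.40, 6.70, 5.90, 5.15, 4.75]

FRAME_MS = 20            # frame duration in milliseconds
SAMPLES_PER_FRAME = 160  # 160 samples per 20 ms frame

for rate in AMR_MODES_KBPS:
    bits_per_frame = round(rate * FRAME_MS)  # kbps * ms = bits
    print(f"AMR {rate}: {bits_per_frame} bits per 20 ms frame")

# The 160 samples in 20 ms confirm the narrowband 8 kHz sampling rate:
sampling_rate_hz = SAMPLES_PER_FRAME / (FRAME_MS / 1000)
print(sampling_rate_hz)  # 8000.0
```

For example, the AMR 12.2 mode carries 244 bits in each 20 ms frame.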

AMR-WB codec

Adaptive Multi-Rate Wideband, AMR-WB codec, also known under its ITU designation of G.722.2, is based on the earlier popular Adaptive Multi-Rate, AMR codec. AMR-WB also uses an ACELP basis for its operation, but it has been further developed and provides improved speech quality as a result of the wider speech bandwidth that it encodes. AMR-WB has a bandwidth extending from 50 - 7000 Hz, which is significantly wider than the 300 - 3400 Hz bandwidth used by standard telephones. This comes at the cost of additional processing, but with advances in IC technology in recent years this is perfectly acceptable.

The AMR-WB codec contains a number of functional areas: it primarily includes a set of fixed rate speech and channel codec modes. It also includes other codec functions including: a Voice Activity Detector (VAD); Discontinuous Transmission (DTX) functionality for GSM; and Source Controlled Rate (SCR) functionality for UMTS applications. Further functionality includes in-band signaling for codec mode transmission, and link adaptation for control of the mode selection.

The AMR-WB codec has a 16 kHz sampling rate and the coding is performed in blocks of 20 ms. There are two frequency bands that are used: 50 - 6400 Hz and 6400 - 7000 Hz. These are coded separately to reduce the codec complexity. This split also serves to focus the bit allocation on the subjectively most important frequency range.

The lower frequency band uses an ACELP codec algorithm, although a number of additional features have been included to improve the subjective quality of the audio. Linear prediction analysis is performed once per 20 ms frame. Also, fixed and adaptive excitation codebooks are searched every 5 ms for optimal codec parameter values.

The higher frequency band adds some of the naturalness and personality features to the voice. The audio is reconstructed using the parameters from the lower band as well as using random excitation. As the level of power in this band is less than that of the lower band, the gain is adjusted relative to the lower band, based on voicing information. The signal content of the higher band is reconstructed by using a linear predictive filter which generates information from the lower band filter.

Bit rate (kbps)   Notes

6.60    This is the lowest rate for AMR-WB. It is used for circuit switched connections for GSM and UMTS and is intended to be used only temporarily during severe radio channel conditions or during network congestion.
8.85    This gives improved quality over the 6.60 kbps rate, but again its use is only recommended during periods of congestion or severe radio channel conditions.
12.65   This is the main bit rate used for circuit switched GSM and UMTS, offering superior performance to the original AMR codec.
14.25   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
15.85   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
18.25   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
19.85   Higher bit rate used to give cleaner speech; particularly useful when ambient audio noise levels are high.
23.05   Not suggested for full rate GSM channels.
23.85   Not suggested for full rate GSM channels; provides speech quality similar to that of G.722 at 64 kbps.

Not all phones equipped with AMR-WB will be able to access all the data rates - the different functions on the phone may not require all to be active, for example. As a result, it is necessary to inform the network about which rates are available and thereby simplify the negotiation between the handset and the network. To achieve this there are three different AMR-WB configurations available:

Configuration A:   6.6, 8.85, and 12.65 kbit/s
Configuration B:   6.6, 8.85, 12.65, and 15.85 kbit/s
Configuration C:   6.6, 8.85, 12.65, and 23.85 kbit/s
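The configuration mechanics can be illustrated with a small sketch. The configuration contents below come from the list above, but the negotiation function, its name and the idea of intersecting rate sets are this example's own simplification of the real in-band mode signalling:

```python
# AMR-WB configurations as listed in the text (rates in kbit/s).
CONFIGS = {
    "A": [6.6, 8.85, 12.65],
    "B": [6.6, 8.85, 12.65, 15.85],
    "C": [6.6, 8.85, 12.65, 23.85],
}

def best_common_rate(handset_config, network_rates):
    """Hypothetical negotiation: pick the highest rate both ends support."""
    common = set(CONFIGS[handset_config]) & set(network_rates)
    if not common:
        raise ValueError("no common AMR-WB mode")
    return max(common)

# A configuration-B handset meeting a network offering configuration C's
# rates would settle on the highest shared rate:
print(best_common_rate("B", [6.6, 8.85, 12.65, 23.85]))  # 12.65
```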


It can be seen that only the 23.85, 15.85, 12.65, 8.85 and 6.60 kbit/s modes are used. Based on listening tests, it was considered that these five modes were sufficient for a high quality speech telephony service. The other data rates were retained and can be used for other purposes including multimedia messaging, streaming audio, etc.

GSM codecs summary

There has been considerable improvement in the GSM audio codecs that have been used, starting with the original RPE-LPC speech codec, then moving through the Enhanced Full Rate (EFR) codec and the GSM Half Rate codec, to the AMR codec which is now the most widely used and provides a variable rate that can be tailored to the individual conditions. The newer AMR-WB codec will also see increasing use. With newer technologies such as LTE, Long Term Evolution, which uses an all-IP based system, codecs are still used to provide data compression and improved spectral efficiency, so the idea of a codec will remain, although some of the GSM codecs that are in use today will be superseded.

GSM handover or handoff [11]

One of the key elements of a mobile phone or cellular telecommunications system is that the system is split into many small cells to provide good frequency re-use and coverage. However, as the mobile moves out of one cell into another it must be possible to retain the connection. The process by which this occurs is known as handover or handoff. The term handover is more widely used within Europe, whereas handoff tends to be used more in North America. Either way, handover and handoff describe the same process.

Requirements for GSM handover

The process of handover or handoff within any cellular system is of great importance. It is a critical process and if performed incorrectly handover can result in the loss of the call. Dropped calls are particularly annoying to users and if the number of dropped calls rises, customer dissatisfaction increases and they are likely to change to another network. Accordingly GSM handover was an area to which particular attention was paid when developing the standard.

Types of GSM handover

Within the GSM system there are four types of handover that can be performed for GSM only systems:

Intra-BTS handover:   This form of GSM handover occurs if it is required to change the frequency or slot being used by a mobile because of interference, or other reasons. In this form of GSM handover, the mobile remains attached to the same base station transceiver, but changes the channel or slot.

Inter-BTS Intra-BSC handover:   This form of GSM handover or GSM handoff occurs when the mobile moves out of the coverage area of one BTS but into another controlled by the same BSC. In this instance the BSC is able to perform the handover and it assigns a new channel and slot to the mobile before releasing the old BTS from communicating with the mobile.


Inter-BSC handover:   When the mobile moves out of the range of cells controlled by one BSC, a more involved form of handover has to be performed, handing over not only from one BTS to another but from one BSC to another. For this the handover is controlled by the MSC.

Inter-MSC handover:   This form of handover occurs when the mobile moves into an area controlled by a different MSC. The two MSCs involved negotiate to control the handover.

GSM handover process

Although there are several forms of GSM handover as detailed above, as far as the mobile is concerned, they are effectively seen as very similar. There are a number of stages involved in undertaking a GSM handover from one cell or base station to another.

In GSM, which uses TDMA techniques, the transmitter only transmits for one slot in eight, and similarly the receiver only receives for one slot in eight. As a result the RF section of the mobile could be idle for six slots out of the total eight. In practice it is not idle, because during the slots in which it is not communicating with the BTS, it scans the other radio channels looking for beacon frequencies that may be stronger or more suitable. In addition to this, the BTS with which the mobile is communicating broadcasts a list of the radio channels of the beacon frequencies of neighbouring BTSs on the Broadcast Control Channel (BCCH).

The mobile scans these and reports back the quality of the link to the BTS. In this way the mobile assists in the handover decision and as a result this form of GSM handover is known as Mobile Assisted Hand Over (MAHO).

The network knows the quality of the link between the mobile and the BTS as well as the strength of local BTSs as reported back by the mobile. It also knows the availability of channels in the nearby cells. As a result it has all the information it needs to be able to make a decision about whether it needs to hand the mobile over from one BTS to another.
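The decision logic just described can be sketched in outline. The hysteresis value, signal levels and data structures below are hypothetical; real networks use operator-tuned parameters and much richer measurement reports:

```python
# Illustrative sketch of a Mobile Assisted Handover (MAHO) decision.
HYSTERESIS_DB = 3  # only hand over if a neighbour is clearly stronger

def choose_target(serving_level_dbm, neighbour_reports, free_channels):
    """Pick a handover target BTS, or None to stay on the serving cell.

    neighbour_reports: dict of BTS id -> signal level (dBm) reported
    by the mobile; free_channels: set of BTS ids with spare capacity.
    """
    candidates = {
        bts: level for bts, level in neighbour_reports.items()
        if bts in free_channels and level > serving_level_dbm + HYSTERESIS_DB
    }
    if not candidates:
        return None  # no neighbour is enough of an improvement
    return max(candidates, key=candidates.get)

# A neighbour 10 dB stronger than the serving cell wins the decision:
print(choose_target(-95, {"bts1": -90, "bts2": -85}, {"bts1", "bts2"}))
```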

If the network decides that it is necessary for the mobile to hand over, it assigns a new channel and time slot to the mobile. It informs the BTS and the mobile of the change. The mobile then retunes during the period it is not transmitting or receiving, i.e. in an idle period.

A key element of the GSM handover is timing and synchronisation. There are a number of possible scenarios that may occur dependent upon the level of synchronisation.

Old and new BTSs synchronised:   In this case the mobile is given details of the new physical channel in the neighbouring cell and handed directly over. The mobile may optionally transmit four access bursts. These are shorter than the standard bursts and thereby any effects of poor synchronisation do not cause overlap with other bursts. However in this instance where synchronisation is already good, these bursts are only used to provide a fine adjustment.

Time offset between synchronised old and new BTS:   In some instances there may be a time offset between the old and new BTS. In this case, the time offset is provided so that the mobile can make the adjustment. The GSM handover then takes place as a standard synchronised handover.

Non-synchronised handover:   When a non-synchronised cell handover takes place, the mobile transmits 64 access bursts on the new channel. This enables the base station to determine and adjust the timing for the mobile so that it can suitably access the new BTS. This enables the mobile to re-establish the connection through the new BTS with the correct timing.

Inter-system handover

With the evolution of standards and the migration of GSM to other technologies, including 3G UMTS / WCDMA as well as HSPA and then LTE, there is a need to hand over from one technology to another. Often the 2G GSM coverage will be better than that of the other technologies, and GSM is often used as the fallback. When handovers of this nature are required, the process is considerably more complicated than a straightforward GSM-only handover, because two technically very different systems must handle the handover.

These handovers may be called intersystem handovers or inter-RAT handovers as the handover occurs between different radio access technologies.

The most common form of intersystem handover is between GSM and UMTS / WCDMA. Here there are two different types:

UMTS / WCDMA to GSM handover:   There are two further divisions of this category of handover:

o Blind handover:   This form of handover occurs when the base station hands off the mobile by passing it the details of the new cell without linking to it and setting the timing, etc., of the mobile for the new cell. In this mode, the network selects what it believes to be the optimum GSM base station. The mobile first locates the broadcast channel of the new cell, gains timing synchronisation and then carries out a non-synchronised intercell handover.

o Compressed mode handover:   Using this form of handover the mobile uses the gaps in transmission that occur to analyse the reception of local GSM base stations, using the neighbour list to select suitable candidate base stations. Having selected a suitable base station the handover takes place, again without any time synchronisation having occurred.

Handover from GSM to UMTS / WCDMA:   This form of handover is supported within GSM and a "neighbour list" was established to enable it to occur easily. As the GSM / 2G network is normally more extensive than the 3G network, this type of handover is not normally triggered by the mobile leaving a coverage area and having to quickly find a new base station to maintain contact. Instead, the handover from GSM to UMTS occurs to provide an improvement in performance and can normally take place only when the conditions are right. The neighbour list will inform the mobile when this may happen.

Summary

GSM handover is one of the major elements in performance that users will notice. As a result it is normally one of the Key Performance Indicators (KPIs) used by operators to monitor performance. Poor handover or handoff performance will normally result in dropped calls, and users find this particularly annoying. Accordingly network operators develop and maintain their networks to ensure that an acceptable performance is achieved. In this way they can reduce what is called "churn" where users change from one network to another.


Asynchronous Transfer Mode (ATM) Tutorial

The Asynchronous Transfer Mode (ATM) was developed to enable a single data networking standard to be used for both synchronous channel networking and packet-based networking. Asynchronous transfer mode also supports multiple levels of quality of service for packet traffic.

In this way, asynchronous transfer mode can be thought of as supporting both circuit-switched networks and packet-switched networks by mapping both bitstreams and packet-streams. It achieves this by sending data in a series or stream of fixed length cells, each of which has its own identifier. These data cells are typically sent on demand within a synchronous time-slot pattern in a synchronous bit-stream. Although this may not appear to be asynchronous, the asynchronous element of the "Asynchronous Transfer Mode", comes from the fact that the sending of the cells themselves is asynchronous and not from the synchronous low-level bitstream that carries them.

One of the original aims of Asynchronous Transfer Mode was that it should provide a basis for the Broadband Integrated Services Digital Network (B-ISDN) to replace the existing PSTN (Public Switched Telephone Network). As a result, the Asynchronous Transfer Mode standards include not only the definitions for the physical transmission techniques (layer 1), but also layers 2 and 3.


In addition to this, the development of Asynchronous Transfer Mode was focussed heavily on the requirements of telecommunications providers rather than local data networking requirements. As a result it is more suited to large area telecommunications applications than to smaller local area data network solutions or general computer networking.

While Asynchronous Transfer Mode is widely used for many applications, it is generally only used for transport of IP traffic. It has not become the single standard for providing a single integrated technology for LANs, public networks, and user services.

Basic asynchronous transfer mode system

There are two basic elements to an ATM system. Any system can be made up of a number of each of these elements:

ATM switch     This accepts the incoming cells or information "packets" from another ATM entity, which may be either another switch or an end point. It reads and updates the cell header information and switches the cell towards its destination.

ATM end point     This element contains the ATM network interface adaptor to enable data entering or leaving the ATM network to interface to the external world. Examples of these end points include workstations, LAN switches, video codecs and many more items.

ATM networks can be configured in many ways. The overall network will comprise a set of ATM switches interconnected by point-to-point ATM links or interfaces. Within the network there are two types of interface, both supported by the switches. The first is the UNI (User-Network Interface), used to connect ATM end systems (such as hosts and routers) to an ATM switch. The second is the NNI (Network-Network Interface), which connects two ATM switches.

ATM operation

In ATM the information is formatted into fixed length cells consisting of 48 bytes (each of 8 bits) of payload data. In addition there is a cell header of 5 bytes, giving a total cell length of 53 bytes. This short, fixed format was chosen so that time critical data such as voice is not held up behind very long packets. The header carries the virtual-circuit identifiers, payload type information and header error check data.
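The 53-byte cell layout can be illustrated with a short sketch. The header packing below follows the common UNI field layout (GFC, VPI, VCI, payload type, CLP, then the header error check byte), but the HEC is left as a placeholder rather than the real CRC-8 calculation:

```python
def build_cell(vpi, vci, payload):
    """Assemble a 53-byte ATM cell: 5-byte header + 48-byte payload."""
    assert len(payload) == 48, "ATM payload is always exactly 48 bytes"
    gfc, pt, clp = 0, 0, 0  # generic flow control, payload type, loss priority
    header = bytes([
        (gfc << 4) | (vpi >> 4),            # GFC + top of VPI
        ((vpi & 0xF) << 4) | (vci >> 12),   # rest of VPI + top of VCI
        (vci >> 4) & 0xFF,                  # middle of VCI
        ((vci & 0xF) << 4) | (pt << 1) | clp,
        0x00,                               # HEC placeholder (real cells carry a CRC-8)
    ])
    return header + payload

cell = build_cell(vpi=1, vci=32, payload=bytes(48))
print(len(cell))  # 53
```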

ATM is what is termed connection orientated. This has the advantage that the user can define the requirements needed to support the calls, which in turn allows the network to allocate the required resources. By adopting this approach, several calls can be multiplexed efficiently while ensuring that the required resources can be allocated.

There are two types of connection that are specified for asynchronous transfer mode:

Virtual Channel Connections:   This is the basic connection unit or entity. It carries a single stream of data cells from the originator to the end user.

Virtual Path Connections:   This is formed from a collection of virtual channel connections. A virtual path is an end to end connection created across an ATM (asynchronous transfer mode) network. For a virtual path connection, the network routes all cells from the virtual path across the network in the same way without regard for the individual virtual channel connection. This results in faster transfer.

The idea of virtual path connections is also used within the ATM network itself to route traffic between switches.
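The difference between virtual path and virtual channel routing can be sketched as two lookup tables: a VP switch routes on the VPI alone, while a VC switch examines the full (VPI, VCI) pair. The table contents and function names below are invented purely for illustration:

```python
# Hypothetical forwarding tables for one switch.
vp_table = {1: "port_east"}                              # VPI -> output port
vc_table = {(1, 32): "port_east", (1, 33): "port_west"}  # (VPI, VCI) -> port

def route_vp(vpi, vci):
    # Virtual path switching: all channels in the path go the same way,
    # so the VCI is ignored entirely.
    return vp_table[vpi]

def route_vc(vpi, vci):
    # Virtual channel switching: a per-channel decision.
    return vc_table[(vpi, vci)]

print(route_vp(1, 32), route_vp(1, 33))  # port_east port_east
print(route_vc(1, 32), route_vc(1, 33))  # port_east port_west
```

The single-key VP lookup is what makes virtual path routing faster: the switch does less work per cell.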


E-Carrier, E1 tutorial

The E-carrier system was created by the European Conference of Postal and Telecommunications Administrations (CEPT) as a digital telecommunications carrier scheme for carrying multiple links. The E-carrier system enables the transmission of several (multiplexed) voice/data channels simultaneously on the same transmission facility. Of the various levels of the E-carrier system, the E1 and E3 levels are the only ones that are widely used.

More specifically E1 has an overall bandwidth of 2048 kbps and provides 32 channels each supporting a data rate of 64 kbps. The lines are mainly used to connect between the PABX (Private Automatic Branch eXchange), and the CO (Central Office) or main exchange.

The E1 standard defines the physical characteristics of a transmission path, and as such it corresponds to the physical layer (layer 1) in the OSI model. Technologies such as ATM and others which form layer 2 are able to pass over E1 lines, making E1 one of the fundamental technologies used within telecommunications.

A similar standard to E1, known as T1 has similar characteristics, but it is widely used in North America. Often equipment used for these technologies, e.g. test equipment may be used for both, and the abbreviation E1/T1 may be seen.

E1 beginnings

The life of the standards started back in the early 1960s when Bell Laboratories, where the transistor had been invented some years earlier, developed a voice multiplexing system to enable better use to be made of the lines that were required and to improve on the performance of the analogue techniques then in use. The first step of the process converted the voice signal into a digital format with a 64 kbps data stream. The next stage was to assemble twenty-four of these data streams into a framed data stream with an overall data rate of 1.544 Mbps. This structured signal was called DS1, but it is almost universally referred to as T1.


In Europe, the basic scheme was taken by what was then the CCITT and developed to fit the European requirements better. This resulted in the development of the scheme known as E1, which has provision for 30 voice channels and runs at an overall data rate of 2.048 Mbps. In Europe E1 refers to both the formatted version and the raw data rate.

E1 Applications and standards

The E-carrier standards form part of the overall Synchronous Digital Hierarchy (SDH) scheme. This allows groups of E1 circuits, each containing 30 voice circuits, to be combined to produce higher capacity links. Levels E1 to E5 are defined, each carrying increasing multiples of the E1 format. However, in reality only E3 is widely used beyond E1 itself; it can carry 480 circuits and has an overall capacity of 34.368 Mbps.
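The E3 capacity quoted above can be cross-checked with a little arithmetic, assuming the standard PDH multiplexing chain in which an E3 carries 16 E1s (via two intermediate x4 stages). The 34.368 Mbps line rate exceeds 16 x 2.048 Mbps because justification and framing overhead is added at each multiplexing stage:

```python
E1_RATE_MBPS = 2.048
E1_VOICE_CIRCUITS = 30
E1S_PER_E3 = 16       # two x4 multiplexing stages: E1 -> E2 -> E3
E3_RATE_MBPS = 34.368

voice_circuits = E1S_PER_E3 * E1_VOICE_CIRCUITS   # 16 * 30 = 480
payload_mbps = E1S_PER_E3 * E1_RATE_MBPS          # 32.768 Mbps of E1 payload
overhead_mbps = E3_RATE_MBPS - payload_mbps       # framing/justification overhead

print(voice_circuits)           # 480
print(round(overhead_mbps, 3))  # 1.6
```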

Physically E1 is transmitted as 32 timeslots and E3 has 512 timeslots. Unlike Internet data services which are IP based, E-carrier systems are circuit switched and permanently allocate capacity for a voice call for its entire duration. This ensures high call quality because the transmission arrives with the same short delay (Latency) and capacity at all times. Nevertheless it does not allow the same flexibility and efficiency to be obtained as that of an IP based system.

In view of the different capacities of E1 and E3 links they are used for different applications. E1 circuits are widely used to connect to medium and large companies, to telephone exchanges. They may also be used to provide links between some exchanges. E3 lines are used where higher capacity is needed. They are often installed between exchanges, and to provide connectivity between countries.

E1 basics

An E1 link runs over two sets of wires, normally coaxial cable, and the signal itself is a nominal 2.4 volt signal. The signalling data rate is 2.048 Mbps full duplex, providing the full data rate in both directions.

For E1, the signal is split into 32 channels of 8 bits each. These channels have their own time division multiplexed slots, which are transmitted sequentially; the complete transmission of the 32 slots makes up a frame. The time slots are designated TS0 to TS31 and are allocated to different purposes:

TS0 is used for synchronisation, alarms and messages
TS1 - TS15 are used for user data
TS16 is used for signalling, but it may also carry user data
TS17 - TS31 are used for carrying user data

Time slot 0 is reserved for framing purposes, and alternately transmits a fixed pattern. This allows the receiver to lock onto the start of each frame and match up each channel in turn. The standards allow for a full Cyclic Redundancy Check to be performed across all bits transmitted in each frame.
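The frame arithmetic described above can be verified directly: 32 timeslots of 8 bits, repeated 8000 times per second (one frame every 125 microseconds), gives the 2.048 Mbps aggregate rate, and each 64 kbps channel is one 8-bit slot repeated 8000 times a second:

```python
TIMESLOTS = 32
BITS_PER_SLOT = 8
FRAMES_PER_SECOND = 8000  # one frame every 125 microseconds

aggregate_bps = TIMESLOTS * BITS_PER_SLOT * FRAMES_PER_SECOND
channel_bps = BITS_PER_SLOT * FRAMES_PER_SECOND

print(aggregate_bps)  # 2048000
print(channel_bps)    # 64000
```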


E1 signalling data is carried on TS16, which is reserved for signalling including control, call setup and teardown. These are accomplished using standard protocols including Channel Associated Signalling (CAS), where a set of bits is used to replicate opening and closing the circuit. Tone signalling may also be used; this is passed through on the voice circuits themselves. More recent systems use Common Channel Signalling (CCS) such as ISDN or Signalling System 7 (SS7), which sends short encoded messages containing call information such as the caller ID.

Several options are specified in the original CEPT standard for the physical transmission of data. However an option or standard known as HDB3 (High-Density Bipolar-3 zeros) is used almost exclusively.

Future

E1 and also T1 are well established for telecommunications use. However with new technologies such as ADSL, DSL, and the other IP based systems that are now being widely deployed, these will spell the end of E1 and T1. Nevertheless they have given good service over many years, and they will remain in use as a result of this wide deployment for some years to come.

Ethernet IEEE 802.3 tutorial

This Ethernet, IEEE 802.3 tutorial is split into several pages, each of which addresses different aspects of Ethernet, IEEE 802.3 operation and technology:

[1] Ethernet IEEE 802.3 tutorial
[2] Ethernet IEEE 802.3 standards
[3] Ethernet IEEE 802.3 data frames structure
[4] 100 Mbps Ethernet inc 100 Base-T
[5] Gigabit Ethernet 1GE
[6] Ethernet cables
[7] Power over Ethernet, 802.3af and 802.3at

Ethernet, defined under IEEE 802.3, is one of today's most widely used data communications standards, and it finds its major use in Local Area Network (LAN) applications. With versions including 10Base-T, 100Base-T and now Gigabit Ethernet, it offers a wide variety of choices of speeds and capability. Ethernet is also cheap and easy to install. Additionally Ethernet, IEEE 802.3 offers a considerable degree of flexibility in terms of the network topologies that are allowed. Furthermore, as it is in widespread use in LANs, it has been developed into a robust system that meets the needs of a wide range of networking requirements.

Ethernet, IEEE 802.3 history

The Ethernet standard was first developed by the Xerox Corporation as an experimental coaxial cable based system in the 1970s. Using a Carrier Sense Multiple Access / Collision Detect (CSMA/CD) protocol to allow multiple users, it was intended for use with LANs that were likely to experience sporadic, occasionally heavy use.

The success of the original Ethernet project led to a joint development of a 10 Mbps standard in 1980. This time three companies were involved: Digital Equipment Corporation, Intel and Xerox. The Ethernet Version 1 specification that arose from this development formed the basis for the first IEEE 802.3 standard that was approved in 1983, and finally published as an official standard in 1985. Since these first standards were written and approved, a number of revisions have been undertaken to update the Ethernet standard and keep it in line with the latest technologies that are becoming available.

Ethernet network elements

The Ethernet IEEE 802.3 LAN can be considered to consist of two main elements:

Interconnecting media:   The media through which the signals propagate is of great importance within the Ethernet network system. It governs the majority of the properties that determine the speed at which the data may be transmitted. There are a number of options that may be used:

o Coaxial cable:   This was one of the first types of interconnecting media to be used for Ethernet. The characteristic impedance was 50 ohms, and purpose-designed cables were specified rather than the general-purpose coaxial cables used for many radio frequency applications.

o Twisted Pair Cables:   Two types of twisted pair may be used: Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP). Generally the shielded types are better as they limit stray pickup more and therefore data errors are reduced.

o Fibre optic cable:   Fibre optic cable is being used increasingly as it provides very high immunity to pickup and radiation as well as allowing very high data rates to be communicated.

Network nodes:   The network nodes are the points to and from which the communication takes place. The network nodes fall into two categories:

o Data Terminal Equipment - DTE:   These devices are either the source or destination of the data being sent. Devices such as PCs, file servers, print servers and the like fall into this category.

o Data Communications Equipment - DCE:   Devices that fall into this category receive and forward the data frames across the network, and they may often be referred to as 'Intermediate Network Devices' or Intermediate Nodes. They include items such as repeaters, routers, switches or even modems and other communications interface units.

Ethernet network topologies

There are several network topologies that can be used for Ethernet communications. The actual form used will depend upon the requirements.

Point to point:   This is the simplest configuration as only two network units are used. It may be a DTE to DTE, DTE to DCE, or even a DCE to DCE. In this simple structure the cable is known as the network link. Links of this nature are used to transport data from one place to another and where it is convenient to use Ethernet as the transport mechanism.

Coaxial bus:   This type of Ethernet network is rarely used these days. The systems used a coaxial cable along which the network units were located. The segment lengths were limited to a maximum of 500 metres, and it was possible to place up to 1024 DTEs along its length. Although this form of network topology is not installed these days, a few legacy systems might still be in use.

Star network:   This type of Ethernet network has been the dominant topology since the early 1990s. It consists of a central network unit, which may be what is termed a multi-port repeater or hub, or a network switch. All the connections to other nodes radiate out from this and are point to point links.

Summary

Despite the fact that Ethernet has been in use for many years, it is still a growing standard and it is likely to be used for many years to come. During its life, the speed of Ethernet systems has been increased, and now new optical fibre based Ethernet systems are being introduced. As the Ethernet standard is being kept up to date, the standard is likely to remain in use for many years to come.

Ethernet IEEE 802.3 Standards

This Ethernet, IEEE 802.3 tutorial is split into several pages, each of which addresses different aspects of Ethernet, IEEE 802.3 operation and technology:

[1] Ethernet IEEE 802.3 tutorial
[2] Ethernet IEEE 802.3 standards
[3] Ethernet IEEE 802.3 data frames structure
[4] 100 Mbps Ethernet inc 100 Base-T
[5] Gigabit Ethernet 1GE
[6] Ethernet cables
[7] Power over Ethernet, 802.3af and 802.3at

Ethernet, 802.3 is defined under a number of IEEE standards, each reflecting a different flavour of Ethernet. One of the successes of Ethernet has been the way in which it has been updated so that it can keep pace with improving technology and the growing needs of the users.

As a result of this the IEEE standards committee for Ethernet has introduced new standards to define higher performance variants. Each of the Ethernet IEEE 802.3 standards is given a different reference so that it can be uniquely identified.

In addition to this the different IEEE 802.3 standards may be known by other references that reflect the different levels of performance. These are also defined below.

IEEE 802.3 standards

The IEEE 802.3 standard references all include the IEEE 802.3 nomenclature. Different releases and variants of the standard are then designated by letters appended after the 802.3 reference, i.e. IEEE 802.3*. These are defined in the table below.

Standard supplement   Year   Description
802.3a                1985   10Base-2 (thin Ethernet)
802.3c                1986   10 Mb/s repeater specifications (clause 9)
802.3d                1987   FOIRL (fibre link)
802.3i                1990   10Base-T (twisted pair)
802.3j                1993   10Base-F (fibre optic)
802.3u                1995   100Base-T (Fast Ethernet and auto-negotiation)
802.3x                1997   Full duplex
802.3z                1998   1000Base-X (Gigabit Ethernet)
802.3ab               1999   1000Base-T (Gigabit Ethernet over twisted pair)
802.3ac               1998   VLAN tag (frame size extension to 1522 bytes)
802.3ad               2000   Parallel links (link aggregation)
802.3ae               2002   10 Gigabit Ethernet
802.3as               2005   Frame expansion
802.3at               2005   Power over Ethernet Plus

Ethernet standards supplements and releases

New technologies are being added to the list of IEEE 802.3 standards to keep pace with technology.

Ethernet terminology

There is a convention for describing the different forms of Ethernet. For example 10Base-T and 100Base-T are widely seen in technical articles and literature. The designator consists of three parts:

The first number (typically one of 10, 100, or 1000) indicates the transmission speed in megabits per second.

The second term indicates the transmission type: BASE = baseband; BROAD = broadband.

The last part indicates the segment length or medium. A 5 means a 500-metre segment length, from the original Thicknet. In the more recent versions of the IEEE 802.3 standard, letters replace numbers. For example, in 10BASE-T, the T means unshielded twisted-pair cables. Further numbers indicate the number of twisted pairs available; for example in 100BASE-T4, the T4 indicates four twisted pairs.
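The three-part convention above can be captured in a short parser. This is purely an illustrative sketch: the `parse_designator` helper is hypothetical, not part of any standard library.

```python
import re

def parse_designator(name):
    """Split an Ethernet designator such as '100BASE-T4' into its parts.

    Illustrative helper only -- the field meanings follow the convention
    described above (speed in Mbps, BASE/BROAD, then medium/segment code).
    """
    m = re.match(r"^(\d+)(BASE|BROAD)-?(\w+)$", name.upper())
    if not m:
        raise ValueError("not a recognised Ethernet designator: %s" % name)
    speed, band, medium = m.groups()
    return {
        "speed_mbps": int(speed),
        "transmission": "baseband" if band == "BASE" else "broadband",
        "medium": medium,   # e.g. '5' = 500 m coax segment, 'T' = twisted pair
    }

print(parse_designator("100BASE-T4"))
```

The optional hyphen in the pattern means older designators written without one, such as 10Base5, are parsed in the same way.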

Summary

The Ethernet IEEE 802.3 standards are continually being updated to ensure that the generic standard keeps pace with the constant advance of technology and the growing needs of the users. As a result, IEEE 802.3 Ethernet is still at the forefront of network communications technology, and it appears it will retain this position of dominance for many years to come. In addition to the different IEEE 802.3 standards, the terminology used to define the different flavours is also widely used for defining which Ethernet variant is used.

Ethernet IEEE 802.3 Frame Format / Structure

Ethernet, IEEE 802.3 defines the frame formats or frame structures that are developed within the MAC layer of the protocol stack.

Essentially the same frame structure is used for the different variants of Ethernet, although there are some changes to the frame structure to extend the performance of the system should this be needed. With the high speeds and variety of media used, this basic format sometimes needs to be adapted to meet the individual requirements of the transmission system, but this is still specified within the amendment / update for that given Ethernet variant.

10 / 100 Mbps Ethernet MAC data frame format

The basic MAC data frame format for Ethernet, IEEE 802.3 used within the 10 and 100 Mbps systems is given below:

Basic Ethernet MAC Data Frame Format

The basic frame consists of seven elements split between three main areas:

Header
 o Preamble (PRE) - This is seven bytes long and consists of a pattern of alternating ones and zeros. It informs the receiving stations that a frame is starting, as well as enabling synchronisation.
 o Start Of Frame delimiter (SOF) - This consists of one byte and contains an alternating pattern of ones and zeros, but ending in two ones.
 o Destination Address (DA) - This field contains the address of the station for which the data is intended. The left-most bit indicates whether the destination is an individual address or a group address. An individual address is denoted by a zero, while a one indicates a group address. The next bit into the DA indicates whether the address is globally administered or local. If the address is globally administered the bit is a zero, and a one if it is locally administered. The 46 remaining bits are used for the destination address itself.
 o Source Address (SA) - The source address consists of six bytes, and it is used to identify the sending station. As it is always an individual address, the left-most bit is always a zero.
 o Length / Type - This field is two bytes in length. It indicates the number of client data bytes contained in the data field of the frame, or it may carry a frame type identifier if the frame is assembled using an optional format (IEEE 802.3 only).

Payload
 o Data - This block contains the payload data and it may be up to 1500 bytes long. If the length of the field is less than 46 bytes, then padding data is added to bring its length up to the required minimum of 46 bytes.

Trailer
 o Frame Check Sequence (FCS) - This field is four bytes long. It contains a 32-bit Cyclic Redundancy Check (CRC) which is generated over the DA, SA, Length / Type and Data fields.
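The field layout above can be sketched in code. This is an illustrative assembly only: `build_frame` is a hypothetical helper, the Length/Type handling is simplified, and the on-wire bit ordering of the FCS is glossed over.

```python
import struct
import zlib

def build_frame(dst, src, payload):
    """Assemble a minimal IEEE 802.3-style MAC frame (illustrative sketch).

    dst and src are 6-byte MAC addresses.  The payload is padded to the
    46-byte minimum, a 2-byte length field is used, and the FCS is the
    standard CRC-32 over the DA, SA, Length and Data fields.  The
    preamble and SFD precede the frame proper.
    """
    assert len(dst) == 6 and len(src) == 6
    length = struct.pack("!H", len(payload))      # Length / Type field
    data = payload.ljust(46, b"\x00")             # pad to 46-byte minimum
    body = dst + src + length + data
    fcs = struct.pack("<I", zlib.crc32(body))     # 32-bit CRC over body
    preamble = b"\x55" * 7 + b"\xd5"              # 7 preamble bytes + SFD
    return preamble + body + fcs

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello")
print(len(frame))   # 8 + 6 + 6 + 2 + 46 + 4 = 72 bytes
```

Note how a 5-byte payload still produces a full minimum-size frame because of the padding rule described above.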

1000 Mbps Ethernet MAC data frame format

The basic MAC data frame format for Ethernet is modified slightly for 1GE, IEEE 802.3z systems. When using the 1000Base-X standard there is a minimum frame size of 416 bytes, and for 1000Base-T there is a minimum frame size of 520 bytes. To accommodate this, a non-data variable extension field is appended to any frames that are shorter than the minimum required length.
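Working from the minimum frame sizes quoted above, the size of the extension field can be computed directly; `extension_bytes` is a hypothetical helper for illustration.

```python
def extension_bytes(frame_len, variant="1000Base-X"):
    """Non-data extension needed for short Gigabit Ethernet frames.

    Minimum sizes follow the figures quoted above: 416 bytes for
    1000Base-X and 520 bytes for 1000Base-T.
    """
    minimum = {"1000Base-X": 416, "1000Base-T": 520}[variant]
    return max(0, minimum - frame_len)

print(extension_bytes(64))                  # a minimum-size 64-byte frame -> 352
print(extension_bytes(520, "1000Base-T"))   # already long enough -> 0
```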

1GE / 1000 Mbps Ethernet MAC Data Frame Format

Half-duplex transmission

This access method involves the use of CSMA/CD and it was developed to enable several stations to share the same transport medium without the need for switching, network controllers or assigned time slots. Each station is able to determine when it is able to transmit and the network is self organising.

The CSMA/CD protocol used for Ethernet and a variety of other applications falls into three categories. The first is Carrier Sense. Here each station listens on the network for traffic and it can detect when the network is quiet. The second is the Multiple Access aspect where the stations are able to determine for themselves whether they should transmit. The final element is the Collision Detect element. Even though stations may find the network free, it is still possible that two stations will start to transmit at virtually the same time. If this happens then the two sets of data being transmitted will collide. If this occurs then the stations can detect this and they will stop transmitting. They then back off a random amount of time before attempting a retransmission. The random delay is important as it prevents the two stations starting to transmit together a second time.
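The random back-off described above is the truncated binary exponential backoff scheme: after the n-th collision a station waits a random number of slot times between 0 and 2^min(n, 10) - 1, giving up after 16 attempts. A minimal sketch:

```python
import random

SLOT_TIME_BITS = 512   # one slot time is 512 bit times for 10/100 Mbps Ethernet

def backoff_slots(collision_count):
    """Truncated binary exponential backoff used after a collision.

    After the n-th collision a station waits a random number of slot
    times in the range 0 .. 2**min(n, 10) - 1; after 16 attempts the
    frame is dropped.
    """
    if collision_count > 16:
        raise RuntimeError("too many collisions - frame dropped")
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1)

# Two colliding stations almost certainly pick different delays,
# which is what prevents them colliding again immediately.
print([backoff_slots(n) for n in (1, 2, 3)])
```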

Note: According to section 3.3 of the IEEE 802.3 standard, each octet of the Ethernet frame, with the exception of the FCS, is transmitted low-order bit first.

Full duplex

Another option that is allowed by the Ethernet MAC is full duplex, with transmission in both directions. This is only allowable on point-to-point links, and it is much simpler to implement than the CSMA/CD approach, as well as providing much higher transmission throughput rates when the network is being used. Not only is there no need to schedule transmissions around other stations, as there are only two stations in the link, but by using a full duplex link, full rate transmissions can be undertaken in both directions, thereby doubling the effective bandwidth.

Ethernet addresses

Every Ethernet network interface card (NIC) is given a unique identifier called a MAC address. This is assigned by the manufacturer of the card and each manufacturer that complies with IEEE standards can apply to the IEEE Registration Authority for a range of numbers for use in its products.

The MAC address comprises a 48-bit number. Within the number, the first 24 bits identify the manufacturer; this is known as the manufacturer ID or Organizational Unique Identifier (OUI) and is assigned by the registration authority. The second half of the address is assigned by the manufacturer and is known as the extension or board ID.

The MAC address is usually programmed into the hardware so that it cannot be changed. Because the MAC address is assigned to the NIC, it moves with the computer. Even if the interface card moves to another location across the world, the user can be reached because the message is sent to the particular MAC address.
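The address fields described above (OUI, individual/group bit, globally/locally administered bit) can be pulled apart programmatically. Because each octet is transmitted low-order bit first, the "left-most" bits described in the frame format section correspond to the two least significant bits of the first octet. `parse_mac` is a hypothetical helper for illustration.

```python
def parse_mac(mac):
    """Break a colon-separated MAC address into the fields described above.

    The OUI is the first three bytes; within the first byte, bit 0
    (the first bit on the wire) is the individual/group flag and bit 1
    is the universally/locally administered flag.
    """
    octets = bytes(int(x, 16) for x in mac.split(":"))
    assert len(octets) == 6
    return {
        "oui": octets[:3].hex(),
        "group": bool(octets[0] & 0x01),    # 1 = group (multicast) address
        "local": bool(octets[0] & 0x02),    # 1 = locally administered
    }

print(parse_mac("01:00:5e:00:00:01"))   # a group (multicast) address
```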

100 Mbps Ethernet / IEEE 802.3u including 100Base-T

100Base-T Ethernet was originally known as "Fast Ethernet" when the IEEE 802.3u standard that defines it was released in 1995. At that time it was the fastest version of Ethernet available, offering a speed of 100 Mbps (12.5 MByte/s excluding 4B/5B overhead). 100Base-T has since been overtaken by other standards such as Gigabit and, more recently, 10 Gigabit Ethernet, offering speeds 10 and 100 times that of 100Base-T. Nevertheless 100Base-T is widely used, as it offers a performance that is more than acceptable for many networking applications.

100Base-T overview

100Base-T Ethernet, also known as Fast Ethernet, is defined under the 802.3 family of standards as 802.3u. Like other flavours of Ethernet, 100Base-T is a shared media LAN: all the nodes within the network share the 100 Mbps bandwidth. It also conforms to the same basic operational techniques as other flavours of Ethernet, in particular the CSMA/CD access method, although there are some minor differences in the way the overall system operates.

The designation for 100Base-T is derived from the standard format for Ethernet connections. The first figure designates the speed in Mbps, "Base" indicates that the system operates at baseband, and the following letters indicate the cable or transfer medium.

Note on CSMA/CD: the CSMA/CD access method used by 100Base-T is the same as that described in the half-duplex transmission section above.

There are a number of cabling versions available:

100Base-TX:   uses two pairs of Category 5 UTP *

100Base-T4:   uses four pairs of Category 3 (now obsolete) *

100Base-T2:   uses two pairs of Category 3 (now obsolete) *

100Base-FX:   uses two strands of multi-mode optical fibre for receive and transmit. Maximum length is 400 metres for half-duplex connections (to ensure collisions are detected) or 2 kilometres for full-duplex, and it is primarily intended for backbone use.

100Base-SX:   uses two strands of multi-mode optical fibre for receive and transmit. It is a lower cost alternative to 100Base-FX because it uses short wavelength optics, which are significantly less expensive than the long wavelength optics used in 100Base-FX. 100Base-SX can operate at distances up to 300 metres.

100Base-BX:   a version of Fast Ethernet over a single strand of optical fibre (unlike 100Base-FX, which uses a pair of fibres). Single-mode fibre is used, along with a special multiplexer which splits the signal into transmit and receive wavelengths.

* The segment length for a 100Base-T copper cable is limited to 100 metres.

Fast Ethernet data frame format

Although the frame format for sending data over an Ethernet link does not vary considerably, there are some changes that are needed to accommodate the different physical requirements of the various flavours. The format adopted for Fast Ethernet, 802.3u is given below:

Fast Ethernet (802.3u) Data Frame Format

It can be seen from the diagram above that the data can be split into several elements:

PRE     This is the Preamble and it is seven bytes long and it consists of a series of alternating ones and zeros. This warns the receivers that a data frame is coming and it allows them to synchronise to it.

SOF     This is the Start Of Frame delimiter. This is only one byte long and comprises a pattern of alternating ones and zeros ending with two bits set to logical "one". This indicates that the next bit in the frame will be the destination address.

DA     This is the Destination Address and it is six bytes in length. This identifies the receiver that should receive the data. The left-most bit in the left-most byte of the destination address immediately follows the SOF.

SA     This is the Source Address and again it is six bytes in length. As the name implies it identifies the source address.

Length / Type     This two byte field indicates the payload data length. It may also provide the frame ID if the frame is assembled using an alternative format.

Data     This section has a variable length according to the amount of data in the payload. It may be anywhere between 46 and 1500 bytes. If the length of data is below 46 bytes, then dummy data is transmitted to pad it out to reach the minimum length.

FCS     This is the Frame Check Sequence which is four bytes long. This contains a 32 bit cyclic redundancy check (CRC) that is used for error checking.

Data transmission speed

Although the theoretical maximum data bit rate of the system is 100 Mbps, the rate at which the payload is transferred on real networks is far less than this. This is because additional data in the form of the header and trailer (addressing and error-detection bits) is carried on every packet, and the occasional corrupted packet needs to be re-sent. In addition, time is lost waiting after each sent packet for other devices on the network to finish transmitting.
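The effect of this per-frame overhead can be estimated using the standard figures (8 bytes of preamble and SFD, 14 bytes of header plus 4 bytes of FCS, and a 12-byte inter-frame gap); retransmissions are ignored in this sketch.

```python
def efficiency(payload_bytes):
    """Fraction of the wire rate delivered as payload for one frame.

    Accounts for the fixed per-frame overheads: 8 bytes of preamble/SFD,
    18 bytes of header and FCS, and a 12-byte inter-frame gap.  Payloads
    below 46 bytes are padded, which reduces the efficiency further.
    """
    payload = max(payload_bytes, 46)       # minimum payload after padding
    on_wire = 8 + 18 + payload + 12        # total bytes occupying the wire
    return payload_bytes / on_wire

print(round(efficiency(1500), 3))   # large frames: ~0.975
print(round(efficiency(46), 3))     # minimum frames: ~0.548
```

So even before collisions and retransmissions are considered, small frames deliver barely half the nominal 100 Mbps as payload.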

Fast Ethernet using Cat 5 cable

Fast Ethernet can be transmitted over a variety of media, but 100Base-T is the most common form, carried over Cat 5 cable. These cables have four twisted pairs of wires, of which only two are used for 10Base-T or 100Base-T. One pair of wires is used for the transmitted data (TD) and another for the received data (RD), as shown below. The data is carried differentially over the wires: the "+" and "-" wires carry equal and opposite signals, so any radiation is cancelled out.


Pin   Wire colour      Function
1     White/Green      +TD
2     Green            -TD
3     White/Orange     +RD
4     Blue             Not used
5     White/Blue       Not used
6     Orange           -RD
7     White/Brown      Not used
8     Brown            Not used

Wiring for Cat 5 cable used for 100Base-T Ethernet

Fast Ethernet Applications

Fast Ethernet in the form of 100Base-T, IEEE 802.3u has become one of the most widely used forms of Ethernet. It became almost universally used for LAN applications in view of the ease of its use and the fact that systems could sense whether 10Base-T or 100Base-T speeds should be used. In this way 100Base-T systems could be incorporated steadily and mixed with existing 10Base-T equipment. The higher specification standard would be used once the two communicating elements were both 100Base-T. In addition to this the fibre based version is also used, but in view of the fact that Cat5 cable is so cheap and easy to use, the wired version is more common. However the fibre version has the advantage of being able to communicate over greater distances.

Gigabit Ethernet, 1GE including 1000Base-T

Gigabit Ethernet, 1GE, is the next development of the Ethernet standard beyond the popular 100Base-T version. As the name suggests, Gigabit Ethernet allows the transfer of data at speeds of 1000 Mbps or 1 Gbps. It is particularly easy to install because the 1000Base-T variant is designed to run over Cat 5 UTP (unshielded twisted pair) cable that is widely and cheaply available.

Initially Gigabit Ethernet, 1GE was only used for applications such as backbone links within large networks, but as the technology has become more affordable it is being used more widely, and the 1000Base-T variant is often incorporated within PCs themselves. However even 1 Gigabit Ethernet is being superseded as 10 Gigabit Ethernet is available and being widely used. Despite this, the 1 Gigabit version will still be designed into new product for many years to come.

Gigabit Ethernet, 1GE development

The success of the Ethernet standard has been its ability to evolve and move forward in such a way that it can keep up with, or even ahead of, the networking requirements for local area networks. The original development of Ethernet took place in the 1970s at the Xerox Corporation. Since it was launched onto the market it has steadily evolved, with versions including 10Base-T and later 100Base-T becoming networking standards.

With its success, the Ethernet standard was taken over by the IEEE under their standard IEEE 802.3. IEEE 802.3ab, which defines Gigabit Ethernet over twisted pair, was ratified in 1999 and became known as 1000Base-T.

Gigabit Ethernet basics

Although the 1000Base-T version of Gigabit Ethernet is probably the most widely used, the Gigabit Ethernet specifications (802.3z and 802.3ab) also detail versions that can operate over other media:

1000Base-CX:   This was intended for connections over short distances, up to 25 metres per segment, using a balanced shielded twisted pair copper cable. However it was succeeded by 1000Base-T.

1000Base-LX:   This is a fibre optic version that uses a long wavelength light source.

1000Base-SX:   This is a fibre optic version of the standard that operates over multi-mode fibre using an 850 nanometre, near infrared (NIR) light wavelength.

1000Base-T:   Also known as IEEE 802.3ab, this is the standard for Gigabit Ethernet over copper wiring, and requires Category 5 (Cat 5) cable as a minimum.

The specification for Gigabit Ethernet provides for a number of requirements to be met. These can be summarised as the points below:

Provide half and full duplex operation at speeds of 1000 Mbps.
Use the 802.3 Ethernet frame formats.
Use the CSMA/CD access method with support for one repeater per collision domain.
Provide backward compatibility with 10Base-T and 100Base-T technologies.

Note on CSMA/CD: the CSMA/CD access method is the same as that described in the half-duplex transmission section above.

Like 10Base-T and 100Base-T, the predecessors of Gigabit Ethernet, the system is a physical (PHY) and media access control (MAC) layer technology, specifying the Layer 2 data link layer of the OSI protocol model. It complements upper-layer protocols TCP and IP, which specify the Layer 4 transport and Layer 3 network portions and enable communications between applications.


Gigabit transport mechanism for 1000Base-T

In order to enable Gigabit Ethernet, 1000Base-T, to operate over standard Cat 5 or Cat 5e cable, the transmission techniques employed operate in a slightly different way to those used by either 10Base-T or 100Base-T, while still retaining backward compatibility with the older systems.

Cat 5 cables have four sets of twisted pair wires of which only two are used for 10Base-T or 100Base-T. 1000BaseT Ethernet makes full use of the additional wires.

To see how this operates it is necessary to look at the wiring and how it is used. For 10Base-T and 100BaseT one pair of wires is used for the transmitted data and another for the received data as shown below:

Pin   Wire colour      Function
1     White/Green      +TD
2     Green            -TD
3     White/Orange     +RD
4     Blue             Not used
5     White/Blue       Not used
6     Orange           -RD
7     White/Brown      Not used
8     Brown            Not used

Wiring for Cat 5 cable used for 10 and 100Base-T

The data is transmitted along the twisted pair wires. One wire is used for the positive and one for the negative side of the waveform, i.e. send and return. As the two signals are the inverse of each other any radiation is cancelled out. From the table the lines are labelled RD for received data and TD for transmitted data.

The Cat 5 cable used for transmitting 100Base-T Ethernet actually carries a maximum symbol rate of 125 Mbaud. The reason for this is that the signal is coded so that 4 bits are carried in a 5 bit code group, a scheme known as 4B/5B. Thus to transmit at 100 Mbps the line runs at 125 MHz. This factor can also be used to advantage by 1000Base-T, Gigabit Ethernet.

To achieve the rate of 1000 Mbps, Gigabit Ethernet, 1000Base-T uses a variety of techniques to retain the maximum clock rate of 125 MHz while increasing the data transfer rate to a gigabit per second. In this way standard Cat 5 cable can be used as Gigabit Ethernet cable.

The first technique is that rather than using two wires to carry a signal representing a "0" or "1", it uses two sets of twisted pairs, so that four different data combinations can be transmitted: "00", "01", "10", and "11". This gives a four-fold increase in transmission speed. A further increase by a factor of two comes from using each twisted pair for both transmission and reception of data, i.e. each twisted pair is bi-directional.

This method of transmission is known as 4D-PAM5, and the maximum data rate is 125 Mbaud x 4 x 2 = 1000 Mbps.
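The arithmetic can be checked directly; the framing below follows the usual 4D-PAM5 description of four pairs, each carrying two data bits per 125 Mbaud symbol.

```python
# The 1000Base-T arithmetic quoted above: four twisted pairs, each
# carrying two data bits per symbol, at the 125 Mbaud Cat 5 symbol rate.
symbol_rate = 125_000_000     # symbols per second on each pair
pairs = 4                     # all four twisted pairs are used
bits_per_symbol = 2           # two data bits carried per PAM-5 symbol

bit_rate = symbol_rate * pairs * bits_per_symbol
print(bit_rate)   # 1000000000, i.e. 1 Gbps
```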

A fifth voltage level is used for error correction.

Although the same cables are used for Gigabit Ethernet, the designations for the individual lines in the Gigabit Ethernet cable are changed to map the way in which the data is carried. The letters "BI" indicate the data is bi-directional, and the letters DA, DB, etc. indicate Data A, Data B, and so on.

Pin   Wire colour      Function
1     White/Green      +BI_DA
2     Green            -BI_DA
3     White/Orange     +BI_DB
4     Blue             +BI_DC
5     White/Blue       -BI_DC
6     Orange           -BI_DB
7     White/Brown      +BI_DD
8     Brown            -BI_DD

Line designations for Cat 5 Gigabit Ethernet cable

Gigabit Ethernet is rapidly becoming an accepted standard, not just for high speed links in networks, but also for standard links between PCs and the relevant servers. Many PCs have Gigabit Ethernet fitted as standard, and this also means that networks need to use Gigabit Ethernet switches, routers, etc. The fact that standard Cat 5 cable can be used for the 1000Base-T variant means that Gigabit Ethernet will rapidly take over from the previous variants of Ethernet, allowing speeds to be steadily increased.

Practical aspects

Gigabit Ethernet, 1GE has been developed with the idea of using ordinary Cat 5 cables. However several companies recommend the use of higher specification Cat 5e cables when Gigabit Ethernet applications are envisaged. Although slightly more expensive, these Cat 5e cables offer improved crosstalk and return loss performance, which means that they are less susceptible to noise. When data is being passed at very high rates, there is always the possibility that electrical noise can cause problems. The use of Cat 5e cables may improve performance, particularly in an electrically noisy environment or over longer runs.

Ethernet cable summary

The Ethernet standard is well established. It is used in a variety of different environments, and accordingly there is a variety of different types of cable over which it operates. Not only can Ethernet operate at different speeds, but different varieties of cable can be used within the same speed category. In order to ensure that Ethernet operates correctly, the types of cable, their electrical conditions and the maximum lengths over which they may operate are specified.

For many applications, ready made Ethernet cables may be purchased, and a knowledge of the construction of the cables is not required. For other applications, however, it is necessary to know how the Ethernet cable is constructed. As a result, advertisements for the different types of cable, Cat-5, Cat-5e, Cat-6, are widely seen; these cables are used for different applications.

A summary of Ethernet cables and their maximum operating lengths is given below:

 

Specification   Cable type                Maximum length
10Base-T        Unshielded twisted pair   100 metres
10Base2         Thin coaxial cable        185 metres
10Base5         Thick coaxial cable       500 metres
10Base-F        Fibre optic cable         2000 metres
100Base-T       Unshielded twisted pair   100 metres
100Base-TX      Unshielded twisted pair   100 metres

Ethernet cable type summary

Lengths provided are those accepted as the maximum; these lengths are not necessarily included in the IEEE standard.

Categories for Ethernet cables

A variety of different cables are available for Ethernet and other telecommunications and networking applications. These cables are described by their categories, e.g. Cat 5 cables, Cat-6 cables, etc. The categories are often recognised by the TIA (Telecommunications Industry Association) and are summarised below:

Cat-1:   This is not recognised by the TIA/EIA. It is the form of wiring that is used for standard telephone (POTS) wiring, or for ISDN.

Cat-2:   This is not recognised by the TIA/EIA. It was the form of wiring that was used for 4 Mbit/s token ring networks.

Cat-3:   This cable is defined in TIA/EIA-568-B. It is used for data networks employing frequencies up to 16 MHz. It was popular for use with 10 Mbps Ethernet networks (10Base-T), but has now been superseded by Cat-5 cable.

Cat-4:   This cable is not recognised by the TIA/EIA. However it can be used for networks carrying frequencies up to 20 MHz. It was often used on 16 Mbps token ring networks.

Cat-5:   This is not recognised by the TIA/EIA. It is the cable that is widely used for 100Base-T and 1000Base-T networks, as it provides performance to allow data at 100 Mbps and slightly more (125 MHz for 1000Base-T).

Cat-5e:   This form of cable is recognised by the TIA/EIA and is defined in TIA/EIA-568-B. It has a slightly higher frequency specification than Cat-5 cable, as the performance extends up to 125 MHz. It can be used for 100Base-T and 1000Base-T (Gigabit Ethernet).

Cat-6:   This cable is defined in TIA/EIA-568-B. It provides more than double the performance of Cat-5 and Cat-5e cables, with performance extending up to 250 MHz.

Cat-7:   This is an informal number for ISO/IEC 11801 Class F cabling. It comprises four individually shielded pairs inside an overall shield. It is aimed at applications where transmission of frequencies up to 600 MHz is required.

Further descriptions of Cat-5 and Cat-5e cables are given below, as these are widely used for Ethernet networking applications today.

Ethernet Cat 5 cable

Cat 5 cables, or to give them their full name, category 5 cables, are the current preferred cable type for LAN network and telephone wiring where twisted pair cabling is required. Cat 5 cables consist of an unshielded cable comprising four twisted pairs, typically of 24 gauge wire. The terminating connector is an RJ-45 jack, and in view of this these Cat 5 network cables are often referred to as RJ45 network cables or RJ45 patch cables. Certified Cat-5 cables will have the wording "Cat-5" written on the side; as they conform to EIA/TIA 568A-5, this is written on the outer sheath. It is always best to use the appropriate network cables when setting up a network, as faulty or out-of-specification cables can cause problems that may be difficult to identify and trace.

Cat5 network cable is now the standard form of twisted pair cable and supersedes Cat 3. The Cat 5 cables can be used for data speeds up to 125 Mbps, thereby enabling them to support 100Base-T which has a maximum data speed of 100 Mbps whereas the Cat-3 cable was designed to be compatible with 10Base-T. The Cat5 cable is able to support working up to lengths of 100 metres at the full data rate.

Where it is necessary to operate at higher speeds, as in the case of Gigabit Ethernet, an enhanced version of Cat 5 cable known as Cat 5e is often recommended, although Cat 5 is specified to operate with Gigabit Ethernet, 1000Base-T. Alternatively Cat 5e can be used with 100Base-T to enable greater lengths (up to 350 metres) to be achieved.

The wires and connections within the Cat 5 or Cat 5e cable vary according to the applications. A summary of the signals carried and the relevant wires and connections is given in the table below:

Pin   Colour         Telephone   10Base-T   100Base-T   1000Base-T   PoE Mode A    PoE Mode B
1     White/green                +TX        +TX         +BI_DA       48 V out
2     Green                      -TX        -TX         -BI_DA       48 V out
3     White/orange               +RX        +RX         +BI_DB       48 V return
4     Blue           Ring                               +BI_DC                     48 V out
5     White/blue     Tip                                -BI_DC                     48 V out
6     Orange                     -RX        -RX         -BI_DB       48 V return
7     White/brown                                       +BI_DD                     48 V return
8     Brown                                             -BI_DD                     48 V return

RJ-45 / Cat 5 / Cat 5e Wiring

In the table, TX is transmitted data, and RX is received data. BI_Dn is bi-directional data, A, B, C, and D.

Ethernet Cat 5 crossover cables

There are a number of different configurations of cable that may be employed according to the equipment and the requirement. The most common type is the straight through cable, which is wired in a 1 to 1 configuration. However Cat-5 crossover cables are also required on occasion.

Typically a Cat-5 cable used to connect a computer (PC) to a switch will be a straight through cable. However if two computers or two switches are connected together then a Cat5 crossover cable is used.

Many Ethernet interfaces in use today are able to detect whether a cable is straight through or crossover, and adapt to the required format. This means that the requirement for Cat-5 crossover cables is less than it might otherwise be.

Cat-5 Ethernet crossover cables are not normally marked as crossover cables, so it is often wise to label them to avoid confusion later.
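The crossover arrangement described above can be sketched in code. This is an illustrative mapping only, assuming the common 10/100 crossover scheme in which the transmit pair (pins 1 and 2) is swapped with the receive pair (pins 3 and 6) while the remaining pins pass straight through; the function name is hypothetical.

```python
# Sketch of a 10/100 Ethernet crossover cable, assuming the usual scheme:
# the transmit pair (pins 1, 2) crosses to the receive pair (pins 3, 6);
# pins 4, 5, 7 and 8 pass straight through.

CROSSOVER_MAP = {1: 3, 2: 6, 3: 1, 6: 2}  # TX+ <-> RX+, TX- <-> RX-

def far_end_pin(near_end_pin: int) -> int:
    """Return the far-end pin a conductor lands on in a crossover cable."""
    return CROSSOVER_MAP.get(near_end_pin, near_end_pin)

if __name__ == "__main__":
    for pin in range(1, 9):
        print(f"pin {pin} -> pin {far_end_pin(pin)}")
```

A straight-through cable would simply be the identity mapping, which is why auto-MDIX interfaces can substitute for a crossover cable by performing this swap internally.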

Ethernet Cat 5e cables

In order to improve the performance of the cabling used for Ethernet and other applications, the Cat 5 cable was upgraded to Cat 5e. This cable provides for reduced cross-talk and improved performance, achieved through tighter manufacturing and test specifications for the twisted pairs.

Summary

Cat 5 network cable is now the standard for networking. Terminated with the cost effective RJ45 connector, these cables are often referred to as RJ45 network cables or RJ45 patch cables as they can link or patch different Ethernet items together very easily. With the introduction of Cat 5e Ethernet cables, these improved cables are becoming ever more widespread in their use.

Although this is not a complete summary of all the types of Ethernet cable that may be found, it gives a guide to some of the most common. 10Base-T and 100Base-T are possibly the most widely used forms of Ethernet, although higher speeds are now becoming commonplace. In addition, the variety of cables, including Cat-5 cable and all its versions such as crossover Cat 5 cables, may be obtained from a variety of suppliers.


Power over Ethernet, PoE, IEEE 802.3af / 802.3at

Powering network devices can sometimes present problems, especially if they are located remotely. One convenient solution is to supply the power over an Ethernet LAN cable. Power over Ethernet is defined under two IEEE standards, namely IEEE 802.3af and the later IEEE 802.3at, which defined a number of enhancements.

This Power over Ethernet, PoE is now being used for a wide variety of applications including powering IP telephones, wireless LAN access points, webcams, Ethernet hubs and switches and many more devices. It is convenient to use and as a result, Power over Ethernet is widely used and many products are available.

PoE Development

With Ethernet now an established standard, one of the limitations of Ethernet related equipment was that it required power and this was not always easily available. As a result some manufacturers started to offer solutions whereby power could be supplied over the Ethernet cables themselves. To prevent a variety of incompatible Power over Ethernet, PoE, solutions appearing on the market, and the resulting confusion, the IEEE began their standardisation process in 1999.

A variety of companies were involved in the development of the IEEE standard. The result was the IEEE 802.3af standard that was approved for release on 12 June 2003. Although some products were released before this date and may not fully conform to the standard, most products available today will conform to it, especially if they quote compliance with 802.3af.

A further standard, designated IEEE 802.3at was released in 2009 and this provided for several enhancements to the original IEEE 802.3af specification.

PoE overview

The standard allows for a supply of 48 volts with a maximum current of 400 milliamps to be provided over two of the available four pairs used on Cat 3 or Cat 5 cable. While this sounds very useful with a maximum available power of 19.2 watts, the losses in the system normally reduce this to just under 13 watts.
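The power budget above is simple arithmetic, and a quick sketch can confirm the figures. The 12.95 W value used here is the commonly quoted 802.3af power available at the powered device after worst-case cable losses; the function name is illustrative.

```python
# A quick check of the 802.3af power budget figures quoted above.
# The PSE can supply 48 V at up to 400 mA; cable losses then reduce
# what actually reaches the powered device (PD).

def poe_source_power(volts: float = 48.0, amps: float = 0.4) -> float:
    """Maximum power injected by the PSE, in watts."""
    return volts * amps

max_at_source = poe_source_power()   # 48 V x 0.4 A = 19.2 W
usable_at_pd = 12.95                 # W, commonly quoted 802.3af PD figure

print(f"At source: {max_at_source:.1f} W, at PD: {usable_at_pd:.2f} W")
```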

The standard Cat 5 cable has four twisted pairs, of which only two are used for data by 10Base-T and 100Base-T systems. The standard allows for two options for Power over Ethernet: one uses the spare twisted pairs, while the second uses the pairs carrying the data. Only one option may be used at a time.

When using the spare twisted pairs for the supply, the pair on pins 4 and 5 is connected together and normally used for the positive supply, while the pair on pins 7 and 8 is connected together for the negative supply. While this is the standard polarity, the specification actually allows either polarity to be used.


When the pairs used for carrying the data are employed, it is possible to apply DC power to the centre taps of the isolation transformers that terminate the data wires without disrupting the data transfer. In this mode of operation the pair on pins 3 and 6 and the pair on pins 1 and 2 may be of either polarity.

As the supply reaching the powered device can be of either polarity a full wave rectifier (bridge rectifier) is used to ensure that the device consuming the power receives the correct polarity power.

Within the 802.3af standard two types of device are described:

Power Sourcing Equipment, PSE - the equipment that supplies power to the Ethernet cable.
Powered Devices, PD - equipment that interfaces to the Ethernet cable and is powered by the supply on the cable. These devices may range from switches and hubs to other items including webcams, etc.

Power Sourcing Equipment, PSE

This needs to provide a number of functions apart from simply supplying the power over the Ethernet system. The PSE obviously needs to ensure that no damage is possible to any equipment that may be present on the Ethernet system. The PSE first looks for devices that comply with the IEEE 802.3af specification. This is achieved by applying a small current-limited voltage to the cable. The PSE then checks for the presence of a 25k ohm resistor in the remote device. If this load or resistor is detected, then the 48V is applied to the cable, but it is still current-limited to prevent damage to cables and equipment under fault conditions.

The PSE will continue to supply power until the Powered Device (PD) is removed, or the PD stops drawing its minimum current.
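The detection step described above can be illustrated with a much simplified sketch. Real 802.3af detection uses defined probe voltages and acceptance windows and is more involved than this; the function, thresholds and tolerance here are assumptions chosen for illustration only.

```python
# Much simplified sketch of PSE signature detection: probe the line at two
# low voltages, measure the currents, and accept the device only if the
# slope resistance looks like the 25 kOhm PD signature described above.
# The 15% tolerance is illustrative, not taken from the standard.

def detect_pd(v1, i1, v2, i2, nominal_ohms=25_000, tolerance=0.15):
    """Return True if the measured slope resistance resembles a valid
    25 kOhm PD signature (within the given fractional tolerance)."""
    if i2 == i1:
        return False  # no change in current: open circuit, nothing attached
    r_signature = (v2 - v1) / (i2 - i1)
    return abs(r_signature - nominal_ohms) / nominal_ohms <= tolerance

# A valid PD: a 25 kOhm resistor obeys Ohm's law at both probe points
print(detect_pd(4.0, 4.0 / 25_000, 8.0, 8.0 / 25_000))   # True
# A short circuit (very low resistance) must be rejected
print(detect_pd(4.0, 0.04, 8.0, 0.08))                   # False
```

Only after a valid signature is seen does a real PSE apply the full 48 V, still current-limited as the text describes.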

Powered Device, PD

The powered device must be able to operate within the confines of the Power over Ethernet specification. It receives a nominal 48 volts from the cable, and must be able to accept power from either option, i.e. over either the spare or the data pairs. Additionally, the 48 volts supplied is too high for the electronics being powered, and accordingly an isolated DC-DC converter is used to transform the 48 V to a lower voltage. This also enables 1500 V isolation to be provided for safety reasons.

PoE Summary

Power over Ethernet, PoE, defined under IEEE 802.3af together with the enhancements of IEEE 802.3at, provides a particularly valuable means of remotely supplying and controlling equipment that may be connected to an Ethernet network or system. PoE enables units to be powered in situations where it may not be convenient to run a new power supply to the unit. While there are limitations to the power that can be supplied, the intention is that only small units are likely to need powering in this way. Larger units can be powered using more conventional means.


Wireless technologies

Wireless technology in a variety of forms is an area of electronics that is developing and growing particularly fast. Wireless LAN (WLAN) technology including Wi-Fi (IEEE 802.11), Bluetooth, Ultra-Wideband (UWB), WiMAX, Zigbee and more are all growing and finding their own market areas. As a result wireless technology is being more widely used and found in many new applications.

Wireless technologies and standards

Wireless technology is being widely used for many applications. Accordingly there is a growing number of different wireless technologies and standards that are being used. Multipage summaries of many of the more widely used wireless standards are given below.

Bluetooth
DECT
HomeRF SWAP (now obsolete)
IEEE 802.11 Wi-Fi standard
IEEE 802.15.4
IEEE 802.20 MBWA standard
IEEE 802.22 WRAN standard
NFC Near Field Communications
RFID, Radio Frequency Identification technology
Short range device, SRD
Ultra Wideband Technology
WiMAX
Wireless USB - a technology utilising UWB transmissions
Zigbee standard
6LoWPAN

Wireless Analysis

Contributions from industry experts on various aspects of wireless technology, looking at technology trends, case studies and analysis of the current wireless technology situation.

The future of Bluetooth: where do we go now? - Robin Heydon, CSR [2010-06]
Real world NFC applications - Jerome Nadel, Sagem Wireless [2010-03]

Bluetooth technology tutorial

Bluetooth has now established itself in the market place, enabling a variety of devices to be connected together using wireless technology. Bluetooth technology has come into its own connecting remote headsets to mobile phones, but it is also used in a huge number of other applications as well.


In fact the development of Bluetooth technology has progressed so that it is now an integral part of many household items. Cell phones and many other devices use Bluetooth for short range connectivity. In this sort of application, Bluetooth has been a significant success.

Development of Bluetooth technology and Bluetooth SIG

The development of Bluetooth technology dates back to 1994, when Ericsson came up with the concept of using a wireless connection to link items such as an earphone or cordless headset to a mobile phone. The idea behind Bluetooth (it was not yet called Bluetooth) was developed further as the possibilities of interconnecting a variety of other peripherals such as computers, printers, phones and more were realised. Using this technology, quick and easy connections between electronic devices would be possible.

It was decided that in order to enable the development of Bluetooth technology to move forward and be accepted, it needed to be opened up as an industry standard. Accordingly, in Feb 1998, five companies (Ericsson, Nokia, IBM, Toshiba and Intel) formed the Bluetooth SIG - Special Interest Group.

The Bluetooth SIG grew very rapidly, and by the end of 1998 it had welcomed its 400th member.

The group also worked rapidly on the development of the technology. Three months after the formation of the special interest group, the name Bluetooth was adopted.

The first full release of the standard followed in July 1999.

The Bluetooth SIG performs a number of functions:

Publish and update the Bluetooth specifications
Administer the qualification programme
Protect the Bluetooth trademarks
Evangelise Bluetooth technology

The Bluetooth SIG global headquarters is in Kirkland, Washington, USA and there are local offices in Hong Kong, Beijing, China; Seoul, Korea; Minato-Ku, Tokyo; Taiwan; and Malmo, Sweden.

The name Bluetooth

The name of the Bluetooth standard originates from the Danish king Harald Blåtand who was king of Denmark between 940 and 981 AD. His name translates as "Blue Tooth" and this was used as his nickname. A brave warrior, his main achievement was that of uniting Denmark under the banner of Christianity, and then uniting it with Norway that he had conquered. The Bluetooth standard was named after him because Bluetooth endeavours to unite personal computing and telecommunications devices.


Bluetooth standard releases

There have been many releases of the Bluetooth standard as updates have been made to ensure it keeps pace with the current technology and the needs of the users.

Version    Release date  Key features of version
1.0        July 1999     Draft version of the Bluetooth standard
1.0a       July 1999     First published version of the Bluetooth standard
1.0b       Dec 1999      Small updates to cure minor problems and issues
1.0b + CE  Nov 2000      Critical Errata added to issue 1.0b of the standard
1.1        Feb 2001      First useable release; used by the IEEE for their standard IEEE 802.15.1-2002
1.2        Nov 2003      Added new facilities including adaptive frequency hopping and eSCO for improved voice performance; released by the IEEE as IEEE 802.15.1-2005, the last version issued by the IEEE
2.0 + EDR  Nov 2004      Added the Enhanced Data Rate (EDR), increasing throughput to a 3.0 Mbps raw data rate
2.1        July 2007     Added secure simple pairing to improve security
3.0 + HS   Apr 2009      Added IEEE 802.11 as a high speed channel to increase the data rate to 10+ Mbps
4.0        Dec 2009      Updated to include Bluetooth Low Energy, formerly known as Wibree

Bluetooth basics

The first release of Bluetooth was for a wireless data system that could carry data at speeds up to 721 kbps with the addition of up to three voice channels. The aim of Bluetooth technology was to enable users to replace cables between devices such as printers, fax machines, desktop computers and peripherals, and a host of other digital devices. One major use was wirelessly connecting headsets to mobile phones, allowing people to use small headsets rather than having to speak directly into the phone.

Another application of Bluetooth technology was to provide a connection between an ad hoc wireless network and existing wired data networks.

The technology was intended to be placed in a low cost module that could be easily incorporated into electronics devices of all sorts. Bluetooth uses the licence free Industrial, Scientific and Medical (ISM)


frequency band for its radio signals and enables communications to be established between devices up to a maximum distance of around 100 metres, although much shorter distances are more normal.

Summary

Bluetooth is well established, but despite this further enhancements are being introduced, giving faster data transfer rates and greater flexibility. In addition, efforts have been made to improve interoperation so that devices from different manufacturers can talk to each other more easily.

Bluetooth radio interface, modulation, & channels

The Bluetooth radio interface has been designed to enable communications to be made reliably over short distances. The radio interface is relatively straightforward, although it has many attractive features. It supports a large number of channels and different power levels, as well as using reliable forms of modulation.

Bluetooth radio interface basics

Running in the 2.4 GHz ISM band, Bluetooth employs frequency hopping techniques with the carrier modulated using Gaussian Frequency Shift Keying (GFSK).

With many other users on the ISM band, from microwave ovens to Wi-Fi, the hopping carrier enables interference to be avoided by Bluetooth devices. A Bluetooth transmission only remains on a given frequency for a short time, and if any interference is present the data will be re-sent later when the signal has changed to a different channel which is likely to be clear of other interfering signals. The standard uses a hopping rate of 1600 hops per second, and the system hops over all the available frequencies using a pre-determined pseudo-random hop sequence based upon the Bluetooth address of the master node in the network.

During the development of the Bluetooth standard it was decided to adopt a frequency hopping system rather than a direct sequence spread spectrum approach, because frequency hopping is able to operate over a greater dynamic range. If direct sequence spread spectrum techniques were used, transmitters close to the receiver could block a required transmission that is further away and weaker.

Bluetooth channels and frequencies

Bluetooth frequencies are all located within the 2.4 GHz ISM band. The ISM band typically extends from 2 400 MHz to 2 483.5 MHz (i.e. 2.4000 - 2.4835 GHz). The Bluetooth channels are spaced 1 MHz apart, starting at 2 402 MHz and finishing at 2 480 MHz, giving 79 channels in all. The channel frequency can be calculated as (2402 + k) MHz, where k runs from 0 to 78.
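The channel plan can be generated directly from the figures above, and the guard bands fall out of the same arithmetic:

```python
# The basic rate Bluetooth channel plan: 79 channels, 1 MHz apart,
# from 2402 MHz to 2480 MHz within the 2400 - 2483.5 MHz ISM band.

channels_mhz = [2402 + k for k in range(79)]

print(len(channels_mhz))                    # 79 channels
print(channels_mhz[0], channels_mhz[-1])    # 2402 2480

# Guard bands at the edges of the ISM band:
print(channels_mhz[0] - 2400)               # 2 MHz at the bottom
print(2483.5 - channels_mhz[-1])            # 3.5 MHz at the top
```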


This arrangement of Bluetooth channels gives a guard band of 2 MHz at the bottom end of the band and 3.5 MHz at the top.

In some countries the ISM band allocation does not allow the full range of frequencies to be used. In France, Japan and Spain, the hop sequence has to be restricted to only 23 frequencies because the ISM band allocation is smaller.

There are also some frequency accuracy requirements for Bluetooth transmissions. The transmitted initial centre frequency must be within ±75 kHz of the nominal channel centre frequency. The initial frequency accuracy is defined as the frequency accuracy before any information is transmitted, and as such any frequency drift requirement is not included.

In order to enable effective communications to take place in an environment where a number of devices may receive the signal, each device has its own identifier. This is provided by having a 48 bit hard wired address identity giving a total of 2.815 x 10^14 unique identifiers.
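The address space figure quoted above is a quick calculation to verify:

```python
# Quick check of the Bluetooth device address space: a 48 bit address
# gives 2^48 unique identifiers, the 2.815 x 10^14 figure quoted above.

unique_addresses = 2 ** 48
print(unique_addresses)             # 281474976710656
print(f"{unique_addresses:.3e}")    # 2.815e+14
```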

Bluetooth modulation

The format originally chosen for Bluetooth in version 1 was Gaussian frequency shift keying, GFSK. However, with the requirement for higher data rates, two forms of phase shift keying were introduced for Bluetooth 2 to provide the Enhanced Data Rate, EDR capability.

Gaussian frequency shift keying:   When GFSK is used for the chosen form of Bluetooth modulation, the frequency of the carrier is shifted to carry the modulation. A binary one is represented by a positive frequency deviation and a binary zero is represented by a negative frequency deviation. The modulated signal is then filtered using a filter with a Gaussian response curve to ensure the sidebands do not extend too far either side of the main carrier. By doing this the Bluetooth modulation achieves a bandwidth of 1 MHz with stringent filter requirements to prevent interference on other channels. For correct operation the level of BT is set to 0.5 and the modulation index must be between 0.28 and 0.35.
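The GFSK figures quoted above imply a range of peak frequency deviations, since for FSK the peak deviation is the modulation index times the symbol rate divided by two. A minimal sketch, assuming the basic rate 1 Msym/s symbol rate (the function name is illustrative):

```python
# Peak GFSK deviation implied by the Bluetooth modulation index limits.
# deviation = h * symbol_rate / 2, with h between 0.28 and 0.35.

SYMBOL_RATE = 1_000_000  # 1 Msym/s for basic rate Bluetooth

def peak_deviation(mod_index: float, symbol_rate: float = SYMBOL_RATE) -> float:
    """Peak frequency deviation in Hz for a given GFSK modulation index."""
    return mod_index * symbol_rate / 2

print(f"min deviation: {peak_deviation(0.28) / 1e3:.0f} kHz")  # 140 kHz
print(f"max deviation: {peak_deviation(0.35) / 1e3:.0f} kHz")  # 175 kHz
```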

Phase shift keying:   Phase shift keying is the form of Bluetooth modulation used to enable the higher data rates achievable with Bluetooth 2 EDR (Enhanced Data Rate). Two forms of PSK are used:

π/4 DQPSK:   This is a form of phase shift keying known as π/4 differential quadrature phase shift keying. It enables the raw data rate of 2 Mbps to be achieved.

8DPSK:   This form of Bluetooth modulation is eight point or 8-ary phase shift keying. It is used when link conditions are good and it allows raw data rates of up to 3 Mbps to be achieved.

The enhanced data rate capability for Bluetooth modulation is implemented as an additional capability so that the system remains backwards compatible.

The Bluetooth modulation schemes and the general format do not lend themselves to carrying higher data rates. For Bluetooth 3, the higher data rates are not achieved by changing the format of the Bluetooth modulation, but by working cooperatively with an IEEE 802.11g physical layer. In this way data rates of up to around 25 Mbps can be achieved.


Bluetooth power levels

The transmitter powers for Bluetooth are quite low, although there are three different classes of output dependent upon the anticipated use and the range required.

Power Class 1 is designed for long range communications up to about 100 m, and has a maximum output power of 20 dBm.

Next is Power Class 2, which is used for what are termed ordinary range devices with a range up to about 10 m, and has a maximum output power of 4 dBm.

Finally there is Power Class 3 for short range devices. Bluetooth Class 3 supports communication only over distances of about 10 cm, and it has a maximum output power of 0 dBm.

Power control is mandatory for Bluetooth Class 1, but optional for the others, although its use is advisable to conserve battery power. The appropriate power level can be chosen according to the RSSI, Received Signal Strength Indicator, reading.

Class  Maximum power (dBm)  Power control capability
1      20                   Mandatory
2      4                    Optional
3      0                    Optional

Summary of Bluetooth Power Classes
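The class limits convert to linear power using the standard relationship P(mW) = 10^(dBm/10):

```python
# Converting the Bluetooth power class limits from dBm to milliwatts.

def dbm_to_mw(dbm: float) -> float:
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (dbm / 10)

for cls, dbm in ((1, 20), (2, 4), (3, 0)):
    print(f"Class {cls}: {dbm} dBm = {dbm_to_mw(dbm):.1f} mW")
```

Class 1 therefore corresponds to 100 mW, Class 2 to about 2.5 mW and Class 3 to 1 mW.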

Bluetooth power level choice and RSSI

In order to conserve battery power, the lowest transmitted power level consistent with a reliable link should be chosen. Assuming that power level control is available, the power level is chosen according to an RSSI reading. If the RSSI indication falls below a given level, the Bluetooth power level can be increased to bring the RSSI level up to an accepted level.

The value of any RSSI figure is arbitrary as it is simply used to provide an indication of when the signal level and hence the transmitted power level needs to be increased or decreased.
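The control behaviour described above can be sketched as a simple loop. The thresholds, step size and floor here are invented for the example and are not taken from the specification; only the 20 dBm Class 1 ceiling comes from the text.

```python
# Illustrative (not standard-mandated) RSSI-driven power control:
# step power up when the RSSI falls below a lower threshold, down when
# it rises above an upper one, clamped to the Class 1 limit of 20 dBm.
# Thresholds, step size and floor are assumptions for this sketch.

MAX_POWER_DBM = 20     # Class 1 ceiling (from the power class table)
MIN_POWER_DBM = -30    # illustrative floor
STEP_DB = 2            # illustrative step size

def adjust_power(tx_power_dbm, rssi_dbm, low=-60, high=-40):
    """Return a new transmit power given the current RSSI reading."""
    if rssi_dbm < low:
        tx_power_dbm += STEP_DB     # link weak: increase power
    elif rssi_dbm > high:
        tx_power_dbm -= STEP_DB     # link strong: save battery
    return max(MIN_POWER_DBM, min(MAX_POWER_DBM, tx_power_dbm))

print(adjust_power(10, -70))   # weak link, power stepped up
print(adjust_power(10, -30))   # strong link, power stepped down
print(adjust_power(20, -70))   # already at the class maximum
```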

The Bluetooth specification does define a maximum bit error rate of 0.1%, and this equates to a minimum requirement for the receiver sensitivity of -70 dBm. These sensitivity figures then lead to the distances achievable for the different power levels, although today's receivers are generally more sensitive than those used to baseline the specification at its launch.


Bluetooth data file transfer, links & codec

The Bluetooth radio interface provides a rugged physical layer, without any unnecessary complications, to carry the required data from one device to the next. With many devices being physically small and not having large battery capacities, the radio interface has been designed to keep power consumption low while still providing the required capabilities.

Bluetooth data transfer can be achieved using a variety of different data packet types and different forms of link - asynchronous links and synchronous links.

These different Bluetooth data file transfer formats provide flexibility, but they are invisible to the user who sees a connection being made and Bluetooth data being transferred.

Bluetooth links

There are two main types of Bluetooth link that are available and can be set up:

ACL - Asynchronous Connectionless communications Link
SCO - Synchronous Connection Orientated communications link

The choice of the form of Bluetooth link used is dependent upon the type of Bluetooth data transfer that is required.

Bluetooth ACL

The ACL or Asynchronous Connectionless communications Link is possibly the most widely used form of Bluetooth link. The ACL Bluetooth link is used for carrying framed data - i.e. data submitted from an application to the logical link control and adaptation protocol (L2CAP) channel. The channel may support either unidirectional or bidirectional Bluetooth data transfer.

There are a variety of different ACL formats that can be used - most of them incorporate forward error correction, FEC, as well as header error checking to detect and correct errors that may occur over the radio link.

The asynchronous Bluetooth link provides connections for most applications within Bluetooth. Data transfers like this are normally supported by profiles which allow the data to be incorporated into frames and transferred to the other end of the Bluetooth link, where it is extracted from the frames and passed to the relevant application.


The ACL enables data to be transferred via Bluetooth 1 at speeds up to the maximum rate of 723.2 kbps. This occurs when operating in an asymmetric mode, which is commonly used because for most applications far more data is transferred in one direction than the other. When a symmetrical mode is needed, with data transferred at the same rate in both directions, the data transfer rate falls to 433.9 kbps. The synchronous links support two bi-directional connections at a rate of 64 kbps. These data rates are adequate for audio and most file transfers.

When using Bluetooth 2 enhanced data rate, data rates of 2.1 Mbps may be achieved. Asynchronous links can also be granted a Quality of Service, QoS, by setting the appropriate channel parameters.
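The commonly quoted basic rate ACL figures (723.2 kbps asymmetric, 433.9 kbps symmetric) can be reproduced from the packet structure, assuming DH5 packets (a 339 byte payload occupying 5 of the 625 µs slots) with a single-slot return packet in the asymmetric case and DH5 packets in both directions in the symmetric case:

```python
# Reproducing the basic rate ACL throughput figures from the slot
# structure, assuming DH5 packets (339 byte payload, 5 slots each).

SLOT_S = 625e-6            # Bluetooth slot duration: 1 / 1600 s
DH5_PAYLOAD_BITS = 339 * 8

# Asymmetric: 5 slot DH5 forward + 1 slot packet back -> 6 slot cycle
asym_bps = DH5_PAYLOAD_BITS / (6 * SLOT_S)

# Symmetric: DH5 in each direction -> each direction gets 5 of 10 slots
sym_bps = DH5_PAYLOAD_BITS / (10 * SLOT_S)

print(f"asymmetric: {asym_bps / 1000:.1f} kbps")  # 723.2 kbps
print(f"symmetric:  {sym_bps / 1000:.1f} kbps")   # 433.9 kbps
```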

Bluetooth SCO

The SCO or Synchronous Connection Orientated communications link is used where data is to be streamed rather than transferred in a framed format.

The SCO can operate alongside the ACL channels, and in fact needs one ACL to configure the SCOs.

A Bluetooth master node can support up to three simultaneous SCO channels, and these can be split between up to three slave nodes.

The idea of the SCO is to ensure that audio data can be streamed without suffering delays waiting for frames or packet slots to become available. The SCO communications link is assigned guaranteed time slots so that the data will be transported at the required time with a known maximum latency.

A further form of link known as eSCO, or Extended SCO, was introduced with version 1.2 of the Bluetooth standard. Originally no acknowledgement was sent, whereas eSCO gives greater reliability to the Bluetooth link by sending an acknowledgement and allowing a limited number of re-transmissions if data is corrupted. In view of the latency requirements, re-transmissions are only allowed until the next guaranteed time slot, otherwise new data would be delayed.

Bluetooth codec

Within the core specification there are a number of Bluetooth codec types included. These Bluetooth codecs are relatively basic and are not used for high quality audio - stereo music applications would instead use the ACL.

Any Bluetooth codec is intended to provide telephone standard audio, limiting the audio bandwidth to around 4 kHz.

The codecs are often based on CVSD, Continuously Variable Slope Delta modulation, and their advantage is that they provide a minimum latency solution so there are no issues with synchronisation. As a result they may often be used with applications such as video phones, etc.


Bluetooth profiles

In order to enable Bluetooth devices to communicate properly with each other, Bluetooth profiles are used. A Bluetooth profile is effectively a wireless interface specification for communication between Bluetooth devices.

In order to be able to operate, a Bluetooth device must be compatible with a subset of the profiles available sufficient to enable it to utilise the desired Bluetooth services.

Bluetooth profile basics

A Bluetooth profile resides on top of the Bluetooth Core Specification and possibly above any additional protocols that may be used. While a particular Bluetooth profile may use certain features of the core specification, specific versions of profiles are rarely linked to specific versions of the core specification. In this way upgrades are achieved more easily.

The way a particular Bluetooth device uses Bluetooth technology depends on its Bluetooth profile capabilities. The Bluetooth profiles provide standards which manufacturers follow to allow devices to use Bluetooth in the intended manner.

At a minimum, each Bluetooth profile specification contains details of the following topics:

Dependencies on other profiles
Suggested user interface formats
Specific parts of the Bluetooth protocol stack used by the profile - to perform its task, each profile uses particular options and parameters at each layer of the stack; this may include an outline of the required service record, if appropriate

Bluetooth profiles

Overviews of the different Bluetooth profiles are given below:

Advanced Audio Distribution Profile (A2DP)

This Bluetooth profile defines how stereo quality audio can be streamed from a media source to a sink.

The profile defines two roles for an audio device, source and sink:

1. Source (SRC):   A device is the SRC when it acts as the source of a digital audio stream that is delivered to the SNK of the piconet.
2. Sink (SNK):   A device is the SNK when it acts as the sink of a digital audio stream delivered from the SRC on the same piconet.

Audio/Video Remote Control Profile (AVRCP)

This Bluetooth profile provides a standard interface to control audio visual devices including televisions, stereo audio equipment and the like. It allows a single remote control (or other device) to control all the equipment to which a particular individual has access.

The AVRCP Bluetooth profile defines two roles:

1. Controller:   The controller is normally the remote control device.
2. Target:   As the name suggests, this is the device that is being controlled or targeted and whose characteristics are being altered.

This Bluetooth profile specifies the scope of the AV/C Digital Interface Command Set that is to be used. It adopts the AV/C device model and command format for control messages, and those messages are transported by the Audio/Video Control Transport Protocol (AVCTP).

When using AVRCP, the controller detects the user action, i.e. button presses, etc, and translates it into an A/V control signal which is transmitted to the remote Bluetooth enabled device. In this way, the functions available from a conventional infrared remote control can be realised over Bluetooth, thereby providing a more robust form of communications.

Basic Imaging Profile (BIP)

This Bluetooth profile details how an imaging device can be remotely controlled, how it may print, and how it can transfer images to a storage device. This Bluetooth profile is naturally intended for cameras and other devices that can take pictures, now including mobile phones.

The Basic Imaging Profile, BIP defines two roles:

1. Imaging Initiator:   This is the device that initiates the feature.
2. Imaging Responder:   As the name implies, this is the device that responds to the initiator.

The overall profile may be considered to have the following actions:

1. Image Push:   This function allows the sending of an image from a device controlled by the user.
2. Image Pull:   This function allows browsing and retrieval of images from a remote device, i.e. pulling images from a remote source.
3. Advanced Image Printing:   This provides for the printing of images using a number of advanced options.
4. Automatic Archive:   This function enables the automatic backup of all new images from a target.
5. Remote Camera:   This function allows the remote control of a camera by an initiator.
6. Remote Display:   This allows the Imaging Initiator to push images to another device for display.

Basic Printing Profile (BPP)

This Bluetooth profile allows devices to send text, e-mails, v-cards, images or other information to printers based on print jobs.

As would be expected, the Basic Printing Profile, BPP defines two roles:

1. Printer:   This is the device that manipulates the data to be printed. Typically this would be a physical printer.
2. Sender:   This is a device, possibly a mobile phone or other form of user equipment, UE, that needs to print some data, but without wanting the full overhead of a print driver.

The advantage of using the Basic Printing Profile, BPP rather than the HCRP is that it does not need any printer-specific drivers. This makes it particularly applicable for use with embedded devices such as mobile phones and digital cameras.

Common ISDN Access Profile (CIP)

This Bluetooth profile details the way in which ISDN traffic can be transferred via a Bluetooth wireless connection. It is typically used in Bluetooth enabled office equipment that is ISDN enabled.

The CIP defines two roles within the Bluetooth profile:

1. Access Point (AP):   This node is connected to the external network and acts as an endpoint for it. It handles all the interworking associated with the external ISDN.

2. ISDN Client (IC):   This is the remote node accessing the Access Point via the Bluetooth wireless network or link.

Cordless Telephony Profile (CTP)

This Bluetooth profile defines how a cordless phone can be implemented using Bluetooth. This Bluetooth profile is aimed at use for either a dedicated cordless phone or a mobile phone acting as a cordless phone when close to a CTP enabled base station. The aim of this Bluetooth profile was to allow a mobile phone to use a Bluetooth CTP gateway connected to a landline when within the home or office, and then use the mobile phone network when elsewhere.

Two roles are defined within this Bluetooth profile:

1. Terminal (TL):   This is the user equipment, and may be a cordless phone or a mobile phone, etc.

2. Gateway (GW):   The gateway acts as the access point for the terminal to the landline or other network.

Dial-Up Network Profile (DUN)

This Bluetooth profile details a standard for accessing the Internet and other dial-up services via a Bluetooth system. This may be required when accessing the Internet from a laptop using a mobile phone, PDA, etc. as a wireless dial-up modem.

This user Bluetooth profile defines two roles for the Bluetooth nodes:

1. Gateway (GW):   This is the Bluetooth node or device that provides the access to the public network and ultimately the Internet.

2. Data Terminal (DT):   This is the remote node that interfaces with the Gateway via the Bluetooth wireless link.

Fax Profile (FAX)

This Bluetooth profile defines how a FAX gateway device can be used. This Bluetooth profile may be needed when a personal computer uses a mobile phone as a FAX gateway to send a FAX.

There are two roles for this Bluetooth profile

1. Gateway (GW):   This is the Bluetooth enabled device that provides facsimile services.

2. Data Terminal (DT):   This device connects via the Bluetooth wireless link to be able to send its FAX.

File Transfer Profile (FTP)

This Bluetooth profile details the way in which folders and files on a server can be browsed by a client device. This Bluetooth profile may be used for transferring files wirelessly between two PCs or laptops, or browsing and retrieving files on a server.

Two roles are defined for this Bluetooth profile:

1. Client:   This is the device that initiates the operation and pushes or pulls the files to or from the server.

2. Server:   This is the target device and it is remote from the device that pushes or pulls the files.

General Audio/Video Distribution Profile (GAVDP)

This Bluetooth profile provides the basis for the A2DP and VDP Bluetooth profiles. These are used for systems designed for distributing video and audio streams using Bluetooth technology. This may be used in a variety of scenarios, e.g. with a set of wireless stereo headphones and a music player - the music player sends messages to the headphones to establish a connection or adjust the stream of music, or vice versa.

Two roles are defined within this Bluetooth profile:

1. Initiator (INT):   This device initiates the signalling procedure.

2. Acceptor (ACP):   This device responds to the incoming requests from the initiator.

Generic Object Exchange Profile (GOEP)

This Bluetooth profile is used to transfer an object from one device to another. One example may be in the exchange of vCards between devices such as mobile phones, PDAs, etc.

Two roles are defined within this Bluetooth profile:

1. Server:   For this Bluetooth profile, this is the device that provides an object exchange server to and from which data objects can be pushed or pulled.

2. Client:   This is the device that can push or pull data to and from the server.

Hands-Free Profile (HFP)

The HFP Bluetooth profile details the way in which a gateway device may be used to place and receive calls for a hands-free device. This profile adds considerable additional functionality over the original Headset Profile, HSP, allowing remote control, etc. The Bluetooth profile defines two roles:

1. Audio Gateway (AG):   The audio gateway is normally the mobile phone or car kit and it provides connectivity to the source of the voice data.

2. Hands-Free Unit (HF):   This is the device which acts as the remote audio input and output mechanism for the Audio Gateway. It also provides some remote control means.

The Hands-Free Bluetooth profile uses a CVSD codec for voice transmission across the Bluetooth link and it also defines a number of voice control features including volume.

Hard Copy Cable Replacement Profile (HCRP)

This Bluetooth profile defines how driver-based printing is achieved over a Bluetooth link. As might be expected, it is used for wireless links for printing and scanning.

Two roles are defined within this Bluetooth profile:

1. Server:   This is the server device that offers the HCRP service - typically it is a printer.

2. Client:   The client is a device containing a print driver, from which the user wishes to print - typically this may be a laptop or other computer wishing to print documents.

Headset Profile (HSP)

The Bluetooth Headset Profile details how a Bluetooth enabled headset communicates with a Bluetooth enabled device. As might be anticipated, the Bluetooth Headset Profile was aimed at defining how Bluetooth headsets may connect to a mobile phone or installed car kit. It defines two roles:

1. Audio Gateway:   The device that is the gateway for the audio, both input and output. This would typically be a mobile phone, car kit, or a PC.

2. Headset:   The Headset is defined within the Bluetooth Headset Profile as the device acting as the remote audio input and output connected to the gateway via the Bluetooth link.

Human Interface Device Profile (HID)

This Bluetooth profile details the protocols, procedures and features to be used by Bluetooth keyboards, mice, pointing and gaming devices and remote monitoring devices.

Two roles are defined within this Bluetooth profile:

1. Human Interface Device (HID):   The device providing the human data input and output to and from the host. Typical examples may be a keyboard or a mouse.

2. Host:   The device using the services of a Human Interface Device. This may typically be a computer or laptop, etc

Intercom Profile (ICP)

This profile details the way in which two Bluetooth enabled mobile phones in the same network can communicate directly with each other, i.e. acting as an intercom. As the intercom usage is completely symmetrical, there are no specific roles defined for this Bluetooth profile. However when using the Intercom Profile, the devices at either end of the link will be denoted as a Terminal (TL).

Object Push Profile (OPP)

This Bluetooth profile details the roles of a push server and a push client. These roles need to interoperate with the server and client device roles defined within the GOEP Bluetooth profile.

The OPP defines two roles:

1. Push Server:   This is the device within this Bluetooth profile that provides an object exchange server

2. Push Client:   This device pushes and pulls objects to and from the Push Server and initiates the actions.

Personal Area Networking Profile (PAN)

This Bluetooth profile details the way in which two or more Bluetooth enabled devices can form an ad-hoc network. It also details how the same mechanism can be used to access a remote network through a network access point.

The PAN is somewhat more complicated than other Bluetooth profiles and requires the definition of three roles:

1. Network Access Point (NAP) and NAP Service:   In view of the similarities with Ethernet networks, the NAP can be considered as being equivalent to an Ethernet bridge to support network services.

2. Group Ad-hoc Network (GN) and GN Service:   A Bluetooth device that supports the GN service is able to forward Ethernet packets to each of the Bluetooth devices that are connected within the PAN.

3. PAN User (PANU) and PANU Service:   As the name indicates the PANU is the Bluetooth device that uses either the NAP or the GN service

Service Discovery Application Profile (SDAP)

The SDAP is a Bluetooth profile that describes how an application should use the Service Discovery Protocol, SDP, to discover services on a remote device. SDAP can adopt a variety of approaches to managing the device discovery via Inquiry and Inquiry Scan and service discovery via SDP. The ideas contained in the SDAP specification augment the basic specifications provided in GAP, SDP, and the basic processes of device discovery.

The SDAP defines two roles as given below:

1. Local Device (LocDev):   This is the Bluetooth device that initiates the service discovery procedure.

2. Remote Device (RemDev):   There may be one or more RemDevs and these are any device that participates in the service discovery process by responding to the service inquiries it may receive from a LocDev.

Serial Port Profile (SPP)

This Bluetooth profile details the way in which virtual serial ports may be set up and how two Bluetooth enabled devices may connect.

This Bluetooth profile defines two roles for communication to proceed:

1. Device A:   The Device A is recognised as the device that initiates the formation of a connection to another device. It may also be thought of as the Initiator.

2. Device B:   This may be thought of as the Acceptor and it is the device that responds to an Initiation process.

Synchronization Profile (SYNC)

This Bluetooth profile is used in conjunction with GOEP to enable synchronization of calendar and address information (personal information manager (PIM) items) between Bluetooth enabled devices.

There are two main roles within this Bluetooth profile:

1. IrMC Server:   The device that takes on the role of object exchange server will become the IrMC Server. Typically this device will be the mobile phone, PDA, etc.

2. IrMC Client:   This device is typically a PC, and it is the device that contains the sync engine and pulls and pushes the PIM data to and from the IrMC server.

Video Distribution Profile (VDP)

This Bluetooth profile details how a Bluetooth enabled device is able to stream video over a Bluetooth link. It could be used in a variety of scenarios such as streaming video data from a storage area such as on a PC to a mobile player, or from a video camera to a television, etc.

There are two roles defined within this Bluetooth profile:

1. Source (SRC):   As the name suggests the SRC is the origination point of the streamed video on the piconet.

2. Sink (SNK):   Within this Bluetooth profile, the SNK is the destination for the digital video stream on the same piconet as the SRC.

Summary

There are over twenty different Bluetooth profiles, each having their own function. Naturally some of these Bluetooth profiles are used more than others, but each one may be used in a variety of different places and applications.

Bluetooth network connection & pairing

Bluetooth networks often operate as a single connection, or a Bluetooth network may involve many devices. Bluetooth also allows for a scheme known as Bluetooth pairing where devices can quickly associate.

The Bluetooth specification defines a variety of forms of Bluetooth network connection that may be set up. In this way Bluetooth networking is a particularly flexible form of wireless system for use in a variety of short range applications.

Bluetooth network connection basics

There are a variety of ways in which Bluetooth networks can be set up. In essence Bluetooth networks adopt what is termed a piconet topology. In this form of network, one device acts as the master and it is able to talk to a maximum of seven slave nodes or devices.

The limit of seven slave nodes in a Bluetooth network arises from the three bit address that is used. This number relates to the number of active nodes in the Bluetooth network at any given time.
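The arithmetic behind this limit can be sketched in a couple of lines (Python is used purely for illustration; the assumption that address 0 is reserved for broadcast follows the classic Bluetooth baseband design):

```python
# Why a 3-bit active member address limits a piconet to seven slaves.
# Assumption for illustration: address 0 is reserved for broadcast,
# as in the classic Bluetooth baseband design.

AM_ADDR_BITS = 3  # active member address width

def max_active_slaves(addr_bits: int = AM_ADDR_BITS) -> int:
    """Total addresses minus the reserved broadcast address."""
    return 2 ** addr_bits - 1

print(max_active_slaves())  # 7
```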

Bluetooth scatternets

Bluetooth network connections are also able to support scatternets, although because of timing and memory constraints this form of Bluetooth network has rarely been implemented. For a Bluetooth scatternet, a slave node or slave device is able to share its time between two different piconets. This enables large star networks to be built up.

Bluetooth connection basics

The way in which Bluetooth devices make connections is more complicated than that associated with many other types of wireless device. The reason for this is the frequency hopping nature of the devices. While the frequency hopping reduces the effects of interference, it makes connecting devices a little more complicated.

Bluetooth is a system in which connections are made between a master and a slave. These connections are maintained until they are broken, either by deliberately disconnecting the two, or by the radio link becoming so poor that communications cannot be maintained - typically this occurs as the devices go out of range of each other.

Within the connection process, there are four types of Bluetooth connection channel:

Basic piconet channel:   This Bluetooth connection channel is used only when all 79 channels are used within the hop-set - it is now rarely used as the adapted piconet channel is more often used because it provides greater flexibility.

Adapted piconet channel:   This Bluetooth connection channel is used more widely and allows the system to use a reduced hop-set, i.e. between 20 and 79 channels. Piconet channels are the only channels that can be used to transfer user data.

Inquiry channel:   This Bluetooth connection channel is used when a master device finds a slave device or devices within range.

Paging channel:   This Bluetooth connection channel is used when a master and a slave device make a physical connection.
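The adapted piconet channel described above can be illustrated with a toy frequency-hopping sketch. This is not the real Bluetooth hop-selection kernel (which is derived from the master's clock and address); it simply shows the idea of confining hops to a reduced set of at least 20 "good" channels:

```python
import random

# Toy sketch of an adapted hop-set: channels marked "bad" (e.g. due to
# Wi-Fi interference) are excluded, leaving a reduced set of at least 20
# of the 79 1 MHz channels (2402 + k MHz, k = 0..78).

ALL_CHANNELS = list(range(79))

def adapted_hop(good_channels, rng):
    """Pick the next hop from the reduced (adapted) set."""
    if len(good_channels) < 20:
        raise ValueError("adapted hop-set must keep at least 20 channels")
    return rng.choice(good_channels)

rng = random.Random(1)
good = [c for c in ALL_CHANNELS if not 10 <= c <= 30]  # pretend 10..30 are jammed
hops = [adapted_hop(good, rng) for _ in range(5)]
assert all(h not in range(10, 31) for h in hops)
```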

Bluetooth pairing

In order that devices can connect easily and quickly, a scheme known as Bluetooth pairing may be used. Once Bluetooth pairing has occurred two devices may communicate with each other.

Bluetooth pairing is generally initiated manually by a device user. The Bluetooth link for the device is made visible to other devices. They may then be paired.

The Bluetooth pairing process is typically triggered automatically the first time a device receives a connection request from a device with which it is not yet paired. In order that Bluetooth pairing may occur, a password has to be exchanged between the two devices. This password or "Passkey" as it is more correctly termed is a code shared by both Bluetooth devices. It is used to ensure that both users have agreed to pair with each other.

The process of Bluetooth pairing is summarised below:

Bluetooth device looks for other Bluetooth devices in range:   To be found by other Bluetooth devices, the first device, Device 1, must be set to discoverable mode - this will allow other Bluetooth devices in the vicinity to detect its presence and attempt to establish a connection.

Two Bluetooth devices find each other:   When the two devices, Device 1 and Device 2, find each other it is possible to detect what they are. Normally the discoverable device will indicate what type of device it is - cellphone, headset, etc., along with its Bluetooth device name. The Bluetooth device name can be allocated by the user, or it will be the one allocated during manufacture.

Prompt for Passkey:   Often the default passkey is set to "0000", but it is advisable to use something else as hackers will assume most people will not change this.

However, on more sophisticated devices - smartphones and computers - both users must agree on a code, which must obviously be the same for both.

Device 1 sends passkey:   The initiating device, Device 1 sends the passkey that has been entered to Device 2.

Device 2 sends passkey:   The passkeys are compared and if they are both the same, a trusted pair is formed, Bluetooth pairing is established.

Communication is established:   Once the Bluetooth pairing has occurred, data can be exchanged between the devices.

Once the Bluetooth pairing has been established it is remembered by the devices, which can then connect to each other without user intervention.
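The pairing sequence above can be condensed into a toy model. Real pairing derives a cryptographic link key (legacy E21/E22 or, later, ECDH under Secure Simple Pairing); this sketch only models the passkey comparison and the remembered trusted pair, and the device names and passkeys are invented for illustration:

```python
# Toy model of the legacy passkey pairing flow described in the text.

paired = set()  # remembered trusted pairs

def pair(dev_a, dev_b, passkey_a, passkey_b):
    """Pairing succeeds only if both users entered the same passkey."""
    if passkey_a != passkey_b:
        return False
    paired.add(frozenset((dev_a, dev_b)))
    return True

def can_connect(dev_a, dev_b):
    """Previously paired devices reconnect without user intervention."""
    return frozenset((dev_a, dev_b)) in paired

assert pair("phone", "headset", "4671", "4671")
assert can_connect("headset", "phone")      # order does not matter
assert not pair("phone", "laptop", "0000", "1234")
```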

If necessary, the Bluetooth pairing relationship may be removed by the user at a later time if required.

Bluetooth Security

Bluetooth security issues are an important factor with any Bluetooth device or system. As with any device these days that provides connectivity, security is an important issue.

There are a number of Bluetooth security measures that can be incorporated into Bluetooth devices to prevent various security threats that can be posed.

One of the main requirements for Bluetooth is that it should be easy to connect to other devices. However Bluetooth security needs to be balanced against the ease of use and the anticipated Bluetooth security threats.

Much work has been undertaken regarding Bluetooth security, however it remains high on the agenda so that users can use their Bluetooth devices with ease while keeping the security threats to a minimum.

Bluetooth security basics

Bluetooth security is of paramount importance as devices are susceptible to a variety of wireless and networking attacks including denial of service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation.

Bluetooth security must also address more specific Bluetooth related attacks that target known vulnerabilities in Bluetooth implementations and specifications. These may include attacks against improperly secured Bluetooth implementations which can provide attackers with unauthorized access.

Many users may not believe there is an issue with Bluetooth security, but hackers may be able to gain access to information from phone lists to more sensitive information that others may hold on Bluetooth enabled phones and other devices.

There are three basic means of providing Bluetooth security:

Authentication:   In this process the identities of the communicating devices are verified. User authentication is not part of the main Bluetooth security elements of the specification.

Confidentiality:   This process prevents information being eavesdropped by ensuring that only authorised devices can access and view the data.

Authorisation:   This process prevents access by ensuring that a device is authorised to use a service before enabling it to do so.

Security measures provided by the Bluetooth specifications

The various versions of the specifications detail four Bluetooth security modes. Each Bluetooth device must operate in one of four modes:

Bluetooth Security Mode 1:   This mode is non-secure. The authentication and encryption functionality is bypassed and the device is susceptible to hacking. Devices operating in Bluetooth Security Mode 1 do not employ any mechanisms to prevent other Bluetooth-enabled devices from establishing connections. While it is easy to make connections, security is an issue. It may be applicable to short range devices operating in an area where other devices may not be present. Security Mode 1 is only supported up to Bluetooth 2.0 + EDR and not beyond.

Bluetooth Security Mode 2:   For this Bluetooth security mode, a centralised security manager controls access to specific services and devices. The Bluetooth security manager maintains policies for access control and interfaces with other protocols and device users.

It is possible to apply varying trust levels and policies to restrict access for applications with different security requirements, even when they operate in parallel. It is possible to grant access to some services without providing access to other services. The concept of authorisation is introduced in Bluetooth security mode 2. Using this it is possible to determine if a specific device is allowed to have access to a specific service.

Although authentication and encryption mechanisms are applicable to Bluetooth Security Mode 2, they are implemented at the LMP layer (below L2CAP).

All Bluetooth devices can support Bluetooth Security Mode 2; however, v2.1 + EDR devices support it only for backward compatibility with earlier devices.

Bluetooth Security Mode 3:   In Bluetooth Security Mode 3, the Bluetooth device initiates security procedures before any physical link is established. In this mode, authentication and encryption are used for all connections to and from the device.

The authentication and encryption processes use a separate secret link key that is shared by paired devices, once the pairing has been established.

Bluetooth Security Mode 3 is only supported in devices that conform to Bluetooth 2.0 + EDR or earlier.

Bluetooth Security Mode 4:   Bluetooth Security Mode 4 was introduced at Bluetooth v2.1 + EDR.

In Bluetooth Security Mode 4 the security procedures are initiated after link setup. Secure Simple Pairing uses what are termed Elliptic Curve Diffie Hellman (ECDH) techniques for key exchange and link key generation.

The device authentication and encryption algorithms are the same as those defined in Bluetooth v2.0 + EDR.

The security requirements for services protected by Security Mode 4 are as follows:

o Authenticated link key required
o Unauthenticated link key required
o No security required

Whether or not a link key is authenticated depends on the Secure Simple Pairing association model used. Bluetooth Security Mode 4 is mandatory for communication between v2.1 + EDR devices.
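As a reading aid, the four modes can be condensed into a small lookup table. The field values below simply summarise the text above; they are not an API of any real Bluetooth stack:

```python
# Summary table of the four Bluetooth security modes described above.

SECURITY_MODES = {
    1: {"enforcement": "none",
        "note": "no authentication/encryption; only up to v2.0 + EDR"},
    2: {"enforcement": "service level, after link setup",
        "note": "centralised security manager, per-service authorisation"},
    3: {"enforcement": "link level, before link setup",
        "note": "only up to v2.0 + EDR"},
    4: {"enforcement": "service level, after link setup",
        "note": "Secure Simple Pairing (ECDH); mandatory between v2.1 + EDR devices"},
}

# Only Mode 3 enforces security before the physical link is established.
link_level = [m for m, p in SECURITY_MODES.items()
              if p["enforcement"].startswith("link level")]
assert link_level == [3]
```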

Common Bluetooth security issues

There are a number of ways in which Bluetooth security can be penetrated, often because there is little security in place. The major forms of Bluetooth security problems fall into the following categories:

Bluejacking:   Bluejacking is often not a major malicious security problem, although there can be issues with it, especially as it enables someone to get their data onto another person's phone, etc. Bluejacking involves the sending of a vCard message via Bluetooth to other Bluetooth users within the locality - typically 10 metres. The aim is that the recipient will not realise what the message is and allow it into their address book. Thereafter messages might be automatically opened because they have come from a supposedly known contact

Bluebugging:   This is more of an issue. This form of Bluetooth security issue allows hackers to remotely access a phone and use its features. This may include placing calls and sending text messages while the owner does not realise that the phone has been taken over.

Car Whispering:   This involves the use of software that allows hackers to send and receive audio to and from a Bluetooth enabled car stereo system

In order to protect against these and other forms of vulnerability, the manufacturers of Bluetooth enabled devices are upgrading the security to ensure that these Bluetooth security lapses do not arise with their products.


Bluetooth 2 - Enhanced Data Rate, EDR

Bluetooth EDR, or Bluetooth 2, is an upgrade of the original Bluetooth specification. It is based on the original Bluetooth standard, which is well established as a wireless technology. It has found a very significant number of applications, particularly in areas such as connecting mobile or cell phones to hands-free headsets.

One of the disadvantages of the original version of Bluetooth in some applications was that the data rate was not sufficiently high, especially when compared to other wireless technologies such as 802.11. In November 2004, a new version of Bluetooth, known as Bluetooth 2 was ratified. This not only gives an enhanced data rate but also offers other improvements as well.

Of all the features included in Bluetooth 2, it is the enhanced data rate (EDR) facility that is giving rise to the most comment. In the new specification the maximum data rate is able to reach 3 Mbps, a significant increase on what was available in the previous Bluetooth specifications.

Why is Bluetooth 2 needed?

As proved particularly by the computer industry, there is always a need for increased data rates and ever increasing capacity. With this in mind, and given that the previous version of Bluetooth, version 1.2, allowed a maximum data rate of 1 Mbps, reflected in a real throughput of 723 kbps, the new specification should allow many new applications to be run. In turn this will open up the market for Bluetooth even more and allow further application areas to be addressed.

While speed on its own opens up more opportunities, the strategy behind Bluetooth 2 with its enhanced data rate is more deep rooted. When the Bluetooth 2 specification was released there were no applications that were in immediate need of the new enhanced data rate. For example even a high quality stereo audio stream required a maximum of only 345 kbps.

The reason is that, as Bluetooth use increases and the number of applications increases, users will need to run several links concurrently. Not only may Bluetooth need to be used for streaming audio, but other applications such as running computer peripherals will increase. The reason becomes clearer when looking at real situations where interference is present. Typically it is found that a good margin is required to allow for re-sends and other data. Under Bluetooth 1.2, high quality stereo audio can be sent on its own within the available bandwidth and with sufficient margin. However, when other applications are added there is not sufficient margin to allow the system to operate satisfactorily. Bluetooth 2 solves this problem and provides sufficient bandwidth for a variety of links to be operated simultaneously, while still allowing for a sufficient bandwidth margin within the system.

There are other advantages to running Bluetooth 2. One of the major elements is in terms of power consumption. Although the transmitter, receiver and logic need to be able to handle data at a higher speed, which normally requires a higher current consumption, this is more than outweighed by the fact that they need only remain fully active for about a third of the time. This brings significant advantages in terms of battery life, a feature that is of particular importance in many Bluetooth applications.

Compatibility is a major requirement when any system is upgraded. The same is true for Bluetooth, and this has been a major requirement and concern when developing the Bluetooth 2 standard. The new standard is completely backward compatible and allows networks to contain a mixture of EDR (enhanced data rate) devices as well as the standard devices. A key element of this is that the new modulation schemes that have been incorporated into Bluetooth 2 are compatible in their nature with the standard rate specification. In this way the new standard will be able to operate with any mixture of devices from whatever standard.

How it works

One of the main reasons why Bluetooth 2 is able to support a much higher data throughput is that it utilises a different modulation scheme for the payload data. However this is implemented in a manner in which compatibility with previous revisions of the Bluetooth standard is still retained.

Bluetooth data is transmitted as packets that are made up from a standard format. This consists of four elements:

1. Access Code:   Used by the receiving device to recognise the incoming transmission.

2. Header:   Describes the packet type and its length.

3. Payload:   The data that is required to be carried.

4. Inter-Packet Guard Band:   Required between transmissions to ensure that transmissions from two sources do not collide, and to enable the receiver to re-tune.
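The four-part packet layout just described can be sketched as a simple record. The field sizes used below are illustrative placeholders, not the exact baseband bit counts:

```python
from dataclasses import dataclass

# Sketch of the Bluetooth packet elements; sizes are placeholders only.

@dataclass
class BtPacket:
    access_code: bytes  # lets the receiver recognise the transmission
    header: bytes       # packet type and length
    payload: bytes      # the user data being carried
    # the inter-packet guard band is air time, not bytes on the wire

pkt = BtPacket(access_code=b"\xaa" * 9, header=b"\x01" * 7, payload=b"data")
assert pkt.payload == b"data"
```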

In previous versions of the Bluetooth standard, the three transmitted elements, i.e. the Access Code, Header and Payload, were sent using Gaussian Frequency Shift Keying (GFSK), where the carrier is shifted by +/- 160 kHz to indicate a one or a zero, and in this way one bit is encoded per symbol.

The Bluetooth 2.0 specification uses a variety of forms of modulation. GFSK is still used for transmitting the Access Code and Header and in this way compatibility is maintained. However other forms of modulation can be used for the Payload. There are two additional forms of modulation that have been introduced. One of these is mandatory, while the other is optional.

A further small change is the addition of a small guard band between the Header and the payload. In addition to this a short synchronisation word is inserted at the beginning of the payload.

Mandatory modulation format

The first of the new modulation formats, which must be included on any Bluetooth 2 device, gives a two fold improvement in the data rate and thereby allows a maximum speed of 2 Mbps. This is achieved by using pi/4 differential quaternary phase shift keying (pi/4 DQPSK). This form of modulation is significantly different to the GFSK that was used on previous Bluetooth standards in that the new standard uses a form of phase modulation, whereas the previous ones used frequency modulation.

Using quaternary phase shift modulation means that there are four possible phase positions for each symbol. Accordingly this means that two bits can be encoded per symbol, and this provides the two fold data increase over the frequency shift keying used for the previous versions of Bluetooth.


Higher speed modulation

To enable the full three fold increase in data rate to be achieved a further form of modulation is used. Eight phase differential phase shift keying (8DPSK) enables eight positions to be defined with 45 degrees between each of them. By using this form of modulation eight positions are possible and three bits can be encoded per symbol. This enables the data rate of 3 Mbps to be achieved.

As the separation between the different phase positions is much smaller than it was with the QPSK used to provide the two fold increase in speed, the noise immunity has been reduced in favour of the increased speed. Accordingly this optional form of modulation is only used when a link is sufficiently robust.
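Since Bluetooth keeps its symbol rate at 1 Msymbol/s across the schemes, the gross data rate of each modulation is simply its bits-per-symbol figure expressed in Mbps, which is where the 1, 2 and 3 Mbps numbers come from:

```python
# Gross data rate = symbol rate x bits per symbol.

SYMBOL_RATE_MSPS = 1  # 1 Msymbol/s, unchanged across the schemes

BITS_PER_SYMBOL = {
    "GFSK": 1,        # basic rate: frequency shift keying
    "pi/4-DQPSK": 2,  # EDR mandatory: 4 phase positions per symbol
    "8DPSK": 3,       # EDR optional: 8 positions, 45 degrees apart
}

rates_mbps = {mod: bits * SYMBOL_RATE_MSPS
              for mod, bits in BITS_PER_SYMBOL.items()}
assert rates_mbps == {"GFSK": 1, "pi/4-DQPSK": 2, "8DPSK": 3}
```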

Packet formats

The Bluetooth 2 specification defines ten new packet formats for use with the higher data rate modulation schemes, five for each of the enhanced data rate schemes. Three of these are for the 1, 3 and 5 slot asynchronous packets used for transferring data. The remaining two are used for 3 and 5 slot extended Synchronous Connection Orientated (eSCO) packets. These use bandwidth that is normally reserved for voice communications.

The new format for these packets does not incorporate FEC. If this is required then the system switches back automatically to the standard rate packets. However many of the links are over a very short range where the signal level is high and the link quality good.

The packet type must be identified so that the receiver can decode the packets correctly, knowing which type of modulation is being used. An identifier is therefore included in the header, which is sent using GFSK. The packet header used in the previous version of Bluetooth allowed only 4 bits for this purpose. This gave sufficient capability for the original system, but there was insufficient space for the additional information that needed to be sent for Bluetooth 2.

It was not possible to change the header format, because backward compatibility would have been lost. Instead, different link modes are defined. When two Bluetooth 2 / EDR devices communicate, the header messages are interpreted in a slightly different way, indicating the Bluetooth 2 / EDR modes. In this way compatibility is retained while the required information can still be carried.

Summary

Bluetooth 2 / EDR is a significant improvement to Bluetooth. As Bluetooth has become more widely accepted and used, the introduction of EDR will enable it to build on its position in the market place.


Bluetooth Low Energy / Wibree

Bluetooth low energy, or Wibree as it was originally called, is an industry standard for enabling wireless connectivity between small devices. The technology was developed by the Nokia Research Centre, and it is hoped that Bluetooth Low Energy / Wibree will become an industry-wide wireless standard.

The Bluetooth Low Energy / Wibree standard offers a number of advantages:

- Ultra low peak, average & idle mode power consumption
- Ultra low cost & small size for accessories & human interface devices (HID)
- Minimal cost & size addition to mobile phones & PCs
- Global, intuitive & secure multi-vendor interoperability

It is claimed that the new Wibree technology complements other local wireless connectivity technologies while consuming only a fraction of the power. This enables it to be used in small electronics items such as button cell powered devices where power is particularly limited. As a result it is anticipated that Wibree will find a wide variety of uses in applications including watches, wireless keyboards, toys and sports sensors.

Although Nokia took the lead in the development of Wibree, other companies are now involved, as the aim is to make it an open standard. The members of the group defining the specification now include Broadcom Corporation, CSR, Epson and Nordic Semiconductor, all of which have licensed the Wibree technology for commercial chip implementation. In addition, the sports equipment manufacturer Suunto and Taiyo Yuden are contributing to the interoperability specification in their respective areas of expertise.

Bluetooth Low Energy / Wibree specification

Although some experiments and demonstrations have been undertaken at the Nokia Research Centre to check the viability of the Wibree standard, comparatively little has been firmed up yet. The idea is that all partners in the group will be able to contribute to the Wibree standard, giving it a wide degree of industry acceptance.

Some basic elements of Wibree have been defined. Wibree will operate in the 2.4 GHz ISM band and will have a physical layer bit rate of 1 Mbps. Even at its very low power level it is able to support communication over distances of up to five or ten metres.

There are some further outline requirements for Wibree that have already been laid down, although the way in which they will be implemented has not been decided. The standard will not use frequency hopping techniques like Bluetooth. The reason is that this technique, while very useful in reducing interference, uses more power, and one of the chief aims for Wibree is that it should be a very low power technology.

The Wibree standard will also be designed to enable dual-mode implementations to reuse some Bluetooth RF technology. This will help the standard complement Bluetooth and, it is hoped, provide some early acceptance.


Facilities will be added to Wibree to provide the equivalent of the Bluetooth Service Discovery Protocol, which prevents inappropriate data being sent to connected devices, e.g. audio to a printer, or keyboard data to a headphone. Again, details of this are yet to be defined.

One fact that has been stated is that the transmitted data packets will be dynamic in size, in contrast with Bluetooth packets, which have a defined fixed length. By transmitting only as much data as is needed, power will be saved.
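The power saving from dynamic packet sizes can be illustrated with a toy model. The packet size and per-byte energy cost below are invented figures chosen purely for illustration, not values from the Wibree or Bluetooth specifications; the point is that fixed-length packets force short payloads to be padded, and the radio spends energy on every padded byte.

```python
# Hypothetical figures for illustration only.
FIXED_PACKET_BYTES = 27    # assumed fixed payload size of a packet
ENERGY_PER_BYTE_UJ = 0.5   # assumed radio cost in microjoules per byte

def tx_energy(payload_bytes, fixed_length=True):
    """Energy to transmit one payload, with or without fixed-size padding."""
    sent = FIXED_PACKET_BYTES if fixed_length else payload_bytes
    return sent * ENERGY_PER_BYTE_UJ

short_payload = 4  # e.g. a single sensor reading

fixed = tx_energy(short_payload, fixed_length=True)     # pads to 27 bytes
dynamic = tx_energy(short_payload, fixed_length=False)  # sends only 4 bytes
```

For a device that mostly sends tiny readings from a button cell, this kind of per-packet saving is exactly where an ultra-low-power design pays off.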

Bluetooth Low Energy / Wibree summary

As the first announcement about Wibree was only made in October 2006, and few details have been defined, it will take some time for the work to be completed. Despite this, it is anticipated that the first releases of the specification will be available by the end of the second quarter of 2007, with devices late in the year. It will be interesting to see how Wibree is accepted by the market and whether it takes off in the way Nokia hopes.
