Transcript of Report #1

  • R. V. College of Engineering

    R.V. COLLEGE OF ENGINEERING, BANGALORE - 560059

    (Autonomous Institution Affiliated to VTU, Belgaum)

    ENHANCING EFFICIENCY & MOBILITY IN WORKSPACES USING DESKTOP VIRTUALISATION

    SELF STUDY REPORT

    Submitted by

    AISHWARYA ANAND T    1RV12CS007    SECTION A
    ANUSHA V             1RV12CS014    SECTION A
    DEEPIKA BALAJI       1RV12CS027    SECTION A

    Praveena T, Assistant Professor
    Sneha M, Assistant Professor
    Sundar Kumar, Assistant Professor
    Department of Computer Science and Engineering, R.V. College of Engineering

    Submitted to

    COMPUTER SCIENCE AND ENGINEERING DEPARTMENT

    Department of Computer Science and Engineering Page 1


    R.V. COLLEGE OF ENGINEERING, BANGALORE 560059

    (Autonomous Institution Affiliated to VTU, Belgaum)

    DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

    CERTIFICATE

    Certified that the Self Study work titled "Enhancing Efficiency and Mobility in Workspaces Using Desktop Virtualization" is carried out by AISHWARYA ANAND (1RV12CS007), ANUSHA V (1RV12CS014), and DEEPIKA BALAJI (1RV12CS027), who are bona fide students of R.V. College of Engineering, Bangalore, in partial fulfillment for the award of the degree of Bachelor of Engineering in Computer Science and Engineering from Visvesvaraya Technological University, Belgaum, during the year 2014-2015. The Self Study report has been approved as it satisfies the academic requirements in respect of Self Study work prescribed by the institution for the said degree.

    Marks awarded (out of 100): Evaluation 1 =

    Signature of Staff In-charge          Signature of Head of the Department


    Contents

    1. Introduction to Desktop Virtualization ........................... 4
    2. Literature Survey ................................................ 5
    3. Research Challenges .............................................. 7
    4. Design Process ................................................... 8
        4.1 Identification of problem ................................... 8
        4.2 Application of Engineering Principles ....................... 8
        4.3 Design Challenges ........................................... 9
        4.4 Design Requirements ......................................... 10
        4.5 Design Process .............................................. 10
        4.6 Use of appropriate tools, skills and techniques ............. 13
    5. Future work for next phase ....................................... 13
    6. References ....................................................... 14


    1. Introduction to Desktop Virtualization

    Desktop virtualization is the separation of a desktop, consisting of an operating system, applications and user data, from the underlying endpoint. The endpoint is the computing device used to access the desktop. Desktop virtualization can be subdivided into two types: client side and server side.

    With server-side desktop virtualization, the end-user applications are executed remotely, on a central server, and streamed to the endpoint via a remote display protocol or other presentation and access virtualization technology. Controlling a desktop from a remote location is also called presentation or access virtualization, because the screen images are streamed from one location to another. With client-side desktop virtualization, the applications are executed at the endpoint, i.e. the user's location, and presented locally on the user's computer.
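The screen-streaming idea behind a remote display protocol can be illustrated with a toy sketch. This is a hypothetical illustration of frame differencing only, not any real protocol such as RDP or PCoIP: the server keeps the previously sent frame and ships only the cells that changed, and the client patches its local copy.

```python
# Toy illustration of frame differencing in remote display protocols:
# only changed screen cells cross the network, not the whole screen.

def diff_frame(prev, curr):
    """Return a list of (row, col, char) updates for changed cells."""
    updates = []
    for r, (prev_row, curr_row) in enumerate(zip(prev, curr)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if p != q:
                updates.append((r, c, q))
    return updates

def apply_updates(frame, updates):
    """Client side: patch the local copy of the screen."""
    rows = [list(row) for row in frame]
    for r, c, ch in updates:
        rows[r][c] = ch
    return ["".join(row) for row in rows]

# Server renders two frames; only the delta crosses the "network".
frame1 = ["hello", "world"]
frame2 = ["hellp", "world"]
delta = diff_frame(frame1, frame2)
client_view = apply_updates(frame1, delta)
```

Only the one-character delta crosses the wire here, which is why remote display traffic can stay small when the screen is mostly static.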

    Trends in virtualization are always changing. As the technology matures and advances are made, there are more options open to administrators and more cost saving virtualization projects that can be implemented.

    Virtualization of desktops requires significant planning, especially if you are moving from a decentralized environment to a centralized service. The desktop services reside on the server rather than on the desktop itself, making the desktop a thin client. Identifying who will have which service is a key component of planning. The key benefits of desktop virtualization are:

    1. Desktop environments are isolated
    2. Data is secure in the data center
    3. All applications work in a virtual machine
    4. Images are managed centrally
    5. Desktops are always on and always connected
    6. Desktop life cycle is increased
    7. Users have access to their desktops from anywhere
    8. A user's desktop looks the same no matter where they access it from
    9. Group security access rights can be applied

    Figure 1 : Classification of Desktop Virtualization


    2. Literature Survey

    The future of technology always has its roots in the past, and the beginning of virtualization goes back much further in time than many would expect. Already in 1974, Robert P. Goldberg wrote that "virtual machines have arrived". Although this is, in fact, our current reality, it seems to have been the reality of the last 40 years, given the slow adoption of virtual machines. Virtualization, then known as time-sharing, was first developed by IBM in the 1960s. IBM's engineering team in Cambridge, Massachusetts, came up with a novel approach that gave each user a virtual machine (VM), with an operating system that did not have to be complex because it only had to support one user. Virtual machines were identical copies of the underlying hardware.

    In 1964, the IBM Cambridge Scientific Center began development of CP-40, an operating system for the System/360 mainframe that created virtual machines within the mainframe. This operating system was the first step toward creating virtual machines on these mainframes. CP-40 was soon replaced by CP-67 in 1965. CP-67 had a new component called the "Blaauw Box", designed by Gerrit Blaauw, which was the first practical implementation of paged virtual memory. The benefits of virtualization were impressive. Virtualization made it possible to provide test platforms for software testing and development, so that all of that activity could be done much more efficiently. Furthermore, it enabled an interactive environment where a test application could be run and, when the application failed, the virtual memory showed exactly what was happening.

    In the 1970s Intel developed the first microprocessor, which in 1978 was named the x86 processor, leading to the development of personal computers (PCs). In the 1980s, x86 computers lacked the horsepower to run multiple operating systems, but were so inexpensive that organizations would deploy dedicated hardware for each application. As chip performance increased dramatically, organizations that typically ran one application per server, to avoid the risk of vulnerabilities when running multiple applications on one server, began to experience underutilization of their servers. This underutilization is one of the main reasons VMware invented virtualization for the x86 platform. In 1999 VMware released the first virtualization software for x86 computers, making it possible to run multiple operating systems and multiple applications on the same computer at the same time, increasing the utilization and flexibility of hardware.

    Connor (2004) addresses the race by VMware and others to keep up with processor technology. As evidence, Connor (2004) quotes Michael Mullaney, vice president of marketing for VMware: "If you look at processor trends, both Intel and AMD have shifted from increasing the clock speed of their processors to increasing the number of processor cores on a single chip... Going forward, you are going to find out that even a two-CPU server actually has four processors" (p. 01). Connor (2004) also says, "Virtualization is moving from a niche market into the mainstream, especially since Microsoft entered the market" (p. 01). This makes it clear that this is a project topic worth pursuing: the technology is just now permeating the mainstream, and its future is leading in a direction of more and more implementation. This article is very relevant to the project topic because of the statements above.

    Hassell (2007) summarizes his article in its abstract: "Virtualization, the move from real, physical hardware to virtual hardware, is being seen as one of the 'next big things' in IT. There are more virtualization options for IT departments than ever before, including XenSource Inc.'s and Virtual Iron Software Inc.'s open-source applications, Microsoft Corp.'s Virtual Server and VMware Inc.'s venerable products. But if you are new to this party, you might not know how to get started." (p. 01). Hassell (2007) gives a broad overview of the topic and a launching point to explore virtualization, some of the options that are available, and the phases of implementation. Hassell (2007) is experienced in this field and is therefore a valued source for this subject matter. He breaks the process down step by step: "The first step in virtualization is determining if you have the right type of infrastructure to support it. Look for a lot of machines doing similar tasks, and make sure you have more than 10 of them. For 10 or fewer, the payoff is questionable." (p. 01).

    Bele and Desai (2012) look at virtualization from a slightly broader perspective. Beyond the hypervisor platform alone, they look at how virtual hosts tie into the rest of the environment, such as the SAN, with a look at storage virtualization. Bele and Desai (2012) detail how all of these components are related. Any technology that really permeates the market is typically able to perform multiple functions, but is usually part of a larger technology.

    Norall (2007) gives this insight on the potential and advantages of storage virtualization. As with other aspects of virtualization, Norall (2007) makes the point that there are clear advantages to virtualizing your storage. Norall (2007) shows that virtualization as a technology encompasses more than just server virtualization, which is the most common modern use of the overall technology. However, Norall (2007) shows that there are many aspects of virtualization and that it can be applied to many different technologies within the computer science spectrum.

    Peggy (2007) makes the connection between virtualization and cost savings as follows: "There are three key areas of potential benefits: space, time and money. Fewer systems deployed results in lower capital costs. If companies can put 10 applications on 10 machines onto one or two machines, not only is that less money spent on computers, but the amount of money spent on electricity to power and cool these boxes goes down, freeing up precious data center space as well." (p. 01). Peggy (2007) discusses lowering costs through virtualization, which is one of the main advantages of virtualization.
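Peggy's (2007) space, time and money argument can be made concrete with back-of-the-envelope arithmetic. All figures below (server count, wattage, tariff) are illustrative assumptions, not numbers from the article:

```python
# Rough consolidation savings: 10 one-application servers replaced by
# 2 virtualization hosts. All wattages and tariffs are assumptions.

def annual_power_cost(watts, rupees_per_kwh=7.0, hours=24 * 365):
    """Electricity cost per year for a machine drawing `watts` continuously."""
    return watts / 1000.0 * hours * rupees_per_kwh

servers_before, servers_after = 10, 2
watts_per_server = 400             # assumed draw including cooling overhead

before = servers_before * annual_power_cost(watts_per_server)
after = servers_after * annual_power_cost(watts_per_server)
power_saving = before - after      # energy cost avoided per year
```

On top of this recurring saving come the avoided capital cost of eight machines and the freed rack space Peggy (2007) mentions.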

    Kontzer (2010) details a history and projected future for the technology: "More than a decade after VMware introduced the first software that enabled x86 virtualization, the question facing most IT executives is no longer whether they plan to virtualize, but how far they plan to go... The long-term potential of virtualization speaks to an issue that transcends server spread, budget concerns and any other barriers that might get in the way of a virtualization investment: the exponential growth of data is causing IT environments to burst at the seams." (p. 01). Kontzer (2010) explores not only the origin of the adoption of virtualization, but also the growth of virtualization in the industry, and what factors influence a company to make the investment in virtualization technology.

    Kovar (2008) reveals the research that shows the current growth pattern of virtualization technology. Everyone is talking about virtualization and just how hot it is, but can that growth or interest be measured? Kovar (2008) shows that measurement and quantifies the growth factor.

    Dubie (2009) goes deep into the concept and value of desktop virtualization: "Successful server virtualization deployments lead many IT managers to believe desktop virtualization would provide the same benefits. While that is partly true, companies need to be aware of how the two technologies differ" (p. 01). Dubie (2009) rightly points out that desktop virtualization is a technology not to be taken on lightly. Many industry experts focus on server virtualization, but Dubie (2009) suggests that desktop virtualization is the next big movement for virtualization. Dubie (2009) delves deep into the concept of desktop virtualization and reveals it to be a valid path for the mainstream IT department to pursue, but with caution and understanding.

    Kennedy (2007) offers real solutions to the issue of desktop deployment: "Despite rumors to the contrary, virtualization is not just for the datacenter. From the most complex workstation applications to the simplest DLLs, virtualization is leaving an indelible mark on client computing. The idea behind application virtualization is to eliminate many of the support-draining configuration problems that plague conventional desktop implementations." (p. 01). Kennedy (2007) takes on desktop virtualization from a slightly different perspective and makes the point that desktop virtualization will continue to grow in the future.

    Hsieh (2008) lays out a strategy that is key to any virtualization project, as follows, "Virtualization has become one of the hottest information technologies in the past few years. Yet, despite the proclaimed cost savings and efficiency improvement, implementation of the virtualization involves high degree of uncertainty, and consequently a great possibility of failures. Experience from managing the VMware based project activities at several companies are reported as the examples to illustrate how to increase the chance of successfully implementing a virtualization project" (p.01). After all of the research, you need to be able to put it all together. Hsieh (2008) covers just that, a way to put it all together and implement a successful virtualization project.

    3. Research Challenges

    Nearly two-thirds (61%) of large midmarket (i.e., 500 to 999 employees) and enterprise (i.e., 1,000 or more employees) IT organizations have deployed desktop virtualization technology on systems running in production environments. However, due to integration complexities, performance concerns, and vendor support issues, virtualizing applications and their underlying databases has been a challenge for many organizations. These organizations experienced a number of problems when deploying VMware View:


    Provisioning new machines would sometimes fail; in some cases, most of the machines in a new pool would be created successfully, but a few would not. In addition, creating new pools of machines would often take a very long time, sometimes hours, with some of the machines failing to be created properly.

    VMware View allows two types of pools: persistent and non-persistent. The persistent machines allow settings and user data to be retained. One of the purported benefits of a persistent machine is the ability to recompose it (i.e., update the image on which it is based) without losing that data. In actual practice, profile information would often be lost after a recomposition.

    Logging into View Manager would periodically be slow, and sessions would time out at seemingly random intervals.

    Deleting machines was sometimes problematic. Some machines would delete within a few minutes; others would take days or require a restart of the backend software. In one case, a machine would not delete even after a restart of the software.
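Failures like these usually push deployment teams toward defensive automation. The sketch below is hypothetical (`create_vm` is a stand-in for whatever the real platform API exposes); it retries each machine a bounded number of times and reports stragglers instead of letting one bad clone stall the whole pool:

```python
# Defensive pool provisioning: bounded retries per machine, with a
# report of failures. `create_vm` is a hypothetical backend callable
# that returns True on success.

def provision_pool(names, create_vm, max_retries=3):
    """Try to create every VM in `names`; return (succeeded, failed) lists."""
    succeeded, failed = [], []
    for name in names:
        for _attempt in range(max_retries):
            if create_vm(name):
                succeeded.append(name)
                break
        else:
            failed.append(name)   # exhausted all retries
    return succeeded, failed

# Simulated backend: "vm2" fails on its first attempt, then succeeds.
calls = {}
def flaky_create(name):
    calls[name] = calls.get(name, 0) + 1
    return not (name == "vm2" and calls[name] == 1)

ok, bad = provision_pool(["vm1", "vm2", "vm3"], flaky_create)
```

In practice a timeout around each attempt and an escalation path for the `failed` list would complete the picture.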

    4. Design Process

    4.1 Identification of problem

    Virtualization emerged in the 1960s as a practice to optimize the use of expensive computing hardware. Desktop virtualization aims to reduce the complexity associated with deployment and maintenance of client devices, which ultimately helps IT organizations reduce desktop management costs. The use of thin clients also helps reduce costs at the client side. Similarly, today's higher education environment is marked by changing student expectations related to their increasingly diverse computing needs. Higher education institutions are striving to create student-friendly IT infrastructures that support students' desire to bring their own mobile devices to campus. Gone are the days when these colleges and universities asserted their control over students by regulating everything from computer lab hours to acceptable computing devices.

    In server-based desktop virtualization, users physically work on thin clients, but the operational environment they interact with is actually running on a remote server. The user input is transmitted across the network to the remote server, and the user interface, i.e. the virtual desktop, is presented back through the network to the end user. This project proposes a server-based desktop virtualization system in which mobile phones can be used as thin clients and a desktop computer acts as the server. All processing is done centrally on the server itself; nothing is executed or persistent at the client side. The proposed system will greatly improve the QoS experienced by users and efficiently allocate resources among all clients using appropriate algorithms.

    4.2 Application of Engineering Principles

    Virtual desktop computing, which shifts computing from local desktop PCs to a backend datacenter, offered some promise. It helped take pressure off aging PC inventories, since the thin clients used to access the datacenter require much less processing power; it simplified support; and it moved much of the heavy lifting of IT to the datacenter. However, virtual desktop infrastructure, or VDI, has also proven to have drawbacks: it can be complex, often requiring specialized skills that are beyond many IT departments, and it calls for a large up-front capital investment before deployment.

    Enter a solution that combines the benefits of VDI with the growing power and popularity of the cloud: desktop-as-a-service, or DaaS. The concept of DaaS builds on virtual desktop computing but takes it a step further, moving the backend virtual desktop infrastructure off campus and into the cloud, where an experienced service provider can handle the servers, software and support required. The result combines the benefits of VDI with further advantages for higher education: no big up-front capital investment, a simplification of computing in general, and little IT expertise required. In short, DaaS uses the power of the cloud to help institutions reduce IT complexity and cost, while remaining competitive and meeting the computing expectations of their clients.

    4.3 Design Challenges The following are some of the design challenges that were faced during the process of deployment. Also given below are the solutions for those challenges.

    Self-service: Standardize and automate IT service provisioning and management. Instead of waiting for manual provisioning, academic departments can deploy standardized, preconfigured IT services from a catalog that is available via a self-service Web portal. Departments get tailored services on demand, and central IT administrators reduce maintenance burdens.

    Instant provisioning: Quickly deploy a single virtual machine, multiple virtual machines, or a pool of virtual desktops to save time and money. With instant provisioning, IT teams can eliminate extensive funding-justification processes that can stall the development and deployment of new applications for months.

    Resource pooling: Abstract and pool IT resources into logical building blocks of storage, network and server units, effectively creating virtual datacenters. To increase efficiency, IT teams can dynamically allocate these resource containers to various applications on the basis of defined business rules and user demand.

    Efficient workload distribution: Move workloads seamlessly across private or shared college and department clouds and external clouds, according to academic-institution and application requirements. By freeing deployments from consisting of custom projects and applications that are tightly coupled to a particular cloud, the hybrid model can increase IT automation and reduce manual operations.

    Policy management: Map business rules, policies and defined service levels to IT resources after they are virtually pooled. Through intelligent policy management, IT organizations can create a highly efficient, self-managing infrastructure in which IT is available as a service.

    Application portability and interoperability: Support a shared management and security model, based on open standards, to prevent vendor lock-in and help enable application portability between internal datacenters and external clouds hosted by service-provider partners. The right cloud infrastructure enables developers to build robust modern applications that are portable, dynamic and optimized for elastically scalable deployment on hybrid clouds.

    IT control and freedom of choice: Enjoy all of the cost-saving and agility benefits of an institution-wide cloud deployment while preserving departmental IT control over policies, compliance and internal chargeback. Department IT teams can maintain autonomy and gain the flexibility to make the right implementation choices to meet their specific requirements.
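The dynamic allocation of pooled resources "on the basis of defined business rules and user demand" mentioned above can be illustrated with one classic policy, max-min fair share: every unsatisfied client repeatedly receives an equal slice of what remains, capped at its own demand. This is an illustrative sketch, not how any particular product schedules resources:

```python
# Max-min fair share: divide `capacity` among clients, giving equal
# slices each round but never more than a client actually demands.
# Illustrative policy sketch; real platforms add shares and limits.

def max_min_fair(capacity, demands):
    """Return per-client allocations for `demands` under `capacity`."""
    alloc = {c: 0.0 for c in demands}
    unsat = dict(demands)
    remaining = float(capacity)
    while unsat and remaining > 1e-9:
        share = remaining / len(unsat)     # equal slice this round
        for client, need in list(unsat.items()):
            give = min(share, need)
            alloc[client] += give
            remaining -= give
            if need - give <= 1e-9:
                del unsat[client]          # demand fully satisfied
            else:
                unsat[client] = need - give
    return alloc

# 10 units among three desktops demanding 2, 4, and 8 units.
result = max_min_fair(10, {"a": 2, "b": 4, "c": 8})
```

Here the small demands are fully met (2 and 4 units) and the biggest client is capped at its fair share of 4, so no client can gain by inflating its request.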

    4.4 Design Requirements

    Here are some of the strongest reasons why virtual desktops in general, and DaaS in particular, make sense for saving energy and increasing efficiency:

    Centralized management: With virtual desktops, IT departments can manage all of their computers from any location, eliminating the need to travel to different locations for desktop maintenance and end-user support. The result can be huge cost savings and much lower labor and support costs.

    Security: Because use of virtual desktops means that only images, not actual data, are stored on user devices, security is greatly enhanced. Since no data resides on the device, virtual desktops do away with security concerns if a device is lost or stolen.

    Far greater agility: Previously, new applications could only be installed when user computers in workspaces or labs could be brought down for the time needed to install new software. Virtual desktops do away with that constraint. Instead of updating computers in labs and classrooms one by one, or running a remote program to touch each device from the datacenter or IT center and upgrade it, virtual desktops can all be deployed at once, within minutes, to any device.

    Flexible resource sharing: The flexibility of virtual desktops extends to location-based printing amongst other things, allowing users to print from any device to a nearby printer, depending on how permissions are set.

    Reduce IT cost and complexity: Virtual computing in general, and DaaS specifically, extends the life of existing desktop computers, allowing schools to reuse aging PC inventories. Legacy computers that are no longer powerful enough to run the latest applications can still be used as virtual clients.
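Location-based printing, mentioned in the flexibility point above, boils down to a policy lookup: given where the session connects from and what the user is entitled to, pick the nearest permitted printer. The names and policy tables below are entirely hypothetical:

```python
# Location-based printing as a policy lookup: nearest printer at the
# session's location that the user is entitled to use. All names and
# tables here are hypothetical examples.

PRINTERS_BY_LOCATION = {
    "library": ["lib-laser-1", "lib-laser-2"],
    "lab-3":   ["lab3-inkjet"],
}
USER_ALLOWED = {
    "asha": {"lib-laser-1", "lab3-inkjet"},
}

def pick_printer(user, location):
    """First printer at `location` that `user` may use, else None."""
    for printer in PRINTERS_BY_LOCATION.get(location, []):
        if printer in USER_ALLOWED.get(user, set()):
            return printer
    return None

choice = pick_printer("asha", "library")
```

The same session follows the user: reconnecting from "lab-3" would resolve to that lab's printer instead, with no client-side reconfiguration.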

    4.5 Design Process

    Plan


    Desktop Virtualization Strategy Service: Understand the value that a desktop virtualization solution can bring to the organization and obtain the metrics you need to validate the investment. The service provides a holistic analysis that encompasses the business, technology strategies, operating systems, and applications, and uses detailed discovery to calculate potential savings in capital and operating expenses. With a clear snapshot of the current infrastructure, you can determine how desktop virtualization can cost-effectively protect sensitive information while delivering a user experience comparable to that of a standalone desktop environment.

    o Business case definition and solution strategy: Determine how a desktop virtualization solution can help to secure data, simplify operations, improve end user experience, and reduce costs. Prepare for an in-depth design activity by mapping business and technical requirements to your desired solution.

    o Operational readiness assessment: Undertake an operational readiness review, define operational requirements, and develop a transformation roadmap to an efficient virtual desktop architecture.

    o Collaboration and innovation architecture planning: Plan for a virtualized desktop solution by incorporating collaboration into your architecture.

    Desktop Virtualization Assessment Service: Determine the feasibility and confirm the TCO benefits of implementing desktop virtualization in your environment. The Desktop Virtualization Assessment Service (DVAS) provides a comprehensive study that includes data and risk assessment of existing desktop solutions and examines the feasibility of a virtualized desktop infrastructure. DVAS uses a minimally intrusive data collection tool to assess Microsoft Windows desktops with no user disruption. Data analysis and interviews provide a data-driven projection of the WAN, LAN, and storage infrastructure required to support a successful desktop virtualization deployment. An estimated bill of materials and TCO analysis are then completed, which show the benefits of implementing your desired desktop virtualization solution.
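The TCO analysis such an assessment produces can be reduced to simple per-seat arithmetic. Every figure below is an illustrative assumption, not vendor or report data:

```python
# Simplified per-seat TCO: capital cost plus recurring costs over the
# planning horizon. All constants are illustrative assumptions.

def tco_per_seat(hardware, yearly_support, yearly_power, years):
    """Total cost of one seat over `years`."""
    return hardware + years * (yearly_support + yearly_power)

YEARS = 5
fat_client = tco_per_seat(hardware=700, yearly_support=300,
                          yearly_power=60, years=YEARS)
thin_client = tco_per_seat(hardware=250, yearly_support=120,
                           yearly_power=15, years=YEARS)
thin_client += 200   # assumed backend VDI infrastructure, amortized per seat
saving = fat_client - thin_client
```

Under these assumed numbers a thin-client seat costs less over five years even after amortized backend infrastructure is added; a real DVAS-style study would replace each constant with measured data.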

    Desktop Virtualization Planning and Design Service: With your metrics-driven strategy in place, Desktop Virtualization Planning and Design Service provides the necessary resources and expertise to design a comprehensive solution that integrates the network, data center, desktop computing, rich media applications, and storage infrastructures. Without proper planning, design, and deployment, response times and user productivity can be affected.

    o Desktop virtualization planning: Evaluate your virtualization opportunities against your current desktop infrastructure, rich media applications, and management systems to help you better understand the benefits and costs of migrating to a virtual desktop solution.


    o Desktop virtualization design: Create a high-level design for your desktop virtualization solution and a plan for your physical-to-virtual migration process.

    o Desktop virtualization operations management: Assess processes and plan for operational changes and implementation.

    o Mobility services readiness assessment: Speed deployment of mobility services for your desktop virtualization solution. This service provides requirements development, WLAN architecture analysis, and recommendations on how to deploy a mobile virtual desktop solution.

    Build

    Desktop Virtualization Pre-Production Pilot Service: Validate specific technical requirements for your proposed desktop virtualization design prior to full production. The Desktop Virtualization Pre-Production Pilot Service is a fixed-price, fixed-scope pre-production deployment offering for up to 250 users that helps you understand and experiment with the unique characteristics of your desktop virtualization project.
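Sizing a pilot like this comes down to a consolidation ratio. The sketch below assumes a hypothetical figure of 60 light-workload desktops per host; real sizing depends on workload profiling:

```python
import math

# Rough host-count sizing for a virtual desktop pilot. The desktops-
# per-host ratio is an illustrative assumption, not a measured value.

def hosts_needed(users, desktops_per_host):
    """Number of physical hosts to back `users` virtual desktops."""
    return math.ceil(users / desktops_per_host)

pilot_users = 250
per_host = 60                 # assumed desktops per host (light workload)
n_hosts = hosts_needed(pilot_users, per_host)
n_hosts_ha = n_hosts + 1      # one spare host for failover headroom
```

ceil(250 / 60) gives 5 hosts, plus one spare for failover headroom.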

    Desktop Virtualization Implementation Service: Smoothly implement your desktop virtualization solution, including creating an implementation plan and migrating users.

    Manage

    Desktop Virtualization Optimization Service: Review best practices for evolving your environment to support data center growth, improve productivity and IT operational processes, accelerate change, and develop a future deployment roadmap, including architecture recommendations for desktop virtualization deployments.

    o Define and align your operational and technical objectives with industry best practices to meet current and future business operational requirements.

    o Identify areas for platform improvement by analyzing performance and capacity patterns, peak utilization trends, and underutilized assets.

    o Improve operational maturity, manageability, and delivery of desktop services through periodic reviews of the architecture and data center operations.

    o Enhance your desktop virtualization operational environment with ongoing expertise, support, and training.

    Solution Support for Desktop Virtualization: Rapidly resolve operational issues with solution support that provides a single point of contact.


    4.6 Use of appropriate tools, skills and techniques

    The following is a list of the tools that will be used in the next phases:

    VMware vSphere: vSphere is the industry-leading virtualization platform for building cloud infrastructures. It enables users to run business-critical applications with confidence and respond quickly to business needs. vSphere accelerates the shift to cloud computing for existing data centers and underpins compatible public cloud offerings, forming the foundation for the industry's best hybrid cloud model.

    VMware Horizon 6 with View: Horizon 6 delivers hosted virtual desktops and applications to end users through a single platform. These desktop and application services, including RDS-hosted applications, packaged applications with VMware ThinApp, software-as-a-service (SaaS) applications, and even virtualized applications from Citrix, can all be accessed from one unified workspace across devices, locations, media, and connections. Leveraging closed-loop management and optimized for the software-defined data center, Horizon helps IT control, manage, and protect the Windows resources that end users want, at the speed they expect and with the efficiency that business demands. Horizon 6 also provides the ability to manage both virtual and physical desktop images using VMware Mirage, which allows you to manage persistent, full-clone desktops. Horizon 6 allows users to access desktops and applications via VMware Workspace Portal, which also gives IT a central place to entitle and deliver Windows applications, desktops, SaaS applications, ThinApp packaged applications, and XenApp applications to users.

    5. Future work for next phase

    The concept of desktop virtualization in general makes sense for higher education. It removes institutions from the desktop computer supply-and-support business, making it much easier to maintain and upgrade thin clients and student devices in classrooms, labs and libraries. Desktop-as-a-service takes those benefits a step further, using the power of the cloud to offload backend services and support from the campus IT shop to a professional offsite service provider.

    VMware Horizon Air Desktops and Apps, part of a family of well-known virtual computing solutions, offer an easy-to-deploy, affordable virtual desktop solution that moves all traditional datacenter services into the cloud.

    Today's colleges and universities need to drive enrollment while still keeping costs low, and Horizon Air can help do that. By leveraging thin clients and BYOD hardware, and by outsourcing support to a trusted service provider, DaaS can support more students with fewer IT resources. It also allows more remote classrooms and labs, and better student access to resources around the clock.

    6. References
