
PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

Experiment List

1. Home page designing
2. Case Study of AJAX
3. Form Validation using AJAX
4. To study access control mechanisms
5. To design a catalog
6. Study of Web Mashup
7. Server-side services
8. Study of Ubuntu OS
9. Study of Clusters
10. Study of Hadoop clustering


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 01

AIM: Home page designing

THEORY: A home page will be designed according to each student's project. It should contain:

1. A snapshot of the project home page
2. Hardware and software requirements
3. A brief description of the project

CONCLUSION:

Thus a home page has been created for the project.

REFERENCE BOOKS:

1. Introduction to e-Commerce
2. Internet Direct Mail: The Complete Guide to Successful E-Mail Marketing Campaigns


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 02

AIM: Case Study of AJAX

THEORY:

Ajax is not new. These techniques have been available to developers targeting Internet Explorer on the Windows platform for many years. Until recently, the technology was known as web remoting or remote scripting. Web developers have also used a combination of plug-ins, Java applets, and hidden frames to emulate this interaction model for some time. What has changed recently is the inclusion of support for the XMLHttpRequest object in the JavaScript runtimes of the mainstream browsers.

The real magic is the result of the XMLHttpRequest object. Although this object is not specified in the formal JavaScript specification, all of today's mainstream browsers support it. The subtle differences in JavaScript and CSS support among current-generation browsers such as Mozilla Firefox, Internet Explorer, and Safari are manageable. JavaScript libraries such as Dojo, Prototype, and the Yahoo User Interface Library have emerged to fill in where the browsers are less consistent and to provide a standardized programming model. Dojo, for example, addresses accessibility, internationalization, and advanced graphics across browsers, all of which had been thorns in the side of early adopters of Ajax. More updates are sure to occur as the need arises.
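As a minimal sketch of the pattern just described, the JavaScript below issues an asynchronous request with XMLHttpRequest and inserts the server's response into the page; the URL ("/products.txt") and the element id ("result") are placeholders, not part of any particular project.

// Minimal Ajax sketch: request data asynchronously and update the
// page when it arrives. The URL and element id are placeholders.
function loadData() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/products.txt", true); // true = asynchronous
  xhr.onreadystatechange = function () {
    // readyState 4 = request complete; status 200 = HTTP OK
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById("result").innerHTML = xhr.responseText;
    }
  };
  xhr.send(null); // GET requests carry no body
}

Because the call is asynchronous, the page stays responsive while the request is in flight; the callback runs only when the response arrives.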


CONCLUSION:

Thus we have studied AJAX.

REFERENCE BOOK:

“The Complete Reference” by Thomas Powell


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 03

AIM: Form Validation using AJAX

THEORY:

Form validation is the process of checking that a form has been filled in correctly before it is processed. For example, if your form has a box for the user to type their email address, you might want your form handler to check that they've filled in their address before you deal with the rest of the form.

There are two main methods for validating forms: server-side (using CGI scripts, ASP, etc.) and client-side (usually done using JavaScript). Server-side validation is more secure but often trickier to code, whereas client-side (JavaScript) validation is easier to do and quicker too (the browser doesn't have to connect to the server to validate the form, so the user finds out instantly if they've missed a required field).

Students will validate their forms using JavaScript, as in the sketch below.
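A minimal client-side sketch follows; the form name ("orderForm") and field name ("email") are placeholders for the student's own form, and since client-side checks can be bypassed, the same validation should still be repeated on the server.

// Minimal client-side validation sketch. The form name ("orderForm")
// and field name ("email") are placeholders.
function validateForm() {
  var email = document.forms["orderForm"]["email"].value;
  if (email === "") {
    alert("Please enter your email address.");
    return false; // block submission
  }
  // Rough shape check: something@something.something
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    alert("Please enter a valid email address.");
    return false;
  }
  return true; // allow the form to submit
}

// Attach it with: <form name="orderForm" onsubmit="return validateForm()">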

CONCLUSION:

Thus form validation has been done for the project.


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 04

AIM: To study access control mechanisms.

THEORY:

Access can be controlled in several ways with Windows Communication Foundation (WCF). The access-control technologies are listed here in order of complexity: the simplest is the PrincipalPermission attribute; the most complex is the Identity Model.

As Internet use develops, more and more companies are opening their information systems to their partners and suppliers. It is therefore essential to know which of the company's resources need protecting, and to control both system access and the rights of the users of the information system. The same is true when a company opens access to its systems over the Internet.

Moreover, because of today's increasingly nomadic work style, which allows employees to connect to the information system from virtually anywhere, employees effectively carry part of the information system with them.
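WCF access-control code itself is written in C#; purely to illustrate the underlying idea in the language used elsewhere in this manual, here is a simple role-based access check in JavaScript. The roles, action names, and user object are invented for the example.

// Illustrative role-based access check (not WCF). The roles and
// action names below are invented for the example.
var permissions = {
  admin:    ["catalog:edit", "orders:view", "users:manage"],
  customer: ["catalog:view", "orders:view"]
};

function canAccess(user, action) {
  var allowed = permissions[user.role] || [];
  return allowed.indexOf(action) !== -1; // grant only listed actions
}

var user = { name: "asha", role: "customer" };
console.log(canAccess(user, "orders:view"));  // true
console.log(canAccess(user, "users:manage")); // false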

CONCLUSION:

Thus we have studied access control mechanisms.

REFERENCE BOOK:

“The Complete Reference” by Thomas Powell


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 05

AIM: To design a catalog.

THEORY:

Catalog design is the process of presenting products along with photographs and their cost.

The catalog should contain the following content:

1. Product Photography
2. Shopping Cart Design
3. Glossary of Catalog Printing Terminology
4. Useful Information
5. Contact us

Students will design a catalog that gives the user a complete idea of the products. It should contain a sufficient number of products; one possible data structure for catalog entries is sketched below.
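As a sketch only, catalog entries might be modeled in JavaScript as follows; all field names, product names, photos, and prices are invented for the example.

// Illustrative catalog entries; the products, photos, and prices
// are invented for the example.
var catalog = [
  { id: 101, name: "Wireless Mouse", photo: "images/mouse.jpg",
    price: 499.00, inStock: true },
  { id: 102, name: "USB Keyboard", photo: "images/keyboard.jpg",
    price: 799.00, inStock: false }
];

// Produce a simple text listing of the products.
function listCatalog(items) {
  return items.map(function (p) {
    return p.name + " - Rs. " + p.price.toFixed(2) +
           (p.inStock ? "" : " (out of stock)");
  }).join("\n");
}

console.log(listCatalog(catalog));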

CONCLUSION:


Thus we have successfully designed a catalog for our project.

PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 06

AIM: Study of Web Mashup

THEORY: Web Mashup

In web development, a mashup is a web page or application that uses and combines data, presentation or functionality from two or more sources to create new services.

The term implies easy, fast integration, frequently using open APIs (an interface implemented by a software program that enables it to interact with other software) and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data.

The main characteristics of a mashup are combination, visualization, and aggregation. Mashups make already existing data more useful, for both personal and professional use.

There are many types of mashup, such as data mashups, consumer mashups, and enterprise mashups. The most common type is the consumer mashup, aimed at the general public.

Data mashups combine similar types of media and information from multiple sources into a single representation. The combination of all these resources creates a new and distinct web service that was not originally provided by either source.

Consumer mashups, in contrast to data mashups, combine different data types, generally visual elements and data from multiple sources (e.g., Wikipediavision combines Google Maps and a Wikipedia API).


Business mashups generally define applications that combine their own resources, applications, and data with external web services. They focus data into a single presentation and allow for collaborative action among businesses and developers. This works well for an agile development project, which requires collaboration between the developers and the customer (or a customer proxy, typically a product manager) for defining and implementing the business requirements. Enterprise mashups are secure, visually rich web applications that expose actionable information from diverse internal and external information sources.
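To make the idea concrete, the sketch below enriches product data from our own server with ratings from a hypothetical third-party review service. Both URLs are placeholders, and the example assumes the second service permits cross-origin requests.

// Illustrative mashup: combine data from two sources into one view.
// Both URLs are hypothetical placeholders, not real APIs.
function fetchJson(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(JSON.parse(xhr.responseText));
    }
  };
  xhr.send(null);
}

fetchJson("/api/products", function (products) {
  fetchJson("https://reviews.example.com/ratings", function (ratings) {
    products.forEach(function (p) {
      p.rating = ratings[p.id]; // enrich each product with its rating
    });
    console.log(products); // the "mashed-up" result
  });
});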

CONCLUSION:

Thus we have studied web mashups and have used one in our project.


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 07

AIM: Server-side services

THEORY: Server-side services

As is the case with any client-server paradigm, in the world of web services there are web service providers and web service consumers. Server-Side SOAP is a tutorial which deals with how to build and provide web services using Apache SOAP.

Various server-side services can be implemented depending on the project requirements; a few are listed below, followed by a brief sketch.

1. Fault and Error Handling
2. Call Completion
3. Call Lifetime
4. Server-side service operations and memory considerations
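The text above refers to Apache SOAP, which is programmed in Java; purely as a compact illustration of a server-side service with basic fault and error handling, here is a sketch using Node.js's built-in http module. The /price endpoint, its data, and the port are invented for the example.

// Illustrative server-side service with basic fault handling, using
// Node.js's built-in http module. The endpoint, data, and port are
// invented for the example.
var http = require("http");

var prices = { "101": 499.0, "102": 799.0 };

http.createServer(function (req, res) {
  try {
    var url = new URL(req.url, "http://localhost");
    if (url.pathname !== "/price") {
      res.writeHead(404, { "Content-Type": "application/json" });
      return res.end(JSON.stringify({ error: "unknown service" }));
    }
    var id = url.searchParams.get("id");
    if (!(id in prices)) {
      // Fault handling: report a client error instead of crashing.
      res.writeHead(400, { "Content-Type": "application/json" });
      return res.end(JSON.stringify({ error: "unknown product id" }));
    }
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ id: id, price: prices[id] }));
  } catch (err) {
    // Unexpected error: fail this call but keep the server alive.
    res.writeHead(500, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "internal error" }));
  }
}).listen(8080); // e.g. GET http://localhost:8080/price?id=101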

CONCLUSION:

Thus we have successfully implemented server-side services in our project.


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 08

AIM: Study of Ubuntu OS

THEORY: Ubuntu is a computer operating system based on the Debian GNU/Linux distribution and distributed as free and open source software.

With an estimated global usage of more than 12 million users, Ubuntu is designed primarily for desktop use, although netbook and server editions exist as well. Web statistics suggest that Ubuntu's share of Linux desktop usage is about 50%, and indicate upward-trending usage as a web server.

Ubuntu is sponsored by the UK-based company Canonical Ltd., owned by South African entrepreneur Mark Shuttleworth. Canonical generates revenue by selling technical support and services tied to Ubuntu, while the OS itself is entirely free.

System requirements

The desktop version of Ubuntu currently supports the Intel x86 and AMD64 architectures. Unofficial support is available for the PowerPC, IA-64 (Itanium) and PlayStation 3 architectures (note, however, that Sony officially removed support for Other OS on the PS3 with firmware 3.21, released on April 1, 2010). A supported GPU is required to enable desktop visual effects.

Features

Ubuntu is composed of many software packages, of which the vast majority are distributed under a free software license, making an exception only for some proprietary hardware drivers. The main license used is the GNU General Public License (GNU GPL) which, along with the GNU Lesser General Public License (GNU LGPL), explicitly declares that users are free to run, copy, distribute, study, change, develop and improve the software. On the other hand, there is also proprietary software available that can run on Ubuntu. Ubuntu focuses on usability, security and stability. The Ubiquity installer allows Ubuntu to be installed to the hard disk from within the Live CD environment, without the need for restarting the computer prior to installation. Ubuntu also emphasizes accessibility and internationalization, to reach as many people as possible. Beginning with 5.04, UTF-8 became the default character encoding, which allows for support of a variety of non-Roman scripts. As a security feature, the sudo tool is used to assign temporary privileges for performing administrative tasks, allowing the root account to remain locked and preventing inexperienced users from inadvertently making catastrophic system changes or opening security holes. PolicyKit is also being widely implemented into the desktop to further harden the system through the principle of least privilege.

Ubuntu comes installed with a wide range of software that includes OpenOffice, Firefox, Empathy (Pidgin in versions before 9.10), Transmission, GIMP (in versions prior to 10.04), and several lightweight games (such as Sudoku and chess). Additional software that is not installed by default can be downloaded and installed using the Ubuntu Software Center or the package manager Synaptic, which come pre-installed. Ubuntu allows networking ports to be closed using its firewall, with customized port selection available; end users can install Gufw (a GUI for the Uncomplicated Firewall) and keep it enabled. GNOME (the current default desktop) offers support for more than 46 languages. Ubuntu can also run many programs designed for Microsoft Windows (such as Microsoft Office), through Wine or a virtual machine (such as VMware Workstation or VirtualBox). For the upcoming 11.04 release, Canonical intends to drop the GNOME Shell as the default interface in favor of Unity, a graphical interface it first developed for the netbook edition of Ubuntu.

CONCLUSION:

Thus we have studied Ubuntu OS.


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 09

AIM: Study of Clusters

THEORY:

CLUSTER:

In a computer system, a cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.

A computer cluster is a group of linked computers, working together closely thus in many respects forming a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Note that in disk storage the word "cluster" has a different, unrelated meaning: the logical unit of disk space allocated to a file. The tradeoff in (disk) cluster size is that even the smallest file (and even a directory itself) takes up an entire cluster, so a 10-byte file will take up 2,048 bytes if that is the cluster size; the short calculation below makes this concrete. In fact, many operating systems set the cluster size default at 4,096 or 8,192 bytes. Until the file allocation table support introduced in Windows 95 OSR2, the largest hard disk that could be supported in a single partition was 512 megabytes. Larger hard disks could be divided into up to four partitions, each with a FAT capable of supporting 512 megabytes of clusters.
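The allocated size is simply the file size rounded up to a whole number of clusters, as this small JavaScript illustration shows:

// Disk space actually allocated for a file: the file size rounded
// up to a whole number of clusters.
function allocatedBytes(fileSize, clusterSize) {
  return Math.ceil(fileSize / clusterSize) * clusterSize;
}

console.log(allocatedBytes(10, 2048));   // 2048: a 10-byte file occupies one full cluster
console.log(allocatedBytes(9000, 4096)); // 12288: three 4,096-byte clusters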


Compute clusters

Often clusters are used primarily for computational purposes, rather than handling IO-oriented operations such as web services or databases. For instance, a cluster might support computational simulations of weather or vehicle crashes. The primary distinction within compute clusters is how tightly coupled the individual nodes are. A single compute job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. This cluster design is usually referred to as a Beowulf cluster. The other extreme is where a compute job uses one or a few nodes and needs little or no inter-node communication; this latter category is sometimes called "grid" computing. Tightly coupled compute clusters are designed for work that might traditionally have been called "supercomputing". Middleware such as MPI (Message Passing Interface) or PVM (Parallel Virtual Machine) permits compute-clustering programs to be portable to a wide variety of clusters.

CONCLUSION:

Thus we have studied clusters.


PILLAI’S INSTITUTE OF INFORMATION TECHNOLOGY

LAB MANUAL

SUBJECT: E-COMMERCE

DEPARTMENT: COMPUTER ENGINEERING

YEAR: FOURTH    SEM: VII

EXPERIMENT NO: 10

AIM: Study of Hadoop clustering

THEORY:

Hadoop consists of Hadoop Common, which provides access to the filesystems supported by Hadoop. The Hadoop Common package contains the necessary JAR files and scripts needed to start Hadoop. The package also provides source code, documentation, and a contribution section which includes projects from the Hadoop community.

A key feature is that, for effective scheduling of work, every filesystem should provide location awareness: the name of the rack (more precisely, of the network switch) where a worker node is. Hadoop applications can use this information to run work on the node where the data is and, failing that, on the same rack/switch, reducing backbone traffic. The HDFS filesystem uses this when replicating data, to try to keep different copies of the data on different racks. The goal is to reduce the impact of a rack power outage or switch failure, so that even if these events occur, the data may still be readable.

A multi-node Hadoop cluster

A typical Hadoop cluster includes a single master and multiple slave nodes. The master node consists of a jobtracker, tasktracker, namenode, and datanode. A slave or compute node consists of a datanode and tasktracker. Hadoop requires JRE 1.6 or higher, and SSH set up between nodes in the cluster.


How does Hadoop work?

Hadoop is designed to efficiently distribute large amounts of processing across a set of machines, from a few to over 2,000 servers. A small-scale Hadoop cluster can easily crunch terabytes or even petabytes of data. The key steps in analyzing data in a Hadoop framework are:

Step 1: Data loading and distribution. The input data is stored in multiple files, and the scale of parallelism in a Hadoop job is related to the number of input files: for data in ten files, the computation can be distributed across ten nodes. The ability to rapidly process large data sets therefore depends on the number of files and on the speed of the network infrastructure used to distribute the data to the compute nodes, which should spread the data set across nodes to avoid idle time. A network designer can manage storage costs by implementing a high-speed switched data network scheme.

Steps 2 & 3: Map/Reduce. The first data processing step applies a mapping function to the data loaded during step 1. The intermediate output of the mapping process is partitioned using some key, and all data with the same key is next moved to the same “reducer” node. The final processing step applies a reduce function to the intermediate data; the output of the reduce is stored back on disk. Between the map and reduce operations the data is shuffled between nodes: all outputs of the map function with the same key are moved to the same reducer node. At this point the data network is the critical path; its performance and latency directly impact the shuffle phase of a data set reduction. High-speed, non-blocking network switches ensure that the Hadoop cluster is running at peak efficiency. Another benefit of a high-speed switched network is the ability to store shuffled data back in HDFS instead of on the reducer node. Assuming sufficient switching capacity, the data center manager can resume or restart Hadoop data reductions, as HDFS holds the intermediate data and knows the next stage of the reduction. Shuffled data stored on a reduction server is effectively lost if the process is suspended.
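To make the map/shuffle/reduce flow concrete, here is a toy word count in plain JavaScript. Real Hadoop jobs are written against the Java MapReduce API; this sketch only models the data flow described above, not Hadoop code.

// Toy model of the map -> shuffle -> reduce data flow (word count).
function map(line) {
  // Emit one (key, value) pair per word.
  return line.split(/\s+/).filter(Boolean).map(function (w) {
    return [w.toLowerCase(), 1];
  });
}

function shuffle(pairs) {
  // Group all values by key, as the shuffle phase does across nodes.
  var groups = {};
  pairs.forEach(function (kv) {
    (groups[kv[0]] = groups[kv[0]] || []).push(kv[1]);
  });
  return groups;
}

function reduce(key, values) {
  // Sum the counts emitted for one key.
  return values.reduce(function (a, b) { return a + b; }, 0);
}

var input = ["big data is big", "hadoop handles big data"];
var mapped = [];
input.forEach(function (line) { mapped = mapped.concat(map(line)); });
var grouped = shuffle(mapped);
Object.keys(grouped).forEach(function (k) {
  console.log(k + ": " + reduce(k, grouped[k])); // e.g. "big: 3"
});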

Step 4: Consolidation. After the data has been mapped and reduced, it must be merged for output and reporting. This requires another network operation, as the outputs of the reduce function must be combined from each “reducer” node onto a single reporting node. Again, switched network performance can improve throughput, especially if the Hadoop cluster is running multiple data reductions. In this instance, high-performance data center switching reduces idle time, further optimizing the performance of the Hadoop cluster.

SSH

Short for Secure Shell (software developed at SSH Communications Security Ltd.), SSH is one of the most trusted names when it comes to data confidentiality and security. SSH gives administrators a way to access their servers securely, even from a remote computer. SSH access allows you to log in to your account over an encrypted connection: all data travels in an unreadable form, which makes it hard for attackers to recover anything from it. A login system protected with SSH requires a user to undergo a strict authentication process to determine whether the user trying to open the account is authorized.

CONCLUSION:

Thus we have studied Hadoop clustering.