Final Mini Project 1
7/31/2019 Final Mini Project 1
1. INTRODUCTION
1.1 Introduction to the project:
PEER-TO-PEER (P2P) networks have become very popular in the last few
years. Nowadays, they are the most widespread approach for exchanging data among
large communities of users in the file sharing context. Our aim is to devise a P2P-based
framework supporting the analysis of multidimensional historical data. Specifically, our
efforts will be devoted to combining the amenities of P2P networks and data
compression to provide a support for the evaluation of range queries, possibly trading
off efficiency with accuracy of answers. The framework should enable members of an
organization to cooperate by sharing their resources to host data and perform aggregate
queries on them, while preserving their autonomy.
1.2 Purpose of the project:
P2P networks are used to extract multidimensional data from distributed networks with efficient and robust techniques. The project provides a P2P-based framework supporting the analysis of multidimensional historical data.
For example, consider the case of a worldwide virtual organization with users
interested in geographical data, as well as the case of a real organization on an
enterprise network. In both cases, even users who are not continuously interested in
performing data analysis can make a part of their resources available for supporting
analysis tasks needed by others.
1.3 ORGANIZATION OF DOCUMENTATION
Documentation is organized as follows.
1. In the first chapter the introduction of the project and purpose of the project are
described.
2. In the second chapter, the literature survey, i.e., the existing system, disadvantages of the existing system, and the proposed system, is described.
3. In the third chapter, the architecture of the project is described.
4. In the Fourth chapter, user requirements, software requirements, hardware
requirements are described.
5. Fifth chapter includes UML Diagrams and feasibility study of the project.
6. Sixth chapter includes the method of implementation etc.
7. The seventh chapter includes the introduction to testing, testing criteria and testing scenarios of the project.
8. The eighth chapter describes system security.
Finally, the ninth chapter concludes the project.
2. LITERATURE SURVEY
2.1 Introduction of the system:
The problem of suitably extending data-compression-based solutions to application contexts other than file sharing has not been deeply investigated yet. Specifically, no P2P-based solution has imposed itself as an effective evolution of traditional distributed databases. This is quite surprising, as the huge amount of resources provided by P2P networks (in terms of storage capacity, computing power, and data transmission capability) could effectively support data management. From this
standpoint, one of the application contexts which are likely to benefit from the support
of a P2P network is the analysis of multidimensional data.
2.2 Problems in existing system
Finally, there is also the problem of storage space. Since we handle multidimensional data, the existing system provides no solution for space management.
2.3 Proposed System
The proposed solution for the above problem is given in three steps:
First, partition the data and build an indexed aggregate structure over the multidimensional data population, then store it on the server.
The indexed data is distributed in a P2P network by the server, and its placement is recorded in the location table.
Finally, the end user sends a query to the P2P network and accesses the data.
2.4 Conclusion:
This shows that the existing system has serious problems with storage space, so this project aims to solve those problems. The proposed system may have some disadvantages; we expect that those disadvantages can be addressed by enhancing this project further.
3. ARCHITECTURE
With two tier client/server architectures, the user interface is usually located in the user's desktop environment and the database management services are usually in a server that is a more powerful machine serving many clients. Processing management is split between the user system interface environment and the database management server environment. The database management server provides stored procedures and triggers. There are a number of software vendors that provide tools to simplify development of applications for the two tier client/server architecture.
The two tier architecture is a good solution for distributed computing when work groups are defined as a dozen to 100 people interacting on a LAN simultaneously. It does have a number of limitations. When the number of users exceeds 100, performance begins to deteriorate.
This limitation is a result of the server maintaining a connection via KEEP
ALIVE messages with each client, even when no work is being done. A second
limitation of the two tier architecture is that implementation of processing management
services using vendor proprietary database procedures restricts flexibility and choice of
DBMS for applications. Finally, current implementations of the two tier architecture
provide limited flexibility in moving (repartitioning) program functionality from one
server to another without manually regenerating the procedural code.
Fig 3: Two Tier System (the client sends HTTP requests, files and SQL to the database server and receives replies)

4. ANALYSIS
4.1 Introduction:
The spiral model is a software development process combining elements of both
design and prototyping-in-stages, in an effort to combine advantages of top down and
bottom up concepts. Also known as the spiral lifecycle model, it is a system
development method (SDM) used in information technology (IT). This model of
development combines the feature of the prototyping model and the waterfall model.
This spiral model is intended for large, expensive and complicated projects.
Each phase starts with a design goal and ends with the client reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye towards the end goal of the project.
The steps in the spiral model can be generalized as follows:
The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external or
internal users and other aspects of the existing system.
A preliminary design is created for the new system.
A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.
A second prototype is evolved by a fourfold procedure:
Evaluating the first prototype in terms of its strengths, weakness, and risks.
Defining the requirements of the second prototype.
Planning and designing the second prototype.
Constructing and testing the second prototype.
At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less than satisfactory final product.
The existing prototype is evaluated in the same manner as the previous prototype and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
The final system is constructed, based on the refined prototype.
The final system is thoroughly evaluated and tested. Routine maintenance is
carried out on a continuing basis to prevent large-scale failures and to minimize
downtime.
Fig4.1: Spiral model
4.2 System Analysis:
All projects are feasible when given unlimited resources and infinite time! But the development of a computer-based system is likely to be plagued by scarcity of resources and difficult completion dates.
The feasibility of a computer-based system can be studied in three major areas:
Economic Feasibility.
Technical Feasibility.
Functional Feasibility.
4.2.1 Economic Feasibility:
An evaluation of development cost weighed against the ultimate income or benefit derived from the developed system. Very important information contained in the feasibility study is the cost-benefit analysis, which is the assessment of the economic justification for a computer based system project.
The system is very user friendly and only common terms are used in the application, so it will not be difficult for the end user to handle the system. The system provides clear guidance for every step to follow while using it.
4.2.2 Technical Feasibility:
A study of function, performance and constraints that may affect the ability to achieve an acceptable system. It is here that the analyst evaluates the technical merits of the system, while at the same time collecting additional information about the performance, reliability and maintainability of the end products.
Technology is not a constraint to system development. The latest technologies
are incorporated so as to achieve the best of these new developments on the system.
The system is developed to be fully general, so that any future expansion will not be a problem.
4.2.3 Functional Feasibility:
The current existing system is less interactive and not up to the mark in terms of customer support. From all this, we can conclude that this system is economically, technically and functionally feasible.
4.3 Software Requirement Specifications:
4.3.1 Functional Requirements:
Partitioning the data
Marking
Distributing the indexed data
Data querying
Partitioning the data:
The aim of the partitioning step is to divide the data domain into non-overlapping blocks. For each of them, a portion of the total amount of storage space B chosen to represent the whole synopsis will be invested. A maximum amount of storage space that can be invested for summarizing a single block is also fixed.
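The partitioning step above can be sketched as follows. This is a minimal illustration, assuming a one-dimensional domain split into equal-width blocks with an equal share of the budget B per block; the class and method names are hypothetical, not from the framework itself.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: divide a 1-D data domain of n cells into
// non-overlapping, equal-width blocks, giving each block an equal
// share of the total storage budget B.
public class Partitioner {
    public static class Block {
        public final int start, end;   // cell range [start, end)
        public final int budget;       // storage budget for this block
        Block(int start, int end, int budget) {
            this.start = start; this.end = end; this.budget = budget;
        }
    }

    public static List<Block> partition(int n, int blockWidth, int totalBudgetB) {
        List<Block> blocks = new ArrayList<>();
        int numBlocks = (n + blockWidth - 1) / blockWidth; // ceiling division
        int perBlock = totalBudgetB / numBlocks;           // equal share of B
        for (int s = 0; s < n; s += blockWidth) {
            blocks.add(new Block(s, Math.min(s + blockWidth, n), perBlock));
        }
        return blocks;
    }
}
```

A real framework would choose block boundaries adaptively; the equal-width split is only the simplest possible policy.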
Marking:
In order to limit the space needed, and for fast retrieval of the data, we create an index for each partitioned block. The index is stored on the server. Since the data are historical, storage space consumption can be reduced by adopting packing strategies for the index's construction, which aim at obtaining 100 percent space utilization.
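A hedged sketch of this marking step: each partitioned block gets an index entry, and the block summaries (here simply sums) are packed contiguously so the allocated array is fully used, in the spirit of the 100 percent space utilization goal. All names are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: an index over the partitioned blocks. Each
// block is summarized (here, by the sum of its values) and the
// summaries are packed contiguously into one array, leaving no gaps.
public class BlockIndex {
    private final Map<String, Integer> offsets = new LinkedHashMap<>();
    private final long[] packedSummaries;
    private int next = 0;

    public BlockIndex(int numBlocks) { packedSummaries = new long[numBlocks]; }

    // key: e.g. "block-0"; data: the cells belonging to this block
    public void add(String key, int[] data) {
        long sum = 0;
        for (int v : data) sum += v;
        packedSummaries[next] = sum;   // contiguous packing
        offsets.put(key, next++);
    }

    public long summaryOf(String key) { return packedSummaries[offsets.get(key)]; }
}
```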
Distributing the marked data:
Once the indexing is over, the server can distribute the data in the P2P network. The data is distributed to the peers that are available in the network. A location table recording where the data is stored is maintained at every peer.
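The distribution step might look like the following sketch, where the server deterministically maps each indexed block onto one of the available peers; the resulting location table is what every peer would keep a copy of. The hash-based placement policy is an assumption made for illustration.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: assign each indexed block to one of the peers
// currently available in the network, producing the location table
// (block key -> peer id) that every peer stores.
public class Distributor {
    public static Map<String, String> distribute(List<String> blockKeys,
                                                 List<String> peers) {
        Map<String, String> locationTable = new HashMap<>();
        for (String key : blockKeys) {
            // simple deterministic placement: hash the block key onto a peer
            int p = Math.floorMod(key.hashCode(), peers.size());
            locationTable.put(key, peers.get(p));
        }
        return locationTable;
    }
}
```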
Data querying
The end user or client can send a query to the P2P network. The query is searched across the network, the matching data is collected to compute the result, and the result is sent back to the client. The search is based on the name or keyword stored in the location table.
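The querying step can be sketched as a lookup in the location table followed by collecting partial results from the peers that hold the relevant blocks. The in-memory maps stand in for real network calls and are purely illustrative.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of query evaluation: look up each requested
// block in the location table, fetch its partial result from the peer
// that holds it, and aggregate the partial results.
public class QueryProcessor {
    // locationTable: block key -> peer id
    // peerData:      peer id -> (block key -> partial aggregate)
    public static long rangeSum(List<String> keys,
                                Map<String, String> locationTable,
                                Map<String, Map<String, Long>> peerData) {
        long total = 0;
        for (String key : keys) {
            String peer = locationTable.get(key);  // which peer holds the block
            total += peerData.get(peer).get(key);  // collect its partial result
        }
        return total;
    }
}
```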
4.3.2 Nonfunctional Requirements:
These are the requirements that specify criteria that can be used to judge the operation of the system, rather than specific behaviors, which are given by functional requirements. Non-functional requirements are often called qualities of the system.
Security:
A measure of the system's ability to resist unauthorized attempts at usage or behavior modification, while still providing service to legitimate users. Only users with a valid username and password are allowed to use this application.
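A toy illustration of the valid-username-and-password rule follows. The class name is hypothetical, and a real deployment would store salted password hashes rather than plain text.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: only registered credentials are accepted.
// NOTE: plain-text storage is used here only for brevity.
public class AuthCheck {
    private final Map<String, String> users = new HashMap<>();

    public void register(String name, String password) {
        users.put(name, password);
    }

    public boolean isAllowed(String name, String password) {
        return password != null && password.equals(users.get(name));
    }
}
```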
Correctness:
This application facilitates the users to perform validation checks on data.
Reusability:
This application can be reused for any specific purpose.
Maintainability:
This application doesn't require high maintenance because the data is dynamically retrieved from the database.
Performance:
The performance of the application depends on the speed of the Internet.
Portability:
As we are using Java technology, which is platform independent, the application can run on any operating system and is thus portable.
4.3.3 Software requirements:
Programming Languages: Java, JavaScript
Web-based Technologies: Servlets, JSP, HTML, CSS
Database Connectivity API: JDBC
Backend Database: Oracle 10g
Operating System: Windows XP/2000/2003
J2EE Web Server: Tomcat 6.0
IDEs: Eclipse with MyEclipse plug-in
4.3.4 Hardware requirements (MIN):
Hard Disk: 40 GB
Processor: Intel P4 based system
Processor Speed: 250 MHz to 833 MHz
RAM: 512 MB
5. DESIGN
5.1 Introduction:
Design is the first step in the development phase, applying techniques and principles for the purpose of defining a device, a process or a system in sufficient detail to permit its physical realization. Once the software requirements have been analyzed and specified, software design involves three technical activities (design, code generation and testing) that are required to build and verify the software.
The design activities are of main importance in this phase because, in this activity, decisions ultimately affecting the success of the software implementation and its ease of maintenance are made. These decisions have the final bearing upon the reliability and maintainability of the system. Design is the only way to accurately translate the customer requirements into finished software or a system.
Design is the place where quality is fostered in development. Software design is a process through which requirements are translated into a representation of software. Software design is conducted in two steps; preliminary design is concerned with the transformation of requirements into data and software architecture.
5.1.1 System Design:
Software design sits at the technical kernel of the software engineering process
and is applied regardless of the development paradigm and area of application. Design
is the first step in the development phase for any engineered product or system. The
designer's goal is to produce a model or representation of an entity that will later be built. Once system requirements have been specified and analyzed, system design is the first of the three technical activities (design, code and test) that are required to build and verify software.
The importance can be stated with a single word: quality. Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality. Design is the only way that we can accurately translate a customer's view into a finished software product or system. Software design serves as a foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system, one that will be difficult to test, and whose quality cannot be assessed until the last stage.
During design, progressive refinements of data structure, program structure, and procedural details are developed, reviewed and documented. System design can be viewed from either a technical or a project management perspective.
From the technical point of view, design comprises four activities: architectural design, data structure design, interface design and procedural design.
5.1.2 Front End Tool:
We use JAVA as the Front-End because it provides object oriented features
which are very helpful. Due to the use of classes, we can achieve data security and
integrity. Data accessing is also very easy.
In this project we used JSP as the front end.
JSP technology is a simple but powerful technology used to generate dynamic HTML on the server side. JSP is a direct extension of servlets and provides a way to separate content generation from content presentation. The JSP engine takes care of converting .jsp files into servlets.
Java Server Pages (JSP) lets you separate the dynamic part of your pages from the static HTML. You simply write the regular HTML in the normal manner, using whatever Web-page-building tools you normally use. You then enclose the code for the dynamic parts in special tags, most of which start with "<%" and end with "%>".
Advantages
1. Easier to use
Web developers and designers use Java Server Pages technology to rapidly develop and easily maintain information-rich, dynamic web pages that leverage existing business systems. The upcoming release makes JSP technology even easier to use. "Easier to use" was the major objective driving the Java Server Pages v2.0 specification changes. Now, it is:
2. Easier to use JSP technology without needing to learn the Java
language
HTML-savvy web page developers and designers can use JSP technology without needing to learn how to write Java scriptlets. Although scriptlets are no longer required to generate dynamic content, they are still supported to provide backward compatibility.
3. Easier to extend the JSP language
Java technology-savvy tag library developers and designers will find it is even
easier to extend the JSP language with "simple tag handlers". Simple tag handlers
utilize a new, much simpler and cleaner tag extension API. This will spur the growing number of pluggable, reusable tag libraries available, which reduces the amount of code needed to write powerful web applications.
5.1.3 Hyper Text Mark Up Language:
Introduction:
Hyper Text Markup Language is a structural markup language used to create and format a web document. A markup language such as HTML is simply a collection of codes, called elements, that are used to indicate the structure and format of a document. A user agent, usually a web browser, renders the document by interpreting these codes to decide how to structure or display it. HTML is not a new invention; it is an improved version of the Standard Generalized Markup Language (SGML).
HTML evolved in the following four stages:
Level-0 included only the basic structural elements and assured that all browsers
supported all features.
Level-1 added advanced features such as highlighted text and graphics, supported depending on browser capability.
Level-2 introduced the World Wide Web as an interactive medium and the feature of fill-out forms on the Internet.
Level-3 introduced frames, inline video, sound, etc.
Importance of HTML
HTML can be used to display any type of document on the host computer, which can be at a geographically different location.
It is a versatile language and can be used on any platform or desktop.
The appearance of a Web page is important, and HTML provides tags to make the
document look attractive. Using graphics, fonts, different sizes, color, etc. can
enhance the presentation of the document.
Functionality of HTML in the project:
As we know, this is purely a web-based project. HTML helps to embed Java Server Pages within the page using some simple tags.
Used to design the forms.
Users can communicate easily with the server.
5.1.3 Introduction to JDBC:
JDBC is a Java API for executing SQL statements. JDBC is often thought of as standing for Java Database Connectivity. It consists of a set of classes and interfaces written in the Java programming language. JDBC provides a standard API for tool and database developers and makes it possible to write database applications using a pure Java API. Using JDBC, it is easy to send SQL statements to virtually any relational database.
One can write a single program using the JDBC API, and the program will be able to send SQL statements to the appropriate database. The combination of Java and JDBC lets a programmer write an application once and run it anywhere.
Java, being robust, secure, easy to understand and automatically downloadable on a network, is an excellent language basis for database applications. A programmer can write or update an application once, put it on the server, and everybody has access to the latest version.
JDBC makes it possible to do three things:
Establish a connection with a database.
Send SQL statements.
Process the results.
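The three steps can be sketched as below. The connection URL format is the common Oracle thin-driver form; the host, SID, credentials and table name are placeholders, and a real run requires the Oracle JDBC driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch of the three JDBC steps. The URL, credentials
// and table are placeholders, not values from this project.
public class JdbcSketch {
    public static String oracleUrl(String host, int port, String sid) {
        return "jdbc:oracle:thin:@" + host + ":" + port + ":" + sid;
    }

    public static void main(String[] args) {
        String url = oracleUrl("localhost", 1521, "orcl");
        try (Connection con = DriverManager.getConnection(url, "user", "pass"); // 1. connect
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT name FROM peers")) {        // 2. send SQL
            while (rs.next()) {                                                 // 3. process results
                System.out.println(rs.getString("name"));
            }
        } catch (SQLException e) {
            System.err.println("Database not reachable: " + e.getMessage());
        }
    }
}
```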
5.1.4 Backend Tool:
Oracle:
Oracle is a relational database management system (RDBMS). Being an RDBMS, Oracle stores data in tables called relations. These relations are two-dimensional representations of data, where the rows (called tuples in relational jargon) represent records and the columns (called attributes) are the pieces of information contained in each record.
Oracle provides a rich set of tools for designing and maintaining the database.
Features and capability of oracle:
1. Changed Terminology.
a. Database Objects
b. Schema.
c. Server process.
d. Sessions
e. Shared SQL areas.
2. Data type changed.
3. Stored procedures and Triggers.
4. Distributed database options.
5. The snapshot capability.
6. Backup and recovery.
7. Security.
8. Performance.
9. Cost-based optimization.
10. Data Dictionary.
Importance Of Oracle Backend Tool:
Oracle provides efficient and effective solutions for major database features
like:
1. Large database and space management control.
2. Many concurrent database users.
3. High transaction processing performance.
4. High Availability.
5. Controlled Availability
6. Industry accepted standards.
7. Manageable security.
8. Database-enforced integrity.
9. Client/Server environment.
10. Distributed database systems.
11. Compatibility.
12. Connectivity.
Oracle itself contains database constraints. Even if the user enters wrong data and tries to save it, Oracle gives an error message.
5.2 Normalization:
It is a process of converting a relation to a standard form. The process is used to handle the problems that can arise due to data redundancy, i.e., repetition of data in the database, to maintain data integrity, and to handle problems that can arise due to insertion, updating and deletion anomalies.
Decomposition is the process of splitting relations into multiple relations to eliminate anomalies and maintain data integrity. To do this we use normal forms, or rules for structuring relations.
Insertion anomaly:
Inability to add data to the database due to absence of other data.
Deletion anomaly:
Unintended loss of data due to deletion of other data.
Update anomaly:
Data inconsistency resulting from data redundancy and partial update
Normal Forms:
These are the rules for structuring relations that eliminate anomalies.
First Normal Form:
A relation is said to be in first normal form if the values in the relation are
atomic for every attribute in the relation. By this we mean simply that no attribute
value can be a set of values or, as it is sometimes expressed, a repeating group.
Second Normal Form:
A relation is said to be in second normal form if it is in first normal form and it satisfies any one of the following rules:
The primary key is not a composite primary key.
No non-key attributes are present.
Every non-key attribute is fully functionally dependent on the full set of the primary key.
Third Normal Form:
A relation is said to be in third normal form if there exist no transitive dependencies.
Transitive Dependency:
If two non-key attributes depend on each other as well as on the primary key, then they are said to be transitively dependent.
The above normalization principles were applied to decompose the data into multiple tables, thereby keeping the data in a consistent state.
5.3 E R Diagrams:
The relations within the system are structured through a conceptual ER diagram, which specifies not only the existential entities but also the standard relations through which the system exists and the cardinalities that are necessary for the system state to continue.
The Entity Relationship Diagram (ERD) depicts the relationships between the data objects. The ERD is the notation that is used to conduct the data modeling activity; the attributes of each data object noted in the ERD can be described using data object descriptions.
The set of primary components that are identified by the ERD are:
Data objects
Relationships
Attributes
Various types of indicators.
The primary purpose of the ERD is to represent data objects and their relationships.
Fig 5.3: ER Diagram (the User entity, with User Name and Password attributes, performs User Login and User Functions: Upload files, Download files, Change Password, Update details, and Configure Online File Manager)
5.4 Data Flow Diagrams:
Data flows are data structures in motion, while data stores are data structures at rest. Data flows are paths or pipelines along which data structures travel, whereas data stores are places where data structures are kept until needed. Hence it is possible that a data flow and a data store would be made up of the same data structure.
The data flow diagram is a very handy tool for the system analyst because it gives the analyst an overall picture of the system; it is a diagrammatic approach.
A DFD is a pictorial representation of the path which data takes, from its initial interaction with the existing system until it completes any interaction. The DFD also gives insight into the data that is used in the system, i.e., who actually uses it and where it is temporarily stored.
A DFD does not show a sequence of steps. A DFD only shows what the different processes in a system are and what data flows between them.
The following are some DFD symbols used in the project:
External entity
Data flow
Process: a transformation of information that resides within the bounds of the system to be modeled.
Data store: a repository of data that is to be stored for use by one or more processes; it may be as simple as a buffer or a queue, or as complex as a relational database.
Fig 5.4: Data Flow Symbols
Rules for Data Flow Diagram:
Fix the scope of the system by means of context diagrams.
Organize the DFD so that the main sequence of the actions reads left to right
and top to bottom.
Identify all inputs and outputs.
Identify and label each process internal to the system with rounded circles.
A process is required for all data transformations and transfers. Therefore, never connect a data store to a data source, a destination, or another data store with just a data flow arrow.
Do not indicate hardware and ignore control information.
Make sure the names of the processes accurately convey everything the process does.
There must be no unnamed processes.
Indicate external sources and destinations of the data, with squares.
Number each occurrence of repeated external entities.
Identify all data flows for each process step, except simple Record retrievals.
Label data flow on each arrow.
Use the data flow arrow to indicate data movements.
There cannot be an unnamed data flow.
A data flow cannot connect two external entities.
Levels of Dataflow Diagram:
The complexity of a business system means that it is not possible to represent the operations of any system in a single data flow diagram. At the top level, an overview of the different systems in an organization is shown by way of a context analysis diagram. When exploded into DFDs, they are represented by:
LEVEL-0: SYSTEM INPUT/OUTPUT
LEVEL-1: SUBSYSTEM LEVEL DATA FLOW FUNCTIONAL
LEVEL-2: FILE LEVEL DETAIL DATA FLOW
The input and output data shown should be consistent from one level to the next.
Level-0: System Input/output Level:
A level-0 DFD describes the system-wide boundaries, dealing inputs to and
outputs from the system and major processes. This diagram is similar to the combined
user-level context diagram.
Level-1: Subsystem Level Dataflow Functional:
A level-1 DFD describes the next level of details within the system, detailing
the data flows between subsystems, which make up the whole.
Level-2: File Level Detail Dataflow:
All projects are feasible given unlimited resources and infinite time. It is both necessary and prudent to evaluate the feasibility of the project at the earliest possible time. Feasibility and risk analysis are related in many ways.
5.3.2 Use Case Diagram:
A use case is a description of a set of sequences of actions that a system performs that yields an observable result of value to a particular actor. Here the user is an actor, and the use cases are login, view work details, assign work, approval link, view voter request details, and view ward member and helper details.
Identification of actors:
Actor:
Actor represents the role a user plays with respect to the system. An actor
interacts with, but has no control over the use cases.
An actor is someone or something that:
Interacts with or uses the system.
Provides input to and receives information from the system.
Is external to the system and has no control over the use cases.
Questions to identify actors:
Who is using the system? Or, who is affected by the system? Or, which groups
need help from the system to perform a task?
Who affects the system? Or, which user groups are needed by the system to
perform its functions? These functions can be both main functions and secondary
functions such as administration.
Which external hardware or systems (if any) use the system to perform tasks?
What problems does this application solve (that is, for whom)?
And, finally, how do users use the system (use case)? What are they doing with
the system?
Fig 5.5: Use Case Diagram (actor: User; use cases: Login, Upload and download Audio files, Upload and download Documents, Upload and download Images, Upload and download Video files, Logout)
5.3.3 Class Diagram:
Identification of analysis classes:
A class is a set of objects that share a common structure and common behavior
(the same attributes, operations, relationships and semantics). A class is an abstraction
of real-world items.
There are 4 approaches for identifying classes:
Noun phrase approach:
Common class pattern approach.
Use case Driven Sequence or Collaboration approach.
Classes, Responsibilities and collaborators Approach
Identification of responsibilities of each class:
The questions that should be answered to identify the attributes and methods of
a class respectively are:
What information about an object should we keep track of?
What services must a class provide?
Three types of relationships among the objects are:
Association: How objects are associated?
Super-sub structure: How are objects organized into super classes and sub
classes?
Aggregation: What is the composition of the complex classes?
Super-sub class relationships
A super-sub class hierarchy is a relationship between classes where one class is the parent class of another (derived) class. This is based on inheritance. This hierarchy is represented with generalization.
Guidelines for identifying the super-sub relationship, a generalization, are:
1. Top-down:
Look for noun phrases composed of various adjectives in a class name. Avoid
excessive refinement. Specialize only when the sub classes have significant behavior.
2. Bottom-up:
Look for classes with similar attributes or methods. Group them by moving the
common attributes and methods to an abstract class. You may have to alter the
definitions a bit.
3. Reusability:
Move the attributes and methods as high as possible in the hierarchy.
4. Multiple inheritance:
Avoid excessive use of multiple inheritance. One way of getting the benefits of multiple inheritance is to inherit from the most appropriate class and add an object of another class as an attribute.
Packages
Packages allow you to break up a large number of objects into related groupings. In many object oriented languages (such as Java), packages are used to provide scope and division to classes and interfaces. In the UML, packages serve a similar, but broader, purpose.
Classes:
The core element of the class diagram is the class. In an object-oriented system,
classes are used to represent entities within the system, entities that often relate to
real-world objects.
The Contact class above is an example of a simple class that stores location
information.
Classes are divided into three sections:
Top: The name, package, and stereotype are shown in the upper section of the
class.
Centre: The centre section contains the attributes of the class.
Bottom: The lower section lists the operations that can be performed on the
class.
Attributes:
An attribute is a property of a class. In the example above, we are told that a
Contact has an address, a city, a province, a country, and a postal code. It is generally
understood that when implementing the class, functionality is provided to set and
retrieve the information stored in attributes. The usual UML format for an attribute is
visibility name : type = defaultValue. In object-oriented design, it is generally
preferred to keep most attributes private.
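As a hedged Java sketch of this convention (the attribute names follow the Contact example in the text; the accessor names are assumptions), a class with private attributes and set/retrieve functionality might look like this:

```java
// Sketch of the Contact class described above: private attributes with
// public accessors, following the convention of keeping attributes private.
public class Contact {
    private String address;
    private String city;
    private String province;
    private String country;
    private String postalCode;

    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }

    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }
    // Similar accessors would exist for address, province, and postalCode.
}
```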
Static:
Attributes that are static exist only once for all instances of the class. In the
example above, if we made city static, every Contact instance would share the same
city value.
Final:
If an attribute is declared final, its value cannot be changed: the attribute is a
constant.
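To make the static/final distinction concrete, here is a small hedged sketch (the class and field names, and the sample values, are illustrative, not from the project):

```java
// static: the field exists once for all instances; final: its value is a constant.
public class ContactDefaults {
    public static String city = "Hyderabad";      // shared by every instance
    public static final String COUNTRY = "India"; // cannot be reassigned
}
```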
Operations:
The operations listed in a class represent the functions or tasks that can be
performed on the data in the class.
In the List class above, there is one attribute (a private array of Objects) and
three operations.
The format for an operation is very similar to that of an attribute, except that the
default value is removed and a parameter list is added.
Generalization
The generalization link is used between two classes to show that a class
incorporates all of the attributes and operations of another, but adds to them in some
way.
Interfaces
Many object-oriented programming languages do not allow multiple
inheritance. The interface is used to work around this limitation. For example,
in the earlier class diagram Client and Company both generalize Contact, but one or the
other child class may have something in common with a third class that we do not
want to duplicate in multiple classes.
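A hedged Java sketch of this idea (the Payable interface and the amountDue() method are illustrative assumptions, not part of the project):

```java
// A class may extend only one parent, but it can implement any number of
// interfaces, which avoids duplicating shared behavior across subclasses.
interface Payable {
    double amountDue();
}

class Person {
    String name;
}

// ClientRecord inherits from Person and also satisfies Payable.
class ClientRecord extends Person implements Payable {
    public double amountDue() { return 100.0; }
}
```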
Associations
Classes can also contain references to each other. The Company class has two
attributes that reference the Client class.
Aggregation and Composition:
The composition association is represented by the solid diamond. It is said that
Product Group is composed of Products. This means that if a Product Group is
destroyed, the Products within the group are destroyed as well.
The aggregation association is represented by the hollow diamond. Purchase
Order is an aggregate of Products. If a Purchase Order is destroyed, the Products still
exist.
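The ownership difference between composition and aggregation can be sketched in Java (a non-authoritative illustration; the lifetime semantics are conveyed by which class constructs the Product objects):

```java
import java.util.ArrayList;
import java.util.List;

class Product {
    final String name;
    Product(String name) { this.name = name; }
}

// Composition: the group creates and owns its Products, so they share its lifetime.
class ProductGroup {
    private final List<Product> products = new ArrayList<>();
    void add(String name) { products.add(new Product(name)); }
    int size() { return products.size(); }
}

// Aggregation: the order only references Products that exist independently of it.
class PurchaseOrder {
    private final List<Product> items = new ArrayList<>();
    void add(Product p) { items.add(p); }
    int size() { return items.size(); }
}
```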
Dependencies:
A dependency exists between two elements if changes to one will affect the other.
If, for example, a class calls an operation in another class, then a dependency
exists between the two. If you change the operation, then the dependent class will have
to change as well. When designing your system, the goal is to minimize dependencies.
The class diagram for this project contains five classes. The user class has the
attributes Username and Password and the operations login(), upload files(),
download files(), and logout(). The Audio, Video, Documents, and Images classes
each have the attributes key and file location; Audio and Video provide play files(),
Documents provides display data(), and Images provides view images().
Fig 5.6: Class Diagram
5.3.4 Sequence Diagrams:
A sequence diagram is a graphical view of a scenario that shows object
interaction in a time-based sequence: what happens first, what happens next.
Sequence diagrams establish the roles of objects and help provide essential
information to determine class responsibilities and interfaces.
Object:
An object has state, behavior, and identity. The structure and behavior of similar
objects are defined in their common class. Each object in a diagram indicates some
instance of a class.
An object that is not named is referred to as a class instance. The object icon is
similar to a class icon except that the name is underlined. An object's concurrency is
defined by the concurrency of its class.
Message:
A message is the communication carried between two objects that triggers an
event. A message carries information from the source focus of control to the destination
focus of control.
The synchronization of a message can be modified through the message
specification. A synchronous message is one where the sending object pauses to
wait for results.
Link:
A link should exist between two objects, including class utilities, only if there is
a relationship between their corresponding classes.
The link is depicted as a straight line between objects or objects and class
instances in a collaboration diagram. If an object links to itself, use the loop version of
the icon.
The sequence diagram shows the User interacting with the Audio files, Video
files, Documents, and Images objects: the user logs in, enters a key, browses a
location, and then uploads, downloads, and searches files in each of the four modules.
Fig 5.7: Sequence Diagram
5.3.5 Collaboration Diagram:
During the Elaboration phase the project team is expected to capture a healthy
majority of the system requirements. However, the primary goals of Elaboration are to
address known risk factors and to establish and validate the system architecture.
Common processes undertaken in this phase include the creation of use case
diagrams, conceptual diagrams (class diagrams with only basic notation) and package
diagrams (architectural diagrams).
The final Elaboration phase deliverable is a plan (including cost and schedule
estimates) for the Construction phase. At this point the plan should be accurate and
credible, since it should be based on the Elaboration phase experience and since
significant risk factors should have been addressed during the Elaboration phase.
The collaboration diagram shows the same interactions as numbered messages
between the User and the Video files, Audio files, Documents, and Images objects:
1: Login; 2: Enter key; 3: Browse location; 4: Upload files; 5: Download files;
6: Search files; 7: Enter key; 8: Upload and download files; 9: Upload and download
files; 10: Search files; 11: Upload and download files; 12: Search files.
Fig 5.8: Collaboration Diagram
5.3.6 Activity Diagram:
The focus of activity modeling is the sequence and conditions for coordinating
lower-level behaviors, rather than which classifiers own those behaviors. These are
commonly called control flow and object flow models. The behaviors coordinated by
these models can be initiated because other behaviors finish executing, because objects
and data become available, or because events occur external to the flow.
The activity diagram begins with Login, branches into the four module
activities (upload and download audio files, video files, documents, and image files),
continues with Search files, and ends with Logout.
Fig 5.9: Activity Diagram
5.4 Modules:
1. Audio
2. Video
3. Documents
4. Images
Audio:
The main purpose of the Audio module is to upload or download audio files. It
supports only the MP3 format. To listen to these audio files, an audio player or an
Adobe plug-in is required on the client system.
Video:
The main purpose of the Video module is to upload or download video files. It
supports only the MPEG2 format. To view these video files, a video player or an
Adobe plug-in is required on the client system.
Documents:
The main purpose of the Documents module is to upload or download document
files. It supports only the .doc, .txt, and .pdf file formats. To view these document
files, a PDF reader or MS Office is needed on the client system; Adobe plug-ins are
not required.
Images:
The main purpose of the Images module is to upload or download image files. It
supports only common image file formats. To view these image files, a standard
image viewer on the client system is sufficient; Adobe plug-ins are not required.
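A hypothetical helper sketching how the four modules might restrict uploads to their supported formats (the class name and the image extension list are assumptions; the audio, video, and document lists come from the text above):

```java
import java.util.Arrays;
import java.util.List;

// Checks whether an uploaded file name matches one of a module's formats.
class UploadValidator {
    static final List<String> AUDIO  = Arrays.asList(".mp3");
    static final List<String> VIDEO  = Arrays.asList(".mpeg", ".mpg");
    static final List<String> DOCS   = Arrays.asList(".doc", ".txt", ".pdf");
    static final List<String> IMAGES = Arrays.asList(".jpg", ".png", ".gif"); // assumed

    static boolean allowed(String fileName, List<String> formats) {
        String lower = fileName.toLowerCase();
        for (String ext : formats) {
            if (lower.endsWith(ext)) return true;
        }
        return false;
    }
}
```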
6. IMPLEMENTATION
The Java Language
What Is Java?
Java is two things: a programming language and a platform.
The Java Programming Language:
Java is a high-level programming language that is all of the following:
Simple
Object-oriented
Distributed
Interpreted
Robust
Secure
Architecture-neutral
Portable
High-performance
Multithreaded
Dynamic
Java is also unusual in that each Java program is both compiled and interpreted.
With a compiler, you translate a Java program into an intermediate language called Java
byte codes--the platform-independent codes interpreted by the Java interpreter. With an
interpreter, each Java byte code instruction is parsed and run on the computer.
Compilation happens just once; interpretation occurs each time the program is
executed. This figure illustrates how this works.
Fig 6.1: Java Virtual Machine
Java byte codes can be considered as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it's a Java development
tool or a Web browser that can run Java applets, is an implementation of the Java VM.
The Java VM can also be implemented in hardware.
Java byte codes help make "write once, run anywhere" possible. The Java
program can be compiled into byte codes on any platform that has a Java compiler. The
byte codes can then be run on any implementation of the Java VM. For example, the
same Java program can run on Windows NT, Solaris, and Macintosh.
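As a minimal illustration of this cycle (the file and class names are arbitrary): compiling Hello.java with javac produces Hello.class, a byte code file that any Java VM can then interpret unchanged.

```java
// Hello.java -- compiled once to byte codes, runnable on any Java VM.
public class Hello {
    static String message() {
        return "Hello from the Java VM";
    }

    public static void main(String[] args) {
        System.out.println(message());
    }
}
```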
The Java Platform
A platform is the hardware or software environment in which a program runs.
The Java platform differs from most other platforms in that it's a software-only platform
that runs on top of other, hardware-based platforms. Most other platforms are described
as a combination of hardware and operating system.
The Java platform has two components:
The Java Virtual Machine (Java VM)
The Java Application Programming Interface (Java API)
The Java API is a large collection of ready-made software components that
provide many useful capabilities, such as graphical user interface (GUI) widgets. The
Java API is grouped into libraries (packages) of related components.
The following figure depicts a Java program, such as an application or applet,
that's running on the Java platform. As the figure shows, the Java API and the Virtual
Machine insulate the Java program from hardware dependencies.
As a platform-independent environment, Java can be a bit slower than native
code. However, smart compilers, well-tuned interpreters, and just-in-time byte code
compilers can bring Java's performance close to that of native code without threatening
portability.
What Can Java Do?
Probably the most well-known Java programs are Java applets. An applet is a
Java program that adheres to certain conventions that allow it to run within a Java-
enabled browser.
However, Java is not just for writing cute, entertaining applets for the World
Wide Web ("Web"). Java is a general-purpose, high-level programming language and a
powerful software platform. Using the generous Java API, we can write many types of
programs.
The most common types of programs are probably applets and applications,
where a Java application is a standalone program that runs directly on the Java
platform.
How does the Java API support all of these kinds of programs? With packages
of software components that provide a wide range of functionality. The core API is the
API included in every full implementation of the Java platform. The core API gives you
the following features:
The Essentials:
Objects, strings, threads, numbers, input and output, data structures, system
properties, date and time, and so on.
Applets:
The set of conventions used by Java applets.
Networking:
URLs, TCP and UDP sockets, and IP addresses.
Internationalization:
Help for writing programs that can be localized for users worldwide. Programs
can automatically adapt to specific locales and be displayed in the appropriate language.
Security:
Both low-level and high-level, including electronic signatures,
public/private key management, access control, and certificates.
Software components:
Known as JavaBeans, these can plug into existing component architectures
such as Microsoft's OLE/COM/ActiveX architecture, OpenDoc, and Netscape's
LiveConnect.
Object serialization:
Allows lightweight persistence and communication via Remote Method
Invocation (RMI).
Java Database Connectivity (JDBC):
Provides uniform access to a wide range of relational databases.
Java not only has a core API, but also standard extensions. The standard
extensions define APIs for 3D, servers, collaboration, telephony, speech, animation,
and more.
How Will Java Change My Life?
Java is likely to make your programs better and require less effort than other
languages. We believe that Java will help you do the following:
Get started quickly: Although Java is a powerful object-oriented language, it's
easy to learn, especially for programmers already familiar with C or C++.
Write less code: Comparisons of program metrics (class counts, method counts,
and so on) suggest that a program written in Java can be four times smaller than
the same program in C++.
Write better code: The Java language encourages good coding practices, and its
garbage collection helps you avoid memory leaks. Java's object orientation, its
JavaBeans component architecture, and its wide-ranging, easily extendible API
let you reuse other people's tested code and introduce fewer bugs.
Develop programs faster: Your development time may be as much as twice as
fast as writing the same program in C++. Why? You write fewer lines of
code with Java, and Java is a simpler programming language than C++.
Avoid platform dependencies with 100% Pure Java: You can keep your
program portable by following the purity tips mentioned throughout this book
and avoiding the use of libraries written in other languages.
Write once, run anywhere: Because 100% Pure Java programs are compiled into
machine-independent byte codes, they run consistently on any Java platform.
Distribute software more easily:
You can upgrade applets easily from a central server. Applets take advantage
of the Java feature of allowing new classes to be loaded "on the fly," without
recompiling the entire program.
We explore the java.net package, which provides support for networking. Its
creators have called Java "programming for the Internet." These networking classes
encapsulate the socket paradigm pioneered in the Berkeley Software Distribution
(BSD) from the University of California at Berkeley.
The initial release of Java was nothing short of revolutionary, but it did not mark
the end of Java's era of rapid innovation. Unlike most other software systems that
usually settle into a pattern of small, incremental improvements, Java continued to
evolve rapidly. The features added by Java 1.1 were more significant and substantial
than the increase in the minor revision number would have you think. Java 1.1 added
many new library elements, redefined the way events are handled by applets, and
reconfigured many features of the 1.0 library. It also deprecated several features
originally defined by Java 1.0. Thus, Java 1.1 both added and subtracted attributes
from its original specification. Continuing in this evolution, Java 2 also adds and
subtracts features.
Features added by 1.1
Version 1.1 added some important elements to Java. Most of the additions
occurred in the Java library. However, a few new language features were also included.
Here is a list of the important features added by 1.1:
Java Beans, which are software components that are written in Java.
Serialization, which allows you to save and restore the state of an object.
Remote Method Invocation (RMI), which allows a Java object to invoke the methods
of another Java object located on a different machine. This is an important facility
for building distributed applications.
Java Database Connectivity (JDBC), which allows programs to access SQL
databases from many different vendors.
The Java Native Interface (JNI), which provides a new way for your programs to
interface with code libraries written in other languages.
Reflection, which is the process of determining the fields, constructors, and methods
of a Java object at run time.
Various security features, such as digital signatures, message digests, access
control lists, and key generation.
Built in support for 16-bit character streams that handle Unicode characters.
Significant changes to event handling that improve the way in which events
generated by graphical user interface (GUI) components are handled.
Inner classes, which allow one class to be defined within another.
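Of the features above, serialization is easy to demonstrate; the sketch below (the class names are illustrative) saves an object's state to a byte array and restores it:

```java
import java.io.*;

// A serializable object whose state can be saved and restored.
class Note implements Serializable {
    String text;
    Note(String text) { this.text = text; }
}

class SerializationDemo {
    // Writes the object to a byte array and reads an equivalent copy back.
    static Note roundTrip(Note n) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(n);
            oos.close();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (Note) in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```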
Features deprecated by 1.1
As just mentioned, Java 1.1 deprecated many earlier library elements. For
example, most of the original Date class was deprecated. However, the deprecated
features did not go away. Instead, they were replaced with updated alternatives. In
general, a deprecated 1.0 feature is still available in Java to support legacy code, but
deprecated features should not be used by new applications.
Features added by Java 2.0
Building upon 1.1, Java 2.0 adds many important new features.
Here is a partial list.
Swing is a set of user interface components that is implemented entirely in Java.
You can use a look and feel that is either specific to a particular operating system or
uniform across operating systems, and you can also design your own look and feel.
Collections are groups of objects. Java 2.0 provides several types of collections,
such as linked lists, dynamic arrays, and hash tables. Collections offer a new way
to solve several common programming problems.
Digital certificates provide a mechanism to establish the identity of a user. You may
think of them as electronic passports. Java programs can parse and use certificates
to enforce security policies.
Text components can now receive Japanese, Chinese, and Korean characters from
the keyboard. This is done using a sequence of keystrokes to represent one character.
The Common Object Request Broker Architecture (CORBA) defines an Object
Request Broker (ORB) and an Interface Definition Language (IDL). Java 2.0
includes an ORB and an IDL-to-Java compiler. The latter generates code from an
IDL specification.
Performance improvements have been made in several areas. A Just-In-Time (JIT)
compiler is included in JDK.
Many browsers include a Java Virtual Machine that is used to execute applets.
Unfortunately, these JVMs typically do not include the latest Java features. The
Java Plug-in solves this problem by directing a browser to use the Java Runtime
Environment (JRE) instead of the browser's own JVM. The JRE is a subset of the
JDK; it does not include the tools and classes that are used in a development
environment.
Various tools, such as javac, java, and javadoc, have been enhanced. Debugger and
profiler interfaces for the JVM are available.
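The collections mentioned above can be illustrated with a short sketch (the word-count task is an arbitrary example):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Uses a dynamic array (List) as input and a hash table (HashMap) to count words.
class CollectionsDemo {
    static Map<String, Integer> wordCounts(List<String> words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            counts.put(w, counts.getOrDefault(w, 0) + 1);
        }
        return counts;
    }
}
```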
Features deprecated by 2.0
Although not as extensive as the deprecations between 1.0 and 1.1, some
features of Java 1.1 are deprecated by Java 2.0. For example, the suspend(),
resume(), and stop() methods of the Thread class should not be used in new code.
Java's Magic: The Byte Code
The key that allows Java to solve both the security and the portability
problems just described is that the output of the Java compiler is not executable code.
Rather, it is byte code.
Byte code is a highly optimized set of instructions designed to be executed by
the virtual machine that the Java run-time system emulates. This may come as a bit of
a surprise since, as you know, C++ is compiled, not interpreted, mostly because of
performance concerns. However, the fact that a Java program is interpreted helps
solve the major problems associated with downloading programs over the Internet.
Here is why Java was designed to be an interpreted language: because Java
programs are interpreted rather than compiled, it is easier to run them in a wide variety
of environments. Only the Java runtime system needs to be implemented for each
platform. Once the runtime package exists for a given system, any Java program can
run on it. If Java were a compiled language, then different versions of the same program
would have to exist for each type of CPU connected to the Internet. Thus, interpretation
is the easiest way to create truly portable programs. Although Java was designed to be
interpreted, there is technically nothing about Java that prevents on-the-fly compilation
of byte code into native code. However, even if dynamic compilation were applied to
byte code, the portability and safety would still apply, because the run-time system
would still be in charge of the execution environment.
The Java Buzz Words
No discussion of the genesis of Java is complete without a look at the Java
buzzwords. Although the fundamentals that necessitated the invention of Java are
portability and security, other factors also played an important role in molding the
final form of the language. The Java team summed up the key considerations in the
following list of buzzwords:
Simple
Portable
Object-oriented
Robust
Multithreaded
Architectural-neutral
High performance
Distributed
Dynamic
JDBC Introduction
The JDBC API is a Java API that can access any kind of tabular data, especially data stored in a relational database.
JDBC helps you to write Java applications that manage these three programming
activities:
1. Connect to a data source, like a database
2. Send queries and update statements to the database
3. Retrieve and process the results received from the database in answer to your
query
The following simple code fragment gives a simple example of these three
steps:

    Connection con = DriverManager.getConnection(
        "jdbc:myDriver:wombat", "myLogin", "myPassword");

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");
    while (rs.next()) {
        int x = rs.getInt("a");
        String s = rs.getString("b");
        float f = rs.getFloat("c");
    }
This short code fragment uses the DriverManager class to connect to a
database driver and log into the database, instantiates a Statement object that carries
your SQL language query to the database, instantiates a ResultSet object that retrieves
the results of your query, and executes a simple while loop, which retrieves and
displays those results. It's that simple.
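In current Java versions the same three steps are usually written with try-with-resources and a PreparedStatement, so connections and statements close automatically. The sketch below reuses the tutorial's placeholder URL and credentials, which do not correspond to a real driver; treat it as a pattern rather than runnable configuration.

```java
import java.sql.*;

class QueryDemo {
    // Parameterized form of the tutorial's query.
    static final String SQL = "SELECT a, b, c FROM Table1 WHERE a > ?";

    static int sumColumnA(String url, String user, String pass, int min)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, pass);
             PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setInt(1, min);
            try (ResultSet rs = ps.executeQuery()) {
                int sum = 0;
                while (rs.next()) sum += rs.getInt("a");
                return sum;
            }
        }
    }
}
```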
JDBC Product Components
JDBC includes four components:
1. The JDBC API
The JDBC API provides programmatic access to relational data from the Java
programming language. Using the JDBC API, applications can execute SQL
statements, retrieve results, and propagate changes back to an underlying data source.
The JDBC API can also interact with multiple data sources in a distributed,
heterogeneous environment.
The JDBC API is part of the Java platform, which includes the Java Standard
Edition (Java SE) and the Java Enterprise Edition (Java EE). The JDBC 4.0
API is divided into two packages: java.sql and javax.sql. Both packages are included
in the Java SE and Java EE platforms.
2. The JDBC Driver Manager:
The JDBC DriverManager class defines objects which can connect
Java applications to a JDBC driver. DriverManager has traditionally been the
backbone of the JDBC architecture. It is quite small and simple.
The Standard Extension packages javax.naming and javax.sql let you
use a DataSource object registered with a Java Naming and Directory
Interface (JNDI) naming service to establish a connection with a data source.
You can use either connecting mechanism, but using a DataSource object is
recommended whenever possible.
3. JDBC Test Suite:
The JDBC driver test suite helps you to determine that JDBC drivers will run
your program. These tests are not comprehensive or exhaustive, but they do
exercise many of the important features in the JDBC API.
4. JDBC-ODBC Bridge:
The Java Software bridge provides JDBC access via ODBC drivers. Note that
you need to load ODBC binary code onto each client machine that uses this driver.
As a result, the ODBC driver is most appropriate on a corporate network where
client installations are not a major problem, or for application server code written in
Java in a three-tier architecture.
This trail uses the first two of these four JDBC components to connect to
a database and then build a Java program that uses SQL commands to communicate
with a test relational database. The last two components are used in specialized
environments to test web applications, or to communicate with ODBC-aware DBMSs.
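The DataSource route recommended above might look like the following hedged sketch; the JNDI name "jdbc/FileShareDB" is an assumption for illustration, and the lookup only succeeds inside a container that has registered such a resource.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

class DataSourceDemo {
    // Illustrative JNDI name; a real deployment would configure its own.
    static final String JNDI_NAME = "jdbc/FileShareDB";

    // Looks up the registered DataSource and obtains a connection from it.
    static Connection connect() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup(JNDI_NAME);
        return ds.getConnection();
    }
}
```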
JDBC Architecture
Two-tier and three-tier Processing Models
The JDBC API supports both two-tier and three-tier processing models for
database access.
Figure 6.2: Two-tier Architecture for Data Access.
In the two-tier model, a Java applet or application talks directly to the data
source. This requires a JDBC driver that can communicate with the particular data
source being accessed. A user's commands are delivered to the database or other data
source, and the results of those statements are sent back to the user. The data source
may be located on another machine to which the user is connected via a network. This
is referred to as a client/server configuration, with the user's machine as the client, and
the machine housing the data source as the server. The network can be an intranet,
which, for example, connects employees within a corporation, or it can be the Internet.
In the three-tier model, commands are sent to a "middle tier" of services, which
then sends the commands to the data source. The data source processes the commands
and sends the results back to the middle tier, which then sends them to the user. MIS
directors find the three-tier model very attractive because the middle tier makes it
possible to maintain control over access and the kinds of updates that can be made to
corporate data. Another advantage is that it simplifies the deployment of applications.
Finally, in many cases, the three-tier architecture can provide performance advantages.
Figure 6.3: Three-tier Architecture for Data Access.
Until recently, the middle tier has often been written in languages such as C or
C++, which offer fast performance. However, with the introduction of optimizing
compilers that translate Java byte code into efficient machine-specific code and
technologies such as Enterprise JavaBeans, the Java platform is fast becoming the
standard platform for middle-tier development. This is a big plus, making it possible to
take advantage of Java's robustness, multithreading, and security features.
With enterprises increasingly using the Java programming language for writing
server code, the JDBC API is being used more and more in the middle tier of a three-
tier architecture. Some of the features that make JDBC a server technology are its
support for connection pooling, distributed transactions, and disconnected rowsets. The
JDBC API is also what allows access to a data source from a Java middle tier.
A Relational Database Overview:
A database is a means of storing information in such a way that information can
be retrieved from it. In simplest terms, a relational database is one that presents
information in tables with rows and columns. A table is referred to as a relation in the
sense that it is a collection of objects of the same type (rows). Data in a table can be
related according to common keys or concepts, and the ability to retrieve related data
from a table is the basis for the term relational database. A Database Management
System (DBMS) handles the way data is stored, maintained, and retrieved. In the case
of a relational database, a Relational Database Management System (RDBMS)
performs these tasks. DBMS as used in this book is a general term that includes
RDBMS.
Integrity Rules:
Relational tables follow certain integrity rules to ensure that the data they
contain stay accurate and are always accessible. First, the rows in a relational table
should all be distinct. If there are duplicate rows, there can be problems resolving which
of two possible selections is the correct one. For most DBMSs, the user can specify that
duplicate rows are not allowed, and if that is done, the DBMS will prevent the addition
of any rows that duplicate an existing row.
A second integrity rule of the traditional relational model is that column values
must not be repeating groups or arrays. A third aspect of data integrity involves the
concept of a null value. A database takes care of situations where data may not be
available by using a null value to indicate that a value is missing. It does not equate to a
blank or zero. A blank is considered equal to another blank, a zero is equal to another
zero, but two null values are not considered equal.
When each row in a table is different, it is possible to use one or more columns
to identify a particular row. This unique column or group of columns is called a primary
key. Any column that is part of a primary key cannot be null; if it were, the primary key
containing it would no longer be a complete identifier. This rule is referred to as entity
integrity.
Table 6 illustrates some of these relational database concepts. It has five
columns and six rows, with each row representing a different employee.

Employee Number  First Name  Last Name    Date_of_Birth  Car Number
10001            Axel        Washington   28-Aug-43      5
10083            Arvid       Sharma       24-Nov-54      null
10120            Jonas       Ginsberg     01-Jan-69      null
10005            Florence    Wojokowski   04-Jul-71      12
10099            Sean        Washington   21-Sep-66      null
10035            Elizabeth   Yamaguchi    24-Dec-59      null

Table 6: Employees
The primary key for this table would generally be the employee number because
each one is guaranteed to be different. (A number is also more efficient than a string for
making comparisons.)
It would also be possible to use First Name and Last Name because the
combination of the two also identifies just one row in our sample database. Using the
last name alone would not work because there are two employees with the last name of
"Washington." In this particular case the first names are all different, so one could
conceivably use that column as a primary key, but it is best to avoid using a column
where duplicates could occur.
If Elizabeth Taylor gets a job at this company and the primary key is First
Name, the RDBMS will not allow her name to be added (if it has been specified that no
duplicates are permitted). Because there is already an Elizabeth in the table, adding a
second one would make the primary key useless as a way of identifying just one row.
Note that although using First Name and Last Name is a unique composite key for this
example, it might not be unique in a larger database. Note also that Table 6 assumes
that there can be only one car per employee.
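The entity-integrity rule can be made explicit in the table's DDL; a hypothetical sketch (column names adapted from the table above, SQL types assumed):

```java
// Declaring Employee_Number as PRIMARY KEY makes the RDBMS reject
// null values and duplicate employee numbers automatically.
class EmployeesSchema {
    static final String CREATE_TABLE =
        "CREATE TABLE Employees (" +
        "  Employee_Number INTEGER PRIMARY KEY," +
        "  First_Name      VARCHAR(30)," +
        "  Last_Name       VARCHAR(30)," +
        "  Date_of_Birth   DATE," +
        "  Car_Number      INTEGER)";
}
```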
7. TESTING AND VALIDATION
7.1 Introduction:
Testing is the major quality control measure employed during software development.
Its basic function is to detect errors in the software. During requirement analysis
and design, the output is a document that is usually textual and non-executable. After
the coding phase, computer programs are available that can be executed for testing
purposes. This implies that testing has to uncover errors introduced during the coding
phase. Thus, the goal of testing is to uncover requirement, design, or coding errors in
the program.
The starting point of testing is unit testing. Here a module is tested separately,
often by the programmer himself while coding the module.
The purpose is to exercise the different parts of the module's code to detect
coding errors. After this, the modules are gradually integrated into subsystems, which are
then themselves integrated, eventually forming the entire system.
During module integration, integration testing is performed. Its goal
is to detect design errors, focusing on the interconnections between modules. After
the system has been put together, system testing is performed. Here the system is tested
against the system requirements to see whether all requirements are met and the system
performs as specified. Finally, acceptance testing is performed to demonstrate the operation of the system to the client.
7.2 Test Cases:
There are two different approaches to selecting test cases. In the first, the software or the
module to be tested is treated as a black box, and the test cases are decided based on the
specifications of the system or module. For this reason, this form of testing is also called
black-box testing. The focus here is on testing the external behavior of the system. In
structural testing, the test cases are decided based on the logic of the module to be tested.
A common approach here is to achieve some type of coverage of the statements in the
code.
The two forms of testing are complementary: one tests the external behavior, the
other tests the internal structure. Often structural testing is used for lower levels of
testing, while functional testing is used for higher levels.
Testing is an extremely critical and time-consuming activity, and it requires proper
planning of the overall testing process. Frequently the testing process starts with a test
plan. This plan identifies all testing-related activities that must be performed, specifies
the schedule, allocates the resources, and lays down guidelines for testing. The test plan
specifies the conditions that should be tested, the different units to be tested, and the
manner in which the modules will be integrated together.
Then, for each test unit, a test case specification document is produced, which
lists all the different test cases, together with the expected outputs, that will be used for
testing.
During the testing of the unit the specified test cases are executed and the actual
results are compared with the expected outputs. The final output of the testing phase is
the testing report and the error report, or a set of such reports. Each test report contains a
set of test cases and the result of executing the code with the test cases.
The error report describes the errors encountered and the action taken to remove the error.
Error Messages
The term error is used in two different ways. First, error refers to the discrepancy
between computed and observed values: the difference between the actual output of the
software and the correct output. In this interpretation, error is essentially a measure of
the difference between the actual and the ideal. Second, error is also used to refer to a
human action that results in the software containing a defect or fault. This second
definition is quite general and encompasses all phases.
A common belief is that errors largely occur during programming. As we can
see, however, errors occur throughout the development process. Moreover, the cost of
correcting the errors of different phases is not the same: it depends upon when the
error is detected and corrected. The cost of correcting an error is a function of where
it is detected.
As one would expect, the greater the delay in detecting an error after it occurs, the
more expensive it is to correct. Suppose an error occurs during the requirements phase and
is corrected only after coding; then the cost is higher than correcting it in the
requirements phase itself. The reason for this is fairly obvious: an error in the
requirements phase will affect the design and the coding as well. To correct the error
after coding is done requires both the design and the code to be changed, thereby increasing
the cost of correction.
The main moral of this section is that we should attempt to detect the errors that
occur in a phase during that phase itself, and should not wait until testing to detect them. This
is not often practiced; in reality, testing is sometimes the sole point where errors are
detected. Besides the cost factor, relying on testing as the primary means of error
detection is risky: error detection and correction should be a continuous process carried out throughout
software development. In terms of the development phase, what this means is that we
should try to validate each phase before starting the next.
7.3 TESTING TECHNIQUES:
Testing is a process that reveals errors in the program. It is the major quality
measure employed during software development. During testing, the program is executed
with a set of conditions known as test cases, and the output is evaluated to determine
whether the program is performing as expected.
To make sure that the system is free of errors, the following levels of
testing are applied at different phases of software development:
Unit Testing
Unit testing is done on individual modules as they are completed and become
executable. It is confined only to the designer's requirements.
Each module can be tested using the following strategies:
Black Box Testing:
In this strategy, test cases are generated as input conditions that fully
exercise all functional requirements of the program. This testing has been used to find
errors in the following categories:
Incorrect or missing functions
Interface errors
Errors in data structure or external database access
Performance errors
Initialization and termination errors
In this testing only the output is checked for correctness. The logical flow of the
data is not checked.
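A black-box case selection can be sketched as follows; the validation routine and its five-digit rule are hypothetical stand-ins for a module under test, and the cases are derived purely from that stated specification, not from the code:

```java
public class BlackBoxDemo {
    // Hypothetical specification: an employee number is valid
    // if and only if it consists of exactly five digits.
    public static boolean isValidEmployeeNumber(String s) {
        return s != null && s.matches("\\d{5}");
    }

    public static void main(String[] args) {
        // Test cases chosen from the specification alone:
        System.out.println(isValidEmployeeNumber("10001")); // valid value   -> true
        System.out.println(isValidEmployeeNumber("1001"));  // too short     -> false
        System.out.println(isValidEmployeeNumber("10a01")); // non-numeric   -> false
        System.out.println(isValidEmployeeNumber(null));    // missing input -> false
    }
}
```

Only the returned values are checked against the specification; how the routine arrives at them is deliberately ignored.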
White Box Testing:
Here the test cases are generated from the logic of each module by drawing
flow graphs of that module, and logical decisions are tested in all cases. It has been
used to generate test cases in the following situations:
Guarantee that all independent paths have been executed.
Execute all logical decisions on their true and false sides.
Execute all loops at their boundaries and within their operational bounds.
Exercise internal data structures to ensure their validity.
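Executing a logical decision on both its true and false sides can be sketched as follows; the two-branch method is a hypothetical stand-in for a module under test:

```java
public class WhiteBoxDemo {
    // Hypothetical method with a single logical decision.
    public static String classifyCarNumber(Integer carNumber) {
        if (carNumber == null) {          // the decision under test
            return "no car assigned";     // taken when the condition is true
        }
        return "car #" + carNumber;       // taken when the condition is false
    }

    public static void main(String[] args) {
        // One test case per side of the decision gives full branch coverage:
        System.out.println(classifyCarNumber(null)); // exercises the true side
        System.out.println(classifyCarNumber(5));    // exercises the false side
    }
}
```

A white-box tester would derive both cases from the flow graph of the method, ensuring that every independent path through it is executed at least once.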
Integration Testing
Integration testing ensures that software and subsystems work together as a
whole. It tests the interface of all the modules to make sure that the modules behave
properly when integrated together.
System Testing
System testing involves in-house testing of the entire system before delivery to the user. Its
aim is to show that the system meets all requirements of the client's specification.
Acceptance Testing
This is pre-delivery testing in which the entire system is tested at the client's site on
real-world data to find errors.
Validation Testing
The system has been tested and implemented successfully, ensuring that
all the requirements listed in the software requirements specification are completely
fulfilled. In case of erroneous input, corresponding error messages are displayed.
Compiling test:
It was a good idea to do our stress testing early on, because it gave us time to fix
some of the unexpected deadlocks and stability problems that only occurred when
components were exposed to very high transaction volumes.
Execution Test:
This program was successfully loaded and executed. Because of good
programming there were no execution errors.
8. SYSTEM SECURITY
8.1. Introduction:
The protection of computer-based resources (hardware, software,
data, procedures, and people) against unauthorized use or natural
disaster is known as system security.
System security can be divided into four related issues:
Security
Integrity
Privacy
Confidentiality
System Security:
It refers to the technical innovations and procedures applied to the hardware and
operating systems to protect against deliberate or accidental damage from a defined
threat.
Data Security:
It is the protection of data from loss, disclosure, modification and destruction.
System Integrity:
It refers to the proper functioning of hardware and programs, appropriate
physical security, and safety against external threats such as eavesdropping and
wiretapping.
Privacy:
It defines the rights of users or organizations to determine what information
they are willing to share with or accept from others, and how the organization can be
protected against unwelcome, unfair, or excessive dissemination of information about it.
Confidentiality:
It is a special status given to sensitive information in a database to minimize the
possible invasion of privacy. It is an attribute of information that characterizes its need
for protection.
To set up authentication for Web Applications:
1. Open the web.xml deployment descriptor in a text editor or use the
Administration Console. Specify the authentication method using the <auth-method> element. The available options are:
Basic:
Basic authentication uses the Web browser to display a username/password
dialog box. The username and password are authenticated against the realm.
Form:
Form-based authentication requires that you return an HTML form containing
the username and password. The fields returned from the form elements must be:
j_username and j_password, and the action attribute must be j_security_check. Here is
an example of the HTML coding for using FORM authentication:
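A minimal sketch of such a form follows. The field names `j_username` and `j_password` and the action `j_security_check` are fixed by the servlet specification; the surrounding layout and labels are illustrative only:

```html
<form method="POST" action="j_security_check">
  Username: <input type="text" name="j_username"><br>
  Password: <input type="password" name="j_password"><br>
  <input type="submit" value="Log In">
</form>
```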
The resource used to generate the HTML form may be an HTML page, a JSP,
or a servlet. You define this resource with the <form-login-config> element.
The HTTP session object is created when the login page is served. Therefore,
the session.isNew() method returns FALSE when called from pages served after
successful authentication.
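Putting the pieces together, the deployment-descriptor side of the configuration might look like the fragment below. The element names follow the standard web.xml schema; the realm name and the login/error page paths are placeholders, not part of the original project:

```xml
<login-config>
  <auth-method>FORM</auth-method>
  <realm-name>default</realm-name>
  <form-login-config>
    <form-login-page>/login.jsp</form-login-page>
    <form-error-page>/loginError.jsp</form-error-page>
  </form-login-config>
</login-config>
```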
9. CONCLUSION
We have implemented a framework for sharing and performing analytical queries on
historical multidimensional data in unstructured peer-to-peer networks. In our approach,
resources are maintained across the P2P network so that peers can access and pose
queries against the data published by others.
Our solution is based on suitable data partitioning and indexing techniques, and
on mechanisms for data distribution. The test results showed the effectiveness of our
approach in providing fast and accurate query answers, and in ensuring the robustness
that is mandatory in a peer-to-peer setting.
BIBLIOGRAPHY
Software Engineering: Roger S. Pressman.
System Analysis and Design: Elias M. Awad.
Database Management System: Henry F. Korth.
Programmer's Guide: Manual, Microsoft Press.
www.w3schools.com
www.webbloodbank.org