Republic of Iraq
Ministry of Higher Education
And Scientific Research
Baghdad University
College of Science
CPU & Main Storage Load Evaluation
A Project Submitted to the College of Science, Baghdad
University, in Partial Fulfillment of the Requirements for
the Degree of B.Sc. in Computer Science
BY
Marwan H. Mohameed Ali
Shahad F. Mohameed Ali
SUPERVISED BY
LECTURER
Husam Ali
Khulood Iskandar
I dedicate my research to the best and greatest people in the
world, who gave me feeling and passion:
**My Mother and My Father**
I ask God to grant good fortune to my mother and my father.
I also dedicate this research to all my friends, and to my
stars in the sky, Mr. Husam Ali and Ms. Khulood Dagher.
Thank you
Contents
Abstract .................... Page 2
Chapter One
1.1 Introduction .................... Page 4
1.2 The Aim of the Project .................... Page 5
1.3 Project Layout .................... Page 6
Chapter Two
2.1 Introduction .................... Page 8
2.2 Load Balance Methods .................... Page 9
2.2.1 Round Robin .................... Page 9
2.2.2 Ratio (member) .................... Page 9
2.2.3 Dynamic Ratio (member) .................... Page 9
2.2.4 Fastest (node) .................... Page 9
2.2.5 Least Connections (member) .................... Page 10
2.2.6 Weighted Least Connections (member) .................... Page 10
2.2.7 Observed (member) .................... Page 11
2.2.8 Predictive (member) .................... Page 11
2.2.9 Least Sessions .................... Page 11
2.2.10 L3 Address .................... Page 12
2.3 How Server Load Balance Works .................... Page 12
2.4 Network Load Balance Topologies .................... Page 14
2.4.1 Routed Mode .................... Page 15
2.4.2 Transparent Mode .................... Page 16
2.4.3 One Arm Mode .................... Page 17
2.4.4 Direct Server Return .................... Page 18
Abstract
In CPU & Main Storage Load Evaluation we observed the usage of
main storage and CPU for our own personal computer and for every
computer in the LAN, and provided charts of the CPU usage and main
storage usage. The most serious problem we faced is that Java
programs have no method for measuring CPU usage, so we used a DLL
file to obtain this service.
Chapter One
1.1 Introduction
Load balancing is dividing the amount of work that a computer has to
do between two or more computers or a computer cluster, network links,
central processing units, disk drives, or other resources, to achieve
optimal resource utilization, maximize throughput, minimize response
time, and avoid overload. Using multiple components with load
balancing, instead of a single component, may increase reliability
through redundancy, so that more work gets done in the same amount of
time and, in general, all users get served faster. Load balancing can be
implemented with hardware, software, or a combination of both.
On the Internet, companies whose Web sites get a great deal of traffic
usually use load balancing. There are several approaches to load
balancing Web traffic. One approach is to route each request in turn,
in round-robin fashion, to a different server host address in the
Domain Name System (DNS) table. Usually, if two servers are to
participate in the load balancing, a third server is needed to
determine which server to assign the work to.
Since load balancing requires multiple servers, it is usually
combined with failover and backup services. In some approaches, the
servers are distributed over different geographic locations.
Load balancing ensures high availability for applications by
monitoring the health and performance of individual servers in real time.
This ensures that local users are always connected to a fully functional,
responsive system that is best suited for handling their requests.
Many content-intensive applications have scaled beyond the point
where a single server can provide adequate processing power. Both
enterprises and service providers need the flexibility to deploy additional
servers quickly and transparently to end users. Server load balancing
makes multiple servers appear as a single server – a single virtual service
– by transparently distributing user requests among the servers. The
highest performance is achieved when the processing power of servers is
used intelligently. Advanced server load-balancing products can direct
end-user service requests to the servers that are least busy and therefore
capable of providing the fastest response times. Naturally, the load-
balancing device should be capable of handling the aggregate traffic of
multiple servers.
1.2 The Aim of the Project
The aim of the project is to design and implement a Java load-
balancing evaluation system that analyzes the load of the servers in the
network and determines which node needs to be balanced and which node
can share in distributing the load. The proposed system can work on both
homogeneous and non-homogeneous networks. The final aim is to reach a
system that can evaluate the load without depending on the operating
system used on each server.
1.3 Project Layout
This project contains four chapters. Chapter one is an introduction to
the project and its purpose and aim. Chapter two is an introduction to
the general and specific field of this project (load balancing). Chapter
three shows in detail the practical implementation of the project,
describing its code step by step. Chapter four discusses the conclusions
and results of the project and the related work in this field; it also
outlines the future work of this project and lists the resources used to
accomplish it.
Chapter Two
2.1 Introduction
Several years ago, a new type of network appliance came into the
market, the server load balancer (SLB). Server load balancers came into
being at a time when computers (typically PCs) did not offer the capacity
to host busy web sites, and so it was necessary to replicate web sites
across multiple PCs to achieve scalability and performance. The server
load balancer treats multiple PCs as one large virtual PC, thereby
providing the capacity required to handle large volumes of traffic with
peak responsiveness.
So what does load balancing have to do with ensuring the availability
of web sites, preventing unplanned downtime from crashes, disasters and
attacks, and facilitating planned downtime for backup, maintenance and
upgrades?
While performance and scalability were originally the hallmarks of
server load balancers, high availability has always been a key benefit.
(After all, a down system offers no performance whatsoever and can
service exactly zero users.) So one of the capabilities of good server load
balancers is to cope with the failure of any of the servers it is load
balancing, thereby shifting the work to other servers. Server load
balancers furthermore allow any server to be taken out of operation,
sharing its load among the remaining servers. Thus, as a byproduct of
facilitating scalability for web sites, server load balancers solve many
unplanned downtime problems and also facilitate planned downtime.
2.2 Load Balance Methods
Load balancing calculations can be localized to each pool, which is
called member-based calculation, or they may apply to all pools of which a
server is a member, which is called node-based calculation. There are
different load balancing methods, as shown below:
2.2.1. Round Robin: This is the default load balancing method. Round
Robin mode passes each new connection request to the next server in
line, eventually distributing connections evenly across the array of
machines being load balanced.
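The Round Robin method described above can be sketched in Java (the language used in this project). This is a minimal illustration of the algorithm, not BIG-IP code; the class and server names are ours.

```java
import java.util.List;

// Minimal sketch of Round Robin: each new connection request goes to
// the next server in line, wrapping around the array of machines.
public class RoundRobin {
    private final List<String> servers;
    private int next = 0;

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    // Return the next server in line, then advance the cursor.
    public synchronized String pick() {
        String server = servers.get(next);
        next = (next + 1) % servers.size();
        return server;
    }
}
```

Over many requests this distributes connections evenly across the array, exactly as the text states.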
2.2.2. Ratio (member): The BIG-IP system distributes connections
among pool members or nodes in a static rotation according to the
defined ratio weights, thus the number of connections that each system
receives over time is proportional to the defined ratio weight for each
pool member or node.
2.2.3. Dynamic Ratio (member): The Dynamic Ratio methods select a
server based on various aspects of real-time server performance analysis.
These methods are similar to the Ratio methods, except that with
Dynamic Ratio methods, the ratio weights are system-generated, and the
values of the ratio weights are not static. These methods are based on
continuous monitoring of the servers, and the ratio weights are therefore
continually changing.
2.2.4. Fastest (node): The Fastest methods select a server based on the
least number of current sessions. These methods require that you assign
both a Layer 7 and a TCP type of profile to the virtual server. If a
Layer 7 profile is not configured, the virtual server falls back to Least
Connections load balancing mode.
2.2.5. Least Connections (member): The Least Connections
methods are relatively simple in that the BIG-IP system passes a new
connection to the pool member or node that has the least number of active
connections.
Note: If the One Connect feature is enabled, the Least Connections
methods do not include idle connections in the calculations when
selecting a pool member or node. The Least Connections methods use
only active connections in their calculations.
2.2.6. Weighted Least Connections (member): Like the Least
Connections methods, these load balancing methods select pool members
or nodes based on the number of active connections. However, the
Weighted Least Connections methods also base their selections on server
capacity. The Weighted Least Connections (member) method specifies
that the system uses the value you specify in Connection Limit to
establish a proportional algorithm for each pool member.
The system bases the load balancing decision on that proportion and
the number of current connections to that pool member. For example,
member_a has 20 connections and its connection limit is 100, so it is at
20% of capacity. Similarly, member_b has 20 connections and its
connection limit is 200, so it is at 10% of capacity. In this case, the
system selects member_b. This algorithm requires all pool
members to have a non-zero connection limit specified.
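The proportional rule in the member_a/member_b example above can be sketched as follows. This is an illustration of the calculation, not BIG-IP code; the arrays and names are ours.

```java
// Sketch of Weighted Least Connections: each member's load is its
// current connection count divided by its connection limit, and the
// member at the smallest fraction of capacity is selected. All limits
// must be non-zero, as the text requires.
public class WeightedLeastConnections {
    public static int pick(int[] connections, int[] limits) {
        int best = 0;
        double bestLoad = (double) connections[0] / limits[0];
        for (int i = 1; i < connections.length; i++) {
            double load = (double) connections[i] / limits[i];
            if (load < bestLoad) {
                best = i;
                bestLoad = load;
            }
        }
        return best;
    }
}
```

With connections {20, 20} and limits {100, 200}, the loads are 20% and 10%, so the second member is picked, matching the example in the text.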
The Weighted Least Connections (node) method specifies that the
system uses the value you specify in the node's Connection Limit setting
and the number of current connections to a node to establish a
proportional algorithm. This algorithm requires all nodes used by pool
members to have a non-zero connection limit specified. If all servers have
equal capacity, these load balancing methods behave in the same way as
the Least Connections methods.
Note: If the One Connect feature is enabled, the Weighted Least
Connections methods do not include idle connections in the calculations
when selecting a pool member or node. The Weighted Least Connections
methods use only active connections in their calculations.
2.2.7. Observed (member): With the Observed methods, nodes are
ranked based on the number of connections. The Observed methods track
the number of Layer 4 connections to each node over time and create a
ratio for load balancing.
2.2.8. Predictive (member): The Predictive methods use the ranking
methods used by the Observed methods, where servers are rated
according to the number of current connections. However, with the
Predictive methods, the BIG-IP system analyzes the trend of the ranking
over time, determining whether a node's performance is currently
improving or declining. The servers with performance rankings that are
currently improving, rather than declining, receive a higher proportion of
the connections.
2.2.9. Least Sessions: The Least Sessions method selects the server
that currently has the least number of entries in the persistence table. Use
of this load balancing method requires that the virtual server reference a
type of profile that tracks persistence connections, such as the Source
Address Affinity or Universal profile type.
Note: The Least Sessions methods are incompatible with cookie
persistence.
2.2.10. L3 Address: This method functions in the same way as the
Least Connections methods.
2.3 How the Server Load Balancer Works
Traditionally, a single client issues a request to a single server to
get some sort of data. See Fig 1.
Fig 1 Single client request Single server
As the number of clients grows, the number of servers grows too, and
the load balancer is brought in. It owns a single Virtual IP and spreads
the traffic to multiple servers. See Fig 2.
Fig 2 Multiple Clients Multiple Servers
The load balancer has to:
1. Sit between the clients and the servers.
2. See both the incoming and outgoing packets. A simple rule of
TCP/IP: clients sending traffic to the Virtual IP, or VIP, have to get
replies back from the VIP; otherwise, the whole process breaks (the only
exception is DSR mode, which is covered at the end).
With this in mind, see Fig 3 to examine the packet flow in routed mode.
The client 1.1.1.1 sends a request to the server load balancer with a
VIP of 10.10.10.10 (it is assumed there is a router between the Internet
client and the load balancer). The source of the packet is 1.1.1.1, with
the destination labeled 10.10.10.10. When the load balancer gets the
packet, it chooses a server and rewrites the packet's destination to
20.20.20.21. At this point, the source IP on the packet is still 1.1.1.1,
so the server logs remain accurate with client machine information. In
response to this request, the server replies with a packet from
20.20.20.21 destined for 1.1.1.1. Since the load balancer is the server's
default gateway, the packet is handed to the load balancer's inside
interface, 20.20.20.200. Next, the SLB does something very important to
this whole process.
Fig 3 Packet Flow in Routed Mode
It takes the packet from the server destined for 1.1.1.1 and rewrites
the source address from 20.20.20.21 to the VIP address of 10.10.10.10.
After all, the client only knows it sent a request to the VIP and has no
knowledge of the backend servers. If it happened to get a packet from one
of the backend servers, it wouldn't know what to do with it and would
discard it. The client gets its request fulfilled by the VIP and everyone is
happy.
2.4 Network Load Balance Topologies
There are multiple ways of inserting a load balancer into a network.
The four most common topologies are routed, transparent, one-arm, and
Direct Server Return mode.
2.4.1 Routed Mode:
The virtual server has to sit on a different subnet than the real
servers. The load balancer routes traffic between these two subnets and
is configured as the default gateway for the real servers. See Fig 4.
Fig 4 Routed Modes
Properties:
1. Fast, efficient packet flow utilizing two or more interfaces.
2. Back-end servers can be masked from online clients.
3. Additional filtering can be done on the SLB through ACLs.
Cons:
1. Servers have to point their default route at the SLB.
2. The VIP has to be on a different subnet than the servers. To
address this last con, many companies create a new private
network.
2.4.2 Transparent Mode:
The virtual server sits on the same subnet as the real servers but is
physically separated from them by the load balancer, which serves as a
bridge. This mode is very easy to deploy and poses minimal disruption to
the network. See Fig 5.
Fig 5 Transparent Mode
Properties:
1. Low impact to the network as no new subnets need to be created.
2. No change to servers since they retain their default gateway.
Cons:
The load balancer has to be the only link between the servers and the
clients. Otherwise, a loop can cause high load on the network.
2.4.3 One Arm Mode:
This is also known as load balancing as a service, since the load
balancer can sit anywhere in the network. The virtual server can be on
the same subnet as the real servers or on a different subnet. See Fig 6.
Fig 6 One Arm Mode
Properties:
1. Easy to deploy as the load balancer only sees the load balanced
traffic.
2. No change to the servers as they still talk to the same default
gateway.
3. Perfect mode to deploy load balancer evaluation.
Cons:
In order for this mode to work properly, the load balancer rewrites
the source IP of the packet from the client's to one of its own (also known
as source NAT). The server sees every request as a request from the load
balancer and not from the real client on the Internet. If the traffic is
HTTP based, the client's IP can be embedded into a custom header for the
server to collect (using X-Forwarded-For).
2.4.4 Direct Server Return
This mode breaks many of the rules discussed previously. The load
balancer only sees half of the IP conversation, the incoming half, and
the server sends its replies directly back to the client. The virtual
server usually resides on the same subnet as the real servers. See Fig 7.
Fig 7 Direct Server Return
Properties:
1. Can be very fast as the load balancer only deals with half of the
conversation.
2. Perfect for streaming type traffic when the request is tiny compared
to the outbound traffic volume. Huge throughput can be obtained in
this mode.
Cons:
1. Advanced layer 7 features on the load balancer can't be used,
since it only sees half of the conversation: no SSL acceleration,
cookie stickiness, or layer 7 traffic manipulation. Most of the
capabilities of the load balancer are unusable.
2. Troubleshooting can be difficult. In practice, most companies are
moving away from Direct Server Return mode; the inability to use
layer 7 features renders a good portion of the load balancing
investment worthless. The mode you ultimately choose will depend
largely on the current topology of your network and which parts
can or cannot be altered.
Chapter Three
3.1 Introduction
This chapter discusses the main classes and methods used in this project
and explains how to use the proposed program.
3.2 Classes and Methods
One of the main classes in this work is the CPU class, which is used to
link the DLL to Java. This DLL is used to calculate the CPU percentage,
because Java does not support such low-level methods; in this project the
DLL was written in the C++ language to measure the CPU percentage and the
memory percentage in both the heap and non-heap areas.
The Runtime class is used to invoke the ping command on a specific
IP to check its state. The Process class, together with the Runtime class
and the getInputStream method, is used to read the output of the executed
ping command.
The output of executing the ping command may be a request timeout or
"destination host unreachable". In those two cases, the specific IP is
either not used, shut down, not available at this time, or there is a
firewall in the way blocking the response from that IP.
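The ping-based check described above can be sketched as follows. This is an illustration, not the project's source code: the method names classify and ping are ours, and the strings checked are the two failure cases named in the text (the ping flags shown are the Windows form and differ on other systems).

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Sketch of the reachability check: run ping through Runtime, read its
// output via getInputStream(), and classify the result text.
public class PingCheck {
    // True only if the output shows a reply and neither failure case.
    public static boolean classify(String pingOutput) {
        String out = pingOutput.toLowerCase();
        if (out.contains("request timed out") || out.contains("unreachable")) {
            return false;
        }
        return out.contains("reply from") || out.contains("bytes from");
    }

    // Invoke the ping command and collect its output line by line.
    public static boolean ping(String ip) throws Exception {
        Process p = Runtime.getRuntime().exec(new String[]{"ping", "-n", "1", ip});
        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) sb.append(line).append('\n');
        }
        return classify(sb.toString());
    }
}
```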
On the other hand, if the host responds to the ping command with a
reply, the specific IP address is available and ready to use at that time.
The available IPs are saved in a file; the saved IPs may be used to
determine which IPs are in use on the network, and the file of online IPs
may also be used as a security measure.
To explain the security measure, assume that the IPs are known and
have been assigned statically to the nodes in our network. If the test is
performed on an unknown IP that has not been assigned to any node in the
network, and we discover it is online, this may mean an intruder knows
the specific net ID used in our network, has assigned it to his node, and
has started to steal information or simply to load our network, which
would be dangerous.
Fig (3-1) below shows the GUI frame used to find a specific IP as
explained above:
Fig (3-1) Finding a specific IP
The IP field is used to determine the state of a specific IP. This
field is restricted by four functions programmed within our project to
check for syntax errors made by the user:
a- adjdot method: checks that there are no two consecutive dots
in the IP.
b- checkrange method: checks that the first three bytes of the IP
are in the range (0-255) and the last byte is in the range (1-244).
c- vaildip method: confirms that the IP contains only numbers.
d- dotcount method: confirms that the written IP has only four
bytes and that each byte is separated by only one dot.
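The four syntax checks above can be sketched as a single validator. The method names follow the project's own (adjdot, checkrange, vaildip, dotcount); their bodies and the composition into validate() are our reconstruction, and the (1-244) range for the last byte is taken as stated in the text.

```java
// Sketch of the four IP syntax checks described above.
public class IpChecker {
    // (a) no two consecutive dots
    public static boolean adjdot(String ip)   { return !ip.contains(".."); }
    // (c) only digits and dots
    public static boolean vaildip(String ip)  { return ip.matches("[0-9.]+"); }
    // (d) exactly four byte fields separated by single dots
    public static boolean dotcount(String ip) { return ip.split("\\.", -1).length == 4; }
    // (b) first three bytes in 0-255, last byte in 1-244 (as the text states)
    public static boolean checkrange(String ip) {
        try {
            String[] b = ip.split("\\.");
            if (b.length != 4) return false;
            for (int i = 0; i < 3; i++) {
                int v = Integer.parseInt(b[i]);
                if (v < 0 || v > 255) return false;
            }
            int last = Integer.parseInt(b[3]);
            return last >= 1 && last <= 244;
        } catch (NumberFormatException e) {
            return false;
        }
    }
    public static boolean validate(String ip) {
        return adjdot(ip) && vaildip(ip) && dotcount(ip) && checkrange(ip);
    }
}
```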
For writing to the file, it is safer to use a BufferedWriter. It
provides the newLine function, which saves each IP on a separate line,
and the flush function, which transfers whatever is saved in the buffer
to the file and clears the buffer to make it ready for new information.
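The BufferedWriter usage described above can be sketched as follows; the class and file names are illustrative, not taken from the project source.

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

// Sketch of saving the online IPs one per line, using newLine() and
// flush() as described in the text.
public class IpLog {
    public static void saveIps(String path, java.util.List<String> ips)
            throws IOException {
        try (BufferedWriter w = new BufferedWriter(new FileWriter(path))) {
            for (String ip : ips) {
                w.write(ip);
                w.newLine();  // one IP per line
                w.flush();    // push the buffer to the file immediately
            }
        }
    }
}
```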
The ManagementFactory class contains methods to receive information
about each of the following:
1- Heap memory usage: returns the current memory usage of the heap
that is used for object allocation. The heap consists of one or more
memory pools. The used and committed sizes of the returned memory usage
are the sums of those values over all heap memory pools, whereas the
initial and maximum sizes of the returned memory usage represent the
setting of the heap memory, which may not be the sum over all heap
memory pools. It contains functions to determine:
A- Max memory: returns the maximum amount of memory in bytes
that can be used for memory management.
B- Committed memory: returns the amount of memory in bytes that
is committed for the Java virtual machine to use.
C- Init memory: returns the amount of memory in bytes that the
Java virtual machine initially requests from the operating
system for memory management.
D- Used memory: returns the amount of used memory in bytes.
2- Non-heap memory usage: returns the current memory usage of the
non-heap memory that is used by the Java virtual machine. The non-heap
memory consists of one or more memory pools. The used and committed
sizes of the returned memory usage are the sums of those values over all
non-heap memory pools, whereas the initial and max sizes represent the
setting of the non-heap memory, which may not be the sum over all
non-heap memory pools (i.e. the same classification as for the heap).
3- Operating system: returns the management bean for the operating
system on which the Java virtual machine is running, and contains
methods to determine information about:
a- getArch: returns the operating system architecture.
b- getName: returns the operating system name.
c- getVersion: returns the operating system version.
d- getAvailableProcessors: returns the number of processors
available to the Java virtual machine.
4- Threads: returns the management bean for the thread system of the
Java virtual machine, and contains methods to determine information
about:
a- getDaemonThreadCount: returns the current number of live
daemon threads.
b- getPeakThreadCount: returns the peak live thread count since
the Java virtual machine started or the peak was reset.
c- getThreadCount: returns the current number of live threads,
including both daemon and non-daemon threads.
d- getCurrentThreadCpuTime: returns the total CPU time for the
current thread in nanoseconds. The returned value is of
nanosecond precision but not necessarily nanosecond accuracy. If
the implementation distinguishes between user mode time and
system mode time, the returned CPU time is the amount of time
that the current thread has executed in user mode or system mode.
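The ManagementFactory readings listed above can be gathered in one place, as sketched below. These are standard java.lang.management calls; the grouping into one helper class is ours.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

// Sketch of reading heap, non-heap, OS, and thread information from
// the ManagementFactory beans described in the text.
public class NodeInfo {
    public static long heapUsed() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
    }
    public static long nonHeapUsed() {
        return ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getUsed();
    }
    public static String osSummary() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        return os.getName() + " " + os.getVersion() + " (" + os.getArch() + ", "
                + os.getAvailableProcessors() + " processors)";
    }
    public static int liveThreads() {
        ThreadMXBean t = ManagementFactory.getThreadMXBean();
        return t.getThreadCount(); // daemon + non-daemon threads
    }
}
```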
In our project we use a multicast server that enables us to exchange
CPU and memory percentages from one node to all the others in the
network, using two methods:
- The first is the send method which, as its name suggests, is used
to send information over the network.
- The second is the receive method, used inside the server loop to
receive the information from all nodes in the network. A received packet
may contain the CPU percentage value of any node in the network. When a
CPU value arrives, the first step is to save the percentage value in
order to compare it with the other nodes' percentages and determine the
loaded node; the second step is to update one of the charts based on the
percentage received. See Fig (3-2).
Fig (3-2) Server Control CPU chart
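The multicast exchange described above can be sketched as follows. The group address, port, and payload format here are our assumptions, not taken from the project source; the send and receive methods mirror the two methods named in the text.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

// Sketch of exchanging CPU and memory percentages over multicast:
// every node sends to one group, and the server loop receives.
public class LoadMulticast {
    static final String GROUP = "230.0.0.1"; // assumed group address
    static final int PORT = 4446;            // assumed port

    // Encode one reading as "ip;cpu;mem".
    public static String format(String ip, int cpuPercent, int memPercent) {
        return ip + ";" + cpuPercent + ";" + memPercent;
    }

    // Decode a received payload back into its three fields.
    public static String[] parse(String payload) {
        return payload.split(";");
    }

    public static void send(String payload) throws Exception {
        try (MulticastSocket s = new MulticastSocket()) {
            byte[] buf = payload.getBytes(StandardCharsets.UTF_8);
            s.send(new DatagramPacket(buf, buf.length,
                    InetAddress.getByName(GROUP), PORT));
        }
    }

    public static String receiveOnce() throws Exception {
        try (MulticastSocket s = new MulticastSocket(PORT)) {
            s.joinGroup(InetAddress.getByName(GROUP));
            byte[] buf = new byte[256];
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            s.receive(p); // blocks until some node reports its load
            return new String(p.getData(), 0, p.getLength(), StandardCharsets.UTF_8);
        }
    }
}
```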
The green color in the CPU chart indicates that the received value is
in the range (0-20); if it falls in the second column, the value is in
the range (6-10). If the received packet contains memory information
instead, the column that matches the received value is changed in the
memory chart, as shown in Fig (3-3).
Fig (3-3) Server Control Memory chart
Looking at the system, if there is a node with a high CPU percentage
for a long time, the system will flag that loaded IP to avoid any system
halt: it marks the IP of that node as having a high CPU percentage and
"KIK"s (kicks) the node out of the system, though the node can log in
again later, as shown in Fig (3-4).
Fig (3-4) The use of KIK
When the system kicks the node out of the cycle, it sends the node a
message alerting it that it was kicked out of the cycle because of its
high CPU percentage load.
As mentioned before, when the packets containing the CPU usage
values are received, they are saved and a simple check is performed to
determine which node has a low CPU percentage, so that this node can be
used to execute an SQL statement written by another user.
If the SQL statement is a select statement, it is executed without
any checking and the answer is sent back to the user; but if the SQL
statement is anything other than a select statement, the system checks
whether the admin password is correct before executing it.
Chapter Four
4.1 Conclusions
Throughout this project the following conclusions were reached:
Java is platform independent due to byte code: the Java compiler
javac in the JDK compiles a source file (.java) to a byte code file
with the .class extension.
Java is portable: it runs with little or no modification on a
variety of computers.
Java is a robust language: it has built-in exception handling, so
Java deals with run-time errors and the program can continue
executing rather than terminating.
Java is multi-threaded: it has built-in multithreading, which makes
concurrency available, i.e. Java can perform a variety of operations
at the same time.
Java has its own virtual machine.
In the process of determining the IPs of the nodes attached to the
network, the method named isReachable, belonging to the InetAddress
class and referred to in our work as the "ping" command, is described
as follows:
InetAddress.getByName(string).isReachable(timeout);
This method checks whether the address is reachable in the network.
Best effort is made by the implementation to try to reach the host, but
firewalls and server configuration may block requests, resulting in an
unreachable status while some specific ports may be accessible. A typical
implementation will use ICMP ECHO REQUESTs if the privilege can be
obtained; otherwise it will try to establish a TCP connection on port 7
(Echo) of the destination host. The timeout value, in milliseconds,
indicates the maximum amount of time the try should take. If the
operation times out before getting an answer, the host is deemed
unreachable. A negative value will result in an
IllegalArgumentException being thrown.
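The quoted call can be written in compilable form as follows; the wrapper class and method names are illustrative.

```java
import java.net.InetAddress;

// Sketch of the isReachable() call described above. As the text notes,
// a negative timeout throws IllegalArgumentException.
public class Reach {
    public static boolean check(String host, int timeoutMillis) throws Exception {
        return InetAddress.getByName(host).isReachable(timeoutMillis);
    }
}
```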
In practice, the above method worked only once and failed at other
times because it has additional requirements; had we relied on it, our
project would not be easy for everyone to use. So we use the Runtime
class to execute a ping command through cmd from the Java language, and
filter the IPs based on the response of the ping command.
When we tried to run our project on different operating systems, we
discovered some difficulties. Running on Windows XP required some
updates, but if the OS was Vista or Windows 7 it was easy to install
the program and work with it directly. Running on Linux (Ubuntu), our
program worked efficiently after setting the paths.
When we tried to determine the percentage of CPU usage, we
discovered that Java does not support such a low-level function, so we
linked a .dll file, written in the C++ language, to provide this
capability in our program. The CPU class specifies how to link the .dll
file to the Java language.
When we tried to use a while(true) loop, we discovered that executing
it made the whole program freeze: if there is no condition inside the
loop that makes it stop, we can never stop it or control the program
again, because reaching a node in the network might take a long time for
some nodes, or a node might never be reached at all. So we were forced
to switch every while(true) loop to a timer, which helps the method
complete its job.
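The switch described above can be sketched with java.util.Timer: instead of a blocking while(true) loop, the polling step runs on the timer's own thread at a fixed period, so the rest of the program stays responsive. The helper name and the period are illustrative, not from the project source.

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch of replacing a while(true) loop with a Timer that repeats
// the polling step at a fixed period without blocking the caller.
public class PollTimer {
    public static Timer startPolling(Runnable step, long periodMillis) {
        Timer timer = new Timer(true); // daemon thread: won't block shutdown
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() { step.run(); }
        }, 0, periodMillis);
        return timer;
    }
}
```

The returned Timer can be cancelled at any time, which is exactly the control that the unbounded loop lacked.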
As we tried to graph the CPU percentage for all nodes (the state of
the system), we wanted a chart like the CPU chart provided by the
Windows OS. We tried to use a Java applet, but many functions were not
available, so we drew the chart as a bar chart, since it is easy to use
and control with the NetBeans services. There is also a database server
that facilitates our work, but it was not configured, so we downloaded
all the classes related to the database functions, installed them inside
the Java package, then built our database and worked with it on port ().
Note: this port is used by default by the database server and was not
configured by us.
4.2 Results
The figures below show in detail the results that we obtained from
this project.
Fig (3-5) illustrates the overall design of the project.
Fig (3-6) shows the start of the screen monitor on our node in case we
leave it; this is a security measure, and it cannot be stopped unless
the user enters the correct user name and password.
Fig (3-7) shows finding an IP's state on our network and forcing it to
join if it is online but not connected to the server.
Fig (3-8) shows some information about my computer and the usage of the
"Load Balancing evaluation system" on my node.
Fig (3-9) Resources usage.
Fig (3-10) Server control: illustrates the state of the network from
the CPU and memory points of view in general (for the whole network).
Fig (3-11) Server details: used to kick a highly loaded computer out,
to tell the network administrator which users are joined to the server
and which are not, and to monitor every node's CPU load and memory load.
Fig (3-12) Execute SQL statement over the network: shows how to execute
a specific SQL statement over the network.
4.3 Future Work
So far we have created a program that sends a request to all the
computers in the network and compares them; if a computer is the best in
terms of performance according to certain conditions, the request is
performed on that computer and the results are returned to the computer
that sent the request. If a computer is not the best one, or does not
meet the conditions specified by the program, it discards the request.
In the future we could use the CORBA technique or the RMI technique,
by which we can send the request only to the single node chosen to
perform the load balancing.
To our dear Dr. Loay E. George, who provided inspiration to us, and
for his helpful opinions.
To the great teachers Mr. Husam and Miss Khulood, who helped us in
every step along this year.
To all our friends who gave us moral support and brilliant ideas to
accomplish our work, especially Mhmood Fawzi, Idress Latif,
Ban Abd Allah, Jeneen Simak, Neriman Samir.
To my mother and father, who helped us with their prayers.
To my brothers and sisters, who listened to us and helped us every
time we needed it.
And in the first place we thank GOD, who helped us finish our studies
and gave us such good people to help and support us.
Republic of Iraq
Ministry of Higher Education and Scientific Research
University of Baghdad
College of Science
CPU & Main Storage Load Evaluation
A project submitted to the College of Science, University of Baghdad,
in partial fulfillment of the requirements for the B.Sc. degree in
Computer Science
Prepared by the students: Marwan Hussein Mohammed Ali, Shahad Faisal
Mohammed Ali
Supervised by the lecturers: Husam Ali, Khulood Iskandar