Solving the security constrained optimal power flow problem in a distributed computing environment



O.R. Saavedra

Indexing terms: Distributed processing, Parallel processing, Security-constrained optimal power flow

Abstract: The security constrained optimal power flow problem is solved using a distributed processing environment. By using a parallel programming system, a computer network can be viewed as a parallel virtual computer. An efficient and portable asynchronous algorithm, extended to include fault tolerance capabilities, has been used. This approach has been successfully implemented on a 10-workstation network. Results obtained with two Brazilian networks are also reported.

1 Introduction

During the last few years there has been an exponential growth in networked computing resources. The old paradigm of a single computer serving a whole organisation is being rapidly replaced by a large number of separate, but interconnected, autonomous computers. The performance of local networks is improving tenfold each decade [1-3]. The growth of computer networks has been accompanied by an increase in individual computational power. There are many motivations for the utilisation of workstation clusters: (a) to exploit the cost/performance benefits of RISC processors; (b) the availability of abundant MFLOPS in light-load periods for batch processing; (c) the utilisation of idle workstation cycles without financial impact; (d) scalability: a cluster provides an incremental growth path; (e) the workstation cluster offers the best features of sequential and parallel processing, due to the availability of high-speed processors and to the unbounded growth of computing power in parallel processing; (f) the desire to exploit heterogeneous computing environments.

On the other hand, energy management systems (EMS) show a clear tendency towards distributed architectures, because these provide an open standard architecture and because of the significant cost reduction of SCADA systems. The open distributed concept provides easy maintenance and high software and hardware performance as compared to a centralised architecture. Software migration from old architectures to the new ones which are emerging, as well as the development of new applications prepared to handle these changes of environment, are aspects which must be addressed. The aspect of portability must be imposed as a characteristic of new applications, so that these can easily adapt to new computational environments.

© IEE, 1996. IEE Proceedings online no. 19960677. Paper first received 1st November 1995 and in revised form 20th May 1996. The author is with the Electrical Engineering Department, Federal University of Maranhão, São Luís - MA - Brazil.

IEE Proc.-Gener. Transm. Distrib., Vol. 143, No. 6, November 1996

In [4] an asynchronous model for the security constrained optimal power flow (SCOPF) solution is proposed. This approach has two main characteristics: higher efficiency and better portability. Programming models and paradigms are used in the development of the algorithm. In addition to the increased efficiency, this model allows the development of applications that can be ported among different parallel computer architectures without significant loss of computing efficiency. Rather than simply parallelising a serial algorithm, constructs and concepts that are naturally suited to concurrent programming have been used. Message interchange is handled using an asynchronous producer-consumer programming model. Also, this programming style accommodates a variety of problem formulations in the family of SCOPF problems, and facilitates the mapping onto different physical environments. The algorithm has been successfully implemented and tested on a nine-processor shared-memory parallel machine and ported to a 64-processor distributed-memory parallel machine [5].

The proposal reported in [4] can be viewed as a systematic strategy for applying parallel processing to problems related to the optimal secure control of power systems. The concepts and ideas used there can also be extended to other types of problems, because they are independent of the application. This work demonstrates, in the case of SCOPF, that developing the algorithm with portability in mind makes the migration process easier.

In this paper, that approach is extended to include fault tolerance capabilities and then ported to a distributed system. The functions related to fault tolerance can be modelled as subtasks and easily integrated into the programming model and abstractions suggested in [4].

The distributed environment is composed of a workstation network and a parallel programming system which allows the network to be treated as a virtual parallel machine. There are many parallel programming systems for distributed environments. One of them is PVM, which can be obtained freely from public sites together with its documentation manual. This system has been used in this work; it provides a user-friendly environment. Results obtained in tests performed with two Brazilian networks are reported.




The first system is composed of 725 buses, 1212 branches and 76 adjustable power generators. The second has 1663 buses, 2349 branches and 99 adjustable power generators.

2 Outline of the concurrent algorithm

In this section, the concurrent algorithm is revised. More details are found in [4].

2.1 Security constrained optimal power flow

The objective of SCOPF is to determine a minimum cost operating point which will not lead to overloads if any contingency out of a given list occurs. SCOPF was originally suggested in [6], and extensions for dealing with postcontingency rescheduling capabilities were suggested in [7].

SCOPF is a large-scale optimisation problem, with m control variables and ne + 1 sets of constraints, where ne is the number of postcontingency scenarios resulting from outages. A linearised formulation of the security constrained optimal power flow is given in the following:

min f = c^T x   (1)

subject to

A_i x <= b_i,   i = 0, ..., ne   (2)

x_min <= x <= x_max   (3)

where x represents the control variables vector, c is the cost vector, eqn. 2 represents the operating constraints for the base case (i = 0) and for the ne configurations that result from each contingency in a list (i = 1, ..., ne), and eqn. 3 represents the limits on the control variables.
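As an illustration only, the linearised formulation can be instantiated on a toy problem. All numbers below are invented for the example, and the brute-force grid search merely exposes the structure of eqns. 1-3; the paper itself solves the LP with a dual relaxation method.

```python
from itertools import product

# Invented toy instance of eqns. 1-3: two control variables,
# one base-case row and one contingency row in A x <= b.
c = [1.0, 2.0]                      # cost vector c
A = [[-1.0, -1.0], [0.0, -1.0]]     # rows A_i (encode x1 + x2 >= 1, x2 >= 0.2)
b = [-1.0, -0.2]
x_min, x_max, step = 0.0, 1.0, 0.1  # box constraints of eqn. 3

def feasible(x):
    # Check every operating constraint A_i x <= b_i
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi + 1e-9
               for row, bi in zip(A, b))

n = round((x_max - x_min) / step)
grid = [round(x_min + k * step, 1) for k in range(n + 1)]
best = min((x for x in product(grid, repeat=2) if feasible(x)),
           key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
print(best)   # minimiser of f = c^T x on the grid → (0.8, 0.2)
```

Here the contingency row forces x2 upwards, so the unconstrained minimum (the origin) is infeasible and the secure optimum lies on the boundary of both constraints.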

Fig. 1 Serial algorithm
(flowchart: initial solution; ordering of the contingency list; contingency analysis; violation check; if a violation is found, select the corresponding contingency constraint and re-optimise)

2.2 The solution methodology

The concurrent algorithm of this paper was proposed in [4]; it is based on an extension of the algorithm originally proposed in [6] for the sequential solution of the security-constrained optimal power flow problem (Fig. 1). Two levels of relaxation are identified in this algorithm: a lower level and a higher level. At the lower level, the problem for i = 0, i.e. the base-case optimisation subproblem (security constraints relaxed), is solved by a dual relaxation method; the process is initialised with an unconstrained solution and is

594

gradually improved by sequential addition of active constraints until optimality is reached.

At the higher level of relaxation, contingency constraints are temporarily relaxed, and a solution for the base case (lower level of relaxation) is obtained. In the next step, a contingency analysis is performed at this operating point. If violations are detected, the solution obtained is gradually improved by including the contingency constraint that corresponds to the largest violation observed in each iteration.

The higher level of relaxation is naturally suitable for resolution in a multiprocessing environment, since the process of contingency analysis presents a reasonable granularity and each postcontingency scenario can be analysed in an almost independent manner.
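The two-level relaxation scheme can be sketched as follows. This is a hypothetical Python rendering (the paper's code is FORTRAN 77): `solve_base_case` and `violation` are toy stand-ins for the dual relaxation solve and the contingency analysis, and each contingency is reduced to a single required redispatch level.

```python
def solve_base_case(active):
    # Toy stand-in for the lower-level LP solve: the cheapest point in
    # [0, 10] satisfying every active contingency constraint (a lower bound).
    x = 0.0
    for bound in active:
        x = max(x, bound)
    return x

def violation(x, contingency):
    # How far the current operating point falls short of this contingency
    return max(0.0, contingency - x)

def scopf(contingencies):
    active = []                                  # higher level: all relaxed
    while True:
        x = solve_base_case(active)              # lower-level solve
        worst, cons = max((violation(x, c), c) for c in contingencies)
        if worst == 0.0:
            return x, active                     # secure optimum reached
        active.append(cons)                      # add worst-violated constraint

x, active = scopf([3.0, 7.5, 5.0])
print(x, active)   # → 7.5 [7.5]
```

Note that adding the single worst-violated constraint here also satisfies the others, which is exactly the behaviour the relaxation strategy exploits: few contingencies are binding at the optimum.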

2.3 Fault tolerance extensions

Distributed computing systems are potentially more reliable, because they have the so-called partial failure property: a failure in one processor does not affect the correct functioning of the other processors [8]. This advantage may be exploited when porting the application to a distributed environment, by including extensions that allow the application to tolerate hardware failures. The algorithm running in this new environment must include some prevention strategies to deal with host failures.

In order to include fault tolerance tools, the original proposal has been extended. Subtasks that deal with fault tolerance are simply added to the other tasks obtained from the decomposition of the algorithm. In later sections we discuss a proposal for the practical implementation of fault tolerance functions. The generic subtasks are: (i) fault tolerance manager; (ii) local fault tolerance subtask. Considering that the number of slave tasks may be altered after process mapping onto the physical environment has taken place (due to host failures), the original static programming model is redefined as a dynamic model.

3 Problem decomposition

Problem decomposition leads to four basic types of tasks: (a) optimisation; (b) contingency ordering; (c) contingency severity index calculation; (d) contingency analysis and constraint generation. Tasks (a-d) can be executed asynchronously in one or more processors.

3.1 Programming model

The programming model is given in Fig. 2. This model gives an abstract view of the actual program. The abstraction is independent of the physical environment where the program will be executed, and it helps the development of portable concurrent applications. In this paper, we consider a dynamic programming model, i.e. after the physical environment (number of processors/hosts) has been defined and the tasks have been mapped onto this environment, the possibility of alterations in the number of slave tasks is taken into account. The programming model is composed of four basic structures:



(i) master task; (ii) slave tasks; (iii) communications; (iv) database.

Fig. 2 Concurrent programming model
(diagram: a master task and a database on the master host; slave tasks, each with a local fault tolerance subtask, connected to the master through communication channels)

In the current implementation of the approach, the subtasks related to fault tolerance were added to the tasks obtained from the decomposition. These are mapped onto the programming model as indicated below.

3.1.1 Master task: This is composed of the following subtasks: slave task initialisation; fault tolerance manager; execution of the core optimisation algorithm; ranking (ordering of the contingency list according to a given severity index); stopping criterion.

3.1.2 Slave tasks: These are composed of three basic subtasks: calculation of severity indices; generation of active constraints; local fault tolerance subtask.

The fault tolerance manager is a subtask of the master task. It manages the detection of, and the recovery from, partial failures of the system.

The local fault tolerance subtask sends 'signs of life' to the manager and, in case of a host failure, becomes responsible for part of the process that was running on the dead machine. In this approach, we suppose the master host to be reachable.
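The 'signs of life' mechanism can be pictured with a hypothetical timeout check (not the paper's code): the manager keeps the time each host last reported in and declares failed any host that has been silent for longer than the tolerance.

```python
def failed_hosts(last_seen, now, tolerance):
    # Hosts whose last 'sign of life' is older than the tolerance
    return sorted(h for h, t in last_seen.items() if now - t > tolerance)

last_seen = {"host1": 100.0, "host2": 104.0}   # invented timestamps
print(failed_hosts(last_seen, now=110.0, tolerance=7.5))   # → ['host1']
```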

3.2 Message interchange

Message interchange can be modelled through a producer-consumer abstraction, as shown in Fig. 3. The subtasks produce data that are consumed by some of the other subtasks and, at the same time, consume data that are generated by other subtasks.

The ranking subtask orders the contingency list according to the severity indices produced by the index calculation subtasks, and selects a subset of critical contingencies (the rest of the contingencies in the list are temporarily relaxed). Normally this subtask is invoked only once by the master task, at the beginning of the concurrent processing. The subtasks that deal with fault tolerance send and receive information related to the detection of host failures, load rescheduling, etc. Therefore, the data transmitted among the tasks are: (i) contingency ordering indices; (ii) ordered contingency list; (iii) state vector; (iv) contingency constraints; (v) data related to fault tolerance (FTD).

Fig. 3 Producer-consumer abstraction of message interchange
(diagram: current base-case state, active constraints, severity indices and contingency data flowing between the subtasks)

3.3 Mapping the tasks onto the architecture

The tasks were mapped onto the physical environment as follows: one master task was allocated to host 0, and one slave task was allocated to each host of the distributed system, including host 0. Due to the nature of the problem, the master task is idle for a considerable amount of time. When the master task at host 0 becomes idle (i.e. the master task receives no contingency constraints to be processed), the slave task is activated. Therefore, all processors are always doing some work towards the global solution of the problem.

3.4 Task distribution among processors

In the asynchronous approach, access to a contingency index must be exclusive. It is not acceptable for more than one slave task to solve the same contingency, since this degrades the performance of the algorithm.

A strategy that efficiently preserves the 'exclusive access' characteristic can be obtained by using the 'pvmfmytid' function. This function returns the identification number (tid) of the task from which it was invoked. The contingencies to be analysed by a certain task are then parametrised with the tid. For instance, the task 'mytid' will solve the following cases (indices of the contingency list):

mytid, mytid + nt, mytid + 2*nt, mytid + 3*nt, ...



where nt is the number of slave tasks; in the current implementation, nt is equal to the number of hosts of the distributed environment. For simplicity, we suppose the contingency list to be longer than nt.
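In Python terms (the actual implementation is FORTRAN 77 with PVM, and here task numbers 0 to nt-1 stand in for the real PVM tids), the tid-parametrised split amounts to a strided slice of the contingency list. The sets assigned to different tasks are disjoint by construction, which is what guarantees exclusive access without any locking.

```python
def my_cases(mytid, nt, n_contingencies):
    # Task 'mytid' analyses indices mytid, mytid + nt, mytid + 2*nt, ...
    return list(range(mytid, n_contingencies, nt))

# With nt = 3 slave tasks and a 10-entry contingency list:
print(my_cases(1, 3, 10))   # → [1, 4, 7]

# Together the tasks cover the whole list, with no index shared:
all_cases = sorted(i for t in range(3) for i in my_cases(t, 3, 10))
print(all_cases == list(range(10)))   # → True
```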

3.5 Stopping criterion

On the distributed-memory computer, each slave task owns part of the contingency list. When a contingency is tested and causes no violation, it is tagged with a serial integer number. These numbers are associated with the most recent state vector received by the slave task.

Therefore, when all contingencies associated with a slave task are tagged, it sends a flag to the master task. The global process is finished when the master task receives these flags from all the slave tasks.
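A hypothetical sketch of this termination rule: each contingency carries the serial number of the state vector under which it was last found violation-free, a slave raises its flag when all of its tags match the newest state vector, and the master stops once every slave has flagged.

```python
def slave_finished(tags, newest_serial):
    # All of this slave's contingencies tagged against the newest state vector?
    return all(t == newest_serial for t in tags.values())

def global_finished(flags, n_slaves):
    # Master terminates once a flag has arrived from every slave task
    return len(flags) == n_slaves

tags = {"outage-12": 4, "outage-40": 4}   # invented contingency tags
print(slave_finished(tags, 4))   # → True
print(slave_finished(tags, 5))   # → False: a newer state vector re-opens the checks
```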

4 Implementation issues

The previous sections presented the decomposition of the algorithm, aiming at its solution through parallel processing. An abstract model of concurrent programming was also presented. In this and the following sections, the mapping between these abstractions and the physical environment used is described.

Fig. 4 Workstation network
(diagram: workstations connected by an Ethernet segment)

4.1 The physical environment utilised

The physical environment used was formed by a computer network (hardware) and a parallel programming package (software). This software allows the computer network to be viewed as a single concurrent computational resource. The network used in the tests was formed by 10 SUN Sparc-2 workstations (Fig. 4), and the software package was PVM version 3.2.0.

4.2 PVM package

The PVM programming system provides functions to automatically start up tasks on the parallel virtual machine and allows information interchange among these tasks. Also, the available PVM version has host failure detection: if a host fails, the PVM system will continue to function and will automatically delete this host from the virtual machine. It remains the responsibility of the application developer to make the application tolerant to host failures [9]. PVM provides non-blocking asynchronous routines for sending and receiving messages; these routines are especially suited to the asynchronous approach implemented here. There are clear analogies between PVM function calls and the nCUBE parallel library; this fact makes the migration of the SCOPF to the new environment easy and comfortable.

4.3 Load process

On the PVM platform, the task distribution across the virtual machine can either be made automatically or be predefined by the user. In the first case, a heuristic is used to distribute the processes across the virtual machine. In the second case, the user forces the task allocation across the virtual machine. Both strategies were tested, but all tests performed showed that the second strategy (task distribution predefined by the user) is the most efficient one; it was therefore adopted and is the one reported in this work.

4.4 Communication

Using the PVM software system, a network of workstations can be viewed as a distributed-memory machine, where communication is performed by means of message passing. The main routines for sending and receiving messages are summarised in the Appendix. In PVM, sending a message is composed of three steps. First, a send buffer must be initialised by a call to the corresponding routine. Secondly, the message must be 'packed' into this buffer. Finally, the complete message is sent to another process. The PVM model guarantees that message order is preserved. For communication among tasks, PVM provides three routing options governing the set-up (or not) of direct task-to-task links for all subsequent communication. In the current implementation, the selected option was the direct task-to-task link, because in all tests performed it proved to be more efficient than any other option [9].

4.5 Implementation of fault tolerance functions

Some PVM library functions can be used to make the application fault tolerant. However, the response time can reach unacceptable levels for real-time applications; the time-out criteria must be compatible with the order of the global processing time. In the current implementation, some practical strategies were added in order to make the SCOPF operation more robust. These strategies were implemented using the common communication functions of PVM. The goal is to supply a minimum level of fault tolerance to the application, under the condition that the master host be reachable.

Two synchronisation points can be seen in the algorithm: the first corresponds to the end of the list ordering stage (and to the beginning of the contingency-constrained optimisation). The second corresponds to the end of the contingency-constrained optimisation stage, when optimality is reached. In the first case, the master task waits for the indices sent by the slave tasks, and then orders the contingency list. Host failures at the first stage are not critical, because the only impact will be some reduction in the performance of the algorithm (an inferior-quality ranking). On the other hand, the second stage is critical since, if a machine is down, the 'optimal' solution may be false.

Early implementations used the 'pvmfmstat' function of PVM. This function checks the host status; if the parameter returned is negative, the requested host has lost contact with the master and the cases it was to solve must be rescheduled among the other slave tasks; consequently, the host is omitted from the virtual machine. The basic structures of the subtasks are:

# Fault tolerance manager
call pvmfmstat(machine, info)
if (info .lt. 0) then
  - send a flag 'msgtype' informing the remaining hosts that a host failure has occurred
  - reschedule the load
endif

# Local fault tolerance subtask
- check whether a message of type 'msgtype' has arrived
- if it has, change the local configuration (node number and identification number)

The main obstacle to this strategy is the timescale involved. The response time of the 'pvmfmstat' function is in the range of minutes, while the application runs in the range of seconds. To overcome this difficulty, the following practical strategy has been implemented.

The response test was implemented only at the second synchronisation point. Rather than using the 'pvmfmstat' function, a strategy based on a time tolerance was utilised. If, after this tolerance, a machine does not respond, its load (contingencies to be analysed) is redistributed among the remaining slave tasks. The extreme case occurs when the master host takes over all the processes, that is, the SCOPF is executed in sequential mode. Note that if a machine has a heavy load (e.g. due to other concurrent processes), it may not reply within the tolerance and will be skipped. However, since the SCOPF is a real-time tool, these machines may be included in the optimisation process at the next SCOPF execution. In the case of a host failure, the machine will be omitted definitively, and PVM will delete it automatically from the configuration. The time tolerance adopted was 50% of the ideal execution time (the ideal time is given by the ratio of the sequential time to the number of machines). The idea behind this choice is that the parallel execution time should not (in case of host failure) become greater than the sequential processing time (algorithm executed on a single machine).
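The timing rule and the load takeover can be sketched as follows. This is a hypothetical Python rendering: the factor 0.5 is the 50% of ideal time stated above, and the round-robin takeover is one plausible way of spreading a silent host's cases over the survivors, not necessarily the paper's exact scheme.

```python
def time_tolerance(sequential_time, n_machines, factor=0.5):
    # 50% of the ideal parallel time T_seq / p, so that even after a
    # failure the run stays below the sequential processing time.
    return factor * sequential_time / n_machines

def take_over(dead_host_cases, alive_hosts):
    # Redistribute the silent host's contingencies over the survivors
    shares = {h: [] for h in alive_hosts}
    for i, case in enumerate(dead_host_cases):
        shares[alive_hosts[i % len(alive_hosts)]].append(case)
    return shares

print(round(time_tolerance(152.2, 10), 2))          # → 7.61
print(take_over([11, 14, 17], ["host0", "host2"]))  # → {'host0': [11, 17], 'host2': [14]}
```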

Fig. 5 Speed-ups for the test systems as a function of the number of workstations (1 to 10)

5 Results

The algorithm was tested on a homogeneous 10-workstation network running PVM 3.2.0; the codes were written in FORTRAN 77. Two real-life Brazilian systems operating at average load, with 725 and 1663 buses, respectively, were utilised. Some of their characteristics are given in Table 1. The list of contingencies is formed by single-branch outages. The objective function was the minimum deviation from the


optimal operating point obtained with all security constraints relaxed. The control variables are the controllable active power generations. The results are presented in Tables 2, 3 and 4.

Table 1: Test systems

System    Branches  Controllable generators  List of contingencies
725-bus   1212      76                       900
1663-bus  2349      99                       1555

Table 2: Characteristics of the optimal solution of the tested systems

System    Active constraints  MW rescheduled  Objective function
725-bus   6                   3290            0.421
1663-bus  7                   1230            0.795

Table 3: Processing times and efficiencies as a function of the number of hosts, for the 725- and 1663-bus systems

Number of        725-bus               1663-bus
workstations     Time (s)  Eff. (%)    Time (s)  Eff. (%)
1                41.0      100.0       152.2     100.0
2                21.2      96.0        78.1      97.5
3                15.0      91.0        52.3      97.1
4                11.6      88.3        39.7      96.0
5                9.3       88.0        31.8      95.8
6                7.8       87.5        26.7      95.0
7                7.5       78.6        23.0      94.6
8                6.7       77.3        20.8      91.6
9                6.3       72.6        18.7      90.2
10               5.8       70.2        17.6      86.3
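The efficiency column appears consistent with the usual definition E(p) = T(1)/(p·T(p)); recomputing the 10-host rows from the reported times gives roughly the tabulated values (the small discrepancies are presumably rounding in the published times).

```python
def efficiency_pct(t_seq, t_par, n_hosts):
    # E(p) = T(1) / (p * T(p)), expressed as a percentage
    return 100.0 * t_seq / (n_hosts * t_par)

print(round(efficiency_pct(152.2, 17.6, 10), 1))   # → 86.5 (table: 86.3)
print(round(efficiency_pct(41.0, 5.8, 10), 1))     # → 70.7 (table: 70.2)
```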

Table 4: Processing times for shutdown simulations using the 1663-bus system. The initial configuration was formed by 10 workstations

Hosts shut down  Time (s)  Tolerance (s)
1                43.1      7.5
2                80.7      150.0

Table 2 summarises the characteristics of the optimal solution for both systems. Table 3 shows the results from tests with the two Brazilian networks. The 725-bus system is composed of 1212 branches and 76 adjustable power generators; the 1663-bus system has 2349 branches and 99 adjustable power generators. The tests were performed with the computer network operating in a light-load period. Fig. 5 shows the speed-ups corresponding to Table 3. Table 4 shows the behaviour of the algorithm when the shutdown of one and two hosts of the parallel machine is simulated. In both cases, the initial configuration of the virtual machine was formed by 10 workstations, and the 1663-bus system was utilised. In this table, the second column gives the global process time and the third gives the adopted time tolerance. Under these abnormal conditions, the global time is increased due to the



conservative time tolerance assumed and to the recovery process time after task redistribution.

5.1 Comments

In this new environment, good performance was attained by the algorithm implementation. The extended asynchronous approach allows the inclusion of the functions related to fault tolerance. In particular, a practical and simple proposal aiming to make the algorithm fault tolerant was implemented, tested and reported.

5.2 Future work

In spite of the advantages associated with network utilisation, there are still many problems to be faced. Hardware heterogeneity is an aspect on which much research activity is concentrated. Furthermore, an application executed on a network must share resources with other users, which affects performance and communication. This is a complex problem with dynamic characteristics.

Further research must be undertaken to determine strategies for load allocation in a computer network, providing adequate exploitation of the available resources. It seems that a dynamic strategy may be adequate, but the criteria on which these strategies must be based are still open challenges.

6 Conclusions

Currently, computer applications are moving towards distributed processing environments. The emergence of powerful processors linked together by super high- speed data networks with decreasing costs is a reality.

This paper has presented an extended algorithm for the SCOPF solution that takes fault tolerance aspects into account, and the migration of that algorithm to a distributed processing environment. The distributed platform used was formed by a workstation network running the PVM 3.2.0 software package. Few changes were necessary for the implementation in the new environment; they are related to the call formats of the parallel library functions. In terms of fault tolerance, a simplified and practical strategy was suggested and implemented, under the condition that the master host be reachable.

Finally, results obtained in tests performed with two Brazilian networks were reported. The first system is composed of 725 buses, 1212 branches and 76 adjustable power generators. The second has 1663 buses, 2349 branches and 99 adjustable power generators.

7 Acknowledgments

The author wishes to acknowledge the support and cooperation of Dr Monticelli and Dr Garcia of Unicamp. This project was supported by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and FAPEMA (Fundação de Amparo à Pesquisa do Estado do Maranhão), Brazil.


Appendix: PVM library

Some PVM library functions utilised in this work are summarised in the following; more details are found in [9].

Loading the application: a loader program is implemented which starts the master task on host 0 (the master host) and one slave task on each host of the parallel virtual machine.

pvmfspawn (args): this function starts copies of an executable file on the virtual machine. In accordance with the arguments specified in 'args', this function may spawn processes automatically (PVM chooses where to spawn them) or as distributed by the user.

pvmfmytid (mytid): this function enrols a process into PVM on its first call and returns the 'tid' (the mytid argument) of the process on every call (analogous to 'whoami' in the nCUBE library).

Message sending: the functions associated with message sending among tasks are: 'pvmfinitsend (args)': clears the default send buffer and prepares it for packing a new message. 'pvmfpack (args)': packs the active message buffer with arrays of a prescribed data type. 'pvmfsend (args)': sends the message stored in the active send buffer to the process identified in one of the arguments 'args'.

Message receiving: the functions utilised for reading a message are: 'pvmfnrecv (args)': checks whether a message with the label given by 'args' has arrived from another task. This function is non-blocking, in the sense that it always returns immediately, either with the message or with the information that the message has not yet arrived. 'pvmfunpack (args)': unpacks an array of the given data type from the active receive buffer.


References

1 TURCOTTE, L.H.: 'A survey of software environments for exploiting networked computing resources'. Report of the Engineering Research Center for Computational Field Simulation, Mississippi State University, 1993
2 TANENBAUM, A.S.: 'Computer networks' (Prentice Hall International, 1989, 2nd edn.)
3 WU, F.F.: 'Optimisation in a distributed processing environment'. PICA Conference Tutorial, Baltimore, 1991
4 RODRIGUES, M., SAAVEDRA, O.R., and MONTICELLI, A.: 'Asynchronous programming model for the concurrent solution of the security constrained optimal power flow problem', IEEE Trans. Power Syst., 1994, 9, (4), pp. 2021-2027
5 SAAVEDRA, O.R., and MONTICELLI, A.: 'Solution of the security constrained optimal power flow on a distributed-memory computer'. Proceedings of the 10th Chilean Electrical Engineering Congress (in Portuguese), Valdivia, Chile, November 1993, pp. 189-19P
6 STOTT, B., and MARINHO, J.L.: 'Linear programming for power system network security applications', IEEE Trans. Power Appar. Syst., 1979, 98, pp. 837-848
7 MONTICELLI, A., PEREIRA, M.V.F., and GRANVILLE, S.: 'Security constrained optimal power flow with post-contingency corrective rescheduling', IEEE Trans. Power Syst., 1987, PWRS-2, (1)
8 BAL, H.E., STEINER, J.G., and TANENBAUM, A.S.: 'Programming languages for distributed computing systems', ACM Computing Surveys, 1989, 21, (3), pp. 261-322
9 'PVM 3 user's guide and reference manual'. May 1993

