Transcript of Lecture 25: Index Selection - University of...
Lecture 25:
Index Selection
Bruce Walsh lecture notes
Synbreed course
version 10 July 2013
Selection on an Index of Traits

A common way to select on a number of traits at once is to base selection decisions on a simple index of trait values,

I = \sum_j b_j z_j = b^T z

The resulting phenotypic and additive-genetic variances for this synthetic trait are

\sigma^2_I = \sigma(b^T z, b^T z) = b^T \sigma(z, z) b = b^T P b

\sigma^2_{A_I} = \sigma_A(b^T z, b^T z) = b^T \sigma_A(z, z) b = b^T G b
This gives the resulting heritability of the index as

h^2_I = \sigma^2_{A_I} / \sigma^2_I = (b^T G b) / (b^T P b)

If i_I is the selection intensity on the index, then the response in the index from the breeder's equation becomes

R_I = i_I h^2_I \sigma_I
Class problem

Consider the following data set (which will be used often). For the given index weights, compute the additive and phenotypic variances for I, and h^2 for the index.
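These index quantities are a few lines of NumPy. The matrices P, G and weights b below are illustrative stand-ins, since the lecture's data set is not reproduced in this transcript:

```python
import numpy as np

# Hypothetical phenotypic (P) and additive-genetic (G) covariance
# matrices for three traits -- NOT the lecture's soybean data.
P = np.array([[10.0, 2.0, 1.0],
              [ 2.0, 8.0, 1.0],
              [ 1.0, 1.0, 5.0]])
G = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.5, 0.5, 2.0]])
b = np.array([1.0, 1.0, 1.0])   # index weights, I = b'z

var_I  = b @ P @ b              # phenotypic variance of the index, b'Pb
varA_I = b @ G @ b              # additive-genetic variance, b'Gb
h2_I   = varA_I / var_I         # heritability of the index

print(var_I, varA_I, h2_I)      # -> 31.0 13.0 0.419...
```

With these stand-in matrices the index has h^2 of roughly 0.42, between the univariate heritabilities of the component traits.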
The response in the index due to selection is

R_I = i h^2_I \sigma_I = i \frac{b^T G b}{b^T P b} \sqrt{b^T P b} = i \frac{b^T G b}{\sqrt{b^T P b}}

How does selection on I translate into selection on the underlying component traits making up the index? The selection differential on trait j is

S_j = \frac{i}{\sigma_I} \sum_k b_k P_{jk}, \quad or in vector form \quad S = \frac{i}{\sigma_I} P b

Since R = G P^{-1} S, the response vector for the underlying component means becomes

R = G P^{-1} S = \frac{i}{\sigma_I} G b = i \frac{G b}{\sqrt{b^T P b}}
Class problem

For a selection intensity of i = 2.06 (upper 5% selected), compute the response in the index from the last problem.

Compute the vectors of selection differentials and responses (S and R) for the vector of component traits.
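These formulas translate directly into code. A sketch with illustrative P, G and b (again, not the lecture's data set):

```python
import numpy as np

# Hypothetical covariance matrices and index weights
P = np.array([[10.0, 2.0, 1.0],
              [ 2.0, 8.0, 1.0],
              [ 1.0, 1.0, 5.0]])
G = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.5, 0.5, 2.0]])
b = np.array([1.0, 1.0, 1.0])

i = 2.06                          # intensity for the upper 5% selected
sigma_I = np.sqrt(b @ P @ b)      # sigma_I = sqrt(b'Pb)

R_I = i * (b @ G @ b) / sigma_I   # response in the index, i b'Gb / sqrt(b'Pb)
S = i * (P @ b) / sigma_I         # selection differentials on the traits
R = i * (G @ b) / sigma_I         # responses: R = G P^{-1} S = i Gb / sigma_I

print(R_I, S, R)
```

Note that R_I = b'R, so the index response can be recovered from the component responses.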
Changes in the additive variance of the index

Bulmer's equation holds for the index. We can also apply the multivariate Bulmer equation to follow the changes in the variance-covariance matrix of the components.
Direction of selection vs. direction of response

Recall from our discussion of multivariate response that if we select in a particular direction, the actual vector R of response is generally in a different direction.

A selection index represents a desired direction of response. Hence, if we want optimal response in a particular direction (vector a), we might best achieve this by selecting in a different direction (vector b) such that the response is in the direction of vector a.
Selecting on I = a^T z gives a response R_a in a direction away from a. Can we find a direction (index weights b) such that selection on I = b^T z gives a response R_b in the desired direction of a? Yes: these are the Smith-Hazel weights.
The Smith-Hazel Index

Suppose we wish to maximize the response on the linear combination a^T z of traits. This is done by selecting those individuals with the largest breeding value for this index, namely H = a^T g. Hence, we wish to find a phenotypic index I = b^T z so that the correlation between H and I in an individual is maximized.

Key: this vector of weights b is typically rather different from a!
The Smith-Hazel Index

Smith and Hazel show that the maximal gain is obtained by selection on the index b_s^T z, where

b_s = P^{-1} G a

Thus, to maximize the response in a^T z in the next generation, we choose those parents with the largest breeding value for this trait, H = a^T g. This is done by choosing individuals with the largest values of I = b_s^T z.
Response under Smith-Hazel

Select on I_b = b_s^T z; the interest is the response in I_a = a^T z. Since R = i G b / \sqrt{b^T P b}, the response in the index of interest is

R_{I_a} = a^T R = i \frac{a^T G b_s}{\sqrt{b_s^T P b_s}}
In-class problem

We wish to improve the index I_a = a^T z. Compute the Smith-Hazel weights (using the previous P, G). Compare the response in I_a = a^T z under direct selection on a^T z with the response given by selection on b_s^T z.
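A numerical sketch of this comparison, using the standard Smith-Hazel solution b_s = P^{-1} G a with hypothetical P, G and a (not the lecture's soybean values):

```python
import numpy as np

# Hypothetical covariance matrices and desired direction a
P = np.array([[10.0, 2.0, 1.0],
              [ 2.0, 8.0, 1.0],
              [ 1.0, 1.0, 5.0]])
G = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.5, 0.5, 2.0]])
a = np.array([0.0, 1.0, 1.0])
i = 2.06

# Smith-Hazel weights: solve P b_s = G a, i.e. b_s = P^{-1} G a
b_s = np.linalg.solve(P, G @ a)

def response_in_a(b):
    """Response in I_a = a'z when selecting on b'z: i a'Gb / sqrt(b'Pb)."""
    return i * (a @ G @ b) / np.sqrt(b @ P @ b)

print(response_in_a(b_s))   # selection on the Smith-Hazel index
print(response_in_a(a))     # naive direct selection on a'z
```

For any positive-definite P, the Smith-Hazel weights give at least as large a response in a^T z as direct selection on a^T z does.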
Problems with Smith-Hazel

• The major issue is that both P (easy) and G (harder, less precise) must be estimated.
  – Especially problematic (for G) with a large number of component traits.
  – Errors in either can have a major impact on the index, significantly reducing precision.
  – P and G change each generation from LD.
• One estimation issue is that H = P^{-1} G can have negative eigenvalues.
  – The method of bending (regressing eigenvalues back to their means) provides one solution.
Bending

Tune (slowly increase) the bending parameter until all eigenvalues of H* are positive.

[Figure: the value of each eigenvalue of H* plotted against the value of the bending coefficient, from 0 to 1.]
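One common formulation of bending shrinks the eigenvalues toward their mean, H* = (1 - gamma) H + gamma * lambda_bar * I. A minimal sketch, assuming a symmetric H and this formulation:

```python
import numpy as np

def bend(H, gamma):
    """Regress the eigenvalues of H toward their mean:
    H* = (1 - gamma) H + gamma * mean(eigenvalue) * I."""
    lam_bar = np.trace(H) / H.shape[0]   # mean eigenvalue = trace / n
    return (1.0 - gamma) * H + gamma * lam_bar * np.eye(H.shape[0])

def min_bending(H, step=0.01):
    """Slowly increase gamma until all eigenvalues of H* are positive."""
    gamma = 0.0
    while np.linalg.eigvalsh(bend(H, gamma)).min() <= 0 and gamma < 1.0:
        gamma += step
    return gamma

# A symmetric matrix with one negative eigenvalue (hypothetical)
H = np.array([[1.0, 0.9],
              [0.9, 0.5]])
g = min_bending(H)
print(g, np.linalg.eigvalsh(bend(H, g)))
```

Since bending leaves the eigenvectors unchanged and only pulls the eigenvalues toward their mean, a small enough gamma makes H* positive definite while perturbing H as little as possible.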
Other indices

• Because of these estimation issues with Smith-Hazel, several other indices have been proposed.
• The idea is to weight information with higher confidence more heavily.
  – e.g., weight by heritabilities and ignore genetic correlations.
Other indices

Heritability index: b_i = a_i h^2_i
Estimated index:
Base index:
Elston (or weight-free) index:
Retrospective index: since R = G b, b = G^{-1} R
Restricted and Desired-gains Indices

Instead of maximizing the response on an index, we may (additionally) wish to restrict change in certain traits, or have a desired gain in specific traits in mind.

Suppose for our soybean data, we wish no response in trait 1.

Class Problem: Suppose the weights are a^T = (0, 1, 1), i.e., no weight on trait 1. Compute the response in trait 1 under this index.
Morley (1955): the simplest case, where we want to change trait z_1 while z_2 remains constant.

We want b_1, b_2 such that there is no response in trait 2. Since the response is proportional to G b, this requires (G b)_2 = b_1 \sigma_A(z_1, z_2) + b_2 \sigma^2_A(z_2) = 0. Setting b_1 = 1 and solving gives the weights b_2 = -\sigma_A(z_1, z_2) / \sigma^2_A(z_2).
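For the two-trait case, the constraint that the second component of Gb vanish is easy to verify numerically (with a hypothetical G):

```python
import numpy as np

# Hypothetical 2x2 additive-genetic covariance matrix for (z1, z2)
G = np.array([[4.0, 1.5],
              [1.5, 3.0]])

b1 = 1.0
b2 = -G[0, 1] / G[1, 1] * b1   # choose b2 so that (Gb)_2 = 0
b = np.array([b1, b2])

print(G @ b)   # second component is exactly 0: no response in trait 2
```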
Kempthorne-Nordskog restricted index

Suppose we wish the first k traits to be unchanged. Since the response of an index is proportional to G b, the constraint is C G b_r = 0, where C is an m × k matrix with ones on the diagonals and zeros elsewhere. Kempthorne and Nordskog showed that b_r can be obtained in terms of G_r = C G.
The cost of using a restricted index is less total response in the unrestricted components of the index.
Tallis restriction index

More generally, suppose we specify the desired gains d for k combinations of traits. Here the constraint is C G b_r = d.
Desired-gains index

Suppose we desire a gain of R. Then, since R = G b, the desired-gains index is b = G^{-1} R.
Class problem

Compute b for desired gains (in our soybean traits) of (1, -2, 0).
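A sketch of the desired-gains computation with a hypothetical G (the lecture's soybean G is not reproduced in this transcript):

```python
import numpy as np

# Hypothetical 3x3 additive-genetic covariance matrix
G = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.5, 0.5, 2.0]])
R_desired = np.array([1.0, -2.0, 0.0])   # desired gains

# Desired-gains weights: solve G b = R, i.e. b = G^{-1} R
b = np.linalg.solve(G, R_desired)
print(b)
```

Selecting on this b (with the appropriate intensity) produces responses proportional to the desired-gains vector.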
Non-linear indices

Care is required in considering the improvement goals when using a nonlinear merit function (index), as apparently subtle differences in the desired outcomes can become critical.

A related concern is whether we wish to maximize the additive-genetic value in merit in the parents or in their offspring. Again, with a linear index these are equivalent, as the mean breeding value of the parents equals the mean value in their offspring. This is not the case with nonlinear merit.
Quadratic selection index

Consider the optimal weights on a quadratic index, one with both linear weights and a matrix A of quadratic weights. The optimal weights combine the Smith-Hazel weights with the quadratic effects.
Linearization

The linear weights change with the current mean.
Fitting the best linear index to a nonlinear merit function
Sets of response vectors and selection differentials

Given a fixed selection intensity i, we would like to know the set of possible response vectors R and selection-differential vectors S. Since R = i G b / \sqrt{b^T P b}, we have G^{-1} R = i b / \sqrt{b^T P b}. Hence,

R^T G^{-1} P G^{-1} R = i^2
This equation describes a quadratic surface of possible R values. Similarly, using R = G P^{-1} S gives the surface

S^T P^{-1} S = i^2

These give the surfaces of all possible values of either R or S given the selection intensity. Nonlinear index selection uses these results.
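Both surfaces follow from R = i Gb/\sqrt{b^T P b} and S = i Pb/\sqrt{b^T P b}; a quick numerical check with hypothetical P, G and an arbitrary weight vector b:

```python
import numpy as np

# Hypothetical covariance matrices and arbitrary index weights
P = np.array([[10.0, 2.0, 1.0],
              [ 2.0, 8.0, 1.0],
              [ 1.0, 1.0, 5.0]])
G = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.5, 0.5, 2.0]])
b = np.array([0.3, -1.0, 2.0])
i = 2.06

sigma_I = np.sqrt(b @ P @ b)
R = i * (G @ b) / sigma_I        # response vector
S = i * (P @ b) / sigma_I        # selection-differential vector

Ginv_R = np.linalg.solve(G, R)   # G^{-1} R = i b / sigma_I
print(Ginv_R @ P @ Ginv_R)       # R' G^{-1} P G^{-1} R, equals i^2
print(S @ np.linalg.solve(P, S)) # S' P^{-1} S, also equals i^2
```

Whatever b is chosen, both quadratic forms land on the i^2 surface, which is exactly what makes these surfaces the feasible sets for R and S.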
Goddard's method

• Mike Goddard (1983) suggested a simple approach for ANY nonlinear index.
  – First, the planned total selection intensity over the course of the experiment is used in the previous results to obtain a surface of responses (response values in the vector of traits given the intensity).
  – Next, contours of each value of the nonlinear function are overlaid on this response surface. The largest value that intersects the response surface gives the optimal response.
    • Namely, the final trait values we are trying to achieve.
  – Using these values (final targets), a desired-gains index is used to construct the optimal weights to achieve this final target.
[Figure: contours for different selection intensities (dashed lines) overlaid with contours of equal value for the merit function (solid lines).]
Key: the optimal weights are a function of the intensity of selection. As the total selection intensity changes, so do the optimal weights. Hence, the optimal weights change as a function of the length of selection.
Sequential Approaches for Multitrait Selection

• Often not all traits can be selected at once.
• Index selection is generally optimal, but there may also be economic reasons for other approaches.
• Sequential approaches are used in this case:
  – Multistage selection: selecting on different traits (or indices) over different life stages.
  – Tandem selection: select different traits in different generations.
  – Independent culling: cull on the first trait, then select on the second, etc.
  – Selection of extremes: select any individual with ANY trait value in the uppermost fraction p of the population.
Multistage selection

Selecting on traits that appear in different life stages.

Optimal selection schemes: Cotterill and James's optimal two-stage scheme. Select first on x (saving proportion p_1), then on y (saving p_2 = p/p_1). The goal is to optimize the breeding value for merit, g.