Mathematics | Statistics » Isotalo–Puntanen: Decomposing the Watson efficiency in a linear statistical model

Basic data

Year, number of pages: 2007, 22 pages

Language: English


Uploaded: 15 August 2019

Size: 569 KB


Comment: McGill University







Content extract

Source: http://www.doksinet

Decomposing the Watson efficiency in a linear statistical model

Jarkko Isotalo & Simo Puntanen, Tampere, Finland
Ka Lok Chu & George P. H. Styan, McGill University, Montréal, Canada

Tartu, 27 June 2007

ABSTRACT

• In Chips1 (2004) we introduced a particular decomposition for the Watson efficiency of the OLS estimator β̂: a product of three factors.
• Chips2 (2005) shows that all three factors are related to the efficiencies of particular submodels or their transformed versions.
• There is an interesting connection between a particular reduction of the Watson efficiency and the concept of linear sufficiency.
• The efficiency and specific canonical correlations.
• A decomposition for the Bloomfield–Watson commutator criterion, and a condition for its specific reduction.
• The efficiency of K′β̂.

TOC

1. Introduction (a serious one …)
2. Decomposing the Watson efficiency
3. Canonical correlations
4. Decomposing the commutator criterion
5. References

1. Introduction

Consider the partitioned linear model

  y = Xβ + ε = X₁β₁ + X₂β₂ + ε,  M₁₂ = {y, Xβ, σ²V},

where E(y) = Xβ, E(ε) = 0, and cov(y) = cov(ε) = σ²V. Here

• y is an n × 1 observable random vector,
• ε is an n × 1 random error vector,
• X is a known n × p model matrix,
• β is a p × 1 vector of unknown parameters.

Denote H = P_X and M = I − H. The ordinary least squares estimator of Xβ is

  OLSE(Xβ) = Xβ̂ = ŷ = Hy = P_X y.

An unbiased estimator Gy is the best linear unbiased estimator (BLUE) of Xβ if

  GVG′ ≤ BVB′ for all B such that BX = X,

i.e., BVB′ − GVG′ is nonnegative definite (nnd) for every B satisfying BX = X.

In addition to M₁₂, we consider

  M₁ = {y, X₁β₁, V},
  M₁H = {Hy, X₁β₁, HVH},
  M₁₂·₁ = {M₁y, M₁X₂β₂, M₁VM₁},

where M₁ is a small model and M₁₂·₁ a reduced model.

• M₁₂·₁ is obtained by premultiplying M₁₂ by M₁.
• M₁H is obtained by premultiplying M₁ by H.

We consider a weakly singular model, which means that V may be singular but

  C(X) ⊂ C(V).  (*)

Under this model,

  β̃ = (X′V⁺X)⁻¹X′V⁺y,
  cov(β̃) = (X′V⁺X)⁻¹ = U⁻¹[X′VX − X′VM(MVM)⁻MVX]U⁻¹,  where U = X′X.

Hence

  φ₁₂ = eff(β̂ | M₁₂) = |cov(β̃)| / |cov(β̂)|
      = |X′X|² / (|X′VX| · |X′V⁺X|)
      = |X′VX − X′VZ(Z′VZ)⁻Z′VX| / |X′VX|
      = |I_p − X′VZ(Z′VZ)⁻Z′VX(X′VX)⁻¹|,

where Z = X⊥.

Chips1 introduced a new decomposition for φ₁₂:

  eff(β̂ | M₁₂) = eff(β̂₁ | M₁) · eff(β̂₂ | M₁₂) · α₁,

where eff(· | ·) refers to the Watson efficiency under a particular model, and α₁ is a specific determinant ratio.

• One of the key results in Chips2: 1/α₁ is the efficiency of β̂₁ under M₁H.
• Another interesting observation:

the reduction

  eff(β̂ | M₁₂) = eff(β̂₂ | M₁₂)  (1.2)

is closely connected to the concept of linear sufficiency.

• Formally, Fy is defined to be linearly sufficient for Xβ under M = {y, Xβ, V} if there exists a matrix A such that AFy is the BLUE of Xβ.

… that was the Introduction.

I told you it was supposed to be a serious one …

2. Decomposing the Efficiency

  eff(β̂₁ | M₁) = |X₁′X₁|² / (|X₁′VX₁| · |X₁′V⁺X₁|) := φ₁/₁,

  eff(β̂₂ | M₁₂) = eff(β̂₂ | M₁₂·₁)
    = |X₂′M₁X₂|² / (|X₂′M₁VM₁X₂| · |X₂′Ṁ₁X₂|) := φ₂/₁₂,

where Ṁ₁ = M₁(M₁VM₁)⁻M₁.

• Important Note: The BLUE of β₂ under the model M₁₂ coincides with the BLUE of β₂ under the model M₁₂·₁, i.e., β̃₂(M₁₂·₁) = β̃₂(M₁₂).

Theorem 1. The total Watson efficiency φ₁₂ of the OLSE(β) under the partitioned weakly singular linear model M₁₂ = {y, Xβ, V} can be expressed as the product

  eff(β̂ | M₁₂) = eff(β̂₁ | M₁) · eff(β̂₂ | M₁₂) · [1 / eff(β̂₁ | M₁H)],

where

  eff(β̂₁ | M₁H) := φ₁H = |X₁′X₁|² / (|X₁′VX₁| · |X₁′(HVH)⁻X₁|)
    = |I_p₁ − X₁′VM₁X₂(X₂′M₁VM₁X₂)⁻¹X₂′M₁VX₁(X₁′VX₁)⁻¹|.

Next we take a look at the conditions for a particular reduction of the total Watson efficiency.

Theorem 2. Let M₁₂ = {y, Xβ, V} be a partitioned weakly singular linear model. Then the following statements are equivalent:

(a) eff(β̂ | M₁₂) = eff(β̂₂ | M₁₂),
(b) C(X₁) ⊂ C(VX),
(c) Hy is linearly sufficient for X₁β₁ under M₁.

3. The commutator criterion

Theorem 3. The Bloomfield–Watson efficiency

  eff_BW(β̂ | M₁₂) = ψ₁₂ = ½‖HV − VH‖² = ‖HVM‖²

has the decomposition

  ψ₁₂ = ψ₁/₁ + ψ₂/₁₂ − ψ₁H,

where

  ψ₁/₁ = eff_BW(β̂₁ | M₁) = ‖P₁VM₁‖²,
  ψ₂/₁₂ = eff_BW(β̂₂ | M₁₂) = ‖P_{M₁X₂}VM‖²,
  ψ₁H = eff_BW(β̂₁ | M₁H) = ‖P₁VP_{M₁X₂}‖².

Consider the condition under which ψ₁₂ reduces to ψ₂/₁₂. It is interesting that this condition differs from the corresponding condition for the Watson efficiency.

Theorem 4. The Bloomfield–Watson efficiency has the property

  ψ₁₂ = ψ₂/₁₂  (3.2)

if and only if

  C(VX₁) ⊂ C(X).  (3.3)

Under a weakly singular linear model, (3.3) becomes C(X₁) ⊂ C(V⁺X).

References

Baksalary, J. K. & Kala, R. (1981). Ann. Statist., 9, 913–916.
Baksalary, J. K. & Kala, R. (1986). J. Statist. Plann. Inference, 14, 331–338.
Barnard, G. A. (1963). J. Roy. Statist. Soc. Ser. B, 25, 124–127.
Bartmann, F. C. & Bloomfield, P. (1981). Biometrika, 68, 67–71.
Bloomfield, P. & Watson, G. S. (1975). Biometrika, 62, 121–128.
Chu, K. L., Isotalo, J.,

Puntanen, S. & Styan, G. P. H. (2004). (Chips1) Sankhyā, 66, 634–651.
Chu, K. L., Isotalo, J., Puntanen, S. & Styan, G. P. H. (2005). (Chips2) Sankhyā, 67, 74–89.
Drygas, H. (1983). Sankhyā Ser. A, 45, 88–98.
Groß, J. & Puntanen, S. (2000). Linear Algebra Appl., 321, 131–144.
Rao, C. R. & Mitra, S. K. (1971). Generalized Inverse of Matrices and Its Applications. Wiley, New York.
Watson, G. S. (1955). Biometrika, 42, 327–341.
Zyskind, G. & Martin, F. B. (1969). SIAM J. Appl. Math., 17, 1190–1202.
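Numerical appendix (added here as an illustration, not part of the original talk): the multiplicative decomposition of the Watson efficiency in Theorem 1 and the additive decomposition of the Bloomfield–Watson criterion in Theorem 3 can be checked by simulation. The sketch below uses NumPy; the matrix sizes and the construction V = XX′ + WW′ are arbitrary choices that make V singular while keeping C(X) ⊂ C(V), and Moore–Penrose inverses stand in for the generalized inverses (a valid choice, since the determinants involved are invariant under a weakly singular model).

```python
import numpy as np

rng = np.random.default_rng(0)

# A weakly singular partitioned model: V singular, but C(X) ⊂ C(V).
n, p1, p2 = 8, 2, 2
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
X = np.hstack([X1, X2])
W = rng.standard_normal((n, 2))
V = X @ X.T + W @ W.T        # rank(V) <= 6 < n, and C(X) ⊂ C([X, W]) = C(V)

def proj(A):
    """Orthogonal projector onto the column space C(A)."""
    return A @ np.linalg.pinv(A)

H = proj(X)                  # P_X
M = np.eye(n) - H
P1 = proj(X1)
M1 = np.eye(n) - P1
Q = proj(M1 @ X2)            # P_{M1 X2}

def watson(Xm, Vm):
    """Watson efficiency |X'X|^2 / (|X'VX| |X'V+X|) under a weakly singular model."""
    return (np.linalg.det(Xm.T @ Xm) ** 2
            / (np.linalg.det(Xm.T @ Vm @ Xm)
               * np.linalg.det(Xm.T @ np.linalg.pinv(Vm) @ Xm)))

phi12 = watson(X, V)         # total efficiency phi_12 under M12
phi11 = watson(X1, V)        # eff(beta1-hat | M1)

Mdot1 = M1 @ np.linalg.pinv(M1 @ V @ M1) @ M1        # M1(M1 V M1)^- M1
phi212 = (np.linalg.det(X2.T @ M1 @ X2) ** 2
          / (np.linalg.det(X2.T @ M1 @ V @ M1 @ X2)
             * np.linalg.det(X2.T @ Mdot1 @ X2)))    # eff(beta2-hat | M12)

phi1H = watson(X1, H @ V @ H)                        # eff(beta1-hat | M1H)

print(phi12, phi11 * phi212 / phi1H)  # Theorem 1: the two numbers agree

# Bloomfield–Watson commutator criterion (Theorem 3).
fro2 = lambda A: np.linalg.norm(A, 'fro') ** 2
psi12 = 0.5 * fro2(H @ V - V @ H)
psi11 = fro2(P1 @ V @ M1)
psi212 = fro2(Q @ V @ M)
psi1H = fro2(P1 @ V @ Q)

print(psi12, fro2(H @ V @ M), psi11 + psi212 - psi1H)  # all three agree
```

Because C(X) ⊂ C(V) holds by construction, each determinant above is invariant with respect to the choice of generalized inverse, so `pinv` is merely a convenient representative; with a V violating (*), the factorization need not hold.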