Estimating generalized linear models with Iteratively Reweighted Least Squares

GLM
Computational Statistics
Author

Gerry Alfa Dito

Published

September 2, 2023

Generalized Linear Model

The Generalized Linear Model (GLM) is an extension of the linear regression model that allows the response variable to follow distributions other than the normal.

A GLM consists of three main components:

  1. Random component: specifies the probability distribution of the response variable \(\boldsymbol{Y}\). In a GLM, the response \(\boldsymbol{Y}\) comes from the Exponential (Dispersion) Family, a generalization of the normal distribution that covers distributions such as the binomial, Poisson, normal, gamma, and others.
  2. Systematic component: defines the linear predictor \(\eta\), a linear combination of the explanatory variables (predictors). \[ \eta = \boldsymbol{X\beta} \] where \(\boldsymbol{X}\) is the design matrix of predictors and \(\boldsymbol{\beta}\) is the vector of coefficients.
  3. Link function: connects the linear predictor \(\eta\) to the mean of the distribution, \(\boldsymbol{\mu} = E(\boldsymbol{Y}|\boldsymbol{X})\). The link function \(g(\boldsymbol{\mu})\) ensures that the linear combination of predictors \(\eta\) maps correctly onto the domain of \(\boldsymbol{\mu}\). \[ g(\boldsymbol{\mu}) = \boldsymbol{\eta} \] Common link functions include the logit (for binary outcomes), the log (for counts), and the identity (for real-valued responses).
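The three common link functions above can be sketched directly in Python. This is a minimal illustration; the dictionary and names are made up for this post, not taken from any particular library.

```python
import math

# Common link functions g(mu) and their inverses g^{-1}(eta); a minimal
# sketch with illustrative names, not tied to any particular library.
links = {
    "logit":    (lambda mu: math.log(mu / (1.0 - mu)),   # binary outcomes
                 lambda eta: 1.0 / (1.0 + math.exp(-eta))),
    "log":      (lambda mu: math.log(mu),                # counts
                 lambda eta: math.exp(eta)),
    "identity": (lambda mu: mu,                          # real-valued responses
                 lambda eta: eta),
}

# Each link maps mu onto the scale of the linear predictor, and its
# inverse maps back: g^{-1}(g(mu)) = mu.
g, g_inv = links["logit"]
eta = g(0.75)
mu_back = g_inv(eta)
```

The round trip \(g^{-1}(g(\mu)) = \mu\) is exactly the relationship used later when the mean is written as \(\mu = g^{-1}(\eta)\).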

The Exponential (Dispersion) Family

The Exponential Dispersion Family is a class of probability distributions that includes many of the common distributions used in GLMs, among them the normal, binomial, Poisson, gamma, and inverse Gaussian. A random variable \(Y\) belongs to the Exponential (Dispersion) Family if its probability density function (pdf) or probability mass function (pmf) can be written in the following form:

\[ f(y; \theta, \phi) = \exp\left\{ \frac{y \theta - b(\theta)}{a(\phi)} + c(y, \phi) \right\} \qquad(1)\]

where

  • \(\theta\) is the natural parameter
  • \(\phi\) is the dispersion parameter
  • \(b(.)\) is the cumulant function
  • \(c(y, \phi)\) is the normalizing function, which ensures the pdf integrates to 1.

Distributions in this family share the following properties:

\[ \begin{aligned} E(y)&=\frac{d}{d\theta}b(\theta)=b^{'}(\theta) \\ \text{Var}(y)&=\left[\frac{d^2}{d\theta^2}b(\theta)\right]a(\phi)=b^{''}(\theta)a(\phi) \end{aligned} \]
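These two properties can be checked numerically for a concrete family member. The sketch below uses the Poisson case, where \(\theta = \log\mu\), \(b(\theta) = e^{\theta}\), and \(a(\phi) = 1\), and approximates \(b^{'}\) and \(b^{''}\) with central differences:

```python
import math

def b(theta):
    # Cumulant function of the Poisson family: b(theta) = exp(theta)
    return math.exp(theta)

def num_deriv(f, x, h=1e-5):
    # Central-difference numerical derivative
    return (f(x + h) - f(x - h)) / (2.0 * h)

mu = 3.0
theta = math.log(mu)                               # natural parameter
mean = num_deriv(b, theta)                         # E(y) = b'(theta) ~ mu
var = num_deriv(lambda t: num_deriv(b, t), theta)  # Var(y) = b''(theta) a(phi), a(phi) = 1
```

Both derivatives come out close to \(\mu = 3\), matching the known Poisson property that the mean equals the variance.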

The log-likelihood and the score function

The log-likelihood of the exponential dispersion family for \(n\) observations can be written as

\[ l(\boldsymbol{\theta}) = \sum_{i=1}^{n} \frac{y_{i} \theta_{i} - b(\theta_{i}) }{a(\phi)} + \sum_{i=1}^{n} c(y_{i}, \phi) \qquad(2)\]

The score function is then obtained by differentiating Equation 2:

\[ \begin{aligned} S(\boldsymbol{\theta})&=\frac{d}{d\boldsymbol{\theta}}l(\boldsymbol{\theta}) \\ &= \frac{d}{d\theta_{i}}\left( \sum_{i=1}^{n} \frac{y_{i} \theta_{i} - b(\theta_{i}) }{a(\phi)} + \sum_{i=1}^{n} c(y_{i}, \phi) \right) \\ &= \sum_{i=1}^{n} \frac{y_{i} - b^{'}(\theta_{i}) }{a(\phi)} + 0 \\ &= \sum_{i=1}^{n} \frac{y_{i} - b^{'}(\theta_{i}) }{a(\phi)} \end{aligned} \]

Next, to bring the GLM structure into Equation 2, consider the relationship between the natural parameter \(\theta\), the link function \(g(.)\), the mean \(\mu = E(Y)\), and the systematic component \(\eta\):

\[ \begin{aligned} \eta_i &= g(\mu_i) \\ g(\mu_i) &= \eta_i = \sum_{j=1}^{p}x_{ij}\beta_{j} \\ g^{-1}(g(\mu_i)) &= g^{-1}\left(\sum_{j=1}^{p}x_{ij}\beta_{j}\right) \\ \mu_i &= g^{-1}\left(\sum_{j=1}^{p}x_{ij}\beta_{j}\right) \end{aligned} \]

Suppose the link function \(g(.)\) is the canonical link, defined by \[ g(\mu)=\theta \] which implies that

\[ \begin{aligned} \theta_i &= g(\mu_i) \\ &= \eta_i \\ &= \sum_{j=1}^{p}x_{ij}\beta_{j} \end{aligned} \qquad(3)\]

Then, from Equation 3 we have

\[ \theta_{i}= \sum_{j=1}^{p}x_{ij}\beta_{j} \]

so the log-likelihood becomes

\[ l(\boldsymbol{\beta}) = \sum_{i=1}^{n} \frac{y_{i} \left( \sum_{j=1}^{p}x_{ij}\beta_{j} \right) - b\left(\sum_{j=1}^{p}x_{ij}\beta_{j}\right) }{a(\phi)} + \sum_{i=1}^{n} c(y_{i}, \phi) \]

To obtain the score function, we differentiate the log-likelihood using the chain rule:

\[ S(\boldsymbol{\beta}) = \frac{dl(\boldsymbol{\theta})}{d\boldsymbol{\theta}} \frac{d\boldsymbol{\theta}}{d\boldsymbol{\mu}} \frac{d\boldsymbol{\mu}}{d\boldsymbol{\eta}}\frac{d\boldsymbol{\eta}}{d\boldsymbol{\beta}} \qquad(4)\]

The chain rule in Equation 4 follows from the relationships between \(\theta\), \(\mu\), and \(\eta\) in Equation 3. Since we know that \[ \begin{aligned} \frac{dl(\boldsymbol{\theta})}{d\boldsymbol{\theta}} &= S(\boldsymbol{\theta}) = \sum_{i=1}^{n} \frac{y_{i} - b^{'}(\theta_{i}) }{a(\phi)} \\ \frac{d\boldsymbol{\theta}}{d\boldsymbol{\mu}} &= \frac{1}{\frac{d\boldsymbol{\mu}}{d\boldsymbol{\theta}}} = \frac{1}{\frac{db^{'}(\boldsymbol{\theta})}{d\boldsymbol{\theta}}} = \frac{1}{b^{''}(\boldsymbol{\theta})} \\ \frac{d\eta_i}{d\beta_j} &= \frac{d\left( \sum_{j=1}^{p}x_{ij}\beta_{j} \right)}{d\beta_j} = x_{ij} \end{aligned} \]

we obtain the following:

\[ \begin{aligned} S(\boldsymbol{\beta}) &= \sum_{i=1}^{n} \frac{y_{i} - b^{'}(\theta_{i}) }{a(\phi)} \frac{d\theta_i}{d\mu_i} \frac{d\mu_i}{d\eta_i}\frac{d\eta_i}{d\beta_j} \\ &= \sum_{i=1}^{n} \frac{y_{i} - b^{'}(\theta_{i}) }{a(\phi)} \frac{1}{b^{''}(\theta_{i})} \frac{d\mu_i}{d\eta_i}\frac{d\eta_i}{d\beta_j} \\ &= \sum_{i=1}^{n} \frac{y_{i} - b^{'}(\theta_{i}) }{a(\phi)} \frac{1}{b^{''}(\theta_{i})} \frac{d\mu_i}{d\eta_i}x_{ij} \\ &= \sum_{i=1}^{n} \frac{y_{i} - b^{'}(\theta_{i}) }{a(\phi)} \frac{1}{b^{''}(\theta_{i})}x_{ij} \frac{d\mu_i}{d\eta_i} \\ &= \sum_{i=1}^{n} \frac{y_{i} - \mu_i }{a(\phi)} \left( \frac{a(\phi)}{\text{Var}(y_i)} \right)x_{ij} \frac{d\mu_i}{d\eta_i} \\ &= \sum_{i=1}^{n} \frac{y_{i} - \mu_i }{\text{Var}(y_i)} x_{ij} \frac{d\mu_i}{d\eta_i} \end{aligned} \] The factor \(\frac{d\mu_i}{d\eta_i}\) depends on the choice of link function \(g(.)\), since \(g(\mu)=\eta\).

We conclude that the score function in a GLM is

\[ \begin{aligned} S(\boldsymbol{\beta}) &= \sum_{i=1}^{n} \frac{y_{i} - \mu_i }{\text{Var}(y_i)} x_{ij} \frac{d\mu_i}{d\eta_i} \end{aligned} \qquad(5)\]

or, in matrix form,

\[ \boldsymbol{S}_{ p \times 1}= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}) \qquad(6)\]

where
  • \[\boldsymbol{X}_{n \times p} = \begin{bmatrix} 1 & x_{11} & x_{12} & \dots & x_{1p} \\ 1 & x_{21} & x_{22} & \dots & x_{2p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{np} \end{bmatrix}\]

  • \[\boldsymbol{D}_{n \times n} = \begin{bmatrix} \frac{d\mu_1}{d\eta_1} & 0 & \dots & 0 \\ 0 & \frac{d\mu_2}{d\eta_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{d\mu_n}{d\eta_n} \end{bmatrix}\]

  • \[\left(\boldsymbol{V}_{n \times n}\right)^{-1} = \begin{bmatrix} \frac{1}{\text{Var}(y_1)} & 0 & \dots & 0 \\ 0 & \frac{1}{\text{Var}(y_2)} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{\text{Var}(y_n)} \end{bmatrix}\]

  • \[\boldsymbol{y}_{n \times 1}= \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}\]

  • \[\begin{aligned} \boldsymbol{\mu}_{n \times 1}&= \begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_n \end{bmatrix} \\ &= \begin{bmatrix} g^{-1}(\boldsymbol{X}_{1:1,1:p} \space \boldsymbol{\beta}_{p \times 1}) \\ g^{-1}(\boldsymbol{X}_{2:2,1:p} \space \boldsymbol{\beta}_{p \times 1}) \\ \vdots \\ g^{-1}(\boldsymbol{X}_{n:n,1:p} \space \boldsymbol{\beta}_{p \times 1}) \end{bmatrix} \\ &= g^{-1}(\boldsymbol{X}_{n \times p} \space \boldsymbol{\beta}_{p \times 1}) \end{aligned}\]
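The matrix form of the score in Equation 6 can be evaluated directly. The sketch below uses a Poisson model with the canonical log link and made-up data, so that \(\boldsymbol{D} = \text{diag}(\mu_i)\) and \(\boldsymbol{V}^{-1} = \text{diag}(1/\mu_i)\):

```python
import numpy as np

# Score in matrix form, S = X^t D V^{-1} (y - mu), for a Poisson GLM with
# canonical log link. Data and coefficients here are illustrative.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5]])
y = np.array([1.0, 2.0, 5.0, 9.0])
beta = np.array([0.1, 0.6])

eta = X @ beta
mu = np.exp(eta)                 # mu = g^{-1}(eta) for the log link
D = np.diag(mu)                  # dmu_i/deta_i = mu_i under the log link
V_inv = np.diag(1.0 / mu)        # Var(y_i) = mu_i for the Poisson

S = X.T @ D @ V_inv @ (y - mu)   # p x 1 score vector
```

With the canonical link, \(\boldsymbol{D}\boldsymbol{V}^{-1} = \boldsymbol{I}\), so the score collapses to \(\boldsymbol{X}^{t}(\boldsymbol{y}-\boldsymbol{\mu})\).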

The Mean-Variance Relationship in GLMs

From the properties of the exponential dispersion family, we know that \[ \begin{aligned} E(y_{i})&=\mu_i = b^{'}(\theta_i) \\ \text{Var}(y_{i}) &= b^{''}(\theta_i) a(\phi) \end{aligned} \]

Since \(b^{''}(\theta_i)\) is the derivative of \(\mu_i = b^{'}(\theta_i)\) with respect to \(\theta_i\), and \(\theta_i\) is itself determined by \(\mu_i\), we can regard \(b^{''}(\theta_i)\) as a function of \(\mu_i\). Let \(\nu\) denote this function of \(\mu_i\), so that

\[ \begin{aligned} \text{Var}(y_{i}) &= b^{''}(\theta_i) a(\phi) \\ &= \nu(\mu_{i}) a(\phi) \end{aligned} \qquad(7)\]

Equation 7 expresses the mean-variance relationship. This relationship characterizes the particular distributions within the exponential dispersion family. For example:

  • if \(y_i\) follows a Poisson distribution, then \(\nu(\mu_i) = \mu_i\)
  • if \(y_i\) follows a Binomial distribution, then \(\nu(\mu_i) = \mu_i(1 - \mu_i)\)
  • if \(y_i\) follows a Normal distribution, then \(\nu(\mu_i) = 1\), with \(a(\phi) = \sigma^2\)
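The variance functions above translate directly into code. The dictionary below is a sketch with illustrative names:

```python
# Variance functions nu(mu) for a few family members; a sketch with
# illustrative names. For the normal case nu(mu) = 1 and a(phi) = sigma^2.
variance_fn = {
    "poisson":  lambda mu: mu,
    "binomial": lambda mu: mu * (1.0 - mu),
    "normal":   lambda mu: 1.0,
}
```

In an IRLS implementation, this is the piece that changes when the response distribution changes; the rest of the algorithm stays the same.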

Consequently, Equation 7 gives

\[ \begin{aligned} \left(\boldsymbol{V}_{n \times n}\right)^{-1} &= \begin{bmatrix} \frac{1}{\text{Var}(y_1)} & 0 & \dots & 0 \\ 0 & \frac{1}{\text{Var}(y_2)} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{\text{Var}(y_n)} \end{bmatrix} \\ &= \begin{bmatrix} \frac{1}{\nu(\mu_1) a(\phi)} & 0 & \dots & 0 \\ 0 & \frac{1}{\nu(\mu_2) a(\phi)} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{\nu(\mu_n) a(\phi)} \end{bmatrix} \end{aligned} \]

Expected Fisher Information

Next, we compute the negative of the derivative of Equation 6 to obtain the Observed Fisher Information:

\[ \begin{aligned} \boldsymbol{I}_{p \times p} = I(\boldsymbol{\beta}) &= -\frac{d}{d\boldsymbol{\beta}} \boldsymbol{S}_{ p \times 1} \\ &= -\frac{d}{d\boldsymbol{\beta}} \left( \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}) \right) \\ &\text{treating } \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \text{ as fixed (its dependence on } \boldsymbol{\beta} \text{ contributes terms multiplied by } \boldsymbol{y}-\boldsymbol{\mu} \text{, which vanish in expectation):} \\ &= -\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \frac{d}{d\boldsymbol{\beta}} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}) \\ &=-\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \left(\frac{d}{d\boldsymbol{\beta}}\boldsymbol{y}_{n \times 1}-\frac{d}{d\boldsymbol{\beta}}\boldsymbol{\mu}_{n \times 1}\right) \\ &= -\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \left(-\frac{d}{d\boldsymbol{\beta}}\boldsymbol{\mu}_{n \times 1}\right) \\ &= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \left(\frac{d\boldsymbol{\mu}_{n \times 1}}{d\boldsymbol{\eta}} \right) \left(\frac{d\boldsymbol{\eta}_{n \times 1}}{d\boldsymbol{\beta}} \right) \\ &=\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \left(\frac{d\boldsymbol{\mu}_{n \times 1}}{d\boldsymbol{\eta}} \right) \left(\frac{d\boldsymbol{X}_{n \times p}\boldsymbol{\beta}_{p \times 1}}{d\boldsymbol{\beta}} \right) \\ &=\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \left(\frac{d\boldsymbol{\mu}_{n \times 1}}{d\boldsymbol{\eta}} \right) \left(\boldsymbol{X}_{n \times p}\right) \\ &\text{and since } \frac{d\boldsymbol{\mu}_{n \times 1}}{d\boldsymbol{\eta}} = \boldsymbol{D}_{n \times n}: \\ &= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \left(\boldsymbol{X}_{n \times p}\right) \end{aligned} \] The resulting Observed Fisher Information is therefore

\[ \boldsymbol{I}_{p \times p} = \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \left(\boldsymbol{X}_{n \times p}\right) \]

Next, to obtain the Expected Fisher Information, we take the expectation of the Observed Fisher Information:

\[ \begin{aligned} \boldsymbol{\mathcal{I}}_{p \times p} &= E(\boldsymbol{I}) \\ &= E\left( \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \left(\boldsymbol{X}_{n \times p}\right) \right) \\ &\text{since this expression does not involve } \boldsymbol{y}_{n \times 1}: \\ &= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \left(\boldsymbol{X}_{n \times p}\right) \end{aligned} \] To simplify, define

\[ \begin{aligned} \boldsymbol{W}_{n \times n} &= \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \\ &= \begin{bmatrix} \frac{d\mu_1}{d\eta_1} & 0 & \dots & 0 \\ 0 & \frac{d\mu_2}{d\eta_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{d\mu_n}{d\eta_n} \end{bmatrix} \begin{bmatrix} \frac{1}{\nu(\mu_1) a(\phi)} & 0 & \dots & 0 \\ 0 & \frac{1}{\nu(\mu_2) a(\phi)} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{\nu(\mu_n) a(\phi)} \end{bmatrix} \begin{bmatrix} \frac{d\mu_1}{d\eta_1} & 0 & \dots & 0 \\ 0 & \frac{d\mu_2}{d\eta_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{d\mu_n}{d\eta_n} \end{bmatrix} \\ &= \begin{bmatrix} \frac{\left( \frac{d \mu_1}{d \eta_1} \right)^2}{\nu(\mu_1) a(\phi)} & 0 & \dots & 0 \\ 0 & \frac{\left( \frac{d \mu_2}{d \eta_2} \right)^2}{ \nu(\mu_2) a(\phi)} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{\left( \frac{d \mu_n}{d \eta_n} \right)^2}{\nu(\mu_n) a(\phi)} \end{bmatrix} \end{aligned} \]

so that

\[ \begin{aligned} \boldsymbol{\mathcal{I}}_{p \times p} &= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \left(\boldsymbol{X}_{n \times p}\right) \\ &= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n} \boldsymbol{X}_{ n \times p} \end{aligned} \] where

\[ \boldsymbol{W}_{n \times n} = \begin{bmatrix} \frac{\left( \frac{d \mu_1}{d \eta_1} \right)^2}{\nu(\mu_1) a(\phi)} & 0 & \dots & 0 \\ 0 & \frac{\left( \frac{d \mu_2}{d \eta_2} \right)^2}{ \nu(\mu_2) a(\phi)} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{\left( \frac{d \mu_n}{d \eta_n} \right)^2}{\nu(\mu_n) a(\phi)} \end{bmatrix} \]

The Expected Fisher Information is therefore:

\[ \begin{aligned} \boldsymbol{\mathcal{I}}_{p \times p } &= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n} \boldsymbol{X}_{n \times p} \end{aligned} \]
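In the Poisson/log-link setting, \(\boldsymbol{W} = \boldsymbol{D}\boldsymbol{V}^{-1}\boldsymbol{D} = \text{diag}(\mu_i)\), so the Expected Fisher Information reduces to a small matrix product. A sketch with made-up data:

```python
import numpy as np

# Expected Fisher Information I = X^t W X for a Poisson GLM with log link,
# where W = D V^{-1} D = diag(mu). Data and coefficients are illustrative.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5]])
beta = np.array([0.1, 0.6])

mu = np.exp(X @ beta)
W = np.diag(mu)            # D V^{-1} D with D = diag(mu), V^{-1} = diag(1/mu)
info = X.T @ W @ X         # p x p Expected Fisher Information
```

As an information matrix should be, `info` is symmetric and positive definite whenever \(\boldsymbol{X}\) has full column rank and all \(\mu_i > 0\).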

Defining the matrix \(\boldsymbol{W}_{n \times n}\) also changes the form of the score function:

\[ \begin{aligned} \boldsymbol{W}_{n \times n} &= \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \\ \boldsymbol{W}_{n \times n} \boldsymbol{D}_{n \times n}^{-1} &= \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \boldsymbol{D}_{n \times n} \boldsymbol{D}_{n \times n}^{-1} \\ &= \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} \end{aligned} \] so that the score function becomes

\[ \begin{aligned} \boldsymbol{S}_{ p \times 1}&= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{D}_{n \times n} (\boldsymbol{V}_{n \times n})^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}) \\ &= \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n} \boldsymbol{D}_{n \times n}^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}) \end{aligned} \]

Estimating the coefficients \(\boldsymbol{\beta}\)

The coefficient estimates \(\boldsymbol{\beta}\) can be computed with the Fisher Scoring method, formulated as follows:

\[ \begin{aligned} \boldsymbol{\beta}_{p \times 1}^{(i+1)} &= \boldsymbol{\beta}_{p \times 1}^{(i)} + \left[ \mathcal{I}_{p \times p}^{(i)} \right]^{-1} \boldsymbol{S}_{p \times 1}^{(i)} \\ &= \boldsymbol{\beta}_{p \times 1}^{(i)} + \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]^{-1} \left[ \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \left(\boldsymbol{D}_{n \times n}^{(i)}\right)^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}^{(i)}) \right] \\ \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right] \boldsymbol{\beta}_{p \times 1}^{(i+1)} &= \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]\boldsymbol{\beta}_{p \times 1}^{(i)} + \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right] \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]^{-1} \left[ \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \left(\boldsymbol{D}_{n \times n}^{(i)}\right)^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}^{(i)}) \right] \\ &= \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]\boldsymbol{\beta}_{p \times 1}^{(i)} + \left[ \boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \left(\boldsymbol{D}_{n \times n}^{(i)}\right)^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}^{(i)}) \right]\\ &= \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \right] \left[\boldsymbol{X}_{n \times p}\boldsymbol{\beta}_{p \times 1}^{(i)} + \left(\boldsymbol{D}_{n \times n}^{(i)}\right)^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}^{(i)}) \right] \\ &\text{let } \boldsymbol{z}_{n \times 1}^{(i)} = \boldsymbol{X}_{n \times p}\boldsymbol{\beta}_{p \times 1}^{(i)} + \left(\boldsymbol{D}_{n \times n}^{(i)}\right)^{-1} (\boldsymbol{y}_{n \times 1}-\boldsymbol{\mu}_{n \times 1}^{(i)}) \\ &= \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \right] \boldsymbol{z}_{n \times 1}^{(i)} \\ \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right] \boldsymbol{\beta}_{p \times 1}^{(i+1)} &= \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \right] \boldsymbol{z}_{n \times 1}^{(i)} \\ \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]^{-1}\left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right] \boldsymbol{\beta}_{p \times 1}^{(i+1)} &= \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]^{-1}\left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \right] \boldsymbol{z}_{n \times 1}^{(i)} \\ \boldsymbol{\beta}_{p \times 1}^{(i+1)} &= \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]^{-1} \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \right] \boldsymbol{z}_{n \times 1}^{(i)} \end{aligned} \]

Thus, the Fisher scoring formulation for estimating the coefficients \(\boldsymbol{\beta}\) is

\[ \boldsymbol{\beta}_{p \times 1}^{(i+1)} = \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \boldsymbol{X}_{n \times p} \right]^{-1} \left[\boldsymbol{X}_{ p \times n}^{t} \boldsymbol{W}_{n \times n}^{(i)} \right] \boldsymbol{z}_{n \times 1}^{(i)} \qquad(8)\]

Equation 8 is commonly known as Iteratively Reweighted Least Squares (IRLS).

The IRLS algorithm can be written as follows:

  1. Choose an initial guess \(\boldsymbol{\beta}^{(0)}_{p\times 1}\) and a stopping criterion

  2. Compute \(\boldsymbol{W}^{(0)}_{n \times n}\) and \(\boldsymbol{z}_{n \times 1}^{(0)}\)

  3. Compute \[\boldsymbol{\beta}^{(1)}_{ p\times 1} = \left\{ \boldsymbol{X}^t_{p \times n} \boldsymbol{W}^{(0)}_{n \times n} \boldsymbol{X}_{n \times p} \right\}^{-1} \boldsymbol{X}^t_{p \times n} \boldsymbol{W}^{(0)}_{n \times n} \boldsymbol{z}_{n \times 1}^{(0)}\] to obtain the updated estimate \(\boldsymbol{\beta}^{(1)}_{p \times 1}\)

  4. Repeat Step 5 until the stopping criterion is met

  5. Compute \[\boldsymbol{\beta}^{(i+1)}_{ p\times 1} = \left\{ \boldsymbol{X}^t_{p \times n} \boldsymbol{W}^{(i)}_{n \times n} \boldsymbol{X}_{n \times p} \right\}^{-1} \boldsymbol{X}^t_{p \times n} \boldsymbol{W}^{(i)}_{n \times n} \boldsymbol{z}_{n \times 1}^{(i)}\] for iterations \(i = 1, 2, \dots\)

  6. When the stopping criterion is met, \(\boldsymbol{\beta}^{(i+1)}_{p\times 1}\) is the estimate of the parameter \(\boldsymbol{\beta}_{p\times 1}\)

  7. The variance of the estimate of \(\boldsymbol{\beta}_{p\times 1}\) is obtained from \[\text{Var}(\boldsymbol{\beta}_{p \times 1}) = \left( \boldsymbol{X}^t_{p \times n} \boldsymbol{W}^{(m)}_{n \times n} \boldsymbol{X}_{n \times p} \right)^{-1}\] evaluated at the final iteration \(m\), when the stopping criterion is met.
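The steps above can be put together in a short sketch. The example below fits a Poisson GLM with log link to simulated data; the data, seed, iteration cap, and tolerance are all illustrative choices, not prescribed by the algorithm:

```python
import numpy as np

# A minimal IRLS sketch for a Poisson GLM with log link, following the
# algorithm above. Under the canonical log link, D = diag(mu) and
# W = D V^{-1} D = diag(mu).
rng = np.random.default_rng(42)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])                 # illustrative "true" values
y = rng.poisson(np.exp(X @ beta_true)).astype(float)

beta = np.zeros(p)                               # step 1: initial guess
for _ in range(100):                             # steps 2-5: iterate
    eta = X @ beta
    mu = np.exp(eta)
    W = np.diag(mu)                              # W^{(i)} = diag(mu)
    z = eta + (y - mu) / mu                      # z^{(i)} = eta + D^{-1}(y - mu)
    beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
    if np.max(np.abs(beta_new - beta)) < 1e-10:  # step 6: stopping criterion
        beta = beta_new
        break
    beta = beta_new

# Step 7: Var(beta-hat) from the inverse information at the final iteration
cov_beta = np.linalg.inv(X.T @ np.diag(np.exp(X @ beta)) @ X)
```

At convergence the score \(\boldsymbol{X}^{t}(\boldsymbol{y}-\hat{\boldsymbol{\mu}})\) is (numerically) zero, which is the defining condition of the ML estimate under the canonical link.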

Asymptotic Distribution of the Coefficient Estimator \(\boldsymbol{\beta}\)

The asymptotic distribution of the estimator of \(\boldsymbol{\beta}\) follows from the asymptotic properties of maximum likelihood (ML) estimation:

Asymptotic Properties of the MLE

Let \(X_1, X_2, X_3, \dots, X_n\) be a random sample from a distribution with parameter \(\theta\), and let \(\hat{\theta}_{ML}\) denote the maximum likelihood estimator (MLE) of \(\theta\). Then:

  1. \(\hat{\theta}_{ML}\) is asymptotically consistent, that is, \[\lim_{n \to \infty} P\left(|\hat{\theta}_{ML} - \theta| > \epsilon \right) = 0.\]

  2. \(\hat{\theta}_{ML}\) is asymptotically unbiased, that is, \[\lim_{n \to \infty} \mathbb{E}[\hat{\theta}_{ML}] = \theta.\]

  3. As \(n \to \infty\), \(\hat{\theta}_{ML}\) approaches a normally distributed random variable. More precisely, the random variable \[\frac{\hat{\theta}_{ML} - \theta}{\sqrt{\text{Var}(\hat{\theta}_{ML})}}\] converges in distribution to \(N(0, 1)\).

From the third asymptotic property of the MLE, we conclude that

\[ \hat{\boldsymbol{\beta}}_{p\times 1} \overset{n \to \infty}{\sim} N\left(\boldsymbol{\beta}_{p\times 1},\ \left( \boldsymbol{X}^t_{p \times n} \boldsymbol{W}^{(m)}_{n \times n} \boldsymbol{X}_{n \times p} \right)^{-1}\right) \] or, coefficient by coefficient, \[ \frac{\hat{\beta}_{j}-\beta_{j}}{\sqrt{\left[\left( \boldsymbol{X}^t_{p \times n} \boldsymbol{W}^{(m)}_{n \times n} \boldsymbol{X}_{n \times p} \right)^{-1}\right]_{jj}}} \overset{n \to \infty}{\sim} N(0,1) \] Hence hypothesis tests for the coefficients use the z distribution.
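As an illustration of such a z test, suppose we have coefficient estimates and their covariance matrix from the final IRLS iteration; the numbers below are made up for the example:

```python
import math
import numpy as np

# Wald z-statistics z_j = beta_hat_j / se_j, with se_j from the diagonal of
# (X^t W X)^{-1}. The estimates and covariance below are illustrative.
beta_hat = np.array([0.52, 0.79])
cov_beta = np.array([[0.0100, -0.0020],
                     [-0.0020, 0.0040]])

se = np.sqrt(np.diag(cov_beta))          # standard errors
z = beta_hat / se                        # z-statistics for H0: beta_j = 0
# Two-sided p-value under N(0, 1): p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
p_values = [math.erfc(abs(zj) / math.sqrt(2.0)) for zj in z]
```

Here both p-values are far below conventional significance levels, so both hypothetical coefficients would be judged significantly different from zero.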