A Proof of Consistency and Asymptotic Normality of the Maximum Likelihood Estimator
1. Proving Convergence in Probability
If we set $S_n(\theta)$ to be the normalized score function,

$$S_n(\theta) = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f(X_i;\theta),$$

then under the usual regularity conditions we will have:

$$E\,S_n(\theta) = 0, \qquad -S'_n(\theta) > 0,$$
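These two conditions can be checked numerically in a concrete model (my own illustration, assuming a Bernoulli($p$) model; this is not part of the proof):

```python
import numpy as np

# Bernoulli(p) illustration: the per-observation score has mean zero at the
# true p, and the negative derivative of the score is positive.
rng = np.random.default_rng(0)
p = 0.3
x = rng.binomial(1, p, size=200_000)

# Score of one observation: d/dp log f(x; p) = x/p - (1-x)/(1-p)
score = x / p - (1 - x) / (1 - p)
# Negative score derivative: x/p**2 + (1-x)/(1-p)**2
neg_score_deriv = x / p**2 + (1 - x) / (1 - p)**2

print(score.mean())            # close to 0
print(neg_score_deriv.mean())  # close to 1/(p(1-p)) > 0
```

The sample mean of the score hovers near zero, while the averaged negative score derivative stays strictly positive, matching the two displayed conditions.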
and we know
$$P(|\theta_{mle}-\theta| \geq \delta) = P(\theta_{mle} \geq \theta+\delta) + P(\theta_{mle} \leq \theta-\delta),$$
so, since $-S'_n > 0$ makes $S_n$ decreasing and $S_n(\theta_{mle}) = 0$,

$$\begin{aligned}
P(\theta_{mle} \geq \theta+\delta) &= P\big(0 \leq S_n(\theta+\delta)\big)\\
&= P\big(0 \leq S_n(\theta) + \delta S'_n(\theta^{*})\big) \qquad (\text{mean value theorem, } \theta^{*}\in(\theta,\theta+\delta))\\
&= P\big(S_n(\theta) \geq -\delta S'_n(\theta^{*})\big)\\
&\leq P\big(S_n(\theta) > 0\big) \rightarrow 0 \qquad (\text{since } -\delta S'_n(\theta^{*}) > 0 \text{ and } S_n(\theta)\rightarrow_{a.s.}0)
\end{aligned}$$
similarly
$$P(\theta_{mle} \leq \theta-\delta) \rightarrow 0,$$
which completes the proof that

$$\theta_{mle} \rightarrow_{p} \theta.$$
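The convergence in probability just proved can be sketched by Monte Carlo (my own illustration, assuming an Exponential($\lambda$) model, whose MLE is $\hat\lambda = 1/\bar{x}$):

```python
import numpy as np

# Estimate P(|θ_mle - θ| ≥ δ) for growing n; it should shrink toward 0.
rng = np.random.default_rng(1)
lam, delta, reps = 2.0, 0.1, 1_000
probs = []
for n in (100, 1_000, 10_000):
    x_bar = rng.exponential(1.0 / lam, size=(reps, n)).mean(axis=1)
    lam_hat = 1.0 / x_bar  # MLE of the rate parameter
    probs.append(float((np.abs(lam_hat - lam) >= delta).mean()))
print(probs)  # decreasing toward 0
```

The estimated probability of a deviation of at least $\delta$ drops toward zero as the sample size grows, as the theorem predicts.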
2. Deriving the Limiting Distribution
Given the consistency result $\theta_{mle} \rightarrow_{p} \theta$, a first-order Taylor expansion of $S_n$ about $\theta$ gives

$$S_n(\theta_{mle}) = 0 = S_n(\theta) + (\theta_{mle}-\theta)\,S'_n(\theta) + o_p(\theta_{mle}-\theta),$$
which indicates

$$\sqrt{n}\,(\theta_{mle}-\theta) = \frac{\sqrt{n}\,S_n(\theta)}{-S'_n(\theta)+o_p(1)} \rightarrow \frac{N(0,I_1[\theta])}{I_1[\theta]} = N\big(0,\,I_1^{-1}[\theta]\big),$$

since $\sqrt{n}\,S_n(\theta) \rightarrow_{d} N(0,I_1[\theta])$ by the central limit theorem and $-S'_n(\theta) \rightarrow_{p} I_1[\theta]$ by the law of large numbers.
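The limiting distribution can likewise be checked numerically (my own illustration, same Exponential($\lambda$) model as above, where $I_1[\lambda] = 1/\lambda^2$, so the limiting variance is $\lambda^2$):

```python
import numpy as np

# sqrt(n)*(lam_hat - lam) should be approximately N(0, lam**2) = N(0, 1/I_1).
rng = np.random.default_rng(2)
lam, n, reps = 2.0, 2_000, 5_000
lam_hat = 1.0 / rng.exponential(1.0 / lam, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (lam_hat - lam)
print(z.mean(), z.var())  # mean near 0, variance near lam**2 = 4
```

The centered and scaled MLE has sample mean near zero and sample variance near $\lambda^2 = I_1^{-1}[\lambda]$, matching the limiting normal distribution derived above.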