EECS 126: Probability and Random Processes
UC Berkeley, Spring 2024
Jiantao Jiao
May 6, 2024

Final Exam

Last Name:
First Name:
SID:
Left Neighbor (First and Last Name):
Right Neighbor (First and Last Name):

Rules.
• Unless otherwise stated, all your answers need to be justified and your work must be shown. Answers without sufficient justification will get no credit.
• All work you want graded should be on the front or back of the sheets in the space provided. Scratch paper will not be scanned/graded.
• You have 180 minutes to complete the exam. (DSP students with X% time accommodation should spend 180 · X% time on completing the exam.)
• This exam is closed-book. You may reference one double-sided handwritten sheet of paper. No calculators or phones allowed.
• Collaboration with others is strictly prohibited. If you are caught cheating, you may fail the course and face disciplinary consequences.
Midterm 1, Page 2 of 8 — Student ID (write on every page for 1 point):

1  Probability Party [45 points]

For each of the following subparts, please directly provide your answer without explanation in the provided boxes. Answers outside of the boxes will not be graded.

(a) We've learned MAP and MLE in class. Some people would view MLE as a special case of MAP where the prior of the parameter θ is uniform over the parameter space Θ. However, this is not always true, since a uniform distribution might not exist over the entire Θ. Please provide an example of Θ (note that Θ is a set) for each of the following four cases: [8 points]

    (i) Θ is discrete and there exists a uniform distribution over the entire Θ;
    Your answer:

    (ii) Θ is discrete and there does not exist a uniform distribution over the entire Θ;
    Your answer:

    (iii) Θ is continuous and there exists a uniform distribution over the entire Θ;
    Your answer:

    (iv) Θ is continuous and there does not exist a uniform distribution over the entire Θ.
    Your answer:

(b) Let the sample space be Ω = {ω_1, ω_2, ω_3}. Let events A = {ω_1, ω_2}, B = {ω_2, ω_3}. If Pr(A) = x and Pr(B) = y, where x ∈ [0, 1], y ∈ [0, 1], x + y ≥ 1, what are the probabilities of the three outcomes ω_1, ω_2, ω_3, respectively? When (i.e., under what choices of the values of x and y) will A and B be independent? [8 points]

    Pr({ω_1}):
    Pr({ω_2}):
    Pr({ω_3}):
    Values of x and y such that A and B are independent:

(c) Fix p ∈ (0, 1). For any positive integer n ∈ N+, we define a discrete positive random variable X_n which satisfies

    Pr(X_n > N) = (1 − p/n)^{N·n},  ∀N ∈ N.

What is the name of the distribution of X_1? What is the expectation of X_n? What is the variance of X_n? What is lim_{n→∞} Pr(X_n > N) for any N ∈ N? [8 points]

    Name of the distribution of X_1:
    The expectation of X_n:
    The variance of X_n:
    lim_{n→∞} Pr(X_n > N):
(d) Let X_0 = 0. For each i ∈ N, X_{i+1} is generated conditioned on X_i: with probability 1/2, X_{i+1} = X_i + 1, and with probability 1/2, X_{i+1} = X_i − 1. What is the probability that X_126 = 10? What is the expectation E[X_126]? What is the conditional expectation E[X_126 | X_120]? What is the conditional variance Var(X_127 | X_126)? What is the variance of X_126? [12 points]

    Pr(X_126 = 10):
    E[X_126]:
    E[X_126 | X_120]:
    Var(X_127 | X_126):
    Var(X_126):

(e) For each of the following three random variables X, please find r such that r · Pr(X ≥ r) is maximized. [9 points]

    (i) X ∼ Exp(λ);
    Your answer:

    (ii) X ∼ Uniform{1, 2, 3, 4, 5};
    Your answer:

    (iii) X ∼ Uniform[0, 10].
    Your answer:
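(Editorial sanity check, not part of the exam.) Part (d) describes a simple symmetric random walk, so X_126 = 2U − 126 with U ∼ Binomial(126, 1/2), and the event {X_126 = 10} requires exactly 68 up-steps. A short sketch computing the exact point probability and moments:

```python
from math import comb

# X_126 is a sum of 126 i.i.d. fair +/-1 steps: X_126 = 2*U - 126,
# U ~ Binomial(126, 1/2). X_126 = 10 requires U = (126 + 10)/2 = 68.
p_10 = comb(126, 68) / 2 ** 126

# Each step has mean 0 and variance 1, and steps are independent,
# so E[X_126] = 0 and Var(X_126) = 126.
mean_x126, var_x126 = 0, 126

print(p_10, mean_x126, var_x126)
```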
2  Markov Chain with Actions [15 points]

We've learned DTMCs in class: when an agent is in state s at time t, i.e., s_t = s, the probability that the agent will be in state s′ at time t + 1 is specified by Pr(s_{t+1} = s′ | s_t = s) = P_{s,s′}, where P ∈ R^{|S|×|S|} is the transition matrix and S is a finite state space.

Now we consider a more general setting. At each time step t, the agent first chooses an action a_t ∈ A conditioned on the current state s_t, where A is a finite action space. The strategy by which the agent chooses actions is specified by a policy π : S → A, a mapping from states to actions. Therefore, the agent chooses action a_t = π(s_t), and the next state s_{t+1} follows a distribution Pr(· | s_t, a_t) determined by both s_t and a_t.

(a) If π is fixed, is the above procedure a Markov chain? Briefly explain why. [4 points]

(b) Let S = {0, 1} and A = {0, 1}, with Pr(s_{t+1} = (s + a) mod 2 | s_t = s, a_t = a) = 9/10. Assume s_0 = 0. What is the optimal policy that maximizes Pr(s_10 = 1)? [6 points]

(c) Let S = {0, 1} and A = {0, 1}, with Pr(s_{t+1} = (s + a) mod 2 | s_t = s, a_t = a) = 9/10. Under the policy π(0) = 0, π(1) = 1, what is the stationary distribution? [5 points]
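(Editorial sanity check, not part of the exam.) Once the policy in part (c) is fixed, each state s has the single transition row Pr(· | s, π(s)), so the induced process is an ordinary two-state DTMC whose stationary distribution can be found by power iteration:

```python
import numpy as np

# Induced chain under the fixed policy pi(0) = 0, pi(1) = 1:
# from state 0, action 0: next state (0+0) mod 2 = 0 w.p. 9/10;
# from state 1, action 1: next state (1+1) mod 2 = 0 w.p. 9/10.
P = np.array([[0.9, 0.1],
              [0.9, 0.1]])

# Stationary distribution: a left fixed point pi = pi @ P,
# found here by simple power iteration from the uniform start.
pi = np.array([0.5, 0.5])
for _ in range(100):
    pi = pi @ P

print(pi)
```

Because both rows of P are identical here, the iteration converges after a single step; power iteration is used only to keep the sketch generic.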
3  Continuous-Time Markov Chain [30 points]

Let {N(t)}_{t≥0} ∼ PP(λ) be a Poisson process with rate λ > 0.

(a) For fixed t > s > 0 and j ∈ N, find the conditional distribution of N(s) given N(t) = j. (6 points)

(b) Does N(t)/t converge? If so, find and prove its limit and specify the strongest mode of convergence; otherwise, briefly explain why not. (8 points)

(c) Let {Y_n}_{n=0,1,...} be a discrete-time Markov chain over state space S with transition probability matrix P̂ = (p̂_{i,j})_{i,j∈S}. Furthermore, {Y_n}_{n=0,1,...} is independent of {N(t)}_{t≥0} ∼ PP(λ). Define a continuous-time stochastic process {X(t)}_{t≥0} as follows: X(t) = Y_{N(t)} for all t ≥ 0.

    (1) Prove that {X(t)}_{t≥0} is a continuous-time Markov chain with transition probability

        P(X(t) = j | X(0) = i) = Σ_{k=0}^{∞} p̂_{i,j}(k) · (λt)^k e^{−λt} / k!,

    where p̂_{i,j}(k) = P(Y_k = j | Y_0 = i) is the (i, j)-entry of P̂^k. (10 points)

    (2) Suppose π is the stationary distribution of {Y_n}_{n=0,1,...}. Find the stationary distribution of {X(t)}_{t≥0}. (6 points)
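(Editorial sanity check, not part of the exam.) The Poisson-mixture formula in (c)(1) can be cross-checked numerically: summing P̂^k with Poisson(λt) weights equals the matrix exponential e^{λt(P̂ − I)}, since Σ_k (λt)^k P̂^k / k! = e^{λt P̂}. The two-state chain, λ, and t below are made-up example values.

```python
import numpy as np

# Made-up example chain and parameters.
P_hat = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
lam, t = 2.0, 1.5

# Series form: sum_k P_hat^k * (lam*t)^k * e^{-lam*t} / k!, truncated at k = 60
# (the Poisson(lam*t = 3) tail beyond k = 60 is negligible).
series = np.zeros((2, 2))
term = np.eye(2)            # P_hat^0
weight = np.exp(-lam * t)   # Poisson weight for k = 0
for k in range(60):
    series += weight * term
    term = term @ P_hat
    weight *= lam * t / (k + 1)

# Closed form: exp(lam * t * (P_hat - I)), computed via eigendecomposition
# of the generator Q (its eigenvalues here are real and distinct).
Q = lam * (P_hat - np.eye(2))
vals, vecs = np.linalg.eig(Q)
closed = (vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs)).real

print(np.allclose(series, closed))
```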
4  Frequentist Statistics [15 points]

Consider an i.i.d. sample X_1, ..., X_n from the uniform distribution Unif[−θ, 6θ], where θ > 0.

(a) Prove that θ̂_MLE = max{ −min{X_1, ..., X_n}, max{X_1, ..., X_n}/6 } is the Maximum Likelihood (MLE) estimator of θ. (5 points)

(b) Prove that θ̂_UMVU = ((n+1)/n) · θ̂_MLE is an unbiased estimator of θ. (5 points)
    Hint: what is the CDF of θ̂_MLE?

(c) We say that an estimator θ̂ is consistent if it converges to θ in probability as n → ∞. Is θ̂_UMVU consistent? Justify your claim. (5 points)
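(Editorial sanity check, not part of the exam.) Following the hint in (b): {θ̂_MLE ≤ t} = {min_i X_i ≥ −t} ∩ {max_i X_i ≤ 6t}, which has probability ((6t + t)/(7θ))^n = (t/θ)^n for 0 ≤ t ≤ θ, so θ̂_MLE has density n t^{n−1}/θ^n and mean nθ/(n+1). The sketch below checks that mean by numerical integration with made-up values of θ and n:

```python
import numpy as np

# Made-up values for the numeric check.
theta, n = 1.0, 5

# Midpoint-rule approximation of E[theta_hat_MLE] with density
# f(t) = n * t^(n-1) / theta^n on [0, theta].
m = 200000
dt = theta / m
t = (np.arange(m) + 0.5) * dt
mean_mle = np.sum(t * n * t ** (n - 1) / theta ** n) * dt

print(mean_mle)                  # close to n*theta/(n+1) = 5/6
print((n + 1) / n * mean_mle)    # close to theta = 1
```

Multiplying the mean nθ/(n+1) by (n+1)/n recovers θ, which is exactly the unbiasedness claimed in part (b).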
5  Bayesian Statistics [15 points]

Consider an i.i.d. sample X_1, ..., X_n from the distribution

    p_θ(x) = e^{θ−x} if x ≥ θ,  and 0 if x < θ,

where θ has prior distribution

    ρ(θ) = λe^{−λθ} if θ ≥ 0,  and 0 if θ < 0.

(a) When λ < n, find the Maximum a Posteriori (MAP) estimator. (6 points)

(b) When λ = n = 1, find the Bayesian Least Mean Squares (LMS) estimator (i.e., the estimator that minimizes the mean squared error E[(θ − θ̂)^2]). (5 points)

(c) When λ = n = 1, find the estimator θ̂_1 that minimizes the ℓ1 estimation error E[|θ − θ̂|]. (4 points)
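(Editorial sanity check, not part of the exam.) The posterior here is proportional to ρ(θ) · Π_i e^{θ−x_i} ∝ e^{(n−λ)θ} on 0 ≤ θ ≤ min_i x_i, so the LMS estimate in (b) is the posterior mean and the ℓ1-optimal estimate in (c) is the posterior median. The sketch below evaluates both on a grid for a single made-up observation (the data value 2.3 is hypothetical):

```python
import numpy as np

# Hypothetical single observation; lam = n = 1 as in parts (b) and (c).
x = np.array([2.3])
lam, n = 1.0, 1
m = x.min()

# Unnormalized posterior e^{(n - lam)*theta} on [0, min_i x_i]; with
# lam = n the exponent vanishes and the posterior is Uniform[0, m].
grid = np.linspace(0.0, m, 100001)
post = np.exp((n - lam) * grid)
post /= post.sum()

lms = np.sum(grid * post)                     # posterior mean (LMS)
cdf = np.cumsum(post)
median = grid[np.searchsorted(cdf, 0.5)]      # posterior median (l1-optimal)

print(lms, median)
```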