**Instructor**: Tim Chumley

**Office**: Clapp 423

**Phone**: 413-538-2525

**e-mail**: tchumley

**Office Hours**: Mondays 4:00-5:00, Wednesdays 3:00-5:00, Fridays 11:00-12:00; additional availability by appointment

**Textbook**: *Introduction to Stochastic Processes with R* by Robert P. Dobrow, ISBN: 9781118740651; on library reserve under QC20.7.S8 D63 2016; available as a free e-text

Announcements will be posted on the Course Announcements Moodle forum throughout the semester, but essentially all other materials will be posted on this page.

Check the syllabus for all the important class policies (grades, attendance, etc.).

There will be weekly homework assignments to be turned in throughout the semester. Please read these guidelines for writing in our class.

**General information**.

- A selection of problems will be assigned to be written up individually and turned in each week.
- These problems will be due **Fridays at 5 pm**.
- You may work with others, but the writing should be done on your own.

**Gradescope**.

- Homework will be turned in through Gradescope.
- You should be enrolled automatically. Please let me know if you have any issues logging in.
- Gradescope has made a short tutorial on submitting homework.

**Collaboration**. I want you to work together on the homework! The process of explaining your ideas to one another really helps in learning math. However, you must write up the assignments on your own and avoid copying others’ work directly. Also, please only write what you understand so that I know where to help, and please avoid using online forums like Math StackExchange, solutions manuals, or similar resources. A huge part of learning in this class comes from figuring out how to get unstuck by spending time thinking on your own or talking to me and classmates; these other resources can end up being counter-productive in the long term.

**Rewrites**. Homework is for practice, and you are not expected to write perfect answers from the start!

- You will be allowed to submit revisions of most problems for full credit each week.
- Your revisions will be due on **Fridays at 5 pm**.
- Please resubmit (only the problems you’re revising) on Gradescope.

Assignment | Due |
---|---|
Homework 0 | Jan 26 |
Homework 1 | Feb 2 |
Homework 2 | Feb 9 |
Homework 3 | Feb 16 |
Homework 4 | Feb 23 |
Homework 5 | Mar 1 |
Homework 6 | Mar 15 |
Homework 7 | Mar 29 |
Project 0 | Apr 2 |
Homework 8 | Apr 5 |
Homework 9 | Apr 12 |
Exam revisions | Apr 12 |
Homework 10 | Apr 19 |
Project 1 | Apr 19 |
Project 2 | May 3 |

There will be two exams. The dates for the exams are subject to change slightly.

Exam | Due Date | Format | Material |
---|---|---|---|
Exam 1 | Mar 8 | take-home | Sections 1.1-3.6 |
Exam 2 | May 7 | take-home | Sections 6.1-7.4 |

We’ll devote the last week of the semester to a mini-symposium of short presentations. Since the field of stochastic processes is rich with interesting examples and topics, more than we could cover in a single semester, projects will cover a topic or application that we might otherwise not have time for in class.

- Project preferences are due April 2.
- Project notes related to your topic are due April 19.
- Project presentation on your topic will be April 25 or April 30.
- Project reflections on your topic and presentation are due May 3.

- Markov Chains and Music, based on the paper *Markov Chains for Computer Music Generation* by Ilana Shapiro and Mark Huber
- Markov Chains and Card Shuffling, based on the paper *Shuffling Cards and Stopping Times* by David Aldous and Persi Diaconis
- Markov Chain Monte Carlo and Ecology, based on the paper *An Application of Markov Chain Monte Carlo to Community Ecology* by George W. Cobb and Yung-Pin Chen
- Branching Processes, based on the paper *On Caterpillars, Trees, and Stochastic Processes* by John C. Turner
- Markov Chains and Mixing, based on the paper *A real-world Markov chain arising in recreational volleyball* by David Aldous and Madelyn Cruz
- Markov Chains and Badminton, based on the paper *Using Markov chains to identify player’s performance in badminton* by Javier Galeano, Miguel-Ángel Gómez, Fernando Rivas, and Javier M. Buldú

Our plan is to cover most of chapters 1-3 and 6-7 in the textbook, and possibly some of 4, 5, or 8, time permitting. Below is a rough outline of what is to be covered week by week through the semester. Please check back regularly for precise details on what is covered, as well as postings of class materials like lecture notes. A note about files: LaTeX and LyX source code is available for nearly all typeset documents. Simply change the file extension from `.pdf` to `.tex` or `.lyx` in the URL.

**Topic**: Sections 1.1, 1.2: Introduction. We start to get familiar with the notion of a stochastic process and discuss the terms state space, index set, Markov chain, transition state diagram, and transition matrix.

**Class materials**: Lecture notes, worksheet

**After class**: Work on Homework 0.
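To make the transition-matrix idea concrete, here is a minimal sketch in Python (the class itself uses R) of a hypothetical two-state weather chain; the states and probabilities are invented for illustration, not taken from the textbook:

```python
# A hypothetical two-state weather chain: state 0 = "rain", state 1 = "sun".
# Row i of the transition matrix lists the probabilities of moving from
# state i to each state, so every row must sum to 1.
P = [
    [0.6, 0.4],  # from rain: stay rainy w.p. 0.6, turn sunny w.p. 0.4
    [0.2, 0.8],  # from sun:  turn rainy w.p. 0.2, stay sunny w.p. 0.8
]

row_sums = [sum(row) for row in P]  # each entry should be 1
```

Checking that every row sums to 1 is a quick sanity test whenever you write down a transition matrix by hand.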

**Topic**: Sections 2.1, 2.3: \(n\)-step transitions. We give a formal definition of a discrete time, discrete state Markov chain and begin to discuss how the transition matrix of a Markov chain can be used to understand multi-step transition probabilities using the conditional law of total probability.

**Class materials**: Lecture notes, worksheet

**After class**: Work on Homework 1 and make sure you can run RStudio in class Tuesday.

**Topic**: Sections 2.2, 2.3, 2.4: Random walks. We continue our discussion on using matrix powers to understand \(n\)-step transition probabilities. Our main example will be a random walk on a graph, but the ideas hold for all kinds of Markov chains.

**Class materials**: Lecture notes, worksheet

**After class**: Read about the distribution of \(X_n\) and joint distributions in Section 2.3.
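The matrix-power computation behind \(n\)-step transition probabilities can be sketched in a few lines of Python (class work uses R); the two-state matrix below is a hypothetical example chosen for illustration:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """Compute P^n; entry (i, j) is the n-step transition probability."""
    size = len(P)
    result = [[float(i == j) for j in range(size)] for i in range(size)]  # identity
    for _ in range(n):
        result = mat_mul(result, P)
    return result

# Hypothetical two-state chain: after only 5 steps the rows of P^5
# are already close to each other, hinting at a limiting distribution.
P = [[0.6, 0.4], [0.2, 0.8]]
P5 = mat_pow(P, 5)
```

Each row of `P5` is itself a probability distribution, so the row sums stay equal to 1 under repeated multiplication.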

**Topic**: Section 2.3: Joint distributions. We discuss the (unconditional) distribution of \(X_n\) and joint distributions of a Markov chain at multiple time steps.

**Class materials**: Lecture notes, worksheet

**After class**: Read about a random walk on a cycle in Section 2.4 and about simulation in Example 2.19 of Section 2.5.

**Topic**: Section 3.1: Limiting distributions. We begin our discussion of the notion of limiting distributions. Our aim in the chapter will be to understand the long term behavior of Markov chains theoretically, but we start with some numerical work.

**Class materials**: Lecture notes, worksheet

**After class**: Make sure to bring a laptop to class next time and download the rref.Rmd file in advance.

**Topic**: Section 3.2: Stationary distributions. We begin our discussion of how to turn the problem of understanding limiting distributions from a question about limits to a question about algebra. This entails introducing the idea of a stationary distribution and solving linear systems of equations.

**Class materials**: Lecture notes, worksheet, worksheet solutions

**After class**: Post a question on Piazza about Problem 3 in today’s worksheet or some other piece of class material so far. Feel free to respond to others’ questions. For Tuesday, read Examples 3.8 and 3.11 and the definition of communication class.
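Alongside solving the linear system \(\pi P = \pi\) directly (as done in class with row reduction), a stationary distribution can be approximated numerically by iterating the chain. A minimal Python sketch, using a hypothetical two-state matrix:

```python
def stationary_by_iteration(P, steps=200):
    """Approximate the stationary distribution by repeatedly applying P
    to an arbitrary starting distribution (valid for regular chains)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical example: this chain's stationary distribution is (1/3, 2/3).
P = [[0.6, 0.4], [0.2, 0.8]]
pi = stationary_by_iteration(P)
```

This is just power iteration; it converges whenever the limiting distribution exists, but the algebraic approach in Section 3.2 gives exact answers.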

**Topic**: Section 3.3: Communication classes. We discuss what it means for states to communicate and get introduced to the ideas of recurrence and transience. This discussion, while separate from limiting and stationary distributions, is the first piece to developing an understanding of the Limit Theorem for Regular Matrices and more broadly the question of which Markov chains have a limiting distribution.

**Class materials**: Lecture notes, worksheet

**After class**: Read pages 96-98 on recurrence and transience.

**Topic**: Section 3.3: Recurrence and transience. Our aim is to deepen our understanding of recurrent and transient states. We discuss the related ideas of excursions, regeneration, and the geometric distribution in order to give a quantitative criterion for showing a state is recurrent or transient.

**Class materials**: Lecture notes

**After class**: Read Theorem 3.3, Corollary 3.4, and Lemma 3.5.

**Topic**: Section 3.3: Canonical decomposition. We discuss how the recurrence and transience series criterion, briefly introduced last time, is derived, and then show how it can be used to show that states in a communication class are either all recurrent or all transient.

**Class materials**: Lecture notes, worksheet

**After class**: Read about first step analysis in Section 3.4, focusing on Example 3.17.

**Topic**: Section 3.4: Irreducible Markov chains. We discuss finite state irreducible Markov chains and a theorem that states that these Markov chains always have a unique stationary distribution whose components can be found using the expected length of an excursion. We discuss a computational method called first step analysis for finding the expected length of an excursion.

**Class materials**: Lecture notes

**After class**: Read about periodicity in Section 3.5. Put your focus on Lemma 3.7 and Example 3.18.

**Topic**: Sections 3.5, 3.6: Periodicity and ergodicity. Through examples we have seen two main obstructions for the existence of a limiting distribution with positive components: multiple communication classes (which obstructs the limiting matrix from having equal rows) or states having a periodic structure (which prevents \(\lim_{n \to\infty} P^n\) from existing like in the random walk on the 6-cycle). We discuss periodicity in more depth and a new theorem, the Limit Theorem for Ergodic Markov chains, which tells us that once these obstructions are overcome we must have a limiting distribution: any finite state, irreducible Markov chain whose states have period 1 must have a limiting distribution with positive components.

**Class materials**: Lecture notes, Ergodicity

**After class**: Read the opening discussion in Section 3.8 up to Example 3.27.

**Topic**: Section 3.8: Absorbing chains. Our final aim in our study of discrete time Markov chains is to understand in detail the limiting behavior of a chain that starts in a transient state. We begin with the special case of absorbing Markov chains. These chains consist of transient states together with absorbing states, that is, states \(i\) satisfying \(P_{ii} = 1\).

**Class materials**: Lecture notes, Absorbing chains, R code

**After class**: Read Theorem 3.11 and the second proof given.

**Topic**: Section 3.8: Absorbing chains. We discuss the expected number of steps until absorption. We also present the idea of using absorption probabilities to understand the limiting matrix of a Markov chain with transient states and two recurrent communication classes.

**Class materials**: Lecture notes, More on absorption

**After class**: Work on the exam!
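The standard route to expected steps until absorption is the fundamental matrix \(N = (I - Q)^{-1}\), where \(Q\) collects the transition probabilities among the transient states; the row sums of \(N\) give the expected absorption times. A Python sketch with hypothetical numbers (two transient states, one absorbing state):

```python
# Hypothetical absorbing chain: transient states {0, 1}; any probability
# mass missing from each row of Q goes to the absorbing state.
Q = [[0.5, 0.2],
     [0.3, 0.4]]

# Fundamental matrix N = (I - Q)^{-1}, via the explicit 2x2 inverse.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]

# Entry N[i][j] is the expected number of visits to transient state j
# starting from i; row sums give expected steps until absorption.
expected_steps = [sum(row) for row in N]
```

For larger chains the same computation is done with a general matrix inverse (e.g. `solve` in R) rather than the hand-written 2x2 formula.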

**Topic**: Exam work day. Instead of a regular class, I’ll be available for office hours.

**After class**: Work on the exam!

**Topic**: Section 6.1: Poisson process. We introduce our first example of a continuous time Markov chain, the Poisson process, which models the occurrence of arrivals of events (for example, telephone calls or customers entering a store) over continuous time.

**Class materials**: Lecture notes, Poisson process

**After class**: Review your probability theory notes on the exponential distribution.

**Topic**: Section 6.2: Arrival times. We continue our study of the Poisson process, discussing the inter-arrival times in a Poisson process. We show that these times are exponentially distributed. The gamma distribution is also used to model the time of the \(n\)th arrival.

**Class materials**: Lecture notes, Arrival times

**After class**: Enjoy spring break!
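The exponential inter-arrival characterization gives an easy way to simulate a Poisson process: keep adding independent Exp(\(\lambda\)) gaps until you pass the time horizon. A Python sketch (class simulations are in R); the rate and horizon below are arbitrary illustrative choices:

```python
import random

def poisson_arrivals(rate, T, rng):
    """Simulate one Poisson process path on [0, T] by summing
    exponential inter-arrival times; returns the list of arrival times."""
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # Exp(rate) inter-arrival gap
        if t > T:
            return arrivals
        arrivals.append(t)

# The number of arrivals in [0, T] is Poisson(rate * T), so the
# average count over many simulated paths should be near rate * T = 2.
rng = random.Random(0)
counts = [len(poisson_arrivals(2.0, 1.0, rng)) for _ in range(10_000)]
mean_count = sum(counts) / len(counts)
```

Seeding the generator makes the simulation reproducible, which is handy when checking work against a classmate's.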

**Topic**: Spring break, no class.

**Topic**: Spring break, no class.

**Topic**: Section 6.4: Thinning and superposition. We spend the day discussing what kinds of statements and questions can be addressed about independent Poisson processes and independent exponentially distributed random variables.

**Class materials**: Lecture notes, Thinning and superposition

**After class**: Try the worksheet problems above, come ask questions about them, and read the introductory paragraphs of Sections 7.1 and 7.2.

**Topic**: Sections 6.3, 7.1: Introduction to continuous-time Markov chains. We introduce the definition of a Markov chain with finite or countable state space that evolves in continuous time. This involves the interrelated notions of hold times (the exponentially distributed amount of time until a transition to another state occurs), the embedded chain (the discrete time Markov chain that determines which state to transition to), and the idea that in a certain sense a Poisson process can have at most one arrival in an infinitesimal time interval.

**Class materials**: Lecture notes, Hold times and embedded chains

**After class**: Read Example 7.5.

**Topic**: Section 7.2: Alarm clocks and transition rates. We work on more examples of continuous-time Markov chains and further discuss the idea that a CTMC can be defined by its transition rates.

**Class materials**: Lecture notes, Transition rates

**After class**: Read the introductory discussion of Section 7.3 and the proof of the Kolmogorov forward equation.

**Topic**: Section 7.3: Infinitesimal generator. We introduce the matrix \(Q\) of transition rates between states and its relationship with the transition probability function \(P(t)\). The Kolmogorov forward equation establishes a differential equation that relates these ideas, and the matrix exponential \(e^{tQ}\) enables us to solve the Kolmogorov forward equation.

**Class materials**: Lecture notes, Infinitesimal generator

**After class**: Read Theorem 7.2 and the proposition and proof on the following page titled Stationary Distribution and Generator Matrix.
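For small generators, \(P(t) = e^{tQ}\) can be approximated directly from the Taylor series \(e^{A} = \sum_k A^k / k!\). A Python sketch with a hypothetical two-state generator (rows of \(Q\) sum to 0, so rows of \(P(t)\) sum to 1); in R the `expm` package plays this role:

```python
def mat_exp(A, terms=40):
    """Approximate the matrix exponential e^A with a truncated Taylor
    series; adequate for small matrices whose entries are modest."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term * A / k, i.e. A^k / k!
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Hypothetical generator: leave state 0 at rate 1, leave state 1 at rate 2.
Q = [[-1.0, 1.0],
     [2.0, -2.0]]
t = 0.5
Pt = mat_exp([[t * q for q in row] for row in Q])  # P(t) = e^{tQ}
```

Production code uses scaling-and-squaring rather than a raw Taylor series, but this version makes the definition transparent.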

**Topic**: Community day, no class.

**Topic**: Section 7.4: Stationary distribution. We discuss limiting and stationary distributions in the regime of continuous time Markov chains. Our main goal is to establish an algebraic equation that can be solved to find stationary distributions analogous to the equation \(\pi P = \pi\) from the discrete time case.

**Class materials**: Lecture notes, Limiting distribution
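The continuous-time analogue of \(\pi P = \pi\) is \(\pi Q = 0\) together with \(\sum_i \pi_i = 1\). For a two-state chain this solves in closed form, as in the following Python sketch (the rates are hypothetical):

```python
# Hypothetical two-state generator: leave state 0 at rate a, state 1 at rate b.
a, b = 1.0, 2.0
Q = [[-a, a],
     [b, -b]]

# Solving pi Q = 0 with the components summing to 1 gives, for this
# two-state chain, pi = (b, a) / (a + b).
pi = [b / (a + b), a / (a + b)]

# Check the stationarity equation pi Q = 0 componentwise.
residual = [sum(pi[i] * Q[i][j] for i in range(2)) for j in range(2)]
```

Larger chains require solving the linear system numerically, just as in the discrete-time case.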

**Topic**: Project work time.

**Topic**: Project work time.

**Topic**: Project work time.

**Topic**: Project work time.

**Topic**: Project presentations.

**After class**: Work on Exam 2. Enjoy your summer! Keep in touch!

Here are a few ways to get help:

**Office Hours**: Mondays 4:00-5:00, Wednesdays 3:00-5:00, Fridays 11:00-12:00; additional availability by appointment

**Study groups**: Other students in the class are a wonderful resource. I want our class to feel like a community of people working together. Please get in touch if you’d like me to help you find study partners, and please reach out if you’d like others to join your group. You may work together on homework to explain and check answers, but make sure you write what *you* know on homework in order to get good feedback.

**Piazza**: I’ve set up an online forum for asking and answering questions about class material, including homework problems. My plan is to check the forum regularly and answer questions as they come up, but my hope is everyone in the class will pitch in and answer others’ questions as another form of participation in the class.

- Everyone is invited to join DataCamp, which provides an introductory R tutorial. It’s a convenient way to gain some familiarity with R, a useful tool for our course and beyond. Our textbook also provides a thorough tutorial of some R basics in the appendix.
- Our textbook also has a useful collection of R scripts available; contained there are all the R code snippets you’ll notice interspersed in the text.
- I’ve collected some resources to help you with some basics of RMarkdown.
- Here is an RMarkdown/LaTeX template file for writing nicely formatted documents, along with its PDF output.
- A LaTeX quick reference is available for commonly used symbols.
- RStudio Server is a cloud service that lets you edit and compile R and RMarkdown files through a web browser so that no local installation is needed. The server is hosted on the MHC network and you need to be on the VPN to access it if you’re away from campus.
- You can also install R and RStudio locally on your personal computer (you must install R before RStudio), or you can use RStudio Cloud, a commercial RStudio cloud service with a free tier.

- Here are some useful RMarkdown documents that have code snippets for ideas that will be used through the class.