Course: Probability: Basic Concepts & Discrete Random Variables
Length: 6 weeks, 4-6 hrs/wk (New session starts Oct. 13, 2018)
Instructor: Mark Ward
Quote: In this course, we will first introduce basic probability concepts and rules, including Bayes' theorem, probability mass functions and CDFs, joint distributions, and expected values.
Then we will discuss a few important probability distribution models with discrete random variables, including Bernoulli and Binomial distributions, Geometric distribution, Negative Binomial distribution, Poisson distribution, Hypergeometric distribution and discrete uniform distribution.
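As a taste of the discrete models listed in that description, here's a minimal sketch (my own illustration, not anything from the course materials) of evaluating the Binomial PMF in Python:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): C(n, k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability of exactly 3 heads in 5 fair coin flips:
# C(5, 3) * 0.5^3 * 0.5^2 = 10 * 0.125 * 0.25 = 0.3125
print(binomial_pmf(3, 5, 0.5))
```

The same shape of calculation (a counting term times success/failure probabilities) comes back for the Geometric, Negative Binomial, and Hypergeometric models later in the course.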
I grew to love this course, and I feel like it greatly helped my understanding of probability, but when I sat down and put all my thoughts together, I remembered being less enthusiastic at the start. So now I have to wonder: do I have a highly positive view of the class because it’s really good, or because I did well? And did I do well because it was a great course, or because, after running at this stuff multiple times, I was finally ready to learn some of it? I don’t know, but this was the right course for me at the right time and I’m very pleased – and ready to tackle part 2, which is the highest endorsement anyone can give.
This is part 1 of a 2-part series, covering general probability techniques and discrete models; continuous models and the fancy stuff like Markov chains are covered in Part 2. I took it in archived form, meaning the forums weren't active. I'm happy to discover that both parts will be re-starting on Oct. 13; I'm registered for both. For me, math is a process, with lots of loops and restarts.
By way of brief recap, since in math classes background is everything: I ended up in this course after I tried the Harvard follow-up to the very introductory MOOC Fat Chance, but it was too "mathy" for me; the focus was on proving theorems with little guidance on what to do with them. This was more my speed, an in-between step, since explanation shared the stage with deriving and proving theorems.
The course is structured so that all of each week's lectures are presented up front, at which point I would feel generally confused. I felt like I was missing an overview, a sense of where we were going, the connective tissue of narrative. But I have learned patience, and it paid off: the lectures were followed by three problem sets of non-graded practice questions, complete with answers and varying degrees of detailed explanation. This is where the course really worked well for me: doing these questions – or in many cases, not doing them because I didn't know what to do at first – and reviewing the given solutions made sense of the lectures. I do wish there had been a few "basic nuts and bolts" questions after each video, but that's me.
Because it took me a while to catch on to the rhythm of the course, I think I still have more to learn from the first weeks in particular, which is why I signed up again. It will also be helpful to have forums for questions (I still don’t think I fully understand how to calculate variance, particularly using the “diagonal” approach shown in the video), though I found I could get most of my questions answered through old forum posts.
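I can't reproduce the "diagonal" layout from the video, but the standard shortcut formula for variance, Var(X) = E[X²] − (E[X])², is easy to sketch in Python for any discrete distribution given as a value-to-probability table (my own sketch, not course code):

```python
def variance(pmf):
    """Var(X) = E[X^2] - (E[X])^2 for a discrete pmf given as {value: probability}."""
    mean = sum(x * p for x, p in pmf.items())          # E[X]
    second_moment = sum(x**2 * p for x, p in pmf.items())  # E[X^2]
    return second_moment - mean**2

# Fair six-sided die: E[X] = 3.5, E[X^2] = 91/6, so Var(X) = 35/12 ≈ 2.9167
die = {x: 1 / 6 for x in range(1, 7)}
print(variance(die))
```

Tabulating x·p(x) and x²·p(x) side by side like this is, as far as I can tell, the same bookkeeping the lecture's diagonal approach organizes visually.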
Each week ended with a graded set of 11 or 12 questions. These varied in complexity, which is always helpful. For the most part they shadowed the practice questions, though some would venture into unexplored territory or require some extra consideration of just what manipulation was necessary. Most of the questions required calculation; a few were multiple choice. Grading was generous: three attempts were possible for each question (although the syllabus claimed they were single-attempt; maybe they are single-attempt in live sessions, with more leeway in the archive. Or maybe they changed their minds. Or maybe it's a mistake).
And again, these questions are where the lectures came together for me. I wish there had been another round somewhere along the line: often I didn't figure out how something worked until the last question, and then there were no further questions on that aspect to make sure I knew what I was doing.
I found one outside source to be an enormous help: the YouTube channel run by jbstatistics (aka Jeremy Balka, assistant professor of mathematics at the University of Guelph). These videos are extremely clear, step-by-step explanations of basic topics in discrete (and continuous) probability without a lot of technical verbiage.
I’m really glad I found this course, and I’m hoping to be able to tackle Part 2 on continuous models, which is where I completely fell apart in the Harvard series. It might not be the course for everyone; for someone at my level, a good deal of frustration tolerance is required, but a little patience went a long way and in the end, the result was very much worth it.