Advanced Q Learning
Today I watched lecture 7 from Berkeley's deep RL course, which covered various implementation tricks for DQN and Q-learning methods in general.
Overall, this chapter didn't introduce any completely "new concepts" and serves more as an information session for the respective homework assignment (which is to implement Q-learning). However, since Q-learning is so important to the field, I appreciate the extra exposure to the algorithm and the various tips and tricks for implementing it. After finishing the homework and implementing Q-learning for Atari Pong, I plan to apply it to a game of my own (to get more experience with the field).
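To keep the core update rule fresh before tackling the homework, here's a minimal tabular Q-learning sketch on a toy chain environment I made up (my own example, not from the lecture or homework): the agent walks left/right along 5 states and gets reward 1 for reaching the rightmost one.

```python
import random

random.seed(0)

# Toy chain MDP: states 0..4, reward 1 on reaching state 4 (terminal).
N_STATES = 5
ACTIONS = [0, 1]  # 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Deterministic transition; episode ends at the rightmost state."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

for episode in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap off the max over next-state actions
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# Greedy policy per non-terminal state; after training it should be all 1s (right).
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
print(policy)
```

The off-policy part is the `max(Q[s2])` in the target: the agent learns about the greedy policy even while exploring with epsilon-greedy. For Pong this table becomes a neural network, which is where the lecture's stability tricks come in.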
Here are my notes: