Journal Updates
As part of the Canadian Distributed Mentorship Program, here is where I will be posting my journal entries relating to the work done.
Week Four: May 21 - May 26
The work on the ICML competition continues! This week I worked on the Blackjack environment, again similar to the one developed at the University of Alberta. Where by "worked" I mean translated to Java, and by "similar" I mean really similar :). It worked without a problem, and along the way I discovered a bug in the previous MountainCar agent code I had released. Being slightly obsessive-compulsive, I gave myself a boot to the head, as the bug came from misplacing two parentheses. Hrmph.
I successfully read about Decision Trees from Alpaydin's Introduction to Machine Learning book, and a bit on parametric methods.
The real breakthrough of the week was understanding Principal Components Analysis (PCA). It always comes up in Machine/Reinforcement Learning papers, so I had become increasingly terrified of it. Much like neural networks, topology, and POMDPs, it just sounds like something you ought to be reasonably scared of.
Well, PCA is just a method to map inputs from a d-dimensional space to a (k < d)-dimensional space with minimum loss of information. The criterion to be maximized is the variance, and PCA does that by waving the magic eigenvector wand around (that is, by projecting the data onto the eigenvectors of the covariance matrix with the largest eigenvalues). Surprisingly straightforward. The moral of this story is: there is no correlation between acronyms and concept complexity. Next week: the correlation between the number of Greek letters and proof complexity. Myth or fact?
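Mostly for future-me, here is a rough sketch of that recipe in NumPy: center the data, eigendecompose the covariance matrix, and project onto the top k eigenvectors. The toy data matrix and the choice of k = 2 are made up for illustration; this is just the idea, not any particular library's PCA implementation.

    import numpy as np

    # Toy data: 200 samples in d = 3 dimensions (made up for illustration).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                              [1.0, 1.0, 0.0],
                                              [0.5, 0.2, 0.1]])

    # Center the data, then wave the eigenvector wand at the covariance matrix.
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)  # returned in ascending order

    # Keep the k eigenvectors with the largest eigenvalues (most variance preserved).
    k = 2
    top = eigenvectors[:, np.argsort(eigenvalues)[::-1][:k]]

    # Project from d = 3 down to k = 2 dimensions.
    X_reduced = X_centered @ top
    print(X_reduced.shape)  # (200, 2)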
This week was kind of slow, as I caught a cold and felt like I had swallowed tacks. And then I took NyQuil during the day, which took care of the tacks but made life positively more interesting. Night-time cold medication + staying awake and reading about decision trees = feeling like you've been hit by a Mack truck. I don't recommend it.

So next week I think I'm going to spend more time reading and thinking about Decision Trees. While I understand the concepts, I don't really know where I would start if I were given a problem to solve. It's the classic problem of trying to understand a concept without doing examples. So I'm going to do examples.