Rene Bidart

Keeping up with Deep Learning Research

When I started my PhD I was overwhelmed trying to keep up with new research. Reading everything would take forever, but not reading enough risks rediscovering old ideas or doing unimportant work. So how can we balance this?

Here are a few lessons I learned:

  1. Papers are often dense and filled with unnecessary math. Read summaries (blogs, Twitter threads, or newsletters) to understand the paper and decide if it is worth reading.
  2. If a paper is worth looking at for more than 30 seconds, write a quick summary of it to aid understanding and improve recall. It’s too easy to convince yourself you know something out of laziness.
  3. My approach was to spend one day per week reviewing research, because many newsletters come out every week, and it’s easy to sort other sources like reddit by top posts of the week (see the sketch below).
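
If you want to automate the reddit part of that weekly pass, here’s a minimal sketch using the praw library. The subreddit and limit are just examples, and the credentials are placeholders you’d replace with your own Reddit API keys:

```python
import praw  # pip install praw

# Read-only client. The client_id/client_secret/user_agent values are
# placeholders for your own Reddit API credentials.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="weekly-ml-digest",
)

# Top posts from the past week on r/MachineLearning.
for post in reddit.subreddit("MachineLearning").top(time_filter="week", limit=10):
    print(f"{post.score:>5}  {post.title}  ({post.url})")
```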

Resources

Arxiv Sanity and Papers with Code are probably the best ways to see what real researchers are interested in.
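
If you’d rather pull the raw firehose yourself, the public arXiv API exposes the newest submissions as an Atom feed. Here’s a minimal sketch with feedparser, where cs.LG is just one example category:

```python
import feedparser  # pip install feedparser

# Newest submissions in a category via the public arXiv API.
# cs.LG is an example; swap in your own subfield's category.
url = (
    "http://export.arxiv.org/api/query"
    "?search_query=cat:cs.LG"
    "&sortBy=submittedDate&sortOrder=descending&max_results=5"
)
for entry in feedparser.parse(url).entries:
    print(entry.published[:10], " ".join(entry.title.split()))
```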

Reddit machine learning is good: there are lots of smart people, but also a high percentage of idiots and beginners, so lots of basic/clickbait stuff gets highly upvoted.

The best AI newsletters:

  1. Jack Clark’s Import AI - My favourite weekly overview of the latest ML research
  2. The Batch - Another great weekly overview of ML research, from Andrew Ng
  3. AI Alignment - Good but very focused on AI safety
  4. China AI - China-specific, but useful because so much research and implementation happens there

As far as legit research goes, these are the best organizations to look at. They’re good at hyping their own research, though, so checking them directly could be redundant with the sources above:

  1. OpenAI
  2. DeepMind
  3. Google AI

These publish less often, but at higher quality:

  1. Distill (really good, but will take some time because you actually learn things reading it)
  2. The Gradient (good quality blog)

There are a ton of good people on Twitter. I made an account specifically to use as a news source, but most people recommend you actually engage with others. I made a list of some good ones.

How much time to spend on this?

This is the classic exploration-exploitation problem from reinforcement learning, and there’s no right or wrong way to handle it. Spending more time keeping up with results in fields related to your core work lets you make better connections between fields and can give you inspiration, but it can easily eat up too much time. Once you’ve settled on a topic, the key is learning to filter for the research relevant to your subfield.
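
To make the analogy concrete, here’s a toy epsilon-greedy bandit. It’s purely an illustration with made-up "usefulness" numbers: each arm is a reading source, and epsilon is the fraction of time you spend exploring outside your usual picks.

```python
import random

# Hypothetical average usefulness of each reading source (unknown to you).
true_value = {"own subfield": 0.8, "related fields": 0.5, "random papers": 0.2}

estimates = {s: 0.0 for s in true_value}  # your running estimates
counts = {s: 0 for s in true_value}
epsilon = 0.1  # fraction of reading time spent exploring

for week in range(1000):
    if random.random() < epsilon:
        source = random.choice(list(true_value))    # explore a random source
    else:
        source = max(estimates, key=estimates.get)  # exploit the best so far
    reward = random.gauss(true_value[source], 0.3)  # noisy payoff that week
    counts[source] += 1
    # Incremental mean: nudge the estimate toward the observed reward.
    estimates[source] += (reward - estimates[source]) / counts[source]

print(estimates)  # converges roughly to the true usefulness of each source
```

Higher epsilon means more time on random reading; lower epsilon means doubling down on what has already paid off.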

It’s important to remember that learning the basics is more important than furiously reading all the most recent research. Most research is quickly forgotten, and 5 years later the whole field will be condensed into a one-semester course for undergrads. As long as you have a general idea of the field and of anything relevant to your specific subfield, you’ll probably be fine, and extra time is better spent producing something.