What are the most interesting things Facebook is doing in ML research?

Answer by Joaquin Quiñonero Candela:

The Applied ML team I am a part of is Facebook’s applied research arm. We work on core ML, computer vision, computational photography, and language technologies. We work very closely with Facebook AI Research (FAIR), which is pushing the state of the art in these areas; we are complementary in that we focus more heavily on applications. I would like to highlight a couple of recent pieces of research I find very exciting. This is by no means a complete list, and we’re not doing this alone but in collaboration with FAIR and the many product teams we partner with.
There are many interesting research problems the team is tackling: universal vision models using multi-task learning, representation learning (link to paper), large-scale distributed training using Elastic SGD, space-time convolutional networks for videos (link to paper), cascades of networks for faster and better vision models (link to paper), and learning from videos (link to paper).
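To give a feel for the multi-task learning idea mentioned above, here is a minimal sketch: several tasks share one learned representation (a "trunk"), while each task keeps its own small head. Everything here is illustrative and invented for this example; it is not Facebook's actual model, just the general pattern of sharing a trunk across tasks.

```python
import numpy as np

# Minimal multi-task learning sketch (illustrative only):
# two regression tasks share a linear trunk W; each task has its own head.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))        # 64 examples, 10 input features
y1 = X @ rng.normal(size=10)         # synthetic target for task 1
y2 = X @ rng.normal(size=10)         # synthetic target for task 2

W = rng.normal(size=(10, 5)) * 0.1   # shared trunk: 10 -> 5 features
h1 = rng.normal(size=5) * 0.1        # head for task 1
h2 = rng.normal(size=5) * 0.1        # head for task 2
lr = 0.02

def losses():
    Z = X @ W                        # shared representation
    return (np.mean((Z @ h1 - y1) ** 2),
            np.mean((Z @ h2 - y2) ** 2))

init_l1, init_l2 = losses()
for _ in range(500):
    Z = X @ W
    e1 = Z @ h1 - y1                 # per-task residuals
    e2 = Z @ h2 - y2
    n = len(X)
    # heads see only their own task's error;
    # the shared trunk accumulates gradient signal from both tasks
    g_h1 = 2 * Z.T @ e1 / n
    g_h2 = 2 * Z.T @ e2 / n
    g_W = 2 * X.T @ (np.outer(e1, h1) + np.outer(e2, h2)) / n
    h1 -= lr * g_h1
    h2 -= lr * g_h2
    W -= lr * g_W

l1, l2 = losses()                    # both task losses drop below their starting values
```

The point of sharing the trunk is that gradient signal from every task shapes one common representation, which is also the intuition behind using multi-task learning for "universal" vision models.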
If you are curious about further details of our work on applied computer vision at Facebook, try asking Manohar Paluri a Quora question!
In language technology, one thing we are trying to do is eliminate language barriers on Facebook. To do this we serve over 2B translations of posts every single day, covering over 1,800 language directions across more than 40 unique languages. We depended on Bing Translate for a while, but have since built and deployed our own technology. We are now evaluating deep learning for translation, hoping to achieve more human-like translations using neural networks. You can ask Alan Packer a Quora question if you would like to know more about what is going on in our language technologies applied research and product work.
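As a quick sanity check on those figures: a "language direction" is an ordered source-to-target pair, so n supported languages yield n × (n − 1) directions, and the two numbers in the paragraph are consistent with each other. (The count of 43 below is only an illustration, not a number from the post.)

```python
# With n languages there are n * (n - 1) ordered translation directions
# (source -> target). For example, 43 languages would already give over
# 1,800 directions, matching "over 1800 directions, more than 40 languages".
def directions(n_languages: int) -> int:
    return n_languages * (n_languages - 1)

print(directions(43))  # 1806
```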
In core ML, we focus on researching and shipping large-scale, real-time ML/AI algorithms for some of the biggest ML applications in the world. Whenever a user logs into Facebook, these models are used to rank news feed stories (1B users every day, 1.5K stories per user per day on average), ads, search results (1B+ queries a day), trending news, friend recommendations, and even the notifications a user receives or the comments on a post. The Core ML team also builds state-of-the-art text understanding algorithms using deep learning. These algorithms are integrated into the ML platform we’ve built to facilitate and scale ML from training to model deployment. This platform is used by every team that uses ML in production. To give an idea of how prevalent ML is at Facebook, a bit over 20% of all Facebook engineers (and even some non-engineers) actively use the platform. Our current wave of research involves deep learning models for event prediction, distributed learning for sparse modeling and deep learning, representation learning for text understanding through convolutional and recurrent nets, and model compression through multi-task learning. If you want to learn more about Core ML at Facebook, ask Hussein Mehanna a question.
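The ranking workloads described above can be sketched in their simplest possible form: score each candidate item with a learned model (here a logistic scorer) and sort by predicted engagement. The feature names, weights, and stories below are entirely invented for illustration; real production ranking models are vastly larger and trained on logged engagement data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical candidate stories with features
# [freshness, affinity_with_author, media_richness]:
stories = {
    "friend_photo": np.array([0.9, 0.8, 0.7]),
    "page_link":    np.array([0.5, 0.2, 0.3]),
    "group_post":   np.array([0.7, 0.6, 0.1]),
}
weights = np.array([1.2, 2.0, 0.8])  # hypothetical learned weights
bias = -1.5

# Predicted engagement probability per story, then rank highest-first.
scores = {name: sigmoid(f @ weights + bias) for name, f in stories.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

The same score-then-sort pattern generalizes to ads, search results, notifications, and comments; what changes in practice is the scale, the feature pipeline, and the model class.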
