OpenAI was started just over six months ago, and I feel they have done enough to warrant a review of what they have accomplished so far and my thoughts on what they should do next.
What is OpenAI?
OpenAI was announced in December 2015 and their stated mission is:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
In the short term, we’re building on recent advances in AI research and working towards the next set of breakthroughs.
What have they done so far?
- Started a new, small (so far) research center
- Experimented with a novel organization of the research center
- Hired a variety of smart people
- Released a toolkit for reinforcement learning (RL)
Since it has only been six months and they are still getting set up, it is difficult to assess how well they have done. But here are my first impressions of the points above.
- Always great to have more places hiring researchers!
- Way too early to assess. I’m always intrigued by experiments with new ways to organize research, since there are three dominant types of organizations today (academia, industry focused on development, and industry focused on long-term research).
- Bodes well for their future success.
- I have yet to use it, but it looks awesome. Supervised learning was sped along by datasets such as UC Irvine’s Machine Learning Repository, MNIST, and ImageNet, and I think their toolkit could have a similar impact on RL (a quick sketch of the kind of interface it standardizes follows this list).
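To give a flavor of what the toolkit standardizes, here is a minimal sketch of the agent-environment loop, assuming the toolkit is OpenAI Gym and using its CartPole environment with a random agent as a stand-in for a real learning algorithm:

```python
# Minimal sketch of the standard RL interaction loop (assumes OpenAI Gym).
import gym

env = gym.make('CartPole-v0')           # a standard benchmark environment
observation = env.reset()               # start a new episode
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy, a placeholder for a real agent
    observation, reward, done, info = env.step(action)
    total_reward += reward
print('Episode reward:', total_reward)
```

The value is less in the toy example itself than in the fact that every environment exposes the same reset/step interface, so algorithms and benchmarks become directly comparable.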
What do I think they should do?
This blog post was motivated by my having a large list of things that I think OpenAI should be doing. After I started writing, I realized that many of the things on my wish list would probably be better run by a new research institute, which I will detail in a future post. So here, I focus on my research wish list for OpenAI.
Keep the Data Flowing
As Neil Lawrence pointed out shortly after OpenAI’s launch, data is king. So I am very happy with OpenAI’s RL toolkit. I hope that they keep adding new datasets or environments that machine learners can use. Some future ideas include supporting new competitions (maybe in partnership with Kaggle?), partnering with organizations to open up their data, and introducing datasets for unsupervised learning.
Unsupervised Learning
But maybe I’m putting the cart (data) before the horse (algorithms and understanding). Unsupervised learning is tough because of a series of interconnected issues:
- What are good test cases / datasets for unsupervised learning?
- How does one assess learning success?
- Are our current algorithms even close to the “best”?
Supervised learning is easier because the data comes with labels, there are lots of established metrics for evaluating success (for example, accuracy of label predictions), and for most metrics we know what the best possible result is (100% correct label predictions). Reinforcement learning has some of that (data and a score), but it is much less well defined than supervised learning.
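To make the contrast concrete, here is a toy sketch of the supervised case (the labels and predictions below are made up purely for illustration):

```python
# Toy illustration: with labels, accuracy is a single well-defined score
# and 100% correct is a known ceiling.
import numpy as np

true_labels = np.array([0, 1, 1, 2, 0, 2])   # ground-truth labels (made up)
predictions = np.array([0, 1, 2, 2, 0, 1])   # a model's predicted labels (made up)

accuracy = np.mean(predictions == true_labels)  # fraction of correct predictions
print('Accuracy: {:.0%} (best possible is 100%)'.format(accuracy))
```

In unsupervised learning there is no column of true labels to compare against, so there is no equally obvious single number to optimize or report.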
So while I think progress on reinforcement learning will definitely lead to new ideas for unsupervised learning, more work needs to be done directly on unsupervised learning. And since OpenAI has no profit motive or tenure pressure, I really hope they focus on this extremely tough area.
Support Deep Learning Libraries
We currently have a very good problem: lots of deep learning libraries, to the point of there almost being too many. A few years ago, everyone essentially had to code their own library, but now one can choose from low-level libraries such as Theano and TensorFlow to high-level libraries such as Lasagne and Keras, just to name a few examples from Python.
I think that OpenAI could play a useful role in the standardization and testing of libraries. While there are tons of great existing libraries, their documentation quality varies significantly and is in general subpar (compared to NumPy’s, for example). Additionally, besides choosing a language (I strongly advocate Python), one usually needs to choose a backend library (Theano vs. TensorFlow) and then a high-level library.
So I specifically propose that OpenAI take on the following initiatives:
- Help establish some deep learning standards so people can verify the accuracy of a library and assess its quality and speed
- Set up some meetings between Theano, TensorFlow, and others to help standardize the backends (and include them in the setting of standards)
- Support initiatives for developers to improve documentation of their libraries
- Support projects that are agnostic to the backend (like Keras) and/or help other packages that are backend-specific (like Lasagne) become backend-agnostic (see the sketch after this list)
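To show what backend-agnostic code buys you, here is a minimal sketch assuming Keras 1.x: the model definition never mentions Theano or TensorFlow, and the same script runs on whichever backend is configured in ~/.keras/keras.json.

```python
# Minimal backend-agnostic model sketch (assumes Keras 1.x on top of
# either Theano or TensorFlow; the backend is chosen by configuration,
# not by the code below).
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, input_dim=784, activation='relu'))   # hidden layer for MNIST-sized inputs
model.add(Dense(10, activation='softmax'))                # 10-class output
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])

# model.fit(X_train, y_train, nb_epoch=10) would then train identically
# on either backend (X_train and y_train are hypothetical arrays).
```

The more packages that can be written this way, the less a newcomer’s choice of backend locks them in.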
As a recent learner of deep learning, and someone who interacts extensively with non-machine learners, I think the above initiatives would allow a wider population of researchers to incorporate deep learning into their research.
Support Machine Learning Education
I believe this is the crucial area that OpenAI is missing, and it will prevent them from achieving their stated mission of helping all of humanity.
Check out a future post for my proposed solution…