The final projects will be completed in groups of up to 4 students, depending on class size. The project may consist of any combination of theoretical analysis, applications, and literature survey (possibly incorporating all three). The only constraint is that the project should include some aspect of probabilistic inference and/or learning. You should aim for a project suitable for submission to a machine learning conference, e.g., NIPS, ICML, or KDD, or a domain-specific conference, e.g., CVPR, EMNLP, or RECOMB.
Proposals should be 1-2 pages long, and should include:
* Project title and list of group members
* Overview of project
* Literature survey of 3 or more relevant papers
* Description of data sets to use for the experiments, if applicable
* Plan of activities, including final goal and how you plan to divide up the work
Final Project Report
The final project will be submitted through Relate in NIPS 2016 format, i.e., a main manuscript of up to 8 pages plus one page of references. Any member of the group may submit the project.
You may include a supplement as additional pages appended to the main PDF. If you have supplementary material that cannot be submitted as a PDF, please email a zipped file to me by the deadline. The main paper should be self-contained, i.e., do not assume that the reviewer will read your supplement.
Each group will give a short presentation of the course project in class. Live demonstrations of your software are highly encouraged (if applicable). Feedback from the presentation should be used to improve your project report.
You are expected to have a significant literature review and brainstorming stage. Your initial presentation may be primarily literature review, where the class helps you with brainstorming.
Finding references: A good place to start is a standard search with relevant keywords. From the initial search results, consider a breadth-first search in both directions, i.e., papers that cite the current paper and papers that it cites. Once your idea is a bit more concrete, the course staff can also help supplement your reference list.
Video presentations: Each group is expected to present its findings to the class. The midway project presentation will focus on background and literature review, and will primarily introduce the main ideas to the class. Videos should be 15-20 minutes long. The videos are accessible here.
Peer-grading: Presentations will be evaluated based on peer reports. Each individual must complete at least four peer reports in order to receive the participation grade (10%).
(We will have 2-4 invited lectures, so presentations may move as appropriate.)
- March 23: Finalize teams
- March 31 - April 9: Literature review project presentations
- April 10: Submit project proposal (problem statement)
- April 17: Midway project report presentations (link)
- April 24: Peer review reports (written reviews of other groups' project presentations). (Submission link)
- May 5: Final project reports (written). (Submission link)
Students may develop projects based on their own research, with the constraint that the project includes some aspect of probabilistic inference and/or learning. Alternatively, here is a list of possible directions.
- Deep learning and graphical models: There are three common ways to combine deep learning models with graphical models. Work broadly in this area would be interesting.
- Graphical Model Structure Learning: This is the task of estimating a graphical model that describes the distribution of a set of variables, given only samples from it. The underlying graph may be either a directed or undirected graphical model. Potential projects include theory or applications of structure learning methods.
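As one concrete instance of structure learning, the Chow-Liu algorithm recovers the best tree-structured model by building a maximum-weight spanning tree over pairwise empirical mutual information. A minimal pure-Python sketch (function names and the toy data are ours, for illustration only):

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete variables."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts c, px[x], py[y]
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def chow_liu_edges(samples):
    """samples: list of tuples, one value per variable.
    Returns the edges of a maximum-weight spanning tree under MI (Kruskal)."""
    d = len(samples[0])
    cols = list(zip(*samples))
    weights = {(i, j): mutual_information(cols[i], cols[j])
               for i, j in combinations(range(d), 2)}
    parent = list(range(d))          # union-find forest
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    edges = []
    for (i, j), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                 # add edge only if it joins two components
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

For example, on data where variable 0 and variable 1 are perfect copies and variable 2 is independent, the highest-MI edge (0, 1) is selected first.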
- Inference: The most common use of a probabilistic graphical model is answering queries, most often the conditional distribution of a set of variables given an assignment to a set of evidence variables. In general, this problem is NP-hard, which has motivated a number of algorithms (both exact and approximate). Potential topics include:
- Comparing approximate inference algorithms in terms of accuracy, computational complexity, and sensitivity to parameters. Exact algorithms include junction trees and bucket elimination. On larger networks one typically resorts to algorithms that produce approximate solutions, such as sampling (Monte Carlo methods), variational inference, and generalized belief propagation.
- Convex procedures -- methods that perform approximate inference by convex relaxation (Wainwright 2002, Mudigonda et al. 2007)
- Linear programming methods for approximating the MAP assignment (Wainwright et al. 2005b, Yanover et al. 2006, Sontag et al. 2008)
- Recursive conditioning -- an any-space inference algorithm that recursively decomposes an inference problem on a general Bayesian network into inferences on smaller subnetworks (Darwiche 2001).
- Black-box variational inference (Ranganath et al. 2013)
- Clustered variational inference
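To make the exact-vs-approximate comparison above concrete, the sketch below answers one posterior query on a toy "sprinkler"-style Bayesian network two ways: brute-force enumeration, and likelihood weighting (a simple Monte Carlo method). The network and CPT values are made up for illustration:

```python
import random

# Toy Bayesian network over binary variables (CPTs are illustrative):
# Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
P_C = 0.5
P_S = {0: 0.5, 1: 0.1}            # P(Sprinkler=1 | Cloudy)
P_R = {0: 0.2, 1: 0.8}            # P(Rain=1 | Cloudy)
P_W = {(0, 0): 0.0, (0, 1): 0.9,  # P(WetGrass=1 | Sprinkler, Rain)
       (1, 0): 0.9, (1, 1): 0.99}

def joint(c, s, r, w):
    """Full joint probability, factored along the network structure."""
    p = P_C if c else 1 - P_C
    p *= P_S[c] if s else 1 - P_S[c]
    p *= P_R[c] if r else 1 - P_R[c]
    p *= P_W[(s, r)] if w else 1 - P_W[(s, r)]
    return p

def exact_posterior_rain_given_wet():
    """P(Rain=1 | WetGrass=1) by brute-force enumeration."""
    num = sum(joint(c, s, 1, 1) for c in (0, 1) for s in (0, 1))
    den = sum(joint(c, s, r, 1)
              for c in (0, 1) for s in (0, 1) for r in (0, 1))
    return num / den

def lw_posterior_rain_given_wet(n=100000, seed=0):
    """Same query by likelihood weighting: sample the non-evidence
    variables forward and weight each sample by P(WetGrass=1 | s, r)."""
    rng = random.Random(seed)
    wsum = rsum = 0.0
    for _ in range(n):
        c = int(rng.random() < P_C)
        s = int(rng.random() < P_S[c])
        r = int(rng.random() < P_R[c])
        w = P_W[(s, r)]           # evidence weight
        wsum += w
        rsum += w * r
    return rsum / wsum
```

With enough samples the two estimates agree closely; a project along these lines would vary the network size and compare accuracy against running time across several approximate methods.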
- Probabilistic Inference For Event Prediction: The use of statistics to overcome uncertainty is one of the pillars of a large segment of the machine learning market. Probabilistic reasoning has long been considered one of the foundations of inference algorithms and is represented in all major machine learning frameworks and platforms. Recently, probabilistic reasoning has seen major adoption at tech giants such as Uber, Facebook, and Microsoft, helping to push the research and technological agenda in the space. Specifically, probabilistic programming languages (PPLs) have become one of the most active areas of development in machine learning, sparking the release of new and exciting technologies. See here for more information.
- Pyro: Deep Probabilistic Programming
- Application areas: