
CS298 Proposal

A Question Answering System on SQuAD Dataset Using an End-to-end Neural Network

Bo Li

Date: Feb 7, 2018

Advisor: Dr. Chris Pollett


Question Answering (QA) is a subfield of artificial intelligence concerned with developing computer systems capable of answering natural language questions automatically. QA techniques are widely used in search engines, personal assistant applications on smartphones, voice control systems, and many other applications. In recent years, more and more end-to-end neural network architectures have been built for question answering tasks. In contrast, traditional QA solutions use syntactic and semantic analyses as well as handcrafted features. Today, end-to-end neural network approaches give more accurate results. In this project, the Stanford Question Answering Dataset (SQuAD)[1] is used. It consists of questions posed by humans on Wikipedia articles, and the answer to each question is a segment of the corresponding Wikipedia article[1]. In total, SQuAD contains 100,000+ question-answer pairs on 500+ articles[1]. The goal of this project has two parts. First, we will build a baseline QA system on SQuAD using an end-to-end neural network architecture. Second, we will experiment with different architectures to find out to what extent they influence the system's performance.
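To make the task concrete, the sketch below shows the shape of a SQuAD-style record: each answer is given as a character offset into the context paragraph, so the system must predict a span of the passage. The record itself is illustrative, not drawn from the dataset.

```python
# Illustrative sketch of the SQuAD record format: the answer is a
# character-offset span inside the context paragraph.
record = {
    "context": "The Amazon rainforest covers much of the Amazon basin of South America.",
    "question": "What does the Amazon rainforest cover?",
    "answer": {"text": "much of the Amazon basin", "answer_start": 29},
}

# Recover the answer span from the stored character offset.
start = record["answer"]["answer_start"]
end = start + len(record["answer"]["text"])
span = record["context"][start:end]
print(span)
```

Because answers are always spans of the passage, a model can predict just a start index and an end index rather than generating free-form text.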

CS297 Results

  • Calculation of back propagation on a feed-forward network example
  • Implementation of word embedding using the neural probabilistic language model and the skip-gram model
  • System design of a baseline question answering system on SQuAD using the match-LSTM and answer pointer model of [2]

CS298 Schedule

Week 1 - 2:   01/29 - 02/11  Deliverable #6: A baseline QA system on SQuAD based on [2]
Week 3 - 4:   02/12 - 02/25  Deliverable #7: An extension of the built QA system to support experiments on different architectures
Week 5 - 6:   02/26 - 03/11  Deliverable #8: Experimental results of different architectures
Week 7 - 9:   03/12 - 04/01  Deliverable #9: CS298 Report
Week 10 - 16: 04/02 - 05/20  Oral Defense


  • Deliverable #6: A baseline QA system on SQuAD based on [2]. This deliverable is a substantial implementation that includes processing the data, training the QA system, and deploying it.
  • Deliverable #7: An extension of the built QA system to support experiments on different architectures. In this deliverable, various architectures will be added to the QA system.
  • Deliverable #8: Experimental results of different architectures. In this deliverable, we will try to find a better model, one that saves memory or gives better accuracy. [3-7] are some of the books and papers we may refer to.
  • Deliverable #9: CS298 report.
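The data-processing step in Deliverable #6 can be sketched as follows: tokenize the text, build a vocabulary, and map each token to an index padded to a fixed length. The helper names (`PAD`, `UNK`, `build_vocab`) are illustrative, not taken from the project code.

```python
# Sketch of the data processing pipeline for Deliverable #6:
# tokenization, vocabulary building, and token-to-index mapping.

PAD, UNK = 0, 1  # reserved indices for padding and out-of-vocabulary words

def tokenize(text):
    # Real preprocessing would use a proper tokenizer; whitespace
    # splitting keeps the sketch self-contained.
    return text.lower().split()

def build_vocab(texts):
    vocab = {"<pad>": PAD, "<unk>": UNK}
    for text in texts:
        for tok in tokenize(text):
            vocab.setdefault(tok, len(vocab))
    return vocab

def to_indices(text, vocab, max_len):
    ids = [vocab.get(tok, UNK) for tok in tokenize(text)]
    ids = ids[:max_len]
    return ids + [PAD] * (max_len - len(ids))   # pad to fixed length

vocab = build_vocab(["what is squad", "squad is a dataset"])
print(to_indices("what dataset is this", vocab, 6))
```

In the actual system, each vocabulary index would additionally select a row of a pre-trained embedding matrix.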

Innovations and Challenges

  • There are many challenges in this project. First, since an attention mechanism is used, training time is a major challenge. Second, since the TensorFlow API does not support the attention mechanism we will use, I need to implement the attention mechanism of match-LSTM from scratch. Last but not least, data processing, which includes tokenization, building the vocabulary, building the embedding matrix, and many other tasks, is quite time consuming.
  • This project has several exciting innovations. First, the technique we are using is state-of-the-art. Second, we will experiment with different architectures to try to find a better model, one that saves memory or gives better accuracy.
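The core of the from-scratch attention step can be sketched as: at each passage position, score every question word, normalize the scores with a softmax, and form a weighted sum of the question vectors. The sketch uses simple dot-product scoring for clarity; match-LSTM in [2] uses a learned additive scoring function, and the vectors below are made-up toy values.

```python
import math

# Simplified attention over question words for one passage position.

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(passage_vec, question_vecs):
    # Score each question word against the current passage word (dot product)
    scores = [sum(p * q for p, q in zip(passage_vec, qv))
              for qv in question_vecs]
    alpha = softmax(scores)
    # Attention-weighted summary of the question
    dim = len(passage_vec)
    return [sum(a * qv[d] for a, qv in zip(alpha, question_vecs))
            for d in range(dim)]

question = [[1.0, 0.0], [0.0, 1.0]]
ctx = attend([2.0, 0.0], question)
print(ctx)  # summary vector leaning toward the first question word
```

Computing such a summary at every passage position for every training example is what makes the training time concern above real.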


References

  1. Rajpurkar, Pranav, et al. "SQuAD: 100,000+ questions for machine comprehension of text." arXiv preprint arXiv:1606.05250 (2016).
  2. Wang, Shuohang, and Jing Jiang. "Machine comprehension using match-lstm and answer pointer." arXiv preprint arXiv:1608.07905 (2016).
  3. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
  4. Wang, Wenhui, et al. "Gated self-matching networks for reading comprehension and question answering." Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vol. 1. 2017.
  5. Hu, Minghao, Yuxing Peng, and Xipeng Qiu. "Reinforced mnemonic reader for machine comprehension." arXiv preprint arXiv:1705.02798 (2017).
  6. Huang, Hsin-Yuan, et al. "FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension." arXiv preprint arXiv:1711.07341 (2017).
  7. Liu, Rui, et al. "Phase Conductor on Multi-layered Attentions for Machine Comprehension." arXiv preprint arXiv:1710.10504 (2017).