NLP Related
Contents
Semantic Parsing: aims to translate a natural language sentence into a corresponding executable program (e.g. a SQL query), which relieves users from the burden of learning the programming language behind it (a toy example follows the list below).
1. Grammar-based
- Recent Question as context
- Precedent SQL as context
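As a concrete illustration of the task and of using the precedent question/SQL as context, a hypothetical toy pair (not taken from any dataset mentioned here):

```python
# A natural-language question and the executable SQL a semantic parser should produce.
example = {
    "question": "How many students are older than 20?",
    "sql": "SELECT COUNT(*) FROM students WHERE age > 20;",
}

# Context-dependent follow-up: the precedent question and SQL are supplied as
# extra context so the elliptical follow-up can be resolved against them.
followup = {
    "precedent_question": example["question"],
    "precedent_sql": example["sql"],
    "question": "What about students younger than 18?",
    "sql": "SELECT COUNT(*) FROM students WHERE age < 18;",
}

print(followup["sql"])
```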
author: Kelvin Guu
date: 2020
keyword:
- NLP
Guu, K., Lee, K., Tung, Z., Pasupat, P., & Chang, M.-W. (2020). REALM: Retrieval-Augmented Language Model Pre-Training. arXiv preprint arXiv:2002.08909.
Paper: REALM
Problem Statement
- models such as BERT, RoBERTa, and T5 store a surprising amount of world knowledge;
- the learned world knowledge is stored implicitly in the parameters of the underlying neural network;
- storage space for this knowledge is limited by the size of the network;
previous work:
- Language model pre-training: the pre-trained model can be further trained for a downstream task of primary interest, leading to better generalization;
- Masked language model (MLM): predict the missing tokens in an input text passage (a minimal demo appears after this list);
- Open-domain question answering (Open-QA): unlike traditional reading comprehension tasks, the model does not receive a pre-identified document that is known to contain the answer;
- retrieval-based approach: given a question x, retrieve potentially relevant documents z and extract the answer;
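A minimal masked-language-model demo, assuming the Hugging Face `transformers` library (with a PyTorch backend) is installed; the `bert-base-uncased` weights are downloaded on first use:

```python
# Fill in the token hidden behind [MASK] with a pre-trained BERT MLM head.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```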
Methods
- Pipeline
- z: the retrieved document
- x: the question;
- y: the answer;
$$ P(y|x)=\sum_{z\in \mathcal{Z}}P(y|z,x)\,P(z|x) $$
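A minimal numerical sketch of this retrieve-then-predict decomposition, with made-up scores for three candidate documents (not REALM's actual scoring functions):

```python
import numpy as np

# Hypothetical relevance scores f(x, z) for three retrieved documents z.
retrieval_scores = np.array([2.0, 0.5, -1.0])
p_z_given_x = np.exp(retrieval_scores - retrieval_scores.max())
p_z_given_x /= p_z_given_x.sum()            # softmax -> P(z|x)

# Hypothetical probability of the correct answer y given each (z, x) pair.
p_y_given_zx = np.array([0.9, 0.4, 0.1])

# Marginalize over the retrieved documents: P(y|x) = sum_z P(y|z,x) P(z|x).
p_y_given_x = float(np.sum(p_y_given_zx * p_z_given_x))
print(p_y_given_x)
```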
- system overview:
【Module One】Knowledge Retriever: models the retrieval distribution P(z|x) over documents.
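In REALM the retrieval distribution is a softmax over inner products between a question embedding and document embeddings (both produced by BERT-style encoders, with the softmax approximated by top-k maximum inner product search). A rough sketch with random placeholder embeddings standing in for the trained encoders:

```python
import numpy as np

rng = np.random.default_rng(0)
embed_x = rng.normal(size=128)            # stand-in for Embed_input(x)
embed_z = rng.normal(size=(1000, 128))    # stand-in for Embed_doc(z), 1000 candidate documents

scores = embed_z @ embed_x                # f(x, z) = inner product of the two embeddings
p_z_given_x = np.exp(scores - scores.max())
p_z_given_x /= p_z_given_x.sum()          # softmax over documents -> P(z|x)

top_k = np.argsort(scores)[::-1][:5]      # in practice only the top-k documents are kept
print(top_k, p_z_given_x[top_k])
```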
【Module Two】Knowledge-Augmented Encoder: joins x and z into a single sequence that is fed into a Transformer (distinct from the one used in the retriever) to model P(y|z,x).
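A sketch of joining x and z into one encoder input, assuming the Hugging Face `transformers` library and PyTorch are installed (the real knowledge-augmented encoder is a separately trained Transformer, not this off-the-shelf BERT; the question and document strings are hypothetical):

```python
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

question = "Who wrote the novel Dune?"                 # x
document = "Dune is a 1965 novel by Frank Herbert."    # z (retrieved passage)

# text_pair produces a single sequence: [CLS] question [SEP] document [SEP]
inputs = tokenizer(question, text_pair=document, return_tensors="pt")
outputs = encoder(**inputs)
print(outputs.last_hidden_state.shape)                 # (1, sequence_length, hidden_size)
```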
Notes (to deepen understanding)
- BERT methods