Award Abstract # 1734304
CompCog: Computational, distributed accounts of human memory: improving cognitive models

NSF Org: BCS
Division of Behavioral and Cognitive Sciences
Recipient: THE PENNSYLVANIA STATE UNIVERSITY
Initial Amendment Date: July 19, 2017
Latest Amendment Date: December 2, 2020
Award Number: 1734304
Award Instrument: Standard Grant
Program Manager: Betty Tuller
btuller@nsf.gov
 (703)292-7238
BCS Division of Behavioral and Cognitive Sciences
SBE Directorate for Social, Behavioral and Economic Sciences
Start Date: August 1, 2017
End Date: July 31, 2022 (Estimated)
Total Intended Award Amount: $499,969.00
Total Awarded Amount to Date: $499,969.00
Funds Obligated to Date: FY 2017 = $499,969.00
History of Investigator:
  • Prasenjit Mitra (Principal Investigator)
  • David Reitter (Former Principal Investigator)
  • Matthew Kelly (Former Principal Investigator)
Recipient Sponsored Research Office: Pennsylvania State Univ University Park
201 OLD MAIN
UNIVERSITY PARK
PA  US  16802-1503
(814)865-1372
Sponsor Congressional District: 15
Primary Place of Performance: Pennsylvania State Univ University Park
316D IST Building
State College
PA  US  16802-1503
Primary Place of Performance Congressional District: 15
Unique Entity Identifier (UEI): NPM2J7MSCF61
Parent UEI:
NSF Program(s): Perception, Action & Cognition; Robust Intelligence
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7252, 7495
Program Element Code(s): 725200, 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.075

ABSTRACT

Memory is among the most impressive aspects of human cognition, allowing us to learn new words or new ideas from just a few examples. However, the scientific understanding of how this learning occurs is limited. This research project focuses on how learning occurs in the context of memory for language. Within the human mind, there is something like a dictionary that tells people what words mean (semantics) and how words are combined to make grammatical sentences (syntax). How does the mind learn this dictionary from experience with a language? Computer simulations can help science better understand this learning process. This scientific understanding can, in turn, help teach languages in the classroom and aid in the early detection of language deficits, whether developmental deficits in children or age-related deficits in adults. Furthermore, improving the ability of computers to simulate language learning can lead to better technology, such as machine translation, web search, and virtual assistants. The project also considers how a better understanding of language learning can help us avoid common pitfalls of memory connected to the use of language. For example, humans easily over-generalize and judge a book "by its cover," associating certain occupations or personality traits with a gender. If we know how people form associations between words and concepts, we can also detect and prevent prejudices in language, helping to ensure that artificial intelligence applications, such as web search, do not produce prejudiced results. The project supports an interdisciplinary and diverse team of researchers and students at Penn State, attracting college students to engage with research in cognitive science and artificial intelligence.

In this project, the researchers are designing a new model of human memory, the Hierarchical Holographic Model. This computational model helps explain certain aspects of how words and languages are learned. The model draws on the successes of artificial intelligence and deep neural networks, and applies these insights to psychology. With this model, the researchers investigate the question of whether human memory has the ability to detect arbitrarily indirect associations between concepts. The model uses a recursive learning process, building on previously learned knowledge to acquire new knowledge, which allows the model to learn arbitrarily indirect and abstract relationships between words. The researchers consider evidence that sensitivity to abstract relations between words improves the ability of the computer model to learn syntax, such as parts-of-speech, and to use words appropriately to construct grammatical sentences. This work will be assessed against human language data and competing computational models. The success of the computational model should provide evidence that (1) language acquisition depends on indirect associations, and (2) human memory must be able to form indirect associations to facilitate it.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 14)
Arora, Nipun and West, Robert and Brook, Andrew and Kelly, Matthew "Why the Common Model of the mind needs holographic a-priori categories" Procedia Computer Science, v.145, 2018. https://doi.org/10.1016/j.procs.2018.11.060
Kelly, M. A. "Predicting syntactic priming from sentence embedding vectors" Proceedings of the 42nd Annual Conference of the Cognitive Science Society, 2020
Kelly, M. A. and Ghafurian, M. and West, R. L. and Reitter, D. "Indirect associations in learning semantic and syntactic lexical relationships" Journal of Memory and Language, v.115, 2020. https://doi.org/10.1016/j.jml.2020.104153
Kelly, Mary Alexandria and Arora, Nipun and West, Robert L. and Reitter, David "Holographic Declarative Memory: Distributional Semantics as the Architecture of Memory" Cognitive Science, v.44, 2020. https://doi.org/10.1111/cogs.12904
Kelly, Matthew A. and Reitter, David "Holographic Declarative Memory: Using distributional semantics within ACT-R" Proceedings of the AAAI Fall Symposium on A Standard Model of the Mind, 2017
Kelly, Matthew A. and Reitter, David "How Language Processing can Shape a Common Model of Cognition" Procedia Computer Science, v.145, 2018. https://doi.org/10.1016/j.procs.2018.11.047
Kelly, Matthew A. and Reitter, David and West, Robert L. "Degrees of Separation in Semantic and Syntactic Relationships" Proceedings of the 15th International Conference on Cognitive Modeling, 2017
Kelly, Matthew A. and West, Robert L. "A Framework for Computational Models of Human Memory" AAAI Fall Symposium, A Standard Model of the Mind: AAAI Technical Report, 2017
Ororbia, Alexander G. and Mali, Ankur and Kelly, Matthew A. and Reitter, David "Like a Baby: Visually Situated Neural Language Acquisition" Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019
Putnam, Michael T. and Carlson, Matthew and Reitter, David "Integrated, Not Isolated: Defining Typological Proximity in an Integrated Multilingual Architecture" Frontiers in Psychology, v.8, 2018. https://doi.org/10.3389/fpsyg.2017.02212
Tang, Z. and Mitra, P. and Reitter, D. "Are BERTs Sensitive to Native Interference in L2 Production?" Proceedings of the Second Workshop on Insights from Negative Results in NLP, 2021. https://doi.org/10.18653/v1/2021.insights-1.6

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The key tasks outlined in the NSF grant proposal have been completed and the results published, as we detail below. The numbers refer to the section numbers in the original proposal.

5.1 Task 1: Pure mediated semantic priming: Our proposed model, the Hierarchical Holographic Model (HHM), is unable to replicate the pure mediated priming effect documented by Jones (2010). We did not pursue this task further. Instead, we pivoted to syntactic priming, which HHM can capture, though it is outperformed by transformer-based neural language models. We document our findings in Kelly et al. (2020).

5.2 Task 2: Ordering words to form grammatical sentences: HHM successfully performed the word ordering task. We document our findings in Kelly et al. (2017) and in Kelly et al. (2019).

5.3 Task 3: Judgements of acceptability: HHM is able to do the judgement of acceptability task, but not nearly as well as conventional neural language models. It also appears that much of the variability in judgements of acceptability is accounted for by spelling errors, rather than by the syntactic relationships HHM is designed to capture. We report our findings in Wang et al. (2020).

5.4 Task 4: Part-of-speech tagging and "super-tagging": HHM is able to perform part-of-speech tagging and "super-tagging", as documented in Kelly et al. (2017) and in our paper Kelly et al. (2019).

6.2 The semantic and syntactic relationship continuum: Additionally, we test HHM on Chomsky's (1956) classic nonsense sentence "Colorless green ideas sleep furiously" and find that HHM is able to detect that this sentence is more grammatical than the inverted alternative "Furiously sleep ideas green colorless." This work is documented in our paper Kelly et al. (2019), currently under review. A generic version of this kind of comparison, using a neural language model rather than HHM, is sketched below.

Extending Kelly et al. (2020) will require additional syntactic priming data from human participants. Ph.D. candidate Zixin Tang explored an extension of the proposed work. HHM is a computational model of high-level, abstract, syntactic representations. Second-language speakers tend to apply first-language syntactic structures to their second language, suggesting that in the mind of the speaker, there are syntactic representations that are shared between the first and second language. Zixin's work investigates what syntactic representations are shared between the first and second language in the mind of a second-language speaker and how these shared representations can be modelled. HHM is one modelling approach that could be used to capture the shared representations. Other approaches include formalisms from theoretical linguistics, such as minimalist grammar, harmonic grammar, or combinatory categorial grammar (Steedman & Baldridge, 2011). Another approach would be to apply a deep neural language model, such as BERT (Devlin et al., 2019), as sketched below. Zixin's research is in an early, exploratory stage, and the precise formalism, theory, or modelling technique to be used has not yet been settled.

References

Chomsky, N. (1956). Three models for the description of language. IRE Transactions on Information Theory, 2, 113–124. https://doi.org/10.1109/TIT.1956.1056813

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Minneapolis, MN: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423

Jones, L. L. (2010). Pure mediated priming: A retrospective semantic matching model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 135.

Kelly, M. A., Reitter, D., & West, R. L. (2017). Degrees of separation in semantic and syntactic relationships. In M. K. van Vugt, A. P. Banks, & W. G. Kennedy (Eds.), Proceedings of the 15th International Conference on Cognitive Modeling. Warwick, U.K.: University of Warwick. https://iccm-conference.neocities.org/2017/ICCMprogram_files/paper_42.pdf

Kelly, M. A., Reitter, D., West, R., & Ghafurian, M. (2019). Indirect associations in learning semantic and syntactic lexical relationships. PsyArXiv. https://doi.org/10.31234/osf.io/ytnjp

Kelly, M. A., Xu, Y., Calvillo, J., & Reitter, D. (2020). Predicting syntactic priming from sentence embedding vectors. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. https://clcs.sdsu.edu/pubs/cogsci2020_whichsentence_0529.pdf

Steedman, M., & Baldridge, J. (2011). Combinatory categorial grammar. In R. Borsley & K. Borjars (Eds.), Non-transformational syntax: Formal and explicit models of grammar. Wiley-Blackwell.

Wang, J., Kelly, M. A., & Reitter, D. (2020). Do we need neural models to explain human judgments of acceptability? In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. https://arxiv.org/abs/1909.08663


Last Modified: 05/02/2023
Modified by: Prasenjit Mitra
