
2022, Vol. 37, Issue 4

Research Article

28 February 2022. pp. 507-524
Abstract
This paper examined two types of complex NP island constraints (Appositives and Relatives) in English, using an experimental approach and a deep learning approach. The experimental approach followed the design of Lee and Park (2018). A total of 120 sentences were employed in the experiment: 40 target sentences and 80 fillers. The deep learning approach utilized the BERT-large model developed in Lee (2021), with a dataset of 240 sentences: 40 target sentences and 200 fillers. These 240 sentences served as input to the BERT-large model, and an acceptability score was calculated for each sentence. After acceptability scores were obtained for all the target sentences under both approaches, they were normalized into z-scores and submitted to statistical analysis. The analysis yielded the following observations: (i) both the experimental approach and the BERT-large model correctly identified the two complex NP island constraints in English; (ii) the two factors (Island and Location) and their interaction (Island:Location) affected the acceptability scores of island sentences; and (iii) the two approaches made different predictions about the DD (differences-in-differences) scores of the two complex NP island constraints.
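The abstract does not spell out how a sentence-level acceptability score is read off a masked language model. One standard method in this literature is the pseudo-log-likelihood: mask each token in turn and sum the log-probabilities the model assigns to the original tokens. The sketch below illustrates that method with the Hugging Face transformers library; the checkpoint name and scoring details are illustrative assumptions, not necessarily the exact pipeline of Lee (2021).

```python
# Hypothetical sketch: pseudo-log-likelihood (PLL) acceptability scoring
# with BERT-large. One common scoring method for masked language models;
# not necessarily the exact procedure used in Lee (2021).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-large-cased")
model = BertForMaskedLM.from_pretrained("bert-large-cased")
model.eval()

def pll_score(sentence: str) -> float:
    """Sum of log P(token | rest of sentence), masking each token in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the special tokens: [CLS] at position 0, [SEP] at the end.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total  # higher (less negative) = more acceptable

print(pll_score("Who did you say that Mary met yesterday?"))
```

Raw PLL scores are length-sensitive, which is one reason the paper z-scores them before comparison.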
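The factorial logic behind the DD score follows Sprouse, Wagers and Phillips (2012): with the factors Island (island vs. non-island structure) and Location, the island effect is the superadditive interaction, quantified as a difference of differences over condition means of the z-scored ratings. A minimal sketch, with hypothetical condition labels ("matrix"/"embedded" for Location) and invented raw scores:

```python
# Hypothetical sketch: z-score normalization and the DD
# (differences-in-differences) score for a 2x2 island design
# (Sprouse, Wagers and Phillips 2012). Condition labels and raw
# scores are invented for illustration.
from statistics import mean, stdev

# raw acceptability scores keyed by (Island, Location) condition
raw = {
    ("non_island", "matrix"):   [6.1, 5.8, 6.3],
    ("non_island", "embedded"): [5.2, 5.5, 5.0],
    ("island",     "matrix"):   [5.9, 6.0, 5.7],
    ("island",     "embedded"): [2.1, 2.4, 1.9],
}

# z-score over all ratings from one source (a participant, or the model)
all_scores = [s for scores in raw.values() for s in scores]
mu, sd = mean(all_scores), stdev(all_scores)
z = {cond: [(s - mu) / sd for s in scores] for cond, scores in raw.items()}

def cond_mean(island, location):
    return mean(z[(island, location)])

# DD = (structure effect at the embedded gap) - (structure effect at the
# matrix gap); a positive DD signals a superadditive island effect.
dd = ((cond_mean("non_island", "embedded") - cond_mean("island", "embedded"))
      - (cond_mean("non_island", "matrix") - cond_mean("island", "matrix")))
print(f"DD score: {dd:.2f}")
```

In practice the statistical analysis runs over per-participant z-scores, typically with mixed-effects models along the lines of Barr et al. (2013); the arithmetic above only illustrates how the DD score summarizes the Island:Location interaction.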
References
  1. Agarap, F. 2018. Deep Learning Using Rectified Linear Units (ReLU). arXiv preprint arXiv:1803.08375.
  2. Alexopoulou, T. and F. Keller. 2007. Locality, Cyclicity, and Resumption: At the Interface between the Grammar and the Human Sentence Processor. Language 83, 110-160. 10.1353/lan.2007.0001
  3. Annamoradnejad, I. and G. Zoghi. 2020. ColBERT: Using BERT Sentence Embedding for Humor Detection. arXiv preprint arXiv:2004.12765.
  4. Bard, E., D. Robertson, and A. Sorace. 1996. Magnitude Estimation of Linguistic Acceptability. Language 72, 32-68. 10.2307/416793
  5. Barr, D., R. Levy, C. Scheepers, and H. Tily. 2013. Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal. Journal of Memory and Language 68, 255-278. 10.1016/j.jml.2012.11.001
  6. Carnie, A. 2021. Syntax: A Generative Introduction. Oxford: Wiley Blackwell.
  7. Charniak, E., D. Blaheta, N. Ge, K. Hall, J. Hale, and M. Johnson. 2000. BLLIP 1987-89 WSJ Corpus Release 1 LDC2000T43. Philadelphia, PA: Linguistic Data Consortium.
  8. Chomsky, N. 1973. Conditions on Transformations. In S. Anderson and P. Kiparsky (eds.), A Festschrift for Morris Halle. New York: Holt, Rinehart and Winston, 232-286.
  9. Chomsky, N. 1986. Barriers. Cambridge, MA: MIT Press.
  10. Chomsky, N. 2000. Minimalist Inquiries: The Framework. In R. Martin, D. Michaels, and J. Uriagereka (eds.), Step by Step: Essays on Minimalist Syntax in Honor of Howard Lasnik. Cambridge, MA: MIT Press, 89-157.
  11. Cowart, W. 1997. Experimental Syntax: Applying Objective Methods to Sentence Judgments. Thousand Oaks, CA: Sage Publications.
  12. Devlin, J., M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
  13. Goldberg, Y. 2019. Assessing BERT’s Syntactic Abilities. arXiv preprint arXiv:1901.05287.
  14. Goodfellow, I., Y. Bengio, and A. Courville. 2016. Deep Learning. Cambridge, MA: MIT Press.
  15. Gulordava, K., P. Bojanowski, E. Grave, T. Linzen, and M. Baroni. 2018. Colorless Green Recurrent Networks Dream Hierarchically. arXiv preprint arXiv:1803.11138. 10.18653/v1/N18-1108
  16. Hagstrom, P. 1998. Decomposing Questions. Doctoral dissertation, Massachusetts Institute of Technology.
  17. Hofmeister, P. and I. Sag. 2010. Cognitive Constraints on Syntactic Islands. Language 86, 366-415. 10.1353/lan.0.0223
  18. Hu, J., S. Chen, and R. Levy. 2020a. A Closer Look at the Performance of Neural Language Models on Reflexive Anaphor Licensing. Proceedings of the Society for Computation in Linguistics, 323-333.
  19. Hu, J., J. Gauthier, P. Qian, E. Wilcox, and R. Levy. 2020b. A Systematic Assessment of Syntactic Generalization in Neural Language Models. arXiv preprint arXiv:2005.03692. 10.18653/v1/2020.acl-main.158
  20. Keller, F. 2000. Gradience in Grammar: Experimental and Computational Aspects of Degrees of Grammaticality. Doctoral dissertation, University of Edinburgh.
  21. Kluender, R. 1998. On the Distinction between Strong and Weak Islands: A Processing Perspective. Syntax and Semantics 29, 241-279. 10.1163/9789004373167_010
  22. Kluender, R. 2004. Are Subject Islands Subject to a Processing Account? In V. Chand, A. Kelleher, A. Rodriguez, and B. Schmeiser (eds.), Proceedings of the West Coast Conference on Formal Linguistics 23. Somerville, MA: Cascadilla Press, 475-499.
  23. Kluender, R. and M. Kutas. 1993. Subjacency as a Processing Phenomenon. Language and Cognitive Processes 8, 573-633. 10.1080/01690969308407588
  24. Lasnik, H. and M. Saito. 1984. On the Nature of Proper Government. Linguistic Inquiry 15, 235-289.
  25. Lee, Y. 2016. Corpus Linguistics and Statistics Using R. Seoul: Hankuk Publishing Co.
  26. Lee, Y. 2021. English Island Constraints Revisited: Experimental vs. Deep Learning Approach. English Language and Linguistics 27, 23-47.
  27. Lee, Y. and Y. Park. 2018. English Island Constraints by Natives and Korean Non-natives. The Journal of Studies in Language 34, 439-455. 10.18627/jslg.34.3.201811.439
  28. Levy, R. 2008. Expectation-based Syntactic Comprehension. Cognition 106, 1126-1177. 10.1016/j.cognition.2007.05.006
  29. Marvin, R. and T. Linzen. 2018. Targeted Syntactic Evaluation of Language Models. arXiv preprint arXiv:1808.09031. 10.18653/v1/D18-1151
  30. Maxwell, S. and H. Delaney. 2003. Designing Experiments and Analyzing Data: A Model Comparison Perspective. Mahwah, NJ: Lawrence Erlbaum Associates. 10.4324/9781410609243
  31. Park, K., M. Park, and S. Song. 2021. Deep Learning Can Contrast the Minimal Pairs of Syntactic Data. Linguistic Research 38, 395-424.
  32. Park, Y. and Y. Lee. 2018. English Island Sentences by Korean EFL Learners. English Language and Linguistics 24, 153-172. 10.17960/ell.2018.24.1.007
  33. R Core Team. 2021. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing.
  34. Reinhart, T. 1997. Quantifier Scope: How Labor Is Divided between QR and Choice Functions. Linguistics and Philosophy 20, 335-397. 10.1023/A:1005349801431
  35. Rizzi, L. 1990. Relativized Minimality. Cambridge, MA: MIT Press.
  36. Ross, J. 1967. Constraints on Variables in Syntax. Doctoral dissertation, Massachusetts Institute of Technology.
  37. Schütze, C. 1996. The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. Chicago, IL: University of Chicago Press.
  38. Sprouse, J. 2008. Magnitude Estimation and the Non-Linearity of Acceptability Judgments. In N. Abner and J. Bishop (eds.), Proceedings of the 27th West Coast Conference on Formal Linguistics. Somerville, MA: Cascadilla Proceedings Project, 397-403.
  39. Sprouse, J. and N. Hornstein. 2013. Experimental Syntax and Island Effects. Cambridge: Cambridge University Press. 10.1017/CBO9781139035309
  40. Sprouse, J., M. Wagers, and C. Phillips. 2012. A Test of the Relation between Working Memory Capacity and Syntactic Island Effects. Language 88, 82-123. 10.1353/lan.2012.0004
  41. Szabolcsi, A. 2007. Strong vs. Weak Islands. In M. Everaert and H. van Riemsdijk (eds.), The Blackwell Companion to Syntax. Oxford: Blackwell, 479-531. 10.1002/9780470996591.ch64
  42. Szabolcsi, A. and F. Zwarts. 1993. Weak Islands and an Algebraic Semantics of Scope Taking. Natural Language Semantics 1, 235-284. 10.1007/BF00263545
  43. Truswell, R. 2007. Extraction from Adjuncts and the Structure of Events. Lingua 117, 1355-1377. 10.1016/j.lingua.2006.06.003
  44. Tsai, W. 1994. On Nominal Islands and LF Extraction in Chinese. Natural Language and Linguistic Theory 12, 121-175. 10.1007/BF00992747
  45. Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention Is All You Need. arXiv preprint arXiv:1706.03762.
  46. Wang, A., A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman. 2019. GLUE: A Multi-task Benchmark and Analysis Platform for Natural Language Understanding. arXiv preprint arXiv:1804.07461. 10.18653/v1/W18-5446
  47. Wang, A., Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman. 2020. SuperGLUE: A Stickier Benchmark for General-purpose Language Understanding Systems. arXiv preprint arXiv:1905.00537.
  48. Warstadt, A., Y. Cao, I. Grosu, W. Peng, H. Blix, Y. Nie, A. Alsop, S. Bordia, H. Liu, A. Parrish, S. Wang, J. Phang, A. Mohananey, P. Htut, P. Jeretič, and S. Bowman. 2019a. Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs. arXiv preprint arXiv:1909.02597. 10.18653/v1/D19-1286
  49. Warstadt, A., A. Singh, and S. Bowman. 2019b. Neural Network Acceptability Judgments. arXiv preprint arXiv:1805.12471. 10.1162/tacl_a_00290
  50. Wilcox, E., R. Levy, and R. Futrell. 2019a. What Syntactic Structures Block Dependencies in RNN Language Models? arXiv preprint arXiv:1905.10431.
  51. Wilcox, E., R. Levy, and R. Futrell. 2019b. Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations. arXiv preprint arXiv:1906.04068. 10.18653/v1/W19-4819
  52. Wilcox, E., R. Levy, T. Morita, and R. Futrell. 2018. What Do RNN Language Models Learn about Filler-Gap Dependencies? arXiv preprint arXiv:1809.00042. 10.18653/v1/W18-5423
  53. Zhu, Y., R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler. 2015. Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books. Proceedings of the IEEE International Conference on Computer Vision, 19-27. 10.1109/ICCV.2015.11
Information
  • Publisher: The Modern Linguistic Society of Korea
  • Publisher (Korean): 한국현대언어학회
  • Journal Title: The Journal of Studies in Language
  • Journal Title (Korean): 언어연구
  • Volume: 37
  • No.: 4
  • Pages: 507-524