I am a PhD student at MIT — studying NLP, AI and machine learning. I am advised by Jacob Andreas.
Languages exhibit compositionality (productivity and systematicity), whereas current neural language learners lack the inductive biases required to achieve it data-efficiently. My recent work aims to understand simple biases that enable neural models to achieve the kinds of generalization that humans do!
I am also interested in language supervision and grounding, and have worked on two recent projects: (i) using language to guide image classifiers toward representations that enable learning new classes (with only a few samples) without forgetting old ones, and (ii) using language models to guide policy learning in a virtual home environment.
Previously, I was a visiting student at MIT CSAIL. I worked with Prof. Alan Edelman on a linear-algebraic formulation of backpropagation that facilitates existing matrix operations on neural computation graphs, and with John W. Fisher on efficient distributed algorithms for non-Bayesian parametric inference. Before that, I was part of the KUIS AI Lab, where I worked with Prof. Deniz Yuret on natural language processing. I received my Bachelor's degrees in Electrical & Electronics Engineering and in Physics from Koç University in 2019.
Afra Feyza Akyürek, Ekin Akyürek, Derry Wijaya, Jacob Andreas (2021)
Ekin Akyürek, Jacob Andreas (2021)
Annual Meeting of the Association for Computational Linguistics, ACL 2021 (oral)
Ekin Akyürek, Afra Feyza Akyürek, Jacob Andreas (2020)
International Conference on Learning Representations, ICLR 2021
Ekin Akyürek*, Erenay Dayanık*, Deniz Yuret (2019)
Transactions of the Association for Computational Linguistics, TACL (also presented at EMNLP 2019)
Ahmet Börütecene, İdil Bostan, Ekin Akyürek, Alpay Sabuncuoğlu, İlker Temuzkuşu, Çağlar Genç, Tilbe Göksun, Oğuzhan Özcan (2018)
In Proceedings of the 12th International Conference on Tangible, Embedded and Embodied Interaction, TEI 2018, ACM