I am a PhD student at MIT, studying natural language processing and machine learning. I am advised by Jacob Andreas.
Language exhibits some notion of compositionality (i.e., productivity and systematicity), whereas current neural language learners lack the inductive biases (e.g., mutual exclusivity) needed to achieve it. My recent work aims to identify simple inductive biases that enable neural models to achieve the kinds of generalization that humans do.
I am also interested in language grounding and am working on two ongoing projects: (i) using language to guide image classifiers toward representations that support learning new classes from only a few samples without forgetting the old ones (incremental learning), and (ii) a decentralized, collaborative multi-agent reinforcement learning problem in which agents communicate via natural language in the virtual home environment.
Previously, I was a visiting student at MIT CSAIL. I worked with Prof. Edelman on a linear-algebraic formulation of backpropagation that applies existing matrix operations to neural computation graphs, and with John W. Fisher on efficient distributed algorithms for non-Bayesian parametric methods. Before that, I was part of the KUIS AI Lab, where I worked with Prof. Yuret on natural language processing. I received my Bachelor's degrees in Electrical & Electronics Engineering and in Physics from Koç University in 2019.
Ekin Akyürek, Afra Feyza Akyürek, Jacob Andreas (2021)
International Conference on Learning Representations (ICLR 2021)
Ekin Akyürek*, Erenay Dayanık*, Deniz Yuret (2019)
Transactions of the Association for Computational Linguistics (TACL; also presented at EMNLP 2019)
Ahmet Börütecene, İdil Bostan, Ekin Akyürek, Alpay Sabuncuoğlu, İlker Temuzkuşu, Çağlar Genç, Tilbe Göksun, Oğuzhan Özcan (2018)
Proceedings of the 12th International Conference on Tangible, Embedded and Embodied Interaction (TEI '18), ACM