Freddy Cheng


I am a second-year master’s student at the University of Washington (UW), supervised by Prof. Shane Steinert-Threlkeld.

Before UW, I was a research assistant supervised by Prof. Hsin-Min Wang, Prof. Yu Tsao, and Prof. Ming-Tat Ko at the Institute of Information Science, Academia Sinica. I received a B.S. in Computer Science and Information Engineering from National Taipei University of Technology (Taipei Tech) in June 2021.

Prior to that, I was a data engineering intern at LINE Taiwan, where I developed NLP tools to solve real-world problems. Before pivoting to NLP, I was a software developer intern at iCook and Glossika, where I learned the foundations of software development.

My research interests lie in speech and natural language processing (SLP). Specifically, I am interested in the following topics:

(a) making language technologies accessible: can we design model architectures that are less data-hungry and more computationally efficient?

(b) multi-modal language technologies: since human communication involves many different modalities, can we design model architectures that handle multi-modal interactions more capably?

(c) understanding model behaviors: I believe the most effective way to improve current language technologies is to understand models’ behavior and limitations. Such analysis, in turn, informs the design of architectures that are less data-hungry and more computationally efficient.

news

No news so far...

selected publications

  1. AlloST: Low-Resource Speech Translation Without Source Transcription
    Yao-Fei Cheng, Hung-Shin Lee, and Hsin-Min Wang
    In Proc. Interspeech, 2021
  2. Task Arithmetic for Language Expansion in Speech Translation
    Yao-Fei Cheng, Hayato Futami, Yosuke Kashiwagi, Emiru Tsunoo, Wen Shen Teo, and 2 more authors
    arXiv preprint, 2024