We introduce SAE-RNA, a tool for interpreting the representations of the RiNALMo RNA language model. It maps internal activations to known biological features, framing RNA interpretability as concept discovery that requires no end-to-end retraining. This approach lets researchers examine what RNA language models encode about ncRNA families and supports hypothesis generation about previously unrecognized relationships between RNA groups.
@article{kim2025saerna,
  title={{SAE-RNA}: A Sparse Autoencoder Model for Interpreting {RNA} Language Model Representations},
  author={Kim, Taehan and Nam, Sangdae},
  year={2025},
  archiveprefix={arXiv},
  primaryclass={q-bio.BM},
}
2024
arXiv
Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?
Guijin Son, Sangwon Baek, Sangdae Nam, Ilgyun Jeong, and Seungone Kim
Large Language Models can handle multiple instructions simultaneously in a single inference call. We introduce MTI Bench, a benchmark of 5,000 instances spanning 25 tasks, each containing 2-3 sub-tasks. Multi-task inference yields roughly a 1.46x reduction in inference time, and strong models such as Llama-2-Chat-70B and GPT-4 improve by up to 7.3% and 12.4%, respectively, over single-task processing.
@article{son2024multitask,
  title={Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?},
  author={Son, Guijin and Baek, Sangwon and Nam, Sangdae and Jeong, Ilgyun and Kim, Seungone},
  year={2024},
  archiveprefix={arXiv},
  primaryclass={cs.CL},
}