Registration for this course is open until Thursday, 12.03.2026 23:59.

News

Written on 16.10.25 by Yash Raj Sarrof

Hi everyone,

It's time to select your preferences for the paper presentations.
To help me assign topics that you are most interested in, please send me an email (ysarrof@lst.uni-saarland.de) with your choices by 4:00 PM on Monday, October 20, 2025. 
(If you have already sent the email, ignore this message). 

In your email, please provide a ranked list of the papers you would like to present.
It would also be helpful if you could mention any papers you would prefer to avoid.
My goal is to match everyone with one of their top choices.
However, if a particular paper is very popular, I will need to assign it randomly among the interested students.

Even if you have no preferences, please send me a quick email to let me know. This is important for ensuring everyone is assigned a topic.
I will finalize and post the presentation schedule by Wednesday, October 22, 2025, giving the first presenters a full week to prepare.

Best, 
Yash

Course Description

Modern AI feels like it’s everywhere — models that write, speak, see, play games, and even arguably reason. However, many researchers today feel a sense of déjà vu: incremental papers, rebranded benchmarks, recycled ideas. Are we reaching the limits of what can be achieved just by scaling models? Is the field running out of new ideas?

This seminar takes a step back — and way back — to understand how machine learning and language technology evolved, both technically and philosophically. We’ll examine the early hopes, dead ends, breakthroughs, and rediscoveries that brought us to today's transformer-based models.

We'll ask:

  • What did early AI researchers believe language and learning were?
  • Why were neural networks once declared useless — and then revived to define modern AI?
  • What kinds of research actually shifted paradigms?
  • Are we at a similar inflection point today?

We'll read classic work from figures like Turing, Shannon, Chomsky, Rosenblatt, Minsky, Angluin, Valiant, Hinton, Bengio, LeCun, and Pearl. In a field obsessed with the latest preprint, reading old papers might be an unusual experience — but our hope is that the takeaways will prove useful, maybe even generative. By studying where ideas came from (and how they were nearly lost), you may come away with a deeper appreciation of today’s models — and new ways to think about the next generation of models.


Time: Every Wednesday (starting 15th October), 16:15 to 17:45
Room: Building C7 3 - Seminar Room 1.14


See https://lacoco-lab.github.io/courses/classics-25/ for the syllabus and further information.