News
Currently, no news is available.
Course Description
Modern AI feels like it's everywhere: models that write, speak, see, play games, and even, arguably, reason. However, many researchers today feel a sense of déjà vu: incremental papers, rebranded benchmarks, recycled ideas. Are we reaching the limits of what can be achieved just by scaling models? Is the field running out of new ideas?
This seminar takes a step back, a long way back, to understand how machine learning and language technology evolved, both technically and philosophically. We'll examine the early hopes, dead ends, breakthroughs, and rediscoveries that brought us to today's transformer-based models.
We'll ask:
- What did early AI researchers believe language and learning were?
- Why were neural networks once declared useless — and then revived to define modern AI?
- What kinds of research actually shifted paradigms?
- Are we at a similar inflection point today?
We'll read classic work from figures like Turing, Shannon, Chomsky, Rosenblatt, Minsky, Angluin, Valiant, Hinton, Bengio, LeCun, and Pearl. In a field obsessed with the latest preprint, reading old papers may be a rare experience, but our hope is that the takeaways will prove useful, maybe even generative. By studying where ideas came from (and how they were nearly lost), you may come away with a deeper appreciation of today's models, and with new ways to think about the next generation.
Time: Every Wednesday (starting 15 October), 16:15 to 17:45
Room: Building C7 3 - Seminar Room 1.14
See https://lacoco-lab.github.io/courses/classics-25/ for the syllabus and further information.