4 Comments
RA:

That was great, thank you.

Accurately demonstrated again recently by plugging newly released US Math Olympiad questions into LLMs and watching them miserably fail to solve them: https://arxiv.org/abs/2503.21934

Kush:

We instinctively interpret complex behavior through the lens of intentionality because that's how we understand ourselves. For me the question isn't whether machines can think like us, but whether our understanding of "thinking" itself is far more algorithmic, emergent, and less intentional than we want to admit about our own consciousness. In other words, are we more similar to machines than we imagine?

Greg A. Woods:

I would argue that LLM/GPTs _cannot_ truly enhance productivity or accelerate research; they barely support ideation, and they definitely cannot usefully automate communication, whether between humans or between humans and machines.

Damien Kopp:

Thanks for your comments! I look at LLMs as a calculator plus encyclopaedia: they help surface relevant information from a body of knowledge in a pre-computed manner through a simple UX. In that sense they do help my productivity (vs traditional web search) … :-)
