Accurately demonstrated again recently by plugging newly released US Math Olympiad questions into LLMs and watching them miserably fail at solving them: https://arxiv.org/abs/2503.21934
We instinctively interpret complex behavior through the lens of intentionality because that's how we understand ourselves. For me the question isn't whether machines can think like us, but whether our understanding of "thinking" itself is far more algorithmic, emergent, and less intentional than we want to admit about our own consciousness. In other words, are we more similar to machines than we imagine?
That was great, thank you.