AI learnt language by seeing the world through a baby’s eyes
An AI model has learned the rudiments of language by observing the world through the eyes of a baby. Researchers used video and audio recordings from a helmet-mounted camera worn by an infant named Sam. The AI learned to recognize words such as “crib” and “ball” by associating the images Sam saw with the words spoken around him. Notably, the model began with no prior knowledge of language, challenging theories that babies need innate linguistic understanding to learn words. The study offers insights into early language acquisition and demonstrates how AI can mimic human learning processes.
The researchers collected 61 hours of recordings from a camera mounted on a helmet worn by Sam. The camera captured the world from his perspective, covering around 1% of his waking hours between the ages of six months and roughly two years. A neural network was trained on pairs of video frames and transcribed words spoken to Sam, learning associations between images and text so that it could predict which images corresponded to specific words.
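The article does not detail the model’s architecture, but the behaviour it describes, pulling together frames and the words that co-occur with them while pushing apart mismatched pairs, resembles contrastive image–text learning. Below is a minimal PyTorch sketch of that general idea; every class name, dimension and hyperparameter is an illustrative assumption, not the study’s actual code.

```python
# A minimal sketch of contrastive image-word learning, assuming a CLIP-style
# setup. All names, sizes and hyperparameters here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Maps a video frame to a unit-length embedding (stand-in vision backbone)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class WordEncoder(nn.Module):
    """Maps a word index (from a transcript vocabulary) to a unit-length embedding."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
    def forward(self, w):
        return F.normalize(self.emb(w), dim=-1)

def contrastive_loss(frame_emb, word_emb, temperature=0.07):
    # Similarity of every frame to every word in the batch. Matching
    # (frame, word) pairs sit on the diagonal and are pulled together;
    # mismatched pairs are pushed apart.
    logits = frame_emb @ word_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy training step on random data standing in for (frame, spoken-word) pairs.
frames = torch.randn(8, 3, 64, 64)      # batch of video frames
words = torch.randint(0, 1000, (8,))    # indices of co-occurring words
f_enc, w_enc = FrameEncoder(), WordEncoder()
opt = torch.optim.Adam(list(f_enc.parameters()) + list(w_enc.parameters()), lr=1e-3)

opt.zero_grad()
loss = contrastive_loss(f_enc(frames), w_enc(words))
loss.backward()
opt.step()
```

After training on such pairs, a word’s embedding can be compared against candidate frame embeddings, which mirrors the article’s description of the model predicting which images correspond to specific words.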