AdaptEd
The Feynman Technique simplifies complex ideas by explaining them in basic terms. However, it relies heavily on both verbal and non-verbal communication, which creates barriers for people with speech or visual impairments. Vision impairments limit access to non-verbal cues like facial expressions and written feedback, while speech impairments make it harder to articulate ideas through explanation.
This project aims to solve these challenges using a multimodal LLM that can process voice, video, and text input to generate clear, accessible feedback. By analyzing gestures, expressions, and tone (when available), the model can provide more nuanced responses, helping users refine their understanding.
The tool will be available as a web or mobile interface, integrating features like text-to-speech, adaptive visual settings, and structured feedback tailored to individual needs. By leveraging AI to make the Feynman Technique more inclusive, this project aims to remove barriers to deep learning and make knowledge more accessible for everyone.
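The pipeline described above can be sketched in miniature: whichever modalities are available (transcribed text, a tone label from a speech model, gesture labels from a vision model) are merged into one structured feedback object. This is a minimal sketch with placeholder heuristics standing in for the multimodal LLM call; the `ExplanationInput` fields and the cue labels ("hesitant", "shrug") are hypothetical names, not part of any existing API.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class ExplanationInput:
    # Transcript or typed explanation (speech is assumed transcribed upstream)
    text: Optional[str] = None
    # Tone label from a hypothetical upstream speech model, e.g. "hesitant"
    tone: Optional[str] = None
    # Gesture labels from a hypothetical upstream vision model, e.g. "shrug"
    gestures: List[str] = field(default_factory=list)

def build_feedback(inp: ExplanationInput) -> dict:
    """Merge whichever modalities are present into structured feedback.

    Simple rule-based heuristics stand in for the real LLM call, so the
    shape of the output (not the quality of the advice) is the point here.
    """
    suggestions = []
    if inp.text:
        long_words = [w for w in inp.text.split() if len(w) > 12]
        if long_words:
            suggestions.append("Try plainer words for: " + ", ".join(long_words))
    if inp.tone == "hesitant":
        suggestions.append("Tone sounded unsure; restate the core idea in one sentence.")
    if "shrug" in inp.gestures:
        suggestions.append("A gesture hinted at uncertainty; revisit that step.")
    if not suggestions:
        suggestions.append("Explanation was clear; test it on someone new to the topic.")
    # Record which modalities actually contributed, so the interface can
    # adapt (e.g. skip video-based feedback for users without a camera).
    used = [name for name, present in
            [("text", inp.text is not None),
             ("audio", inp.tone is not None),
             ("video", bool(inp.gestures))] if present]
    return {"modalities_used": used, "suggestions": suggestions}
```

Because each modality is optional, the same function serves a user who types their explanation, one who speaks it, and one who is on camera, which is the degradation behavior the accessibility goal requires.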
