Enabling Stories
Symposium:
- ISEA2011: 17th International Symposium on Electronic Art
Session Title:
- Collaborating as Social Work
Presentation Title:
- Enabling Stories
Presenter(s):
Venue(s):
Abstract:
The One Laptop Per Child (OLPC) program aims to provide each child in developing regions with a connected laptop to allow them “to become connected to each other, to the world and to a brighter future”.
Uruguay, through its governmental project Plan Ceibal, was the first country to achieve 'full deployment' status, after successfully delivering an XO laptop to every public-school child between 6 and 12 years old.
This paper presents the Enabling Stories project: a perceptual-interaction-based, interactive storytelling application that runs on OLPC's XO computers, designed for children with motor or cognitive disabilities as well as for children with normative development.
The application is not only an interactive storytelling game but also a tool for stimulating the development of specific cognitive functions and skills, promoting digital inclusion, and improving social, emotional and motivational aspects in its users.
The art of storytelling in its many forms has been widely used to support the development of a wide spectrum of cognitive functions and skills. Children with motor or cognitive disabilities often experience difficulties using standard interaction schemes.
Interaction with our application is based on triggering actions when the user shows or occludes a printed image within the XO's camera field of view (usually the space in front of the keyboard, made visible by a small periscope we designed).
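The paper does not specify the detection algorithm behind this show/occlude trigger; as a minimal sketch, one could classify the watched area by comparing its average brightness against a calibrated baseline (all function names and thresholds below are illustrative assumptions, not the project's actual code):

```python
# Hypothetical sketch of a show/occlude trigger: a bright printed card
# raises the mean brightness of the watched region, while a hand
# covering it lowers the brightness. Threshold values are assumptions.

def mean_brightness(frame):
    """frame: 2D list of grayscale pixel values (0-255)."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def calibrate(baseline_frames):
    """Average brightness of the empty interaction area."""
    return sum(mean_brightness(f) for f in baseline_frames) / len(baseline_frames)

def classify(frame, baseline, threshold=30):
    """Return 'shown' if a bright printed card appears,
    'occluded' if the area is covered (e.g. by a hand),
    'idle' otherwise."""
    diff = mean_brightness(frame) - baseline
    if diff > threshold:
        return "shown"
    if diff < -threshold:
        return "occluded"
    return "idle"

# Simulated frames: empty desk (~100), white card (~220), hand shadow (~40)
empty = [[100] * 4 for _ in range(4)]
card = [[220] * 4 for _ in range(4)]
hand = [[40] * 4 for _ in range(4)]
base = calibrate([empty, empty])
print(classify(card, base))   # shown
print(classify(hand, base))   # occluded
```

A real implementation on the XO would read frames from the camera and likely use more robust image recognition, but the show/occlude distinction reduces to a change-detection test of this kind.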
Our application models key aspects of the narrative, such as characters, places, and the possible interactions between characters, and allows its users to construct new stories in real time.
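A narrative model of this kind (characters, places, allowed interactions) could be sketched as follows; all class and field names are illustrative assumptions, not taken from the actual application:

```python
# Hypothetical sketch of a narrative model: characters are tied to
# printed cards, and interactions map a pair of characters to the
# narration spoken when they meet.

from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    image_card: str          # id of the printed card that represents it

@dataclass
class Interaction:
    actor: str
    other: str
    narration: str           # text spoken when this pair meets

@dataclass
class StoryWorld:
    characters: dict = field(default_factory=dict)
    places: list = field(default_factory=list)
    interactions: list = field(default_factory=list)

    def narration_for(self, actor, other):
        """Find the authored narration for this pair, or a generic line."""
        for i in self.interactions:
            if i.actor == actor and i.other == other:
                return i.narration
        return f"{actor} and {other} say hello."

world = StoryWorld()
world.characters["girl"] = Character("girl", "card_girl")
world.characters["dog"] = Character("dog", "card_dog")
world.interactions.append(
    Interaction("girl", "dog", "The dog wags its tail and joins the girl."))
print(world.narration_for("girl", "dog"))
```

Keeping the interactions as data rather than code is what lets new stories be assembled in real time from the same building blocks.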
The user interacts with the application by setting up scenes or answering specific questions. For example, the application might describe a scene using spoken text, images, music, or video, and then prompt the child for information.
A rather trivial example: the application asks, "The girl is taking a hike in the forest. Who does she meet?" The child then takes one of the images (say, one with the drawing of a dog) and places it in the space seen by the computer's camera. The application then continues the narration using the child's input.
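The prompt-and-continue cycle in this example can be sketched as a simple mapping from a recognized card to the next piece of narration; the card recognizer is stubbed out here, and all names are hypothetical:

```python
# Hypothetical sketch of one story step: narrate a prompt, read which
# card the child showed, and continue with the matching narration.

CONTINUATIONS = {
    "dog": "She meets a friendly dog, and they walk on together.",
    "bird": "A little bird lands on her shoulder and sings.",
}

def recognize_card(camera_input):
    """Stub for image recognition: here the 'camera' simply reports
    the card id directly."""
    return camera_input

def story_step(prompt, camera_input):
    """Combine the spoken prompt with the continuation chosen by the card."""
    card = recognize_card(camera_input)
    continuation = CONTINUATIONS.get(
        card, "She keeps walking, wondering who she will meet.")
    return f"{prompt} {continuation}"

print(story_step("The girl is taking a hike in the forest. Who does she meet?",
                 "dog"))
```

In the real application the continuation would come from the narrative model and be delivered as spoken text, images, or video, but the control flow is this same prompt, recognize, continue loop.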