Interactive systems and computer vision with C++ / openFrameworks workshop

The desire to endow machines with physical and cognitive functions similar to our own can be traced back beyond the Middle Ages, and remains alive in today's computer vision technologies, capable of perceiving, interpreting and digitizing subjects and objects of the physical world. This course investigates, both theoretically and experimentally, the creative potential that computer vision offers for the creation of interactive works and for research into bodily performativity in virtual and augmented reality environments.

We will start from the study of the relationship between bodies, machines and performativity, drawing on references in phenomenological, materialist and posthumanist thought, the cognitive sciences, human-computer interaction, and new philosophical conceptions of the natural and the artificial: being, presence, sensory immersion and the limits of the digital body. We will program in C++ using openFrameworks to develop applications that acquire and process data from video cameras and from Kinect v1, Kinect v2 and Leap Motion sensors, detecting facial gestures and body movements to control audiovisual events in real time. We will also explore key machine learning techniques designed to interpret choreography, gestures and other data in real time.

Participants will design interactive experiences that allow several users to interact with the system simultaneously, using their bodies and gestures in a transparent and natural way. They will build prototypes of interactive systems in groups that bring together novice and experienced programmers, designers, theoreticians and performance artists, culminating in a public showing and an evaluation of the results.

Aimed at: anyone interested in performance art, live audiovisuals, set and costume design, music, advertising, television, education, psychology and the cognitive sciences.


Objectives:

– To become familiar with key thinkers in the philosophy of the digital body, and to analyze the technical and aesthetic possibilities of interactive systems.

– To develop programming skills specializing in computer vision and machine learning, using C++ and openFrameworks.

– To design and build a prototype interactive system controlled live through gestures and body movements.

Requirements: The course is aimed at participants with basic programming knowledge. It is advisable to have taken the introductory C++ / openFrameworks course held at Hangar in September 2017.

Participants only need to bring a laptop. They are invited to bring specific project ideas to develop during the course.

Course contents:

– Study of the evolution of the relationship between bodies, machines and performativity, traversing historical references from the fifteenth century, the idea of the total work of art, and phenomenology and embodied cognition, up to the current paradigms of virtual reality, artificial perception and immersive systems.

– Analysis of the thinking of Johannes Birringer, Paul Virilio, Gilles Deleuze, Brenda Laurel, Donna Haraway, Maurice Merleau-Ponty, Félix Guattari, Rosi Braidotti, Seymour Papert, and others.

– Archaeology of computer vision and machine learning. Basic devices, techniques and algorithms, and their main areas of application. Studies on interactivity. Methods for designing interactive experiences based on the body and technologically enhanced space.

– Technical analysis of an interactive installation: dimensioning, lighting arrangement, cameras, sensors and audiovisual systems. Overview of software and hardware for capturing images and for controlling multiple synchronized projectors.

– Programming in C++ for computer vision. Image capture, pixel-by-pixel operations, and the main analysis and prediction algorithms. Advanced real-time image processing: detection of color, movement, contours and faces, and pattern recognition using the OpenCV library.

– Methods for acquiring and processing data from Kinect v1, Kinect v2 and Leap Motion sensors. Data interpretation; detection of users, movements and contours; skeletons; depth maps. 3D scanning of objects and people using the Kinect.
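The simplest form of depth-based user detection can be sketched as a band threshold over a depth map: keep only the pixels that fall between a near and a far plane. The `segmentUser` helper below is illustrative and runs on any buffer of millimetre values; in a real installation the buffer would come from the Kinect (e.g. via an openFrameworks addon):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Mark as "user" every pixel whose depth (in millimetres) falls inside
// a near/far band, writing a binary mask and returning the match count.
size_t segmentUser(const std::vector<uint16_t>& depthMM,
                   uint16_t nearMM, uint16_t farMM,
                   std::vector<uint8_t>& mask) {
    mask.assign(depthMM.size(), 0);
    size_t count = 0;
    for (size_t i = 0; i < depthMM.size(); ++i) {
        if (depthMM[i] >= nearMM && depthMM[i] <= farMM) {
            mask[i] = 255;  // foreground: candidate user pixel
            ++count;
        }
    }
    return count;
}
```

Contour extraction and skeleton tracking then operate on masks like this one.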

– Controlling sounds, 3D graphics and OpenGL shaders in real time through gestures and body movement. Interaction with 2D/3D meshes, Voronoi structures, Delaunay triangulations, and advanced particle systems using gravity simulation, attraction and repulsion forces, waves, and the Navier-Stokes equations for 2D fluids.

– Audiovisual control through gestures and body movements: image sequences, playback and manipulation of video clips and audio samples. Application of VJ/DJ techniques, with sampled and synthesized audio and video synchronized to the movement of the body.

– Importing and exporting XML, JSON, OBJ and STL files. Networked installations: connections via OSC (Open Sound Control), MIDI and Syphon.

– Introduction to the concepts of artificial neural networks and machine learning techniques with interactive uses: interpreting patterns, facial gestures, whole-body movements and other data in real time.
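The building block of such networks is a single artificial neuron: a weighted sum of inputs plus a bias, squashed by an activation function. A framework-free sketch (the `neuron` helper is illustrative):

```cpp
#include <cmath>
#include <vector>

// One artificial neuron: weighted sum of inputs plus bias, passed
// through a sigmoid activation that squashes the result into (0, 1).
float neuron(const std::vector<float>& inputs,
             const std::vector<float>& weights, float bias) {
    float sum = bias;
    for (size_t i = 0; i < inputs.size(); ++i) {
        sum += inputs[i] * weights[i];
    }
    return 1.0f / (1.0f + std::exp(-sum));
}
```

Layers of such units, with weights learned from example gestures, are what let a network classify body movements in real time.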

Duration: 24 hours

Dates: 3, 5, 10, 17, 19, 24, 26, 31 October 2017

Hours: 18:30-21:30

Registration fee: 180 €


Run by: the education laboratory in art, technology and philosophy


Belén Agurto. Artist and researcher in the philosophy of technology and digital aesthetics. Degree in Audiovisual Communication (UL, Lima) and Master in Comparative Studies of Literature, Art and Thought (UPF, Barcelona). She has co-directed the educational program in technology and aesthetics at Libertario since 2015, and works in artistic and philosophical research.

Álvaro Pastor. Architect and electronic artist. Researcher in virtual reality and interactive systems. Ibermúsicas Prize 2014; Iberescena 2011. Currently a member of
