SMARTlab team members present tonight at Dogpatch Labs as part of the Learning Tech Labs series organised by David Pollard.
Keynotes will be given by Prof Lizbeth Goodman (Chair of Creative Tech Innovation, UCD) and Dr Kevin Kodil (Research Fellow, Trinity College Computer Science Dept.). SMARTlab and IDRC team members contributing to this discussion of Artificial Intelligence and Artificial Stupidity include Tara O’Neill (SMARTlab PhD candidate).

Settling Virtual Reality

The experimentation space is now in full effect. The latest visualisation technology has been implemented in the form of an HTC Vive running on a scalable Thunderbolt 3 eGPU sporting the latest GTX 1080 Ti. The first experiments were conducted using the ubiquitous Unity engine. April and May were dedicated to learning and exploring the transversal possibilities of the technologies. The plural is used because of the potential for sensing, network connectivity and sound interoperability. Processing, as a visual-art-oriented language, was the first port of call, with its output ported live to Unity as a texture via Spout. Next, sensing was added through the Muse for EEG and a Raspberry Pi 3 with environmental sensors (see the example below of sensor-generated procedural art in Unity in fly-through mode). The data was sent from the sensors to the LAN as OSC messages encapsulated in UDP datagrams (part of the TCP/IP protocol suite) by a Python script. Of course, the Internet would work equally well, and holds great potential as an information system and big data source for AR (Augmented Reality) or, more extensively, MR (Mixed Reality).
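To make that OSC-over-UDP link concrete, here is a minimal sketch of the sender side, assuming the python-osc package; the target host, port and the /sensors/temperature namespace are illustrative placeholders rather than the studio's actual configuration:

```python
# A minimal sketch of the sensors-to-LAN link, assuming the python-osc
# package (pip install python-osc). Host, port and address are hypothetical.
import socket

from pythonosc.osc_message_builder import OscMessageBuilder

UNITY_HOST = "192.168.1.50"  # hypothetical LAN address of the Unity machine
UNITY_PORT = 9000            # hypothetical port of the Unity OSC listener

# Build one OSC message: an address pattern plus a typed float argument
builder = OscMessageBuilder(address="/sensors/temperature")
builder.add_arg(21.3)  # stand-in for a real Raspberry Pi sensor reading
packet = builder.build().dgram  # the raw OSC bytes

# Encapsulate the OSC bytes in a single UDP datagram and send it to the LAN
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, (UNITY_HOST, UNITY_PORT))
sock.close()
```

On the Unity side, any OSC-capable listener bound to the same port can unpack the datagram and feed the values into the procedural art.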

Sounding Abstraction

The research is predominantly based on the sensorial aspect of sound art and its parallels to the VR development process, with experiments in applied EEG biofeedback interaction and immersive intermedia environments. The Muse headband can be worn alongside the Vive’s HMD, and a smartphone can act as the host for the OSC broadcast, making the whole setup very portable, especially with the forthcoming wireless VR option (TPCast). Sound in VR can be handled in two different situations: binaurally, with headphones, in a sound-sensitive milieu; or over multiple channels in a private (or not) room-scale setting. The latter is the current choice, with the studio permitting up to eight channels in a secluded, network-enabled space. The first attempts are promising: the physical impact of loudspeaker sound creates a deeper sense of three-dimensional depth than headphone reproduction, as sound is intrinsically both auditory and physical. Naturally, careful implementation has to be considered for multi-system communication and latency, especially in the context of using the Internet instead of the local area network.
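As a sketch of how the headband's stream might be picked up on the LAN, the snippet below assumes the Muse data is relayed as OSC by the smartphone (apps such as Muse Monitor do this) and uses the python-osc package; the /muse/eeg address and port 5000 follow a common Muse convention but should be checked against the relay app's settings:

```python
# A minimal sketch of an OSC listener for the Muse broadcast, assuming the
# python-osc package and a smartphone app relaying the headband data as OSC.
# The address and port are conventions to verify against the relay app.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_eeg(address, *values):
    # Typically one float per electrode (e.g. TP9, AF7, AF8, TP10)
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/muse/eeg", on_eeg)

# Bind to all interfaces so the phone can reach this machine over Wi-Fi
server = BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
server.serve_forever()
```

The same handler could just as easily forward values on to the sound engine or Unity, which is where the latency caveat above starts to matter.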

Routing Dissemination

Unity can compile VR binaries for a wide range of build targets: PC, Mac, Linux, iOS, Android, tvOS, PS4, WebGL and Samsung TV, amongst other less well-known platforms. Raspberry Pi 3 units were fitted with the Sense HAT, the Enviro pHAT and the made-to-order Harmony-E1, all running OSC under Python. The audio is provided by Ableton Live with Max for Live devices, plus Reaktor for explorative sound generation. Active near-field monitors were chosen for their neutrality, power and directionality.
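As an illustration of what "running OSC under Python" looks like on one of the Pis, here is a sketch using the Sense HAT's official Python library together with python-osc; the destination host, port and address namespace are placeholders, and the Enviro pHAT and Harmony-E1 scripts would follow the same pattern with their own libraries:

```python
# A minimal sketch of a Sense HAT sender, assuming the sense-hat and
# python-osc packages. Destination host/port and OSC addresses are
# placeholders; the Enviro pHAT script would follow the same pattern.
import time

from sense_hat import SenseHat
from pythonosc.udp_client import SimpleUDPClient

sense = SenseHat()
client = SimpleUDPClient("192.168.1.50", 9000)  # hypothetical LAN target

while True:
    client.send_message("/pi/temperature", sense.get_temperature())  # degrees C
    client.send_message("/pi/humidity", sense.get_humidity())        # % relative
    client.send_message("/pi/pressure", sense.get_pressure())        # millibars
    time.sleep(1.0)  # slow environmental data; 1 Hz is ample
```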

 
