The 23rd Int’l Conference on Virtual Systems and Multimedia will be held October 30th – November 2nd 2017 in Ireland at University College Dublin, with Special Workshops and Cultural Tours on November 3rd – 5th in Belfast, Northern Ireland.

VSMM2017 will explore ground-breaking applications of new media, VR, sensing, and imaging in creative societies. Some 30 years after the dawn of VR, with a new boom in virtual devices, products and applications, it is an apt moment to look “Back to the Future” and examine the potentials and pitfalls of VR, AR and 3D technologies. With virtual and 3D technologies entering fields from health to tourism, games to manufacturing, and art to entertainment, deep issues and questions arise.

The Conference will bring together researchers and academics, creative industry professionals and technologists, futurists and students to explore where virtual and 3D technologies have been and are taking us. Experts in the arts and culture, heritage and museums, design and engineering, architecture and planning, health and inclusion, and computing and digital systems are invited to gather to present, discuss, debate and learn.

VSMM2017 is hosted by: SMARTlab – The Inclusive Design Research Centre of Ireland & the College of Engineering & Architecture at University College Dublin

In association with: the ACM Distinguished Speakers Series

With support from: Creative Ireland, VR First and All These Worlds

And with regional activities: hosted in Belfast, Northern Ireland by Ulster University, & the University of Nottingham Ningbo China’s NVIDIA Joint-Lab on Mixed Reality

Best papers will be invited to a Special Issue of the Presence: Teleoperators & Virtual Environments journal (http://www.mitpressjournals.org/loi/pres) around the theme of Digital Realities / Digital Materialisation

Conference Website: http://www.vsmm.org

Submit here: http://vsmm.org/for-submitters/paper-submission-2/

VSMM 2017 Organising Committee

(Original post by Eugene Ch’ng: https://www.linkedin.com/pulse/cfp-vsmm2017-international-conference-extended-abstract-eugene-ch-ng)

SMARTlab team members to present tonight at Dogpatch Labs for the Learning Tech Labs series organised by David Pollard.
Speakers include keynotes by Prof Lizbeth Goodman (Chair of Creative Tech Innovation, UCD) and Dr Kevin Koidl (Research Fellow, Trinity College Computer Science Dept.). SMARTlab and IDRC team members contributing to this discussion of Artificial Intelligence and Artificial Stupidity include Tara O’Neill (SMARTlab PhD candidate).

Settling Virtual Reality

The experimentation space is now in full effect. The latest visualisation technology has been installed in the form of an HTC Vive running on a scalable Thunderbolt 3 eGPU housing the latest GTX 1080 Ti. The first experiments were conducted using the ubiquitous Unity engine. April and May were dedicated to learning and exploring the transversal possibilities of the technologies. The plural is used because of the potential for sensing, network connectivity and sound interoperability. Processing, as a visual-art-oriented language, was the first port of call, with its output ported live to Unity as a texture via Spout. Next, sensing was added through the Muse headband for EEG and a Raspberry Pi 3 with environmental sensors (see the example below of sensor-generated procedural art in Unity in fly-through mode). The sensor data was sent to the LAN as OSC messages from a Python script, encapsulated in UDP datagrams from the TCP/IP suite, as sketched below. Of course, the Internet would have worked equally well, and it represents great potential as an information system and a source of big data for AR (Augmented Reality) or, more extensively, MR (Mixed Reality).
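For illustration, here is a minimal sketch of that sensor-to-LAN step, assuming the python-osc library and a Sense HAT on the Raspberry Pi 3; the target IP address, port and OSC address paths are placeholders rather than the studio's actual configuration.

```python
# Minimal sketch: broadcast environmental sensor readings to the LAN as OSC
# messages carried in UDP datagrams. Assumes the python-osc library and a
# Sense HAT on the Pi; the IP address, port and OSC paths are placeholders.
import time

from pythonosc import udp_client
from sense_hat import SenseHat

sense = SenseHat()
client = udp_client.SimpleUDPClient("192.168.1.42", 9000)  # Unity host (placeholder)

while True:
    client.send_message("/env/temperature", sense.get_temperature())
    client.send_message("/env/humidity", sense.get_humidity())
    client.send_message("/env/pressure", sense.get_pressure())
    time.sleep(0.1)  # ~10 Hz is ample for slow-moving environmental data
```

On the Unity side, an equivalent OSC listener would map these incoming values onto parameters of the procedural geometry.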

Sounding Abstraction

The research is predominantly based on the sensorial aspect of sound art and its parallels to VR; the development process experiments with applied EEG biofeedback interaction and immersive intermedia environments. The Muse headband can be worn alongside the Vive’s HMD, and a smartphone can act as the host for the OSC broadcast, making the whole setup very portable, especially with the future availability of a wireless VR option (TPCast). Sound in VR can be handled in two different situations: binaurally over headphones in sound-sensitive milieus, or over multiple channels in a room-scale setting, private or otherwise. The latter is currently selected, with the studio permitting up to eight channels in a secluded, network-enabled space. The first attempts are promising: the physical impact of sound creates a deeper three-dimensional sense of depth than headphone reproduction, since sound is intrinsically both auditive and physical. Naturally, careful implementation is required for multi-system communication and latency, especially in the context of using the Internet instead of the local area network.
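To make the biofeedback loop concrete, the sketch below shows one way to receive the Muse headband's EEG stream as OSC with python-osc and reduce it to a single control value. The /muse/eeg address and the port follow the usual Muse OSC convention but should be checked against the broadcasting app, and the averaging is only a stand-in for a real sonic mapping.

```python
# Minimal sketch: receive Muse EEG samples broadcast as OSC over UDP and
# collapse them into one control value for a sound parameter. Assumes
# python-osc; the port and the /muse/eeg path should match the broadcast app.
from pythonosc import dispatcher, osc_server

def on_eeg(address, *channels):
    # Muse sends one float per electrode; averaging them is merely a
    # placeholder for a real biofeedback-to-sound mapping.
    level = sum(channels) / len(channels)
    print(f"{address}: activity ~ {level:.1f}")

d = dispatcher.Dispatcher()
d.map("/muse/eeg", on_eeg)

# Listen on all interfaces so the smartphone on the LAN can reach us.
osc_server.BlockingOSCUDPServer(("0.0.0.0", 5000), d).serve_forever()
```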

Routing Dissemination

Unity can compile VR binaries for a wide range of build targets: PC, Mac, Linux, iOS, Android, tvOS, PS4, WebGL and Samsung TV, amongst other less well-known platforms. The Raspberry Pi 3 units were fitted with the Sense HAT, the Enviro pHAT and the made-to-order Harmony-E1, all running OSC under Python. The audio is provided by Ableton Live with Max for Live devices, plus Reaktor for exploratory sound generation. Active near-field monitors were chosen for their neutrality, power and directionality.
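Because several machines (the Unity workstation, the Ableton Live host and the Raspberry Pi sensors) share one LAN, a small OSC relay is one possible way to centralise the routing. The sketch below, again assuming python-osc, fans every incoming message out to multiple hosts; all IP addresses and ports are illustrative placeholders.

```python
# Minimal sketch: an OSC relay that fans every incoming message out to
# several hosts on the LAN (e.g. the Unity workstation and the Ableton
# Live machine). Assumes python-osc; all addresses and ports are placeholders.
from pythonosc import dispatcher, osc_server, udp_client

# Downstream consumers of the sensor and EEG streams.
targets = [
    udp_client.SimpleUDPClient("192.168.1.42", 9000),  # Unity
    udp_client.SimpleUDPClient("192.168.1.43", 9001),  # Ableton Live / Max for Live
]

def relay(address, *args):
    # Forward each message unchanged to every target.
    for client in targets:
        client.send_message(address, list(args))

d = dispatcher.Dispatcher()
d.set_default_handler(relay)  # catch all OSC address patterns

osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), d).serve_forever()
```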

