WebGL, etc. in the future

Tom McCauley


University of Notre Dame, USA

HSF Visualization Workshop

28-30 Mar 2017, CERN


  • Two applications: 1) a desktop application using Coin3D and Qt for making public event-display images; 2) a browser-based event display rendering to canvas for use by the public in e.g. masterclasses (running client-server)
  • Both used the same JSON input format
  • WebGL (via three.js) + HTML, CSS, etc.: lightweight application that fulfills both use cases and allows for even more future functionality
  • Right place and right time to develop iSpy WebGL
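
As a concrete illustration only (the field names below are assumptions about such a schema, not necessarily the exact iSpy format), a common JSON input might describe each collection as typed columns plus rows:

```javascript
// Illustrative only: the field names ("Types", "Collections") are an
// assumption, not a statement of the exact iSpy schema.
const event = {
  Types: {
    // Each collection declares its column names and types once.
    Tracks_V2: [['pt', 'double'], ['eta', 'double'], ['phi', 'double']]
  },
  Collections: {
    // Rows are plain arrays in the declared column order: compact to
    // serialize and trivial to iterate over in the client.
    Tracks_V2: [
      [24.3, 0.12, 1.57],
      [5.8, -1.30, -0.42]
    ]
  }
};

// A small accessor that resolves a column by name.
function getValue(evt, collection, row, column) {
  const index = evt.Types[collection].findIndex(c => c[0] === column);
  return evt.Collections[collection][row][index];
}
```

With this layout, `getValue(event, 'Tracks_V2', 0, 'pt')` returns 24.3.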

The advantages of a web-based application are well-known and well-covered in this and past workshops. So what could have been done better?

  • If I had to do it over (and actually I have, at least once, for a non-CMS use case): I would have better separated the data model from the views and controls.
  • After all, there is a reason MVC (model-view-controller) frameworks exist
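
A minimal sketch of what that separation might look like (names are illustrative, not iSpy code): the model owns the event data and notifies registered views, so a table view and a 3D view become interchangeable observers of the same data.

```javascript
// Minimal model/view separation sketch (illustrative names, not iSpy code).
class EventModel {
  constructor() {
    this.collections = {}; // e.g. { 'Tracks_V2': [...] }
    this.views = [];
  }
  subscribe(view) {
    this.views.push(view);
  }
  setCollection(name, rows) {
    this.collections[name] = rows;
    // Every registered view re-renders from the same underlying data.
    for (const view of this.views) view.render(name, rows);
  }
}

// Two very different views consume the same model without modification.
const tableView = { render: (name, rows) => console.log(`table: ${name}, ${rows.length} rows`) };
const sceneView = { render: (name, rows) => console.log(`3D: rebuild ${name}`) };

const model = new EventModel();
model.subscribe(tableView);
model.subscribe(sceneView);
model.setCollection('Tracks_V2', [[24.3, 0.12, 1.57]]);
```

The point is only that neither view touches the data directly; adding an RZ view later means adding one more observer, not rewriting the loader.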

If I were starting now (or in the future) and wanted to visualize my experiment’s data with WebGL, what would I want? Put another way: what would make my job easier and increase productivity?

  • Common input format? For the physics use-case many things are generic
  • Don’t want to re-invent the wheel (although sometimes writing a new thing from scratch is fun and, more importantly, educational)
  • Have a framework that supports common views: table, tree, 3D, R$\Phi$, RZ, lego, ...
  • Package for converting my data to THREE.Object3D (I am probably making the not-bad assumption that three.js is the WebGL library used; at least 3 LHC experiments use it now)
  • Common way to handle geometry (SketchUp geometry came later and helped)
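
The R$\Phi$ and RZ projections, at least, are simple enough to state precisely. A sketch of the projection math in plain JavaScript (helper names are mine, not from any existing framework; in practice a framework would wrap these in orthographic camera setups):

```javascript
// The two standard 2D projections, written out as plain functions
// (illustrative helpers, not from an existing framework).
function toRPhi(p) {
  // RPhi view: look down the beam (z) axis; keep the transverse x-y plane.
  return { x: p.x, y: p.y };
}

function toRZ(p) {
  // RZ view: z along the horizontal axis, signed r = sqrt(x^2 + y^2)
  // along the vertical axis (sign taken from y to split the two halves).
  const r = Math.hypot(p.x, p.y);
  return { z: p.z, r: p.y >= 0 ? r : -r };
}
```

For example, `toRZ({ x: 3, y: 4, z: 5 })` gives `{ z: 5, r: 5 }`.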

For example, a drawable collection can be declared with a style and a factory function:

var track_style = {
	color: 'rgb(100%, 100%, 0%)',  // CSS color string: yellow
	opacity: 0.9,
	linewidth: 1.0
};

app.data_objs = {
	'Tracks_V2': {
		type: app.POLYLINE,
		on: true,                      // drawn by default
		group: 'Tracking', name: 'Tracks',
		fn: hepvis.makeTracks,         // factory: collection -> scene objects
		style: track_style
	}
};

Will common tools exist in the future?

  • “Not invented here”
  • It would be crazy for each experiment now to write its own simulation package like Geant4; an event display isn’t as large a task as simulation (but it isn’t necessarily a small one either), so perhaps experiments can and will write their own displays without common tools
  • Can we converge on what the common tools might be?

Will common tools exist in the future? (cont.)

  • Is the time spent learning such tools < the time one could spend just writing from scratch?
  • Need a critical mass of community size using and developing
  • Need a good API, documentation (obviously)
  • Open-source

In the future (not necessarily just with WebGL)

  • Touch devices: tablets, phones, touch screens
  • What are the possibilities of AR?
  • What are the possibilities of VR? (CMS is developing ideas using Unity)
  • How, and whether, to support these? Via a game engine ("One Ring" to rule them all: desktop, VR, Web)?