A virtual walk through a sentient space with rt3DView

The screen capture above and the video below are from a walk-through of a procedurally generated sentient space model with video and IoT data displays (derived from ZeroSensor data, rt-ai Edge and Manifold). This was made using rt3DView, with the actual Unity video recording made with the aid of this very nice Unity store asset.

The idea of this model is that it reflects the major features of the real sentient space so that users of VR and AR can interact correctly. For example, an AR headset wearer in one of the rooms would also see the displays on the equivalent physical wall. This model is pretty basic but obviously a lot more bling could be added to get further along the road to realism. Plus I made no attempt to sort out the exterior for this test.

Now that the basics are working and the XR world is fully coupled to the rt-ai Edge design that forms the real-world element of the sentient space, the focus will move to more interaction: instantiating new objects, positioning objects, real-time sharing of camera poses leading to avatars… The list is endless.

Using Windows Mixed Reality to visualize sentient spaces with rtXRView

The Windows Mixed Reality version of 3DView is now working nicely. I had a few problems with my Windows development PC, which is a few years old and didn't have adequate USB ports. In the end, this PCI-e USB 3.1 card solved that problem; otherwise, a complete upgrade might have been required. A different USB 3.0 card did not work, however.

Hopefully this is the last time that I see the displays all lined up like that. The space modeling software is coming along and soon it will be possible to model a space with a (relatively) simple procedural definition file. Potentially this could be texture mapped from a 3D scan of rooms but the simplified models generated procedurally with simple textures might well be good enough. Then it will be possible to position versions of these displays (and lots of other things) in the correct rooms.

XRView is intended to be runnable both on Windows MR headsets (I am using the Samsung Odyssey as it has a good display and built-in audio) and on HoloLens. Clearly, the VR and AR modes have to be completely different. In VR, you navigate and interact with the motion controllers and see the modeled space, whereas in AR you navigate by walking around, interact using the clicker and don't see the modeled space directly. However, the modeled space will still be there and will be used instead of the spatially mapped surfaces that the HoloLens might normally use. This means that objects placed in the model by a VR user will appear to AR users correctly positioned, and vice versa. One key advantage of using the modeled space rather than the dynamically mapped space generated by the HoloLens itself is that it is easy to add context to the surfaces using the procedural model language. Another is the ability to interwork with non-HoloLens AR headsets that can share the HoloLens spatial map data. The procedural model becomes a platform-independent spatial mapping that “just” leaves the problem of spatial synchronization to the individual headsets.
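
As a minimal sketch of how the two modes might be switched at runtime, Unity's HolographicSettings.IsDisplayOpaque can distinguish an opaque VR display from the HoloLens' see-through display. The "SpaceModel" tag here is a hypothetical convention for marking the modeled geometry, not something from the project:

```csharp
// Minimal sketch: hide the modeled geometry on a see-through (AR) display
// while keeping its colliders active so placed objects still line up.
// The "SpaceModel" tag is a hypothetical convention, not the project's.
using UnityEngine;
using UnityEngine.XR.WSA;

public class SpaceModelVisibility : MonoBehaviour
{
    void Start()
    {
        // HoloLens displays are transparent; Odyssey-style VR displays are opaque
        bool seeThrough = !HolographicSettings.IsDisplayOpaque;

        foreach (GameObject part in GameObject.FindGameObjectsWithTag("SpaceModel"))
            foreach (Renderer r in part.GetComponentsInChildren<Renderer>())
                r.enabled = !seeThrough;   // invisible in AR, visible in VR
    }
}
```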

I am sure that there will be some fun challenges in getting spatial synchronization but that’s something for later.

Using Unity and Manifold with Android devices to visualize sentient spaces

This may not look impressive to you (or my wife, as it turns out) but it has a lot of promise for the future. Following on from 3DView, there's now an Android version called (shockingly) AndroidView, which is essentially the same thing running on an Android phone. The screen capture above shows the current basic setup displaying sensor data. Since Unity is basically portable across platforms, the main challenge was integrating with Manifold to get at the sensor data being generated by ZeroSensors in an rt-ai Edge stream processing network.

I did actually have a Java implementation of a Manifold client from previous work – the challenge was integrating it with Unity. This meant building the client into an aar file and then using Unity’s AndroidJavaObject to pass data across the interface. Now that I understand how it works, it really is quite powerful and I was able to do everything needed for this application.
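
As a rough sketch of the mechanism (the Java class and method names here are placeholders, not the real Manifold client API), the Unity side looks something like this:

```csharp
// Hypothetical sketch of bridging to a Java client packaged in an .aar.
// com.example.ManifoldClient, connect() and getJsonData() are invented
// names for illustration, not the actual Manifold API.
using UnityEngine;

public class ManifoldBridge : MonoBehaviour
{
    private AndroidJavaObject manifoldClient;

    void Start()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // Instantiate the Java client class bundled in the .aar
        manifoldClient = new AndroidJavaObject("com.example.ManifoldClient");
        manifoldClient.Call("connect", "endpoint-uri");
#endif
    }

    void Update()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // Poll the Java side for new sensor data as a JSON string
        string json = manifoldClient.Call<string>("getJsonData");
        if (!string.IsNullOrEmpty(json))
        {
            // ...parse and update the sensor displays...
        }
#endif
    }
}
```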

There are going to be more versions of the viewer. For example, rtXRView, which is designed to run on Windows MR headsets, is in the works. The way I like to structure this is to have separate Unity projects for each target and then move common stuff between them via Unity’s package system. With a bit of discipline, this works quite well. The individual projects can then have any special libraries (such as MixedRealityToolkit), special cameras, input processing etc. without getting too cute.

Once the basic platform work is done, it’s back to sorting out the modeling of the sentient space and the positioning of virtual objects within that space. Multi-user collaboration and persistent sentient space configuration are going to require a new Manifold app, to be called SpaceServer. Manifold is ideal for coordinating real-time changes using its natural multicast capability. For Unity reasons, I may integrate a webserver into SpaceServer so that assets can be dynamically loaded using standard Unity functions. This supports the idea that a new user walking into a sentient space can download all necessary assets and configuration using a standard app. Still, that’s all a bit in the future.
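
The dynamic loading piece could use Unity's standard asset bundle machinery. A hedged sketch, where the SpaceServer URL and bundle contents are invented for illustration:

```csharp
// Hypothetical sketch of SpaceServer asset loading via standard Unity
// functions. The URL and asset names are placeholders.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class SpaceAssetLoader : MonoBehaviour
{
    IEnumerator Start()
    {
        // SpaceServer would serve asset bundles over plain HTTP
        string url = "http://spaceserver.local/bundles/room-assets";
        using (UnityWebRequest req = UnityWebRequestAssetBundle.GetAssetBundle(url))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError)
                yield break;

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);
            // Instantiate whatever the space configuration calls for
            GameObject prefab = bundle.LoadAsset<GameObject>("displayPanel");
            Instantiate(prefab);
        }
    }
}
```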

PuncturedPlane4 – procedural meshes with elliptical cutouts

Not content with rectangular cutouts, I thought it would be a fun exercise to generate elliptical cutouts. The result is shown above. It works very nicely, except that the rectangle bounding the ellipse always gets filled in, which causes problems if it overlaps other things. Once again, not perfect but not bad either. This is the textured version:

Code is here.
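
At the heart of this is presumably nothing more than the standard point-in-ellipse check; a minimal sketch (parameter names are illustrative):

```csharp
// Standard point-in-ellipse test: (dx/a)^2 + (dy/b)^2 <= 1.
// (cx, cy) is the ellipse centre; a and b are the semi-axes.
static bool InsideEllipse(float x, float y, float cx, float cy, float a, float b)
{
    float dx = (x - cx) / a;
    float dy = (y - cy) / b;
    return dx * dx + dy * dy <= 1.0f;
}
```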

PuncturedPlane3: optimized punctured procedural mesh generation

Once I had stopped congratulating myself for reducing the generated triangle count in the previous post, I realized that I still needed to trim the vertex, normal and uv arrays down to include only the vertices used in the triangle array. This is not completely trivial, as the indices of the vertices change, which means that the triangle array indices have to be remapped. Nothing too tragic though. Anyway, the result is shown in the screen capture above, using a wireframe shader to illustrate the triangles generated. The vertex count was reduced from 150801 to 34 in this particular case.
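
A minimal sketch of that trimming step (not the project code itself): walk the triangle array, copy each referenced vertex across once, and rewrite the indices as you go.

```csharp
// Keep only vertices referenced by the triangle array and remap the
// triangle indices to the compacted arrays.
using System.Collections.Generic;
using UnityEngine;

static void TrimMesh(ref Vector3[] vertices, ref Vector3[] normals,
                     ref Vector2[] uvs, int[] triangles)
{
    var remap = new Dictionary<int, int>();
    var newVerts = new List<Vector3>();
    var newNormals = new List<Vector3>();
    var newUvs = new List<Vector2>();

    for (int i = 0; i < triangles.Length; i++)
    {
        int oldIndex = triangles[i];
        int newIndex;
        if (!remap.TryGetValue(oldIndex, out newIndex))
        {
            // First time this vertex is referenced: copy it across
            newIndex = newVerts.Count;
            remap[oldIndex] = newIndex;
            newVerts.Add(vertices[oldIndex]);
            newNormals.Add(normals[oldIndex]);
            newUvs.Add(uvs[oldIndex]);
        }
        triangles[i] = newIndex;   // rewrite the triangle index in place
    }

    vertices = newVerts.ToArray();
    normals = newNormals.ToArray();
    uvs = newUvs.ToArray();
}
```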

Code is here. It’s clearly not completely optimal, as it misses the fact that it could have used two big triangles at the top, just as it did at the bottom. It could be made much smarter and in fact get rid of the step quantization entirely. However, this algorithm is easy to understand and good enough for my purposes.

Creating a procedural mesh with rectangular cutouts in Unity

As part of my ongoing work to create sentient spaces, I’d like to be able to build a procedural model of the sentient space in Unity. The idea is that the model is pretty simple – walls, floors, ceilings, windows and doors are about the extent of my ambition right now. I realized that doors and windows really required planes with rectangular cutouts and I thought it would be fun to try this from first principles. The screen capture above shows the result and it seems to work quite nicely.
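
The first-principles approach presumably amounts to tessellating the plane as a regular grid and skipping any cell that falls inside a cutout. A hedged sketch, where the grid layout and winding are assumptions rather than the project code:

```csharp
// Emit two triangles per grid cell unless the cell falls inside a cutout
// rectangle. Vertices are assumed laid out row-major, (cellsX + 1) per row.
using System.Collections.Generic;
using UnityEngine;

static int[] BuildTriangles(int cellsX, int cellsY, Rect[] cutouts, float cellSize)
{
    var tris = new List<int>();
    for (int y = 0; y < cellsY; y++)
    {
        for (int x = 0; x < cellsX; x++)
        {
            // Skip cells whose centre lies inside any cutout
            var centre = new Vector2((x + 0.5f) * cellSize, (y + 0.5f) * cellSize);
            bool punched = false;
            foreach (Rect r in cutouts)
                if (r.Contains(centre)) { punched = true; break; }
            if (punched) continue;

            // Indices of the cell's corner vertices
            int i0 = y * (cellsX + 1) + x;   // bottom-left
            int i1 = i0 + 1;                 // bottom-right
            int i2 = i0 + cellsX + 1;        // top-left
            int i3 = i2 + 1;                 // top-right
            tris.AddRange(new[] { i0, i2, i1, i1, i2, i3 });
        }
    }
    return tris.ToArray();
}
```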

Procedurally generating the model means that it will be quite easy to specify the sentient space: it’s just a case of measuring each room and then entering the parameters into a file. The Unity app can then read the file on startup in order to generate the model. In principle, this model definition could come from a cloud source which also supplies materials and other configuration data. This would lead to a standard Unity app that could be used in any sentient space. The model itself is really targeted at VR users of the sentient space. AR users do not need the model, of course, as they can see the real thing. In their case, they would instead download the data for persistent objects in the sentient space when they first enter it.
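
The post doesn't pin down the file format, but the startup flow might look something like this hypothetical sketch (the format and fields are invented for illustration):

```csharp
// Hypothetical sketch of startup model generation; the actual definition
// file format is not described, so the fields here (name, width, depth,
// height) are invented.
using System.IO;
using UnityEngine;

public class SpaceModelLoader : MonoBehaviour
{
    void Start()
    {
        string path = Path.Combine(Application.streamingAssetsPath, "space.def");
        foreach (string line in File.ReadAllLines(path))
        {
            // e.g. "livingRoom 5.2 4.0 2.4"
            string[] parts = line.Split(' ');
            if (parts.Length == 4)
                BuildRoom(parts[0], float.Parse(parts[1]),
                          float.Parse(parts[2]), float.Parse(parts[3]));
        }
    }

    void BuildRoom(string name, float width, float depth, float height)
    {
        // ...generate wall, floor and ceiling meshes for this room...
    }
}
```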

The mesh generated by this code is hardly efficient, however, as it creates a lot of triangles. The next step for this code is to grow the individual triangles as much as possible to keep the triangle count down and allow the spatial resolution to be increased as much as desired without significant impact.

The Unity project (PuncturedPlane1) is available here.