How a VirtuaScan Asset is made

December 2016

So, 3TD has advertised something called 'VirtuaScan' assets. What exactly is a VirtuaScan asset?

I came up with the VirtuaScan concept once I started creating 3D assets using photogrammetry as a production process. At its core, a VirtuaScan asset is a 3D object or game-ready asset created within 3TD Studios using a real-world object as a base. VirtuaScan assets can be props, textures, characters, or nearly ANYTHING found in a 3D world. The ultimate goal is to build an enormous 3D library (VirtuaScan Lib) for all of my development.

I know I have stated many times over the last few blogs that photogrammetry is a simple concept: the artist takes a bunch of photos of an object and the software then creates the 3D model. That is really an oversimplification of the process. In this blog I will go over a number of issues with this process that game artists need to take into account.


Taking photos sounds simple enough. However, taking photos for accurate use within a photogrammetry pipeline takes a bit more thought. As I have explained to my intern, imagine that your eyes are the camera's lens. If your eyes can NOT see part of the object you are trying to shoot, you will get a 'blank' spot on the model you are trying to make. This means you have to focus on what you DO NOT see just as much as what you can see. Over the last few months, we have spent many hours re-shooting objects because we forgot this simple truth.
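To put some rough numbers on coverage: you can estimate how many shots one full orbit of an object needs from the camera's field of view and the frame-to-frame overlap you want. This is my own back-of-the-envelope sketch, not an official photogrammetry guideline; the 70% overlap figure is a commonly cited starting point, not a rule:

```python
import math

def shots_for_orbit(fov_deg: float, overlap: float) -> int:
    """Estimate photos needed for one 360-degree orbit of an object.

    fov_deg: camera's horizontal field of view in degrees.
    overlap: desired fractional overlap between neighboring shots (0-1).
    Each new shot only adds fov_deg * (1 - overlap) degrees of fresh coverage.
    """
    fresh_per_shot = fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / fresh_per_shot)

# A typical ~50mm-equivalent lens sees roughly 40 degrees horizontally.
# With 70% overlap between neighboring frames, one orbit works out to:
print(shots_for_orbit(40.0, 0.7))  # 30 shots
```

And remember: that is one orbit at one height. To cover what you "do not see," you repeat this at multiple heights and add detail shots of any concave or occluded areas.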


Once you get the photos shot, you add them to the photogrammetry software. I personally use Agisoft PhotoScan. This involves the user clicking through a series of workflow tasks. Again, this sounds simple. However, you also need to think ahead. There are a number of instances where you get 'extra' points in your point cloud, or 'random' polygons created from points that should have been removed.

Here at 3TD Studios we use sets of cameras that range from 14 megapixels to 20.1 megapixels. Heck, at times we even use cell phone cameras as low as 5 megapixels. This happens when we see something we think would make a great model and don't happen to have our 'pro gear' with us. Regardless of the camera, there will more than likely be 'errors' present when you try to make the model. This means some 'hand' crafting on the artist's side: we remove points from point clouds, and we hand-edit meshes by removing polygons. Below are a few simple examples concerning this process.
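To make the point-cloud cleanup idea concrete, here is a minimal sketch of one common automated approach, statistical outlier removal: a point whose nearest neighbors are unusually far away is probably a stray. The thresholds and the brute-force distance math below are my own illustration, not the filter Agisoft PhotoScan uses internally:

```python
import numpy as np

def remove_outliers(points: np.ndarray, k: int = 8, std_ratio: float = 2.0) -> np.ndarray:
    """Drop 'stray' points whose mean distance to their k nearest
    neighbors is more than std_ratio standard deviations above average.

    points: (N, 3) array. Returns the filtered (M, 3) array.
    Brute-force pairwise distances, so only suitable for small demo clouds.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Mean distance to the k nearest neighbors (column 0 is the point itself).
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn <= knn.mean() + std_ratio * knn.std()
    return points[keep]

# A tight cluster around the origin plus one stray point far away:
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (50, 3)), [[5.0, 5.0, 5.0]]])
print(len(remove_outliers(cloud)))  # 50 -- the stray point is gone
```

Even with a filter like this in the toolbox, we still eyeball every cloud by hand; automated thresholds will happily delete thin real geometry like antenna tips or rope if you trust them blindly.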


The next phase varies depending on the model. However, in all cases, the generated model is brought into a 3D modeling application. The primary goal of this phase is to clean up the model further and get it ready for 'game-ready' processing. This phase involves everything that might go wrong when converting the model from a high-res, high-poly mesh to something that a game engine can accurately use. We bring the mesh in, add its diffuse texture, and globally light it using HDR images that were produced in the same location as the real object. We also take a very CLOSE look to ensure there are no 'stand-alone' polygons. These would be viewed as dots or tiny floating objects NOT attached to the main mesh. We remove these for two reasons. Firstly, it reduces the overall triangle count. Secondly, in later stages, these 'floaters' cause issues with re-mapping, texture work, and overall mesh quality.
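One way to hunt down these 'floaters' programmatically is to split the mesh into connected components: the main mesh is one big component, and every floater is a tiny one. Below is a minimal sketch using a small union-find over shared vertices; it is my own illustration, not a feature of any particular 3D package:

```python
def split_components(faces):
    """Group triangle indices into connected components (faces are
    connected if they share a vertex), using a tiny union-find.

    faces: list of (v0, v1, v2) vertex-index triples.
    Returns a list of face-index lists, largest component first.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for v0, v1, v2 in faces:
        union(v0, v1)
        union(v1, v2)

    groups = {}
    for i, f in enumerate(faces):
        groups.setdefault(find(f[0]), []).append(i)
    return sorted(groups.values(), key=len, reverse=True)

# Two triangles sharing an edge, plus one lone 'floater' triangle:
faces = [(0, 1, 2), (1, 2, 3), (10, 11, 12)]
main, *floaters = split_components(faces)
print(main, floaters)  # [0, 1] [[2]]
```

Everything after the first (largest) component is a candidate floater; on a real scan you would review those small components before deleting, since a legitimately separate part of the object shows up the same way.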

The next part of Phase Three involves lowering the poly/tri count for the object. For this step I use MeshLab, which is available as a free download; just search for MeshLab and you will find the latest version. I do not have a 'rule of thumb' for reducing the triangle/polygon count in a model. The process for me and my team is determined by how much focus the object will get. For example, a cool statue would likely attract a player's attention, and said player would then likely walk right up to the statue to see the detail. On the other hand, a simple rock will probably NOT attract the player's eye, so we can get away with fewer details.

The last part of Phase Three involves bringing the reduced mesh BACK into the 3D application. At this point in the process you will exercise your deep understanding of 3D production. In this part of Phase Three you will CLEARLY see any issues or errors in the mesh, and NOW is the time to correct them. See the images below.

Here is a hand-corrected version of the same model. This will work fine in-game:

You will note that all the 'overlapping' polygons/triangles are gone. This model will now light correctly in the game engine.
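One common, easy-to-detect cause of overlapping polygons is duplicated faces: two triangles built on the same three vertices, which z-fight and break lighting. Here is a quick sketch for flagging those (my own illustration; real overlap cleanup in a 3D package also handles near-coincident faces, which this does not):

```python
def find_duplicate_faces(faces):
    """Return indices of faces that reuse a vertex triple already seen,
    regardless of vertex order or winding. Such duplicates render as
    z-fighting 'overlaps' and confuse lighting and baking.

    faces: list of (v0, v1, v2) vertex-index triples.
    """
    seen = {}
    dupes = []
    for i, f in enumerate(faces):
        key = tuple(sorted(f))   # same vertices, any order/winding
        if key in seen:
            dupes.append(i)
        else:
            seen[key] = i
    return dupes

faces = [(0, 1, 2), (2, 3, 0), (2, 1, 0)]  # last face duplicates the first
print(find_duplicate_faces(faces))  # [2]
```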

Our next blog will cover all of the steps for correcting the model's textures so they render accurately in a proper PBR pipeline. Stay tuned!