
Sources and types of 3D data

Martina Trognitz (IANUS, Deutsches Archäologisches Institut – DAI), Kieron Niven (Archaeology Data Service), and Valentijn Gilissen (Data Archiving and Networked Services – DANS), with additional contributions from Ruth Beusing (DAI), Bruno Fanini (CNR), Kate Fernie (2Culture Associates), Roberto Scopigno (CNR), Seta Stuhec (OEAW), and Benjamin Štular (ZRC-SAZU). Archaeology Data Service / Digital Antiquity (2016), Guides to Good Practice.

As described in section 1, 3D models can be the end result of a workflow involving a variety of different data acquisition techniques, including scanning and image-based modelling. These techniques are described in detail in the 3D-ICONS Guidelines (2014) and in the project’s Final Report on Post-processing (De Luca 2014). From an archiving and preservation perspective, they are also covered in the Laser Scanning, Photogrammetry, and Structured Light Scanning guides and case studies. Some 3D models may also involve an element of Computer Aided Design (CAD) within their workflow, and this is discussed with regard to archiving in the CAD Guide to Good Practice. While this guide will not go into detail on individual acquisition techniques, it is important to understand that workflows resulting in 3D models will invariably incorporate a number of acquisition and processing techniques, and that the data arising from these stages should be dealt with according to the relevant guidelines.

3D data and model types

McHenry and Bajcsy (2008) break down the elements of 3D models into three main categories: geometry, appearance, and scene information. Additionally, data relating to animation or interaction can be stored within certain models. From these properties the visualisation is computed through a procedure called rendering, which results in static raster graphics, videos, or interactive models.

Geometry

Within geometry, McHenry and Bajcsy identify four general methods used to describe the shape of a 3D model: vertex-based wire-frame models (also called triangle meshes), parametric representations using mathematically described curves and surfaces (Non-Uniform Rational B-Splines/NURBS), geometric solids (Constructive Solid Geometry/CSG), and boundary representations (B-reps).

The most common type of 3D model consists of vertices (three-dimensional points) which form the corners of polygons, commonly created through the subdivision of the surface into triangular patches or quadrilateral faces (see figure 1). A vertex in a 3D model is described by its position along the x-, y-, and z-axes of a Cartesian coordinate system, in which the z-axis usually denotes the depth or, less frequently, the height of the model. Some applications and formats support the use and storage of real-world coordinate systems whereas others use arbitrary systems. Models that are represented only by vertices and the connecting edges are called wire-frame models or meshes, whereas models consisting of only unconnected vertices are known as point clouds (figure 2). Point clouds are commonly generated by techniques such as 3D laser scanning and are usually further processed to form a mesh model (3D-ICONS 2014, 18-21). Wire-frame mesh models can be rendered easily but lack detailed representation of convex or concave surfaces, and the sharp edges of the polygons are always visible on closer examination. So-called shading algorithms can render them with a smoother, more even appearance, although the polygonal origin often remains visible in the contour of the object. Another way to create a smoother surface is to increase the number of polygons, although this results in a corresponding increase in file size.

Figure 1: The yellow marked vertices describe the highlighted triangle in the 3D model
Figure 2: The Stanford Bunny as wire frame (left) and point cloud model (right)
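
To make these structures concrete, the sketch below (Python with numpy; all names illustrative) holds a small triangle mesh as a vertex array plus a face index array; dropping the connectivity leaves a point cloud, and the wire-frame edges can be derived from the faces.

```python
import numpy as np

# A minimal triangle mesh: a unit square in the XY plane split into
# two triangular faces. Vertices are rows of (x, y, z) coordinates;
# each face stores three indices into the vertex array.
vertices = np.array([
    [0.0, 0.0, 0.0],   # 0
    [1.0, 0.0, 0.0],   # 1
    [1.0, 1.0, 0.0],   # 2
    [0.0, 1.0, 0.0],   # 3
])
faces = np.array([
    [0, 1, 2],         # first triangle
    [0, 2, 3],         # second triangle
])

# A point cloud is just the vertex array with no connectivity;
# dropping `faces` turns the mesh into a point cloud.
point_cloud = vertices.copy()

# The edges of the wire-frame can be derived from the faces:
edges = set()
for f in faces:
    for i in range(3):
        a, b = int(f[i]), int(f[(i + 1) % 3])
        edges.add((min(a, b), max(a, b)))
print(sorted(edges))  # [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
```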

Additionally, many curves and surfaces can be described mathematically using only a few parameters, giving a smoother surface. In 3D graphics this parametric representation is often achieved with NURBS (Non-Uniform Rational B-Splines), where the use of mathematically described curves and surfaces allows scalability without loss of detail. During data migration, if parametric representation is not supported by the target 3D file format, the model has to be converted to a wire-frame model, losing information about the surface structure. The degree of information loss is comparable to converting vector graphics to raster graphics.
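
The effect of such a conversion can be sketched directly: below, a parametric cylinder (Python/numpy; the parameters and function name are illustrative) is sampled into a triangle mesh, and any detail finer than the chosen sampling density is lost for good.

```python
import numpy as np

def tessellate_cylinder(radius=1.0, height=2.0, n=12):
    """Convert a parametric cylinder surface into a triangle mesh.

    The exact parametric form (angle u, height v) is sampled at n
    steps around the circumference; detail beyond the sampling
    density is discarded, much like a vector-to-raster conversion.
    """
    us = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    vertices, faces = [], []
    for u in us:
        vertices.append((radius * np.cos(u), radius * np.sin(u), 0.0))
        vertices.append((radius * np.cos(u), radius * np.sin(u), height))
    for i in range(n):
        a, b = 2 * i, 2 * i + 1                    # bottom/top of this column
        c, d = 2 * ((i + 1) % n), 2 * ((i + 1) % n) + 1
        faces.append((a, c, b))
        faces.append((b, c, d))
    return np.array(vertices), np.array(faces)

# A coarser sampling gives a smaller but visibly faceted model:
v_fine, f_fine = tessellate_cylinder(n=64)
v_coarse, f_coarse = tessellate_cylinder(n=8)
print(len(f_fine), len(f_coarse))  # 128 vs 16 triangles
```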

In addition to vertices, surfaces, or curves, 3D models can also be constructed using the union, difference, or intersection of simple geometric solids, a process called Constructive Solid Geometry (CSG). A traffic cone, for example, can be built through the union of a cone and a flat cuboid. CSG requires the storage of the individual geometric solids as well as their associated operations and transformations in order to facilitate subsequent editing of the model. CAD file formats in particular support these properties. A conversion from formats using CSG to those without will restrict the editability of the model, since it will only be possible to alter the model’s polygons and vertices, and a reconversion from a polygon format back to a CSG format may not be trivial.
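
One convenient way to sketch CSG in code is with signed distance functions, under which the set operations become simple min/max combinations. The example below is illustrative only and is not how any particular CAD kernel represents solids.

```python
import numpy as np

# Constructive Solid Geometry sketched with signed distance functions
# (negative inside the solid, positive outside). Set operations map to
# min/max combinations of the distance fields.

def sphere(center, radius):
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(np.asarray(p, dtype=float) - c) - radius

def union(a, b):         return lambda p: min(a(p), b(p))
def intersection(a, b):  return lambda p: max(a(p), b(p))
def difference(a, b):    return lambda p: max(a(p), -b(p))

# A hollow shape: a large sphere with a smaller one subtracted.
solid = difference(sphere((0, 0, 0), 1.0), sphere((0.5, 0, 0), 0.5))

print(solid((0.5, 0.0, 0.0)) > 0)   # True: inside the cut-out cavity
print(solid((-0.9, 0.0, 0.0)) < 0)  # True: inside the remaining shell
```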

A further way to store 3D geometry in the field of CAD applications is boundary representation (B-rep). B-rep models describe 3D objects indirectly, via their bounding surfaces. They can represent complicated objects but, as a result, the data structure can be complex and memory-intensive.
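
To make the distinction from plain meshes visible, the sketch below (Python, with hypothetical class names) stores the topology explicitly: faces are bounded by ordered loops of edges, and edges by vertices. Real B-rep kernels additionally store the surface geometry, orientation flags, and adjacency information.

```python
from dataclasses import dataclass, field

# A deliberately minimal boundary representation: topology (which
# faces are bounded by which edges, which edges by which vertices)
# is stored alongside the vertex geometry.

@dataclass
class Vertex:
    x: float
    y: float
    z: float

@dataclass
class Edge:
    start: Vertex
    end: Vertex

@dataclass
class Face:
    boundary: list    # ordered loop of Edge objects enclosing the face

@dataclass
class Solid:
    faces: list = field(default_factory=list)

# One triangular face of a solid:
v0, v1, v2 = Vertex(0, 0, 0), Vertex(1, 0, 0), Vertex(0, 1, 0)
tri = Face(boundary=[Edge(v0, v1), Edge(v1, v2), Edge(v2, v0)])
body = Solid(faces=[tri])
```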

Appearance (surface properties)

In addition to 3D model geometry, surface properties also have to be stored in order to describe the complete appearance of a model. The combination of colour information, textures, and material properties can create a highly realistic 3D model.

At a very basic level, point cloud datasets from photogrammetry or laser scanning can have intensity or colour (RGB) values associated with each vertex. The colour information contained in a photographic sampling can also be projected back onto the point cloud or triangle mesh by assigning a single colour value (either a single RGB pixel value or an intelligent interpolation of multiple pixels projected onto the same surface point) to each vertex of the 3D model (Callieri et al. 2011). In other 3D models, colour information can also be associated with faces and/or objects, although this produces representations that are insufficiently detailed in terms of colour encoding. Alternatively, a texture image can be applied and wrapped around a model using texture coordinates, e.g. an image of wood grain can be applied to a cylinder to create a realistic impression of a digital tree trunk. This approach works well when assigning a synthetic texture (representing a given material) to a 3D object. When, conversely, the real colour (usually sampled with photographs) needs to be projected back onto a 3D model, a mesh parameterization method is first applied to the triangulated surface, producing a transformation that links each point on the 3D surface to a point in the 2D texture space. A texture can then be resampled from the input set of images at the resolution required by the specific application and used at the rendering stage (Callieri et al. 2011).
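
The two basic approaches, per-vertex colour and texture coordinates, can be sketched as follows (Python/numpy; the random image stands in for a real texture, and the nearest-neighbour lookup replaces the interpolation a real renderer would perform).

```python
import numpy as np

# (1) Per-vertex colour: one RGB triple per vertex, interpolated
# across each triangle by the renderer.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
vertex_colours = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]])

# (2) Texture mapping: each vertex carries (u, v) coordinates into a
# 2D image; the colour at any surface point is sampled from the image.
texture = np.random.randint(0, 256, size=(64, 64, 3))   # stand-in image
uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])     # per-vertex UVs

def sample_nearest(texture, u, v):
    """Nearest-neighbour texture lookup (real renderers interpolate)."""
    h, w, _ = texture.shape
    x = min(int(u * (w - 1)), w - 1)
    y = min(int(v * (h - 1)), h - 1)
    return texture[y, x]

print(sample_nearest(texture, 0.5, 0.5))  # RGB at the texture's centre
```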

In addition to textures, materials can also be modelled in order to assign the correct reflectance properties to an object; a wooden table, for example, has different reflective properties from a glass table. This is done using parameters that describe the reflection and refraction of diffuse light, specular light, ambient light, transparency, emissive light, etc.
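
As an illustration of such parameters in use, the sketch below implements the classic Phong reflection model in Python/numpy; the "wood-like" and "glass-like" material values are invented for contrast and not drawn from any standard.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir, material):
    """Classic Phong reflection: ambient + diffuse + specular terms.

    `material` holds per-material parameters of the kind described
    above; scalar intensities are used here instead of RGB channels.
    """
    n, l, v = map(normalize, (normal, light_dir, view_dir))
    r = 2.0 * np.dot(n, l) * n - l               # reflected light direction
    ambient  = material["ambient"]
    diffuse  = material["diffuse"] * max(np.dot(n, l), 0.0)
    specular = material["specular"] * max(np.dot(r, v), 0.0) ** material["shininess"]
    return ambient + diffuse + specular

wood_like  = {"ambient": 0.10, "diffuse": 0.8, "specular": 0.1, "shininess": 8}
glass_like = {"ambient": 0.05, "diffuse": 0.2, "specular": 0.9, "shininess": 96}

n = np.array([0.0, 0.0, 1.0])     # surface normal
l = np.array([0.3, 0.3, 1.0])     # direction towards the light
v = np.array([0.0, 0.0, 1.0])     # direction towards the viewer
print(phong(n, l, v, wood_like), phong(n, l, v, glass_like))
```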

Additional techniques used to modify the appearance of a model include the use of bump, normal, and transparency maps. Using a texture, these maps store values (e.g. height, normals, transparency) which are then applied to the underlying model, changing the rendering of shadows, transparency, and reflections and simulating features such as surface bumpiness beyond that which exists in the geometry (figure 3). Like textures, bump, normal, and transparency maps require a parameterization to be defined on the surface of the 3D model.

Figure 3: A texture and its application to a 3D model (left). The model (right) has had the IANUS logo applied to it using bump mapping. The surface appears to have been altered in a three-dimensional way despite the geometry of the model remaining the same
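
The principle can be sketched in a few lines: finite differences over a bump (height) map yield perturbed normals, so the lighting responds to detail that does not exist in the geometry. The logo-like raised square below is purely illustrative.

```python
import numpy as np

def normals_from_height_map(height):
    """Derive per-texel surface normals from a bump (height) map.

    Finite differences in the two texture directions give the local
    slope; a renderer perturbs the geometric normal with these values,
    so the surface *appears* bumpy while the mesh itself is unchanged.
    """
    dy, dx = np.gradient(height.astype(float))
    normals = np.dstack((-dx, -dy, np.ones_like(height, dtype=float)))
    norms = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / norms

bump = np.zeros((8, 8))
bump[3:5, 3:5] = 1.0                  # a small raised square, e.g. a logo
n = normals_from_height_map(bump)
print(n[4, 2])                        # tilted normal at the square's edge
```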

Surface properties are implemented by ‘shaders’ during the rendering process. Shaders are essentially sets of instructions that describe how each vertex or pixel should be displayed; by using different algorithms and considering the various light sources, a shader can give the impression of various surface properties, e.g. a smooth surface (see figure 4 below). Modern approaches typically target a set of specific physical properties, such as albedo, roughness, metallic, and emissive, in order to realistically represent complex materials (an approach often referred to as PBR, or physically based rendering).

Figure 4: The model on the left has been rendered without a shader, making the individual polygons visible. A smoothing shader applied to the same model (right) gives the impression of a smooth, even surface
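
The difference between the two renderings in figure 4 comes down to which normals the shader uses: one normal per face makes every polygon visible, whereas averaging the normals of adjacent faces at each vertex lets the interpolated lighting suggest a smooth surface. A minimal Python/numpy sketch, with illustrative names:

```python
import numpy as np

def face_normals(vertices, faces):
    """One unit normal per face: the basis of flat shading."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def smooth_vertex_normals(vertices, faces):
    """Average adjacent face normals per vertex: smooth shading."""
    fn = face_normals(vertices, faces)
    vn = np.zeros_like(vertices)
    for f, n in zip(faces, fn):
        vn[f] += n                     # accumulate adjacent face normals
    return vn / np.linalg.norm(vn, axis=1, keepdims=True)

# Two faces of a tetrahedron: the shared vertices get averaged normals.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 3, 1]])
print(smooth_vertex_normals(vertices, faces))
```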

Scene information: light sources and camera parameters

The way a 3D model is displayed depends on the scene settings and elements, including the size of the viewport, the positioning of the model, and the position of the camera and light sources. The viewport is comparable to a stage, defining a frame for the model in terms of height, width, and depth. For the camera, not only the position but also the viewing direction needs to be stored.
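
As a sketch of what storing the camera amounts to, the function below builds a standard "look-at" view matrix from a camera position and a viewing target (Python/numpy; the construction is generic, not tied to any particular 3D format).

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a view matrix from camera position and viewing direction.

    `eye` is the stored camera position and `target` the point it
    looks at; together they encode the viewing direction.
    """
    eye, target, up = (np.asarray(x, dtype=float) for x in (eye, target, up))
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)

    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye      # move the world opposite the camera
    return view

# Camera two units back on the z-axis, looking at the origin:
print(look_at(eye=(0, 0, 2), target=(0, 0, 0)))
```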

If rendered without light, only a black image of the 3D model is produced, so light settings, i.e. properly positioned and configured light sources, are necessary to illuminate a 3D scene. Without this information it is up to the user to determine new settings, although these may be automatically pre-set by the software.

A scene can contain one or multiple models, which in turn can consist of any number of object groups. Grouping is necessary when a model consists of several individual parts or objects. When a group is set, the positioning of its parts also has to be stored; this can be described by transformations such as shifting, rotating, or scaling objects.
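
Such grouping is commonly held as a scene graph: each node stores a local transform and its children, and world placements are obtained by composing transforms from the root down. A minimal sketch, with illustrative names only:

```python
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def scaling(s):
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

class Node:
    """A scene-graph node: a local transform plus child nodes."""
    def __init__(self, transform=None, children=()):
        self.transform = np.eye(4) if transform is None else transform
        self.children = list(children)

    def world_transforms(self, parent=np.eye(4)):
        """Yield the composed world transform of this node and its subtree."""
        world = parent @ self.transform
        yield world
        for child in self.children:
            yield from child.world_transforms(world)

# A group shifted by (5, 0, 0) containing a half-scale part: moving the
# group moves the part with it.
part = Node(transform=scaling(0.5))
group = Node(transform=translation(5, 0, 0), children=[part])
for m in group.world_transforms():
    print(m[:3, 3], m[0, 0])   # world positions and scale factors
```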

3D scanning devices and image-based techniques produce huge amounts of data, and it is quite easy to produce 3D models so complex that they cannot fit in RAM or be displayed in real time. This complexity is usually reduced in a controlled manner, either by producing discrete Levels of Detail (LOD) or multiresolution representations. For large and complex scenes it can be useful to store different levels of detail for individual objects to increase the efficiency of the rendering phase (di Benedetto et al. 2014). Adopting an LOD representation means that each object in the scene is represented by several different models (in practice three to five, each associated with an interval of distances from the viewer); the level of detail of each model depends on its number of polygons. An object in the foreground of the scene, close to the camera, would be displayed at a high level of detail, whereas trees or other objects in the background would be displayed at a lower resolution. LOD representations can easily be built using geometric simplification algorithms provided in many geometry processing tools (e.g. the simplification features of MeshLab[1]). By definition, LOD representations are characterised by a small number of different models. Conversely, multiresolution approaches allow a very large number of different models to be produced from a single representation in real time, adopting view-dependent criteria. Several multiresolution representation schemes have been defined in the literature (di Benedetto et al. 2014).

Figure 5: Effects of different levels of detail applied to the Stanford Bunny 3D model (left to right: 69451, 3851 and 948 polygons)
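
A discrete LOD scheme reduces, in code, to a distance-based lookup. In the sketch below the polygon counts echo figure 5, while the model names and distance thresholds are purely illustrative.

```python
import numpy as np

# Discrete LOD selection: each object carries a handful of models at
# decreasing polygon counts, and the renderer picks one per frame
# based on the object's distance from the camera.
LODS = [
    (10.0,   "bunny_69451_polys"),   # near: full detail
    (50.0,   "bunny_3851_polys"),    # mid-range: simplified
    (np.inf, "bunny_948_polys"),     # far: coarsest model
]

def select_lod(camera_pos, object_pos):
    distance = np.linalg.norm(np.asarray(object_pos, dtype=float) -
                              np.asarray(camera_pos, dtype=float))
    for max_distance, model in LODS:
        if distance <= max_distance:
            return model

print(select_lod((0, 0, 0), (3, 0, 0)))    # bunny_69451_polys
print(select_lod((0, 0, 0), (80, 0, 0)))   # bunny_948_polys
```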

Animation and interaction

Animation and interaction require the storage of additional data and need to be considered both when assessing archiving formats and when deciding on levels of metadata and documentation. Where animations are exported to video formats, the guidelines for Digital Video should be followed.

[1] MeshLab http://meshlab.sourceforge.net/