6. Displaying the geometry in 3D
We have learned about iterating through the geometry in a console app. But where is the fun if we cannot see the results? We will therefore develop an actual 3D view, to display the mesh that IfcOpenShell generates for us. This takes a few additional steps and requires a 3D library. Since we are already working with Qt and Python, we will use the Qt3D module, which is included in the toolkit. It is reasonably complete and designed for modern graphics cards and modern shader instructions.
Alas, it is not as widely used, so most examples come from the Qt documentation and are often written in C++ or QML.
The main function is the same as in the earlier examples; the only difference is the name of the widget we call, so we won't repeat it here.
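For reference anyway, a minimal version might look like the sketch below; the window title, size and the command-line argument are our own assumptions, not part of the original script.
import sys
from PyQt5.QtWidgets import QApplication

if __name__ == '__main__':
    app = QApplication(sys.argv)
    view = View3D()                  # the widget we develop below
    view.setWindowTitle("IFC Viewer")
    view.resize(800, 600)
    view.show()
    view.load_file(sys.argv[1])      # e.g. pass the IFC file on the command line
    sys.exit(app.exec_())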
We have to add a few libraries to make the Qt3D system available. We also include the time library, to help us measure how long certain steps take.
import sys
import time
import os.path
from PyQt5 import QtCore
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.Qt3DCore import *
from PyQt5.QtWidgets import *
from PyQt5.Qt3DExtras import *
from PyQt5.Qt3DRender import *
import ifcopenshell
import ifcopenshell.geom
Start from the main view widget. We derive from the most generic QWidget, as we will embed our 3D view into this widget. This also has the advantage that we can reuse it in different places later on, without special tinkering.
class View3D(QWidget):
    """
    3D View Widget
    - V1 = IFC File Loading, geometry parsing & basic navigation
    """
    def __init__(self):
        QWidget.__init__(self)
        # variables
        self.ifc_file = None
        self.start = time.time()
The actual 3D window is created immediately after this. We create a local variable to keep this 3D view (self.view = Qt3DWindow()), give it a default background color and, very importantly, place it in a window container. This is essential, as it ensures that all the graphics switching, view resizing and parenting are dealt with for us.
# 3D View
self.view = Qt3DWindow()
self.view.defaultFrameGraph().setClearColor(QColor("#4466ff"))
self.container = self.createWindowContainer(self.view)
self.container.setMinimumSize(QtCore.QSize(200, 100))
self.container.setFocusPolicy(Qt.NoFocus)
The next step is setting up our scenegraph. This is a tree-like structure of QEntity nodes, which can contain 3D graphics, but also transformations, shaders, lighting and a whole lot more. The approach used here is called Entity Component: rather than having countless specialised classes, we aggregate or compose everything using generic nodes and add components and attributes to them. We'll see some of them further on.
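As a small illustration of this composition style (a hypothetical snippet using Qt3D's ready-made sphere mesh, not part of our viewer):
# an entity is just an empty node; its appearance comes from its components
ball = QEntity(root_entity)          # 'root_entity' assumed to be the scene root
ball.addComponent(QSphereMesh())     # geometry component (from Qt3DExtras)
ball.addComponent(QPhongMaterial())  # material component, defines the shader
ball.addComponent(QTransform())      # transformation component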
The following code, part of our __init__, creates a single entity, which will be the root of our scene. We also create a material. It uses vertex colors, as we will receive a color for each vertex. Actually, we just get an index for each vertex, but we will deal with that later. We add this material as a component to the root node and set it as shareable, so multiple geometries can reuse it. This makes the whole system more efficient, as switching between materials in a graphics view is rather expensive.
Don't forget to set the root entity (self.view.setRootEntity(self.root)).
# Prepare our scene
self.root = QEntity()
self.material = QPerVertexColorMaterial()
self.root.addComponent(self.material)
self.material.setShareable(True)
self.view.setRootEntity(self.root)
The next step is creating a QHBoxLayout and adding the container widget to it. When we set it as the layout of our view, everything becomes stretchable, just like that.
# Finish GUI
layout = QHBoxLayout()
layout.addWidget(self.container)
self.setLayout(layout)
To be able to see anything, we need a camera. We add it in a separate function call.
[...]
self.material.setShareable(True)
self.initialise_camera()
self.view.setRootEntity(self.root)
[...]
And this is the function. Not too complicated: we go for a perspective projection and place the camera using a QVector3D() for the position and for the view center.
def initialise_camera(self):
    # camera
    camera = self.view.camera()
    camera.lens().setPerspectiveProjection(45.0, 16.0 / 9.0, 0.1, 1000)
    camera.setPosition(QVector3D(0, 0, 40))
    camera.setViewCenter(QVector3D(0, 0, 0))
There is no geometry available yet, but if there were, we would not be able to navigate around it: the camera is stuck in place.
There is a default camera controller which we can use: QOrbitCameraController(). It is sufficient to pan, zoom and orbit, although the default mouse button setup is (as in most 3D applications) not entirely like other programs. We also set the speed of moving and rotating and, importantly, link the camera to the controller (.setCamera()).
# for camera control
cam_controller = QOrbitCameraController(self.root)
cam_controller.setLinearSpeed(50.0)
cam_controller.setLookSpeed(180.0)
cam_controller.setCamera(camera)
This function starts from what we already learned in the console and basic GUI examples.
We start by loading a file. To make this a little more flexible, we check the local variable self.ifc_file and assume that, if it is already set, our model has been loaded. This is not valid for all situations, but will do for now. We also add time measurement code, to get some feedback on how long loading the file takes. This is printed on the console, but eventually we'll have to feed this back into the GUI, e.g., on the status bar of the main window.
def load_file(self, filename):
    if self.ifc_file is None:
        print("Importing IFC file ...")
        start = time.time()
        self.ifc_file = ifcopenshell.open(filename)
        print("Loaded in ", time.time() - start)
Immediately after loading the file, we need to get our geometrical information, just like in the geometry_minimal.py console application. This calls a separate method parse_geometry().
print("Importing IFC geometrical information ...")
self.start = time.time()
settings = ifcopenshell.geom.settings()
settings.set(settings.WELD_VERTICES, False) # false is needed to generate normals -- slower!
settings.set(settings.USE_WORLD_COORDS, True) # true = ignore transformation
self.parse_geometry(settings) # FASTER - iteration with parallel processing
print("\nFinished in ", time.time() - self.start)
The actual geometry parsing is executed in the self.parse_geometry() method, which requires only the settings. In contrast with the console version, we also pass the CPU count, which we can get from the multiprocessing module, so import that one as well. Apart from that, this first version is similar. The real work (translating the mesh information from IfcOpenShell into code for Qt3D) is done in the self.generate_rendermesh() method, which needs the shape variable.
import multiprocessing
[...]
def parse_geometry(self, settings):
    iterator = ifcopenshell.geom.iterator(settings, self.ifc_file, multiprocessing.cpu_count())
    iterator.initialize()
    while True:
        shape = iterator.get()
        try:
            self.generate_rendermesh(shape)
        except Exception:
            pass  # silently skip shapes that fail to convert
        if not iterator.next():
            break
The next variation of the code above adds some console feedback: it counts the shapes and prints their ID and the time each one took. It also skips all products of the classes IfcOpeningElement and IfcSpace, so they don't block our view. We can query the class via the shape.product attribute.
def parse_geometry(self, settings):
    iterator = ifcopenshell.geom.iterator(settings, self.ifc_file, multiprocessing.cpu_count())
    iterator.initialize()
    counter = 0
    while True:
        shape = iterator.get()
        # skip openings and spaces geometry
        if not shape.product.is_a('IfcOpeningElement') and not shape.product.is_a('IfcSpace'):
            try:
                self.generate_rendermesh(shape)
                print("Shape {0}\t[#{1}]\tin {2} seconds"
                      .format(counter, shape.id, time.time() - self.start))
            except Exception as e:
                print("Shape {0}\t[#{1}]\tERROR - {2} : {3}"
                      .format(counter, shape.id, shape.product.is_a(), e))
        counter += 1
        if not iterator.next():
            break
This is the hardest part of the script, at least for us. IfcOpenShell uses the OpenCASCADE geometry kernel, which does all the geometric interpretation for us. All we have to do with the lists of vertices, edges, faces, normals and material ids is find out how to display them in the Qt3D system. This takes quite a lot of code and there are few examples, as most of them are apparently concerned with loading pre-created 3D objects. We used the following article as inspiration, but eventually restructured it to simplify things a bit.
After we request the .geometry from the shape, we create a QGeometryRenderer. This is like a container class which is able to display geometry; we need to fill it with components and attributes. We tell it that it will have to show .Triangles and then create a new QGeometry node, which becomes the child of the mesh renderer.
def generate_rendermesh(self, shape):
    geometry = shape.geometry
    custom_mesh_renderer = QGeometryRenderer()
    custom_mesh_renderer.setPrimitiveType(QGeometryRenderer.Triangles)
    custom_geometry = QGeometry(custom_mesh_renderer)
Now we have to prepare the different attributes and data buffers.
This buffer will contain the Cartesian coordinates of our vertices, which IfcOpenShell provides in geometry.verts. Remember that this is a flat list of x, y, z values, one after the other: every three numbers represent a single vertex.
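A small illustration with made-up values:
verts = [0.0, 0.0, 0.0,   # vertex 0
         1.0, 0.0, 0.0,   # vertex 1
         0.0, 1.0, 0.0]   # vertex 2
# the coordinates of vertex i sit at verts[3*i : 3*i + 3]
x, y, z = verts[3 * 1], verts[3 * 1 + 1], verts[3 * 1 + 2]  # vertex 1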
We need a data buffer for the positions, of type .VertexBuffer, referring to the geometry we just created. There are two ways to fill the buffer.
The first uses the NumPy module, which is widely used.
import numpy as np
[...]
def generate_rendermesh(self, shape):
    [...]
    position_data_buffer = QBuffer(QBuffer.VertexBuffer, custom_geometry)
    position_data_buffer.setData(QByteArray(np.array(geometry.verts).astype(np.float32).tobytes()))
This turns the list of vertex coordinates into a NumPy array of 32-bit floats and then into a ByteArray (.tobytes()). This ensures that the graphics system can push the data directly to the graphics adapter or GPU.
Note: in many examples we found online, the arrays for points, normals and colors were merged into a single array. This is still a possibility, but the code to jump through the array becomes a little more complicated. We opted for the easy approach, as our lists are already usable as-is most of the time. To be honest: in an older version of the code, we first turned the lists into actual lists of vertices, but in the end we had to unpack them again, which was a bit silly and probably slowed down the code.
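For illustration only: had we interleaved positions and colors into one buffer (x, y, z, r, g, b per vertex), both attributes would read from the same buffer and only differ in their offsets and strides. A hypothetical sketch, using the same setters we will meet below:
# 6 floats = 24 bytes per vertex in the shared buffer
position_attribute.setByteOffset(0)      # positions start at byte 0
position_attribute.setByteStride(6 * 4)  # jump 24 bytes to the next vertex
color_attribute.setByteOffset(3 * 4)     # colors start after the 3 position floats
color_attribute.setByteStride(6 * 4)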
If you don't want to use NumPy, there is an alternative using the struct module (don't forget to import struct). It also packs the verts list into a ByteArray.
We haven't performed an in-depth timing comparison, but the speed appears to be about the same. We opted for struct as it is part of the standard library, whereas NumPy has to be installed.
position_data_buffer = QBuffer(QBuffer.VertexBuffer, custom_geometry)
position_data_buffer.setData(struct.pack('%sf' % len(geometry.verts), *geometry.verts))
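The format string simply repeats 'f' (a 32-bit float) once per value, so '%sf' % 3 becomes '3f'. A quick illustration:
import struct

data = struct.pack('3f', 1.0, 2.0, 3.0)  # three 32-bit floats
assert len(data) == 3 * 4                # 12 bytes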
Next up, we have to prepare the attribute. This tells the system how to read the ByteArray above.
- Make a QAttribute called position_attribute
- Set its type to .VertexAttribute
- Set the buffer to the position buffer we just prepared
- Set the base vertex type to float
- Set the vertex size to 3, as we have three float values per vertex
- Set the byte offset to 0, as we start at the beginning of the ByteArray
- Set the ByteStride to 12. This is the number of bytes to jump to reach the next vertex: each float takes 4 bytes (32-bit float) and we have three of them: 3 * 4 = 12
- Set the count to the number of values in the list. This is the length of the vertices array, even though the actual vertex count is a third of that.
- Set a name for the attribute, which we get from Qt3D as defaultPositionAttributeName(). Without this, it doesn't work, apparently.
- Finally, add the position attribute to our custom_geometry.
And now as Python code:
# Position Attribute
position_data_buffer = QBuffer(QBuffer.VertexBuffer, custom_geometry)
# position_data_buffer.setData(QByteArray(np.array(geometry.verts).astype(np.float32).tobytes()))
position_data_buffer.setData(struct.pack('%sf' % len(geometry.verts), *geometry.verts))
position_attribute = QAttribute()
position_attribute.setAttributeType(QAttribute.VertexAttribute)
position_attribute.setBuffer(position_data_buffer)
position_attribute.setVertexBaseType(QAttribute.Float)
position_attribute.setVertexSize(3) # 3 floats
position_attribute.setByteOffset(0) # start from first index
position_attribute.setByteStride(3 * 4) # 3 coordinates and 4 as length of float32 in bytes
position_attribute.setCount(len(geometry.verts)) # vertices
position_attribute.setName(QAttribute.defaultPositionAttributeName())
custom_geometry.addAttribute(position_attribute)
The normal attribute is very, very similar to the position attribute: exactly the same structure. But depending on our settings, we may not have normals at all, so we include a check by looking at the length of the geometry.normals list.
# Normal Attribute
if len(geometry.normals) > 0:
    normals_data_buffer = QBuffer(QBuffer.VertexBuffer, custom_geometry)
    # normals_data_buffer.setData(QByteArray(np.array(geometry.normals).astype(np.float32).tobytes()))
    normals_data_buffer.setData(struct.pack('%sf' % len(geometry.normals), *geometry.normals))
    normal_attribute = QAttribute()
    normal_attribute.setAttributeType(QAttribute.VertexAttribute)
    normal_attribute.setBuffer(normals_data_buffer)
    normal_attribute.setVertexBaseType(QAttribute.Float)
    normal_attribute.setVertexSize(3)  # 3 floats
    normal_attribute.setByteOffset(0)  # start from first index
    normal_attribute.setByteStride(3 * 4)  # 3 coordinates and 4 as length of float32 in bytes
    normal_attribute.setCount(len(geometry.normals))  # vertices
    normal_attribute.setName(QAttribute.defaultNormalAttributeName())
    custom_geometry.addAttribute(normal_attribute)
This is something else. We don't receive colors directly; we get a list of indices which refer to materials, which contain the colors. We have to prepare a list so that each vertex also has a color value. That will be three float values per vertex, so the ByteArray will have the same structure and length as the positions.
We start from a list of colors, initialised at 0.5 and made as long as the list of vertices.
# Collect the colors via the materials (1 color per vertex)
color_list = [0.5] * len(geometry.verts)
This way, our list has the right length from the start and we can replace the required color at any position. We use this approach since we don't get all the colors in the same order, and many vertices will actually refer to the same color values.
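To make the layout concrete (hypothetical values): the color of vertex i occupies color_list[3*i : 3*i + 3], so painting vertex 0 red would be:
color_list[0:3] = [1.0, 0.0, 0.0]  # red, green, blue for vertex 0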
Then we step through the material_ids list. Each position material_index gives us a material reference mat_id, which we use to look up the corresponding material in the geometry.materials list. We initialise three floats, called red, green and blue. It is possible that our index is -1, indicating that the representation shape has no material assigned; in that case, we keep the default red, green and blue. Otherwise, we collect the actual material using the mat_id we just received. This material carries its most important color in the .diffuse attribute, where red is .diffuse[0] and the others you can guess.
for material_index in range(0, len(geometry.material_ids)):
    # default color without material
    red = 0.5
    green = 1.0
    blue = 0.5
    # From the material index we get the material reference ID
    mat_id = geometry.material_ids[material_index]
    # Beware... this id can be -1 - so use a default color instead
    if mat_id > -1:
        material = geometry.materials[mat_id]
        red = material.diffuse[0]
        green = material.diffuse[1]
        blue = material.diffuse[2]
Now we have to know which vertex this color is linked to. We ask that from the .faces list, which returns our vertex index, which we then use to set the red, green and blue values of the corresponding vertex. Again, this approach is used as we cannot simply assume that all color values are returned in sequential order.
    # get the 3 related vertices for this face (three indices in the vertex array)
    for i in range(3):
        vertex = geometry.faces[material_index * 3 + i]
        color_list[vertex * 3] = red
        color_list[vertex * 3 + 1] = green
        color_list[vertex * 3 + 2] = blue
And finally, we can prepare the ByteArray containing the red, green and blue floats for all the vertices. This code is exactly the same as the one for position and normal:
# Color Attribute
color_data_buffer = QBuffer(QBuffer.VertexBuffer, custom_geometry)
# color_data_buffer.setData(QByteArray(np.array(color_list).astype(np.float32).tobytes()))
color_data_buffer.setData(struct.pack('%sf' % len(color_list), *color_list))
color_attribute = QAttribute()
color_attribute.setAttributeType(QAttribute.VertexAttribute)
color_attribute.setBuffer(color_data_buffer)
color_attribute.setVertexBaseType(QAttribute.Float)
color_attribute.setVertexSize(3) # 3 floats
color_attribute.setByteOffset(0) # start from first index
color_attribute.setByteStride(3 * 4) # 3 coordinates and 4 as length of float32 in bytes
color_attribute.setCount(len(color_list)) # colors (per vertex)
color_attribute.setName(QAttribute.defaultColorAttributeName())
custom_geometry.addAttribute(color_attribute)
This diverges a little bit. The ByteArray for the faces contains integers: each triangle refers to three vertices. Packing the bytes with struct or NumPy needs an unsigned integer rather than a float value, which simplifies this attribute a tiny bit.
# Faces Index Attribute
index_data_buffer = QBuffer(QBuffer.IndexBuffer, custom_geometry)
# index_data_buffer.setData(QByteArray(np.array(geometry.faces).astype(np.uintc).tobytes()))
index_data_buffer.setData(struct.pack("{}I".format(len(geometry.faces)), *geometry.faces))
index_attribute = QAttribute()
index_attribute.setVertexBaseType(QAttribute.UnsignedInt)
index_attribute.setAttributeType(QAttribute.IndexAttribute)
index_attribute.setBuffer(index_data_buffer)
index_attribute.setCount(len(geometry.faces))
custom_geometry.addAttribute(index_attribute)
We are almost ready. The only thing left is to assign the geometry, which has just received all the required attributes, to the renderer and tell it to start at the beginning of the lists.
# make the geometry visible with a renderer
custom_mesh_renderer.setGeometry(custom_geometry)
custom_mesh_renderer.setInstanceCount(1)
custom_mesh_renderer.setFirstVertex(0)
custom_mesh_renderer.setFirstInstance(0)
And now we are ready to place everything into our scene. Create a new QEntity which has self.root as parent. It gets the renderer (with its embedded geometry) as a component.
# add everything to the scene
custom_mesh_entity = QEntity(self.root)
custom_mesh_entity.addComponent(custom_mesh_renderer)
We also add a transformation matrix. We can leave it almost completely at its default, apart from a rotation of -90 degrees around the X-axis: like most graphics environments outside CAD or BIM, Qt3D has the convention that the Y-axis is the vertical one, not the Z-axis.
transform = QTransform()
transform.setRotationX(-90)
custom_mesh_entity.addComponent(transform)
And finally, we assign the material. Well, we have one main material for vertex colouring. It defines the shader that will be used, and that shader is fed with the RGB values from the color ByteArray. Since we defined this material earlier, all meshes share a single shader and can thus be rendered in a single pass. This is more efficient than assigning a material to each mesh individually.
custom_mesh_entity.addComponent(self.material)
There you have it. The generate_rendermesh() method was the most complex one (and the one which took the most effort to understand), especially when starting from C++ examples.
Now you can run the script and you'll get a 3D view, in color. Transparencies are not supported, alas, since the vertex color material that we used has no support for that.
See the full code at qt3d_minimal.py
Oh... while tests have shown that this works quite well, be aware that the geometry conversion may take time; depending on the type of model, even a very long time. This applies especially to highly detailed furniture objects stored as IfcFacetedBrep, since their conversion via OpenCASCADE is not the most efficient.
A big thanks to the IfcOpenShell library and the many people contributing with code, but also examples.