NVIDIA GTC 2018 Conference

NVIDIA is best known in the AEC industry for its graphics processing units (GPUs), which power workstations with the top-of-the-line graphics capabilities that AEC professionals need to work with large building and infrastructure projects. The company holds an annual GPU Technology Conference (GTC) focused on using the GPU to solve computing challenges, and while the conference is primarily about hardware, it provides a good opportunity for an AEC professional to get an idea of the latest developments in graphics-enabled design, visualization, collaboration, and virtual reality, as well as technologies such as AI (artificial intelligence), big data, and machine learning that NVIDIA has started to focus on.

The main GTC conference is held, typically in March, in Silicon Valley, where NVIDIA is headquartered, with similar events held in different global locations for the remainder of the year, similar to how Autodesk University has one main conference and multiple smaller events. I was able to attend the main GTC event this year, held towards the end of last month, and while there wasn’t much AEC-specific information to be gleaned, it was interesting to get a better understanding of the progress being made in areas such as self-driving cars, smart cities, healthcare, big data, high-performance computing, and virtual reality, powered by NVIDIA technologies and its many partners and third-party developers. It also provided an opportunity to take a deeper dive into some of the more technical aspects of rendering and visualization that most AEC professionals are not aware of.

AI Denoising

The AEC industry relies extensively on photorealistic renderings for accurate visualization of design concepts, and while there are many rendering applications used in the industry, one technical aspect familiar to those who work with these applications extensively is the problem of “noise” in an image, which makes it look somewhat grainy (Figure 1). This is especially applicable to ray tracing, an advanced rendering technique used to create photograph-quality renderings.

Most rendering applications do tackle this problem, using noise-reducing algorithms that work more intensively as the rendering progresses, so that each rendering cycle looks less grainy than the previous one. As the process continues, the rendering quality keeps improving, and the process is typically halted once an acceptable image quality has been achieved. One of the key problems in rendering so far—not just in AEC, but in other domains including medical imaging, gaming, media and entertainment, etc.—has been that reducing the noise—called “denoising”—takes an increasingly long time as the rendering progresses and the image gets more detailed. This is especially the case with the final renderings, where some lingering noise can take a significant amount of time to remove.
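To see why the last traces of noise are the slowest to remove, consider how a progressive renderer estimates each pixel by averaging random light samples: the noise (standard error of the mean) falls off only as 1/√N with the number of samples N. The following toy sketch—purely illustrative, not any renderer’s actual code—demonstrates this diminishing return:

```python
import random

# Toy illustration: estimate a pixel's brightness by averaging noisy
# random samples, as a progressive path tracer does. The residual noise
# falls off as 1/sqrt(N), so each halving of noise needs 4x the samples.

def render_pixel(true_value, num_samples, rng):
    """Average `num_samples` noisy samples around `true_value`."""
    total = 0.0
    for _ in range(num_samples):
        total += true_value + rng.uniform(-0.5, 0.5)  # simulated sample noise
    return total / num_samples

rng = random.Random(42)
for n in (16, 256, 4096):
    # Render the same pixel many times to measure how noisy the estimate is.
    estimates = [render_pixel(0.5, n, rng) for _ in range(200)]
    mean = sum(estimates) / len(estimates)
    err = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"{n:5d} samples -> residual noise (std dev) ~ {err:.4f}")
```

Going from 16 to 4,096 samples—256 times the work—cuts the noise by only a factor of about 16, which is exactly the wall that progressive renderers hit in the final, nearly-converged frames.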

A new technology is being developed to tackle this problem—it uses machine learning to train a renderer to remove noise more quickly from a scene and is called AI (artificial intelligence) denoising. While specialized noise reduction software is available, both for photography and for rendering, it is typically very expensive—for example, see this blog post describing the $20,000 DaVinci Resolve suite. NVIDIA has invested in developing a GPU-accelerated AI denoiser to dramatically reduce the time to render a high-fidelity image that is visually noiseless. This technology is now incorporated into its OptiX ray tracing engine, making it available to all rendering applications that use this engine, including NVIDIA’s own Iray rendering solution. (The denoising shown in Figure 1 was achieved using OptiX’s AI-accelerated denoiser technology.) Iray will be familiar to many in the AEC industry through the popular Iray plug-ins for applications such as 3ds Max and Rhino, which are used extensively to create highly photorealistic renderings—which means that AI-accelerated denoising will be available in these applications as well.
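The core idea behind an AI denoiser is to train a model on pairs of noisy renders and fully converged “clean” references, so it learns to predict the clean image from the noisy one. The minimal sketch below illustrates the principle with a simple least-squares filter on 1D signals—NVIDIA’s actual denoiser is a deep neural network operating on full images, so this is only a conceptual analogy:

```python
import numpy as np

# Conceptual sketch (NOT NVIDIA's OptiX denoiser): learn, from pairs of
# noisy and clean 1D "scanlines", a linear filter that maps a noisy
# 5-pixel neighborhood to the clean center pixel. The training idea is
# the same as for a real AI denoiser: show the model noisy renders
# alongside converged reference images.

rng = np.random.default_rng(0)
K = 5  # neighborhood size

def make_pair(n=200):
    clean = np.cumsum(rng.normal(0, 0.1, n))   # smooth-ish "true" signal
    noisy = clean + rng.normal(0, 0.3, n)      # simulated render noise
    return noisy, clean

def patches(noisy):
    # All overlapping K-pixel neighborhoods of the noisy signal.
    return np.stack([noisy[i:i + K] for i in range(len(noisy) - K + 1)])

# Build training data from many noisy/clean pairs.
X, y = [], []
for _ in range(50):
    noisy, clean = make_pair()
    X.append(patches(noisy))
    y.append(clean[K // 2: len(clean) - K // 2])
X, y = np.vstack(X), np.concatenate(y)

# Least-squares fit of the denoising filter weights.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the learned filter to a fresh noisy signal and compare errors.
noisy, clean = make_pair()
denoised = patches(noisy) @ w
center = clean[K // 2: len(clean) - K // 2]
print("noisy RMSE   :", np.sqrt(np.mean((noisy[K // 2: len(noisy) - K // 2] - center) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - center) ** 2)))
```

Even this trivial learned filter reduces the error on signals it has never seen; a deep network trained on thousands of real scenes (as described below, many from migenius customers) does the same at far higher fidelity, learning to remove noise without removing genuine detail.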

NVIDIA’s Iray plug-ins are sold and supported by its Iray integration partners, one of which I had the opportunity to check out at the GTC 2018 conference to learn about Iray’s AI denoising feature. This was migenius, a company that has been developing advanced 3D rendering technologies for close to 20 years. In fact, the training of NVIDIA’s AI denoiser—to remove noise but not real detail—was done using an extensive set of Iray scenes, many of which were provided by migenius customers. The company has found that rendering with the AI denoiser is fast enough to be used interactively, and it showed additional examples of the rendering quality that can be achieved with it (Figure 2).

Additional Visualization Technologies

In addition to developing and supporting Iray plug-ins, migenius focuses on developing cloud-based rendering technology. Its core technology is RealityServer, a platform for web developers to embed physically-accurate, cloud-based rendering with Iray AI Denoising into apps that can be used by architects and designers to visualize 3D scenes online with a high degree of realism. The platform combines the power of NVIDIA GPUs, the Iray rendering technology, and 3D web services software. In addition to the rendering quality, RealityServer also enables interactivity, allowing a user to interact remotely with complex 3D models and environments within the browser, from any perspective and under customizable lighting conditions, and from any web-enabled device. Being web-based, it can also be used for collaboration between design teams and for client review and feedback. migenius has also developed a plug-in for SketchUp called “Bloom Unit” that enables photorealistic rendering of SketchUp models using Iray, combined with the flexibility and convenience of cloud computing (Figure 3).

In addition to AI denoising and collaborative cloud-based rendering, another advanced rendering technology that I had the opportunity to see at the GTC 2018 conference was “predictive rendering.” It was being demonstrated by OPTIS, a virtual prototyping company whose mission is to eliminate physical prototypes in industrial projects by creating true-to-life virtual mock-ups that can be used as real decision-making tools. The key to doing this is to create the most physically accurate simulation of the project, in which all the objects interact with light exactly as they would in real life, and to do this fast enough for the rendering to be “predictive” in enabling decision-making. OPTIS’s predictive rendering technology is targeted towards a variety of design domains, one of which is architecture; some examples are shown in Figure 4.

The images shown in Figure 4 were computed with SPEOS, OPTIS’s advanced optical software. It generates unique computerized product images for each material used, capturing the material’s optical properties—including color, reflection, diffusion, opacity, and transparency—enabling the visual appearance of any surface to be simulated with exact accuracy. The software supports real-time modification of ambient lighting conditions, the position of the viewer, and characteristics of the human eye such as focusing, saturation, blooming, and color perception. With mathematically accurate simulations of internal and external lighting conditions, the result is a hyper-realistic impression of how the design will look in real life. For a building project, this includes a precise simulation of the design based on its location and the time of day. The ability to predict the impact of light so accurately can greatly help the architect to design different aspects of the building—including orientation, façade design, light fixtures, materials, and so on—to achieve very specific lighting requirements.

Rendering is, of course, one key aspect of graphics technology that is relevant to AEC. Another important aspect is interactivity, best exemplified by the growing interest and capabilities of virtual reality (VR). Increasingly, it’s not just the immersive nature of VR that is being sought—as it was when VR was first introduced—but the collaborative nature of the technology, allowing multiple users to experience the same virtual space together and collaborate on a design. I got a chance to experience this first-hand at the GTC 2018 conference by trying out NVIDIA’s Holodeck technology, which allowed me to enter a virtual design domain with other users, explore it from multiple points of view including “exploding” it to get a better idea of the details, interactively change the materials and see how they would look, and make decisions about the design jointly with other users (Figure 5).

Since Holodeck is still relatively new—NVIDIA has just launched an Early Access program for those who would like to try it out—there are no user stories from AEC yet. However, leading AEC firms such as KPF, CannonDesign, Hensel Phelps, and Gensler that are already using NVIDIA technology—such as its Quadro GPUs for design and visualization, Quadro VR Ready desktop and mobile GPUs for immersive experiences, and Quadro Virtual Data Center Workstation (vDWS) software for accelerated 3D graphics—would, in all likelihood, become the early adopters of Holodeck in AEC and prove to be good testbeds for how well it would work in AEC (Figure 6).

Smart Cities

NVIDIA may have started out purely as a hardware company focused on graphics processing units, and while GPUs are still the company’s raison d'être, it is branching out into developing an increasing range of software, focused not just on visualization but also on big data and machine learning. In fact, its 2018 GTC event was billed as “the premier AI and deep learning event” featuring the “latest breakthroughs in self-driving cars, smart cities, healthcare, big data, high performance computing, virtual reality, and more.” Apart from the inclusion of virtual reality in this list, there was little indication that this was related to GPUs at all.

Next to visualization, the developments related to smart cities seemed the most relevant to AEC, and I had the opportunity to check out NVIDIA Metropolis, targeted specifically towards applying “deep learning to smart cities.” I had hoped to find some connection to CIM (city information modeling), but the understanding of “smart cities” at NVIDIA is closer to that highlighted in my 2016 article on City Information Modeling, and the focus of Metropolis is on enabling safer and smarter cities. It is doing this by applying AI to analyze the data captured by the exponentially growing number of cameras in cities worldwide and turn that information into insights that can impact aspects such as public safety, traffic, parking management, law enforcement, and other city services. The underlying technology is “intelligent video analytics,” starting with image recognition (Figure 7), which is NVIDIA’s forte given that it is a graphics company.


The NVIDIA GTC 2018 event was not a must-see conference for AEC, but it was very informative, and I learnt a lot I didn’t know before. I didn’t know anything about denoising, let alone AI denoising, or about predictive rendering, which is fast and accurate enough to actually enable decision-making in design. Cloud computing is, of course, a familiar concept—rendering was one of the first tasks in AEC to be sent to the cloud, being computing-intensive and well suited for multi-processing, so it could be distributed across multiple computers in a render farm. In addition to migenius’s RealityServer, there were many additional cloud-based solutions on display, mostly in the form of hardware incorporating NVIDIA’s latest GPUs.

While NVIDIA’s Metropolis platform for intelligent video analytics is still in the early stage, it is an excellent example of how a graphics company can build up on its expertise with vision recognition, data analytics, and machine learning to contribute to the “smart cities” movement that is rapidly gaining momentum world-wide. At some point, I hope to see some integration of smart city initiatives such as these with the CIM (city information modeling) technology that has been pioneered in the AEC industry, and it would seem that a technology company like NVIDIA that has its “foot in both doors,” as it were, should be able to lead the effort.

About the Author

Lachmi Khemlani is founder and editor of AECbytes. She has a Ph.D. in Architecture from UC Berkeley, specializing in intelligent building modeling, and consults and writes on AEC technology. She can be reached at lachmi@aecbytes.com.


Have comments or feedback on this article? Visit its AECbytes blog posting to share them with other readers or see what others have to say.

AECbytes content should not be reproduced on any other website, blog, print publication, or newsletter without permission.