Reality Computing is a high-level concept that integrates the digital and physical worlds, bringing together many products and technologies to:
- Capture -- digitally capture existing conditions
- Compute -- use that information to digitally create simulations, designs, and so on
- Create -- then realize the results back in the physical world.
The last blog post examined how information about the physical world can be digitally captured. Today’s post looks at how software tools are used to manipulate and analyze that information—connecting the digital capture of the physical world and the physical creation of the digital world.
“And what goes on in between” (aka compute)
Once ‘reality’ is captured and processed, the next step of Reality Computing is the ability to operate on the digital, real-world information. This may involve editing it to filter erroneous or unwanted data, manipulating it into new designs, adding new model information around it, analyzing it to extract new information, or using it to simulate real-world behavior and perform clash detection. Depending on the application, some teams take the further step of creating surface meshes or 3D solids from some or all of the scanned or surveyed data.
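To make the filtering step concrete, here is a minimal sketch of one common cleanup technique, statistical outlier removal, written in Python with NumPy and SciPy. The function name, neighbor count, and threshold are illustrative assumptions, not the workings of any particular product:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=20, std_ratio=2.0):
    """points: (N, 3) array of XYZ scan coordinates."""
    tree = cKDTree(points)
    # Query k+1 neighbors: the nearest "neighbor" of each point is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_dists = dists[:, 1:].mean(axis=1)
    # Keep points whose neighbor spacing is within std_ratio sigma of average.
    threshold = mean_dists.mean() + std_ratio * mean_dists.std()
    return points[mean_dists < threshold]

# Illustrative usage on a synthetic cloud with a few stray points mixed in.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(size=(10_000, 3)),
                   rng.uniform(5, 10, size=(20, 3))])
cleaned = remove_outliers(cloud)
```

Points whose average spacing to their neighbors is unusually large are likely scanner noise or passing objects, which is why this simple statistic works well as a first cleanup pass.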
3D design software can be used to manipulate and directly interact with the reality-captured data. For example, a civil engineer can import 3D laser scans of a congested roadway intersection into road design software as a real-world reference for early planning efforts to redesign the intersection. A damaged bracket on a military aircraft in a combat zone can be scanned and the reality-captured data uploaded to a manufacturing facility across the world, where the data is imported into mechanical design software, the damaged portion is digitally repaired, a replacement part is manufactured, and the part is shipped back to repair the aircraft.
Moreover, enabling technologies for segmentation and feature recognition of reality data allow designers to interact with point clouds and high-density meshes in more intuitive, object-like ways. Manufacturers can use metrology technology—both laser scanning and contact-based coordinate measuring machines (CMMs)—combined with feature recognition software to convert point cloud and contact-probe data of manufactured components into 3D solid models. These models can then be used for a variety of purposes, such as quality inspection during the manufacturing process or reverse engineering.
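As a toy illustration of how feature recognition turns raw points into parametric geometry, the sketch below fits a plane to a point cloud with RANSAC. This is a generic algorithm sketch under assumed thresholds, not the method of any specific metrology package:

```python
import numpy as np

def ransac_plane(points, n_iters=500, tol=0.01, seed=0):
    """Return a boolean mask of points within `tol` of the best-fit plane."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:               # skip degenerate (collinear) samples
            continue
        normal /= norm
        # Perpendicular distance of every point to the candidate plane.
        dists = np.abs((points - sample[0]) @ normal)
        inliers = dists < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Production tools fit richer primitives (cylinders, cones, freeform surfaces) and stitch them into solid models, but the core idea is the same: hypothesize a shape, count the points that agree with it, and keep the best hypothesis.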
Similarly, feature recognition software helps civil engineers manipulate point clouds of existing terrain or infrastructure as objects rather than collections of points. For example, specialized feature recognition software can automatically identify relevant features in point clouds, such as bridges, signs, and streetlights in scans of highway corridors.
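For a rough sense of how such feature extraction might work: cluster the points in plan view, then flag clusters whose shape matches the target feature. The sketch below uses scikit-learn's DBSCAN to find tall, thin clusters as a crude stand-in for sign and streetlight detection; every threshold here is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_pole_like_clusters(points, eps=0.3, min_samples=25,
                            min_height=3.0, max_footprint=0.5):
    """Cluster in plan view, then keep tall, thin clusters (units: meters)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[:, :2])
    poles = []
    for label in set(labels) - {-1}:            # label -1 is DBSCAN noise
        cluster = points[labels == label]
        height = np.ptp(cluster[:, 2])          # vertical extent
        footprint = np.ptp(cluster[:, :2], axis=0).max()
        if height > min_height and footprint < max_footprint:
            poles.append(cluster)
    return poles
```

Real corridor-extraction software layers many more cues on top (intensity, shape descriptors, trained classifiers), but simple geometric heuristics like this convey the basic approach.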
The ability to import, visualize, and edit reality-captured data can also help streamline ‘scan to BIM’ processes. Scanned data of a building can serve as a reference to create or validate a building model used as a starting point for the building’s renovation. Scans of a newly poured concrete slab can be imported into a 3D design model of a new building (that contains the digital design of the slab) to perform deviation analysis—highlighting high and low areas that need adjustment. Scanned point cloud data of an existing facility can be combined with digital models representing new equipment or renovated spaces for project coordination and clash detection.
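At its core, the slab deviation check described above is a signed point-to-surface distance. Here is a minimal sketch, assuming the design surface is a plane and a 5 mm flatness tolerance; real workflows compare against the full design model, often a mesh:

```python
import numpy as np

def slab_deviation(scan_points, plane_point, plane_normal, tol=0.005):
    """Signed distance of each scanned point from the design plane (meters)."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    # Positive = above the design surface (high spot), negative = below (low).
    deviation = (scan_points - plane_point) @ n
    return deviation, deviation > tol, deviation < -tol

# Illustrative usage: a level design slab at z = 0 with a 5 mm tolerance.
scan = np.random.default_rng(1).normal(0, 0.003, size=(1_000, 3))
dev, high, low = slab_deviation(scan, plane_point=np.zeros(3),
                                plane_normal=[0, 0, 1])
```

Color-mapping the deviation values onto the scan produces the familiar heat map that highlights which areas of the slab need grinding or fill.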
Our next post will focus on the third component of Reality Computing: using digital information to create something new back in the physical world.