The last blog post introduced Reality Computing as a high-level concept that integrates the digital and physical worlds, bringing together many products and technologies to:
- Capture -- digitally capture existing conditions,
- Compute -- use that information to digitally develop simulations, designs, and so on,
- Create -- then realize the results back in the physical world.
Today’s post will look more closely at the first of these three components of Reality Computing: ‘capture’. The next posts will examine ‘compute’, followed by ‘create’.
“Turning things into data” (aka capture)
The ways information can be captured digitally from the physical world are multiplying every day. Capture technologies—from laser scanning and point survey to photogrammetry and ground-penetrating radar—are becoming easier to use and, with plummeting prices, more accessible. For example, the chart below illustrates the development of 3D scanning, including the introduction of the first commercial 3D laser scanning systems for the AEC industry.
[Chart: The first commercial 3D laser scanning systems for the AEC industry were complicated and expensive, limiting their market penetration. Today, the value and cost of laser scanning have made its use commonplace.]
Those early systems were complicated and very expensive, which limited their market penetration; today, by contrast, laser scanning is a staple of infrastructure and land development projects. Intel’s announcement at the 2014 Consumer Electronics Show (CES) that it would start building RealSense 3D camera technology into its product lines is another example of how 3D scanning technology is becoming commonplace.
Consider when mobile phones first started to include cameras. Initially, the cameras were used as expected—to capture still photographs. Who would have imagined that today you can use that built-in camera technology to deposit checks, measure your heart rate, or translate a street sign in a foreign language? Similarly, as 3D scanning technologies (and the reality data they produce) become more available and more established, the derivative applications and integrations with consumer and commercial tools will follow—further broadening the reach of 3D scanning.
There are already some very successful derivative data capture applications on the market. For example, the hottest gift of the 2010 holiday season (and the fastest-selling consumer electronics device at that time) was the Kinect device for Microsoft’s Xbox 360 gaming system. Essentially, Kinect pairs an infrared projector with an infrared camera, letting the game console track body movements through…structured-light 3D scanning! Kinect devices now retail for less than $100.
Technology companies such as Apple, Microsoft, Google, and Autodesk have significant investments in 3D sensing technologies, while startups and Kickstarter campaigns built around 3D sensing—from Leap Motion’s motion-sensing device for PCs to the Spike laser scanning smartphone accessory—appear in the market almost weekly.
This captured reality data is generally represented as high-density point clouds, which are very different from the descriptive geometry that design software uses today. Using captured point cloud data in a 3D design application usually requires some preprocessing. That typically means registering the individual scans within a common coordinate system and then georeferencing the combined point cloud to a project’s existing coordinate system. Moreover, raw point cloud files can reach hundreds of gigabytes for large projects—rendering them almost impossible to work with in a modeling environment. Preprocessing software helps users visualize and work with these massive datasets.
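To make that preprocessing step concrete, here is a minimal sketch of scan registration and georeferencing using the open-source Open3D library in Python. The file names, voxel size, and coordinate offset are illustrative assumptions, not values from any particular product or project.

```python
import numpy as np
import open3d as o3d

# Load two overlapping scans (hypothetical file names).
source = o3d.io.read_point_cloud("scan_a.ply")
target = o3d.io.read_point_cloud("scan_b.ply")

# Downsample so registration stays tractable on large files.
voxel = 0.05  # metres; tune to the scanner's resolution
source_down = source.voxel_down_sample(voxel)
target_down = target.voxel_down_sample(voxel)

# Point-to-plane ICP needs surface normals on both clouds.
for cloud in (source_down, target_down):
    cloud.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

# Register: refine an initial identity guess with ICP to find the
# rigid transform that aligns the source scan onto the target scan.
result = o3d.pipelines.registration.registration_icp(
    source_down, target_down, voxel * 2, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
source.transform(result.transformation)  # apply to full-resolution scan

# Georeference: translate the registered clouds from scanner-local
# coordinates into the project's coordinate system (the offset here
# is an assumed example, e.g. derived from surveyed control points).
project_offset = np.array([451200.0, 4497800.0, 310.0])
source.translate(project_offset)
target.translate(project_offset)
```

In practice, fine registration like this ICP step depends on a reasonable coarse alignment first (from scan targets or feature matching), and production preprocessing tools add out-of-core streaming so that datasets too large for memory remain workable.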
Check back for the next post in this series on the three components of Reality Computing, which will explore how software tools operate on digitally captured, real-world information.