Module 1 Assessment: 3D Scanning Technology Fundamentals

Assessment ID: U11-M1-ASSESS | Type: Knowledge Quiz | Duration: 30 minutes | Pass Threshold: 70%


Knowledge Quiz (12 points)

Instructions: Answer all 12 questions. One point per correct answer.


Question 1: Structured Light Scanning Principle

What is the fundamental operating principle of structured light 3D scanning?

Explanation: Structured light scanners project a known pattern (stripes, grids, or coded patterns) onto the object. A camera offset from the projector observes how the pattern distorts across the 3D surface. Triangulation between the projector, camera, and observed pattern deformation yields precise 3D coordinates.


Question 2: Laser Triangulation

In laser triangulation scanning, what two components form the measurement baseline?

Explanation: Laser triangulation uses a laser line (or point) projected onto the surface and a camera sensor positioned at a known angle and distance from the laser. The position of the laser line on the camera sensor shifts depending on the surface depth, allowing distance calculation through trigonometric triangulation.
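The similar-triangles relationship described above can be sketched numerically. This is a minimal model that assumes the laser beam runs parallel to the camera's optical axis, so depth follows the stereo-style relation z = f · b / d; the function name and parameter values are illustrative, not from any particular scanner.

```python
def triangulate_depth(baseline_mm: float, focal_px: float, offset_px: float) -> float:
    """Depth from a simplified laser-triangulation model.

    baseline_mm: distance between the laser emitter and the camera center
    focal_px:    camera focal length expressed in pixels
    offset_px:   lateral shift of the laser spot on the sensor

    With the laser beam parallel to the optical axis, similar triangles
    give z = f * b / d -- the same form as stereo disparity.
    """
    if offset_px <= 0:
        raise ValueError("spot offset must be positive")
    return focal_px * baseline_mm / offset_px

# A spot shifted 40 px, with a 50 mm baseline and an 800 px focal length:
depth_mm = triangulate_depth(baseline_mm=50.0, focal_px=800.0, offset_px=40.0)
# 800 * 50 / 40 = 1000 mm
```

Note how a *smaller* pixel offset maps to a *larger* depth: resolution degrades with distance, which real scanners counter with calibrated working-distance ranges.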


Question 3: Point Cloud Definition

What is a point cloud in the context of 3D scanning?

Explanation: A point cloud is the raw output of most 3D scanners — a set of discrete points in 3D space. Each point has X, Y, Z coordinates and may include additional attributes such as RGB color values and surface normal vectors. Point clouds have no connectivity information; they must be processed into meshes for most downstream applications.
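The structure described above (XYZ plus optional attributes, no connectivity) can be sketched as a plain data record. The class and field names here are illustrative, not a standard file format:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ScanPoint:
    # position in scanner coordinates (e.g. millimetres)
    x: float
    y: float
    z: float
    # optional per-point attributes many scanners export
    rgb: Optional[Tuple[int, int, int]] = None        # color sample
    normal: Optional[Tuple[float, float, float]] = None  # outward direction

# A point cloud is just an unordered collection of such points --
# no edges or faces connect them to one another.
cloud = [
    ScanPoint(0.0, 0.0, 10.2, rgb=(200, 180, 170)),
    ScanPoint(0.1, 0.0, 10.3),
]
```

Because there is no connectivity, reordering or deleting points never "breaks" a point cloud; the same operation on a mesh would invalidate face indices.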


Question 4: Photogrammetry Method

How does photogrammetry generate 3D geometry?

Explanation: Photogrammetry uses multiple photographs of the same object from different viewpoints. Software identifies matching features across images, calculates camera positions, and triangulates 3D point positions through bundle adjustment algorithms. This approach requires no specialized hardware beyond a camera but demands controlled lighting and sufficient image overlap (typically 60-80%).


Question 5: Accuracy vs. Resolution

What is the difference between scanner accuracy and scanner resolution?

Explanation: Accuracy and resolution are distinct specifications. Accuracy (e.g., ±0.05 mm) indicates how close the scanner's measurement is to the real-world dimension. Resolution (e.g., 0.1 mm point spacing) indicates the smallest detail the scanner can capture. A scanner can have high resolution but poor accuracy if it is not properly calibrated, and vice versa.


Question 6: Scanning Challenges — Reflective Surfaces

Why do shiny or reflective surfaces cause problems for most 3D scanners?

Explanation: Most optical scanners rely on detecting diffuse reflection — light that scatters evenly from the surface back toward the camera. Shiny or mirror-like surfaces produce specular reflection, bouncing light at the angle of incidence rather than diffusing it. This creates bright spots, missing data, or noise. Common solutions include applying temporary dulling spray or scanning powder.


Question 7: Transparent Object Scanning

What is the primary challenge when scanning transparent or translucent objects?

Explanation: Optical 3D scanners depend on light reflecting off the object surface. Transparent materials (glass, clear acrylic) allow light to pass through, refract, or internally scatter rather than reflecting back to the sensor. Solutions include coating with scanning spray, filling with opaque material, or using CT scanning for internal/external geometry.


Question 8: Working Distance and Field of View

How does a scanner's working distance relate to its field of view and accuracy?

Explanation: Working distance defines how far the scanner is from the object. At greater distances, the scanner covers a larger area (wider field of view) but each pixel on the sensor represents a larger physical area, reducing point density and accuracy. Close-range scanning captures finer detail but requires more scans to cover the entire object. Scanner specifications list optimal working distance ranges for best performance.
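The distance-to-detail trade-off above follows from simple pinhole-camera geometry: the physical footprint of one sensor pixel grows linearly with working distance. A minimal sketch, with illustrative parameter values (a 3 µm pixel pitch and 16 mm lens are assumptions, not a specific scanner's spec):

```python
def pixel_footprint_mm(working_distance_mm: float,
                       focal_length_mm: float,
                       pixel_pitch_mm: float) -> float:
    """Physical size covered by one sensor pixel at a given distance.

    Pinhole approximation: footprint = pixel_pitch * distance / focal.
    Doubling the working distance doubles the area each pixel spans,
    which is why point density and effective accuracy drop with range.
    """
    return pixel_pitch_mm * working_distance_mm / focal_length_mm

near = pixel_footprint_mm(300.0, 16.0, 0.003)   # 0.003 mm = 3 um pixel pitch
far = pixel_footprint_mm(600.0, 16.0, 0.003)
# far is exactly twice near: half the detail per pixel at double the range
```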


Question 9: Point Cloud Normal Vectors

What do normal vectors in a point cloud represent?

Explanation: Normal vectors indicate which direction is "outward" from the surface at each point. They are perpendicular to the local surface plane and are critical for mesh reconstruction algorithms (Poisson reconstruction relies heavily on normals), for correct lighting/shading in 3D viewers, and for determining inside vs. outside when creating watertight meshes.
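At its simplest, a normal can be computed from a local planar patch via a cross product; production pipelines fit a plane to a k-nearest-neighbour patch (e.g. by PCA), but the three-point version below shows the core geometry. Function name and sample points are illustrative:

```python
def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three nearby scan points.

    The cross product of two edge vectors is perpendicular to both,
    hence perpendicular to the local surface plane.
    """
    ux, uy, uz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    vx, vy, vz = (p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2])
    nx, ny, nz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# Three points on the z = 5 plane -> normal along the z axis.
# Note: the sign depends on winding order; scan pipelines must still
# orient normals consistently (e.g. toward the scanner position).
n = plane_normal((0, 0, 5), (1, 0, 5), (0, 1, 5))
# n == (0.0, 0.0, 1.0)
```

The winding-order ambiguity in the comment is exactly the inside-vs-outside problem the explanation mentions for watertight meshes.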


Question 10: Structured Light vs. Laser — Speed

Which scanning technology generally captures data faster for full-surface acquisition?

Explanation: Structured light scanners project a 2D pattern across the entire field of view and capture millions of points in a single frame (area scanning). Laser triangulation scanners typically project a single line or point that must sweep across the object, making them inherently slower for full-surface capture. However, laser scanners often achieve higher accuracy on certain geometry types and work better at longer ranges.


Question 11: Occlusion in Scanning

What is "occlusion" in 3D scanning?

Explanation: Occlusion occurs when the scanner cannot "see" a surface because it is blocked by another part of the object, a fixture, or the scanning setup. Deep holes, undercuts, and complex internal geometry are common sources of occlusion. Multi-angle scanning, turntable rotation, and handheld scanning techniques are used to minimize occlusion artifacts.


Question 12: Mesh vs. Point Cloud

What is the key difference between a point cloud and a mesh?

Explanation: A point cloud is raw scan data — discrete, unconnected 3D points. A mesh connects these points with edges and triangular (or polygonal) faces to create a continuous surface representation. Meshes are required for 3D printing, CAD integration, and most visualization applications. The conversion from point cloud to mesh is a critical processing step covered in Module 3.
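The point-cloud/mesh distinction can be made concrete: a mesh is the same XYZ data plus face indices, and that connectivity is what makes surface quantities (like area) computable at all. A minimal sketch with hypothetical data, tiling the unit square with two triangles:

```python
# Vertices: the same kind of XYZ points a scanner produces.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (1.0, 1.0, 0.0),
]
# Faces: what a point cloud lacks -- triangles indexing into `vertices`,
# turning discrete points into a continuous surface.
faces = [
    (0, 1, 2),
    (1, 3, 2),
]

def triangle_area(v0, v1, v2):
    """Half the magnitude of the cross product of two edge vectors."""
    ux, uy, uz = (v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2])
    wx, wy, wz = (v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2])
    cx, cy, cz = (uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx)
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

# Surface area only exists once connectivity does:
total_area = sum(triangle_area(*(vertices[i] for i in f)) for f in faces)
# two right triangles tiling the unit square -> 1.0
```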