
9. [Three.js] Interactive Object Detection with Raycasting

· 6 minute read
Sangmin SHIM
Fullstack Developer

1. Digest

Raycasting is a fundamental technique for creating interactive 3D applications: it detects which objects intersect an invisible ray cast through the scene. The THREE.Raycaster class supports two core usage patterns: automatic mesh detection along a fixed ray direction, and user-driven interaction through mouse-click detection.

The first example demonstrates continuous ray collision detection by casting a ray from a fixed origin point and detecting which animated meshes intersect with it in real-time. A visual guide line helps understand how the ray travels through 3D space, changing the color of intersected objects dynamically. The second example advances to practical user interaction, converting 2D mouse coordinates to normalized device coordinates (NDC) and casting rays from the camera through the click position to detect which 3D objects the user has selected.

A critical enhancement addresses a common UX problem: distinguishing between clicks and drags when using OrbitControls. The custom PreventDragClick utility class tracks mouse movement distance and time between mousedown and mouseup events, preventing false click detection when users are actually rotating the camera. This pattern is essential for any interactive 3D application combining camera controls with object selection.

2. What is the purpose

Learning raycasting provides essential interactive 3D techniques that form the foundation of user interaction in Three.js applications. The skills covered include:

  • Understanding raycasting fundamentals and how THREE.Raycaster works
  • Implementing continuous object detection using fixed ray directions
  • Converting 2D mouse coordinates to 3D ray projections using normalized device coordinates (NDC)
  • Detecting which 3D objects users click on using setFromCamera() and intersectObjects()
  • Processing intersection results to identify closest objects and handle multiple overlapping objects
  • Distinguishing between genuine clicks and camera drag operations with OrbitControls
  • Creating reusable utility classes for common interaction patterns

These skills are critical for building interactive 3D experiences like games with clickable objects, product configurators with selectable parts, architectural visualizations with interactive elements, and any application requiring user selection of 3D objects.

3. Some code blocks and their explanations

Example 1: Basic Raycaster Setup with Fixed Direction

// Raycaster
const raycaster = new THREE.Raycaster();

// Visual guide line to see the ray
const lineMaterial = new THREE.LineBasicMaterial({ color: 'yellow' });
const points = [];
points.push(new THREE.Vector3(0, 0, 100));
points.push(new THREE.Vector3(0, 0, -200));
const lineGeometry = new THREE.BufferGeometry().setFromPoints(points);
const guide = new THREE.Line(lineGeometry, lineMaterial);
scene.add(guide);

// In draw loop
const origin = points[0]; // Starting point (0, 0, 100)
const direction = new THREE.Vector3(0, 0, -100);
raycaster.set(origin, direction.normalize());

const intersects = raycaster.intersectObjects(meshes);
intersects.forEach((intersectedItem) => {
  intersectedItem.object.material.color.set('red');
});

This demonstrates the core raycasting mechanism. A Raycaster is created, then configured with an origin point and direction vector. The direction must be normalized (length = 1) to represent pure direction without magnitude. The visual guide line helps debug by showing exactly where the ray travels. intersectObjects() returns an array of intersection results sorted by distance from the origin, each containing the intersected object, intersection point, distance, face index, and UV coordinates. This pattern is useful for laser beams, line-of-sight detection, or automated collision checking.
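
Since intersectObjects() returns hits pre-sorted by distance, taking the first element always yields the nearest object. As a plain-JavaScript illustration (no three.js required; the hit objects below are hypothetical stand-ins mimicking the shape of Raycaster results), sorting an arbitrary list reproduces that ordering:

```javascript
// Each hit mimics the shape of a Raycaster result: { distance, object }.
const hits = [
  { distance: 12.4, object: { name: 'sphere' } },
  { distance: 3.1, object: { name: 'box' } },
  { distance: 7.8, object: { name: 'torus' } }
];

// intersectObjects() already returns results sorted by distance;
// sorting manually reproduces that ordering for an unsorted list.
const sorted = [...hits].sort((a, b) => a.distance - b.distance);
const nearest = sorted[0];

console.log(nearest.object.name); // 'box' is closest to the ray origin
```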

Example 2: Mouse Click Detection with Camera Ray

const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

function checkIntersects() {
  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(meshes);

  for (const item of intersects) {
    item.object.material.color.set('blue');
    break; // Only select the first (closest) object
  }
}

canvas.addEventListener('click', (event) => {
  // Convert screen coordinates to NDC (Normalized Device Coordinates)
  mouse.x = (event.clientX / canvas.clientWidth) * 2 - 1;
  mouse.y = -((event.clientY / canvas.clientHeight) * 2 - 1);
  checkIntersects();
});

This implements user-driven object selection through mouse clicks. The critical step is converting pixel coordinates (0 to width/height) into normalized device coordinates (NDC) ranging from -1 to +1, where (-1, -1) is bottom-left and (1, 1) is top-right. Note the Y-axis inversion (negative sign) because screen coordinates have Y increasing downward while 3D coordinates have Y increasing upward. setFromCamera() automatically calculates the ray origin and direction from the camera through the mouse position in 3D space. Breaking after the first intersection ensures only the closest visible object is selected, ignoring objects behind it.
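
The coordinate conversion can be checked in isolation without three.js. The sketch below wraps the same formula in a hypothetical helper, toNDC, and assumes an 800x600 canvas for illustration:

```javascript
// Standalone sketch of the pixel-to-NDC conversion (hypothetical helper name).
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,     // 0..width  maps to -1..+1
    y: -((clientY / height) * 2 - 1)  // 0..height maps to +1..-1 (Y flipped)
  };
}

const center = toNDC(400, 300, 800, 600);      // { x: 0, y: 0 }
const topLeft = toNDC(0, 0, 800, 600);         // { x: -1, y: 1 }
const bottomRight = toNDC(800, 600, 800, 600); // { x: 1, y: -1 }
console.log(center, topLeft, bottomRight);
```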

Example 3: Preventing False Clicks During Camera Drag

export class PreventDragClick {
  constructor(element) {
    this.mouseMoved = false; // Initialize so the flag is defined before the first mouseup
    let clickStartX;
    let clickStartY;
    let clickStartTime;

    element.addEventListener("mousedown", (event) => {
      clickStartX = event.clientX;
      clickStartY = event.clientY;
      clickStartTime = Date.now();
    });

    element.addEventListener("mouseup", (event) => {
      const xGap = Math.abs(event.clientX - clickStartX);
      const yGap = Math.abs(event.clientY - clickStartY);
      const timeGap = Date.now() - clickStartTime;

      // Moved more than 5px in either axis, or held longer than 500ms => drag
      this.mouseMoved = xGap > 5 || yGap > 5 || timeGap > 500;
    });
  }
}

// Usage
const preventDragClick = new PreventDragClick(canvas);

function checkIntersects() {
  if (preventDragClick.mouseMoved) return; // Ignore drags
  // ... rest of intersection code
}

This utility class solves a critical UX problem: when using OrbitControls, dragging to rotate the camera technically triggers click events on mouseup. The class tracks mouse position and timing between mousedown and mouseup. If the mouse moved more than 5 pixels in any direction, or the interaction lasted longer than 500ms, it's classified as a drag rather than a click. The checkIntersects() function checks this flag before processing selections, ensuring users don't accidentally select objects while trying to rotate the camera. This pattern is essential for any 3D application combining camera controls with interactive objects.
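
The classification rule itself is plain arithmetic and can be tested without DOM events. This sketch extracts it into a standalone predicate (a hypothetical helper name, using the same 5px / 500ms thresholds as the class above):

```javascript
// Drag-vs-click rule from PreventDragClick, as a pure function.
function isDrag(startX, startY, endX, endY, elapsedMs) {
  const xGap = Math.abs(endX - startX);
  const yGap = Math.abs(endY - startY);
  return xGap > 5 || yGap > 5 || elapsedMs > 500;
}

console.log(isDrag(100, 100, 102, 101, 120)); // false: tiny move, quick release -> a click
console.log(isDrag(100, 100, 160, 100, 120)); // true: 60px horizontal move -> a drag
console.log(isDrag(100, 100, 100, 100, 800)); // true: held too long -> treated as a drag
```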

Example 4: Understanding Normalized Direction Vectors

// Before normalization
const direction = new THREE.Vector3(0, 0, -100);
console.log(direction.length()); // 100

// After normalization
direction.normalize();
console.log(direction.length()); // 1
console.log(direction); // Vector3 { x: 0, y: 0, z: -1 }

Normalization converts a vector to unit length (length = 1) while preserving its direction. This is crucial for raycasting because the Raycaster needs a pure direction vector, not a specific distance. The formula is: normalized = vector / vector.length(). In this example, (0, 0, -100) becomes (0, 0, -1) after normalization. Both vectors point in the same direction (negative Z-axis), but the normalized version has unit length, making it suitable for ray calculations. Always call .normalize() on direction vectors before passing them to raycaster.set().
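
The formula normalized = vector / vector.length() can be reproduced in plain JavaScript without three.js, using Math.hypot for the vector length:

```javascript
// Normalization from first principles (no three.js needed).
function normalize([x, y, z]) {
  const length = Math.hypot(x, y, z); // sqrt(x^2 + y^2 + z^2)
  return [x / length, y / length, z / length];
}

const dir = normalize([0, 0, -100]);
console.log(dir);                // [0, 0, -1] - same direction, unit length
console.log(Math.hypot(...dir)); // 1
```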

8. [Three.js] Lighting and Shadows

· 5 minute read

1. Digest

Lighting is fundamental to creating realistic 3D scenes. This chapter explores the various light types available in Three.js, each serving different purposes in scene illumination. Starting with basic ambient and directional lights, the examples progress through animated lighting, shadow implementation, and specialized light types including point lights, spotlights, hemisphere lights, and rectangular area lights.

The chapter demonstrates practical implementations of shadow mapping techniques with different quality settings (PCFShadowMap, PCFSoftShadowMap), shadow camera configuration for performance optimization, and proper setup of castShadow and receiveShadow properties. Each example includes dat.GUI integration for real-time light parameter adjustment, allowing experimentation with position, intensity, and color properties. Light helpers are extensively used throughout to visualize light positions and directions, making it easier to understand how different light types behave in 3D space.

The progression from basic static lights to animated and specialized lighting showcases how to create dynamic, visually appealing scenes. Understanding the distinction between lights that cast shadows (DirectionalLight, PointLight, SpotLight) and those that don't (AmbientLight, HemisphereLight, RectAreaLight) is crucial for balancing visual quality with rendering performance.

2. What is the purpose

This chapter teaches fundamental lighting concepts essential for creating realistic and visually appealing 3D scenes. You'll learn how to:

  • Set up different types of lights (Ambient, Directional, Point, Spot, Hemisphere, RectArea) and understand their use cases
  • Implement shadow mapping with quality optimization techniques
  • Configure shadow cameras for performance tuning
  • Distinguish between castShadow (objects that create shadows) and receiveShadow (surfaces that display shadows)
  • Animate lights dynamically to create moving light effects
  • Use light helpers to visualize and debug lighting setups
  • Integrate dat.GUI for real-time lighting parameter experimentation

These skills are critical for any Three.js developer working on games, product visualizations, architectural renders, or interactive 3D experiences where lighting dramatically impacts the mood and realism of the scene.

3. Some code blocks and their explanations

Example 1: Basic Light Setup with DirectionalLight

// Light
const ambientLight = new THREE.AmbientLight("white", 0.5); // color, intensity
scene.add(ambientLight);

const directionalLight = new THREE.DirectionalLight("white", 0.5);
directionalLight.position.y = 3;
scene.add(directionalLight);

const lightHelper = new THREE.DirectionalLightHelper(directionalLight);
scene.add(lightHelper);

This demonstrates the foundational lighting setup. AmbientLight provides uniform illumination across the entire scene without direction, preventing completely dark areas. DirectionalLight simulates sunlight with parallel rays coming from a specific direction. The DirectionalLightHelper visualizes the light's position and direction, crucial for debugging. Together, these two lights create a balanced lighting environment: ambient light fills shadows softly while directional light creates definition and depth.
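
A rough intuition for how these two contributions combine: ambient light adds a constant term, while the directional term scales with the angle between the surface normal and the light direction (Lambert's cosine law). The real Three.js shaders are considerably more involved (light color, physically based intensity units, etc.); this is a minimal plain-JS sketch of the idea only:

```javascript
// Simplified Lambert-style shading: ambient + directional * max(0, N . L).
// Both vectors are assumed to be unit length.
function shade(normal, lightDir, ambient, directional) {
  const dot = normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2];
  return ambient + directional * Math.max(0, dot); // back-facing surfaces get ambient only
}

console.log(shade([0, 1, 0], [0, 1, 0], 0.5, 0.5));  // 1: surface faces the light directly
console.log(shade([0, -1, 0], [0, 1, 0], 0.5, 0.5)); // 0.5: faces away, ambient term only
```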

Example 2: Shadow Configuration

// Enable shadows in renderer
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;

// Configure light for shadows
directionalLight.castShadow = true;
directionalLight.shadow.mapSize.width = 1024;
directionalLight.shadow.mapSize.height = 1024;
directionalLight.shadow.camera.near = 1;
directionalLight.shadow.camera.far = 5;

// Configure meshes
plane.receiveShadow = true;
box.castShadow = true;
box.receiveShadow = true;
sphere.castShadow = true;
sphere.receiveShadow = true;

This code implements shadow mapping with performance optimization. The renderer must explicitly enable shadows and specify the shadow algorithm (PCFSoftShadowMap provides softer, more realistic shadows than the default). The light source needs castShadow enabled and shadow map resolution specified (higher values = better quality but worse performance). Shadow camera near/far planes limit the shadow calculation range for optimization. Finally, each mesh must specify whether it casts shadows onto other objects (castShadow) or receives shadows on its surface (receiveShadow). The plane typically only receives, while objects like boxes and spheres both cast and receive for realistic inter-object shadowing.

Example 3: Animated Lighting

function draw() {
  const time = clock.getElapsedTime();

  directionalLight.position.x = Math.cos(time) * 5;
  directionalLight.position.z = Math.sin(time) * 5;

  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

This creates a circular light animation using trigonometric functions. By using elapsed time with cos() and sin(), the light orbits around the scene center at a 5-unit radius. This technique simulates time-of-day changes or creates dramatic moving shadow effects. The same pattern works for PointLight and SpotLight animations, creating dynamic lighting scenarios common in games and interactive experiences.
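
The orbit math can be verified on its own: for any elapsed time, cos/sin scaled by the same factor land on a circle of constant radius. A small plain-JS check (lightPosition is a hypothetical helper wrapping the two lines from draw()):

```javascript
// Position on a circular orbit in the XZ plane, radius 5 as in draw().
function lightPosition(time, radius = 5) {
  return { x: Math.cos(time) * radius, z: Math.sin(time) * radius };
}

// Distance from the origin stays 5 at every time step.
for (const t of [0, 1, 2.5, 10]) {
  const { x, z } = lightPosition(t);
  console.log(Math.hypot(x, z).toFixed(4)); // "5.0000"
}
```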

Example 4: Specialized Light Types

// PointLight - Omnidirectional light from a point (like a light bulb)
const pointLight = new THREE.PointLight("white", 15, 100, 2);
pointLight.position.y = 3;

// SpotLight - Cone-shaped directional light (like a flashlight)
const spotLight = new THREE.SpotLight("white", 50, 100, Math.PI / 3);
spotLight.position.set(-3, 5, 0);

// HemisphereLight - Sky/ground gradient lighting (outdoor ambient)
const hemisphereLight = new THREE.HemisphereLight("yellow", "blue", 1);

// RectAreaLight - Rectangular area light (like a window or LED panel)
const rectAreaLight = new THREE.RectAreaLight("yellow", 1, 2, 2);
rectAreaLight.position.set(0, 2, 2);

Each specialized light type serves different purposes. PointLight emits in all directions from a point with distance falloff parameters (intensity, distance, decay), perfect for light bulbs or torches. SpotLight creates a cone-shaped beam with angle control, ideal for flashlights or stage lighting. HemisphereLight simulates outdoor lighting with sky and ground colors blending, creating natural-looking ambient illumination without harsh shadows. RectAreaLight simulates flat light-emitting surfaces like windows or LED panels (note: RectAreaLight doesn't support shadows). Choosing the right light type dramatically affects both visual quality and rendering performance.

7. [Three.js] Mastering Materials and Textures

· 13 minute read

1. Digest

Materials are the visual skin of 3D objects, defining how surfaces interact with light and appear to viewers. This comprehensive chapter explores all major material types and texturing techniques through 17 progressive examples, covering everything from basic material properties to advanced texture workflows.

The journey begins with fundamental material types: MeshBasicMaterial for simple, unlit surfaces; MeshLambertMaterial and MeshPhongMaterial demonstrating the difference between matte and glossy surfaces; and MeshStandardMaterial introducing physically-based rendering with metalness and roughness properties. Flat shading creates faceted, low-poly aesthetics, while side rendering options (FrontSide, BackSide, DoubleSide) control which faces of geometry are visible.

Texture mapping capabilities are thoroughly covered starting with basic texture loading, progressing to managing multiple textures with LoadingManager, and advancing to texture transformations including positioning, rotation, and repetition. Applying different textures to each face of a cube using material arrays with proper texture filtering via magFilter enables pixel-perfect rendering, particularly important for pixelated art styles.

Specialized materials include MeshToonMaterial for cartoon/cel-shaded effects using gradient maps, MeshNormalMaterial for visualizing surface normals with rainbow colors useful for debugging geometry, and MeshMatcapMaterial for achieving complex lighting effects without actual lights by using matcap textures. Combining multiple texture maps (base color, normal, roughness, ambient occlusion) with MeshStandardMaterial creates photorealistic surfaces.

Advanced environmental techniques include environment mapping with cubemaps for realistic reflections on metallic surfaces, creating immersive skyboxes for 360-degree backgrounds, combining skybox and environment mapping for complete environmental integration, and dynamic CanvasTexture for generating procedural textures or displaying real-time content like text and animations directly on 3D surfaces.

This comprehensive coverage provides everything needed to create visually stunning 3D experiences, from simple colored shapes to photorealistic objects with complex surface details, dynamic content, and immersive environments.

2. What is the purpose

The purpose of mastering materials and textures is to transform basic 3D geometry into visually compelling and realistic objects. Understanding different material types enables developers to choose performance-appropriate materials based on project requirements, create diverse visual styles ranging from flat cartoon aesthetics to photorealistic renderings, and implement proper lighting interactions for believable scenes.

Learning texture mapping unlocks the ability to apply detailed surface appearances without complex geometry, implement physically-based rendering workflows using multiple texture maps for realism, create immersive environments through skyboxes and environment maps, and develop dynamic, interactive textures that respond to user input or animation. The knowledge of texture transformations and filtering provides precise control over how images appear on 3D surfaces.

Understanding specialized materials like MeshToonMaterial for stylized rendering, MeshNormalMaterial for debugging geometry issues, and MeshMatcapMaterial for performance-optimized complex appearances expands the creative toolkit. The practical applications extend to creating product visualizations with realistic materials and reflections, building immersive game environments with skyboxes, implementing interactive applications with dynamic canvas-based textures, optimizing render performance by choosing appropriate material types, and achieving specific artistic directions from minimal geometric complexity.

This knowledge is essential for professional 3D web development, enabling developers to make informed decisions about visual quality versus performance trade-offs and implement industry-standard rendering techniques.

3. Some code blocks and their explanations

Example 1: Material Types Comparison - Basic, Lambert, Phong, and Standard

// MeshBasicMaterial - No lighting required
const basicMaterial = new THREE.MeshBasicMaterial({
  color: 'seagreen'
});
// Fastest, no depth, ignores lights

// MeshLambertMaterial - Matte, non-shiny surface
const lambertMaterial = new THREE.MeshLambertMaterial({
  color: 'orange'
});

// MeshPhongMaterial - Shiny surface with specular highlights
const phongMaterial = new THREE.MeshPhongMaterial({
  color: 'orange',
  shininess: 1000
});

// MeshStandardMaterial - Physically based rendering
const standardMaterial = new THREE.MeshStandardMaterial({
  color: 'orangered',
  roughness: 0.1, // 0 = smooth, 1 = rough
  metalness: 0.5  // 0 = dielectric, 1 = metal
});

// Flat shading for a low-poly aesthetic
const flatMaterial = new THREE.MeshStandardMaterial({
  color: 'orangered',
  roughness: 0.1,
  metalness: 0.5,
  flatShading: true // Faceted appearance
});

This progression demonstrates the fundamental material types in Three.js, each with distinct characteristics and use cases. MeshBasicMaterial is the simplest and fastest, rendering solid colors without any lighting calculation, making it perfect for UI elements, solid backgrounds, or debugging. It provides no depth perception since it doesn't interact with lights.

MeshLambertMaterial introduces diffuse lighting for matte surfaces like paper, unfinished wood, or concrete, creating basic depth perception through light interaction. MeshPhongMaterial adds specular highlights controlled by the shininess property, ideal for glossy surfaces like plastic, painted metal, or wet surfaces. Higher shininess values create smaller, more focused highlights.

MeshStandardMaterial represents modern physically-based rendering (PBR) using roughness and metalness parameters instead of arbitrary shininess. This approach more accurately simulates real-world materials: roughness controls surface smoothness (0 = mirror-like, 1 = completely rough), while metalness determines electrical conductivity (0 = non-metal like wood or plastic, 1 = pure metal). The flatShading option creates a faceted, low-poly aesthetic by calculating lighting per face rather than per vertex, popular in stylized games and minimalist designs.

Example 2: Side Rendering and Texture Loading

// Side rendering options
const material = new THREE.MeshStandardMaterial({
  color: 'orangered',
  roughness: 0.2,
  metalness: 0.5,
  // side: THREE.FrontSide // Default: render outside only
  // side: THREE.BackSide  // Render inside only
  side: THREE.DoubleSide   // Render both sides (performance cost)
});

// Basic texture loading
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load(
  '/textures/brick/Wood_Wicker_012_ambientOcclusion.png',
  () => console.log('Load complete'),
  () => console.log('Loading...'),
  () => console.log('Load error')
);

const textureMaterial = new THREE.MeshStandardMaterial({
  map: texture
});

// Loading multiple textures with LoadingManager
const loadingManager = new THREE.LoadingManager();
loadingManager.onStart = () => console.log('Loading started');
loadingManager.onProgress = (img) => console.log(img + ' loading...');
loadingManager.onLoad = () => console.log('All textures loaded');
loadingManager.onError = (img) => console.log(img + ' error');

const loader = new THREE.TextureLoader(loadingManager);
const baseColorTexture = loader.load('/textures/brick/Wood_Wicker_012_basecolor.png');
const normalTexture = loader.load('/textures/brick/Wood_Wicker_012_normal.png');
const roughnessTexture = loader.load('/textures/brick/Wood_Wicker_012_roughness.png');
const ambientTexture = loader.load('/textures/brick/Wood_Wicker_012_ambientOcclusion.png');

The side property controls which faces of geometry are rendered. The default THREE.FrontSide only renders the outside faces (determined by vertex winding order), providing best performance. THREE.BackSide renders only inside faces, useful for creating interior environments or special effects. THREE.DoubleSide renders both faces, necessary for thin objects like planes, leaves, or cloth that should be visible from both sides, though it doubles the rendering cost.

Texture loading uses THREE.TextureLoader with optional callbacks for success, progress, and error handling. The basic approach works for single textures but becomes cumbersome with multiple images. LoadingManager solves this by centralizing load event handling across multiple textures, providing unified callbacks for tracking overall loading progress. This is essential for displaying loading screens, preloading assets, and ensuring all textures are ready before rendering. The manager-based approach scales better for complex scenes with dozens or hundreds of textures.

Example 3: Texture Transformation and Multi-Material Geometry

// Texture transformation (position, rotation, scale)
const skullTexture = textureLoader.load("/textures/skull/Ground_Skull_basecolor.jpg");

// Wrapping mode for texture repetition
skullTexture.wrapS = THREE.RepeatWrapping; // Horizontal repeat
skullTexture.wrapT = THREE.RepeatWrapping; // Vertical repeat

// Position offset
skullTexture.offset.x = 0.3; // Shift horizontally
skullTexture.offset.y = 0.2; // Shift vertically

// Repeat/scale
skullTexture.repeat.x = 2; // Tile 2 times horizontally
skullTexture.repeat.y = 3; // Tile 3 times vertically

// Rotation
skullTexture.rotation = Math.PI / 4; // Rotate 45 degrees
skullTexture.center.x = 0.5; // Rotation pivot at center
skullTexture.center.y = 0.5;

// Multiple materials on one geometry (cube faces)
const materials = [
  new THREE.MeshBasicMaterial({ map: rightTexture }),  // Right face (+X)
  new THREE.MeshBasicMaterial({ map: leftTexture }),   // Left face (-X)
  new THREE.MeshBasicMaterial({ map: topTexture }),    // Top face (+Y)
  new THREE.MeshBasicMaterial({ map: bottomTexture }), // Bottom face (-Y)
  new THREE.MeshBasicMaterial({ map: frontTexture }),  // Front face (+Z)
  new THREE.MeshBasicMaterial({ map: backTexture })    // Back face (-Z)
];

// Texture filtering for pixelated style
rightTexture.magFilter = THREE.NearestFilter; // No smoothing when scaled up
// Repeat for all textures...

const mesh = new THREE.Mesh(geometry, materials); // Array of materials

Texture transformation provides powerful control over how images map onto geometry. The wrapS and wrapT properties determine behavior beyond the standard 0-1 UV coordinate range: RepeatWrapping creates seamless tiling patterns, ClampToEdgeWrapping stretches edge pixels, and MirroredRepeatWrapping mirrors the texture at boundaries. The offset property shifts the texture position, useful for animating textures (like scrolling water) or fine-tuning alignment. The repeat property scales the texture, with values less than 1 magnifying it and values greater than 1 creating multiple tiles. Rotation transforms occur around a pivot point defined by center (default is bottom-left corner at 0,0).
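
What RepeatWrapping does to a UV coordinate outside the 0-1 range can be sketched in plain JavaScript: the integer part is discarded so the texture tiles (an assumption based on the standard fract() behavior of GPU texture wrapping; hypothetical helper name below):

```javascript
// RepeatWrapping keeps only the fractional part of the coordinate,
// for both positive and negative values, always yielding [0, 1).
function repeatWrap(u) {
  return u - Math.floor(u); // fract(u)
}

console.log(repeatWrap(0.3));   // 0.3: inside the texture, unchanged
console.log(repeatWrap(2.3));   // ~0.3: tiles back into the 0..1 range
console.log(repeatWrap(-0.25)); // 0.75: wraps in from the opposite edge
```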

The multi-material approach allows different textures on each face of a cube (or any geometry with face groups), perfect for Minecraft-style blocks, dice, or skyboxes. The materials array order corresponds to geometry faces in a specific sequence. The magFilter property controls texture magnification filtering: THREE.LinearFilter (default) smooths pixels when scaled up, while THREE.NearestFilter preserves sharp pixel boundaries, essential for pixel art, retro aesthetics, or maintaining crisp details in low-resolution textures.

Example 4: Specialized Materials - Toon, Normal, and Matcap

// MeshToonMaterial for cartoon/cel-shaded effects
const gradientTexture = textureLoader.load("/textures/gradient.png");
gradientTexture.magFilter = THREE.NearestFilter; // Prevent smoothing

const toonMaterial = new THREE.MeshToonMaterial({
  color: 'plum',
  gradientMap: gradientTexture // Custom toon shading levels
});
// Without gradientMap: 2 tone levels (light/dark)
// With gradientMap: custom color bands for stylized shading

// MeshNormalMaterial for debugging and artistic effects
const normalMaterial = new THREE.MeshNormalMaterial();
// Displays surface normals as RGB colors
// Red = X axis, Green = Y axis, Blue = Z axis
// Useful for debugging geometry and normals

// MeshMatcapMaterial for complex appearances without lights
const matcapTexture = textureLoader.load("/textures/matcap/material3.jpg");
const matcapMaterial = new THREE.MeshMatcapMaterial({
  matcap: matcapTexture
});
// Matcap = "Material Capture"
// Simulates complex lighting from a single spherical texture
// No actual lights needed, very performant

MeshToonMaterial creates cartoon or cel-shaded aesthetics popular in anime-styled games and non-photorealistic rendering. By default, it produces a simple two-tone effect (lit and shadowed areas). The gradientMap allows customizing the number and colors of shading bands by providing a gradient texture, with NearestFilter ensuring sharp transitions between color bands rather than smooth gradients. This technique is fundamental for stylized games like The Legend of Zelda: Wind Waker or Genshin Impact.

MeshNormalMaterial encodes surface normal vectors as RGB colors, where each axis (X, Y, Z) maps to a color channel (Red, Green, Blue). This creates a characteristic rainbow appearance that's invaluable for debugging geometry issues like incorrect normals, inside-out faces, or smoothing problems. It's also used artistically for psychedelic or diagnostic visualization styles. Because the material works with view-space normals, the colors shift as you rotate the camera around an object.
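
The normal-to-color mapping itself is simple: each normal component in [-1, 1] is remapped to a channel in [0, 1] via n * 0.5 + 0.5. A plain-JS sketch of that remapping (hedged: this mirrors the standard packing used for normal visualization, not a copy of the Three.js shader):

```javascript
// Map a unit normal's components from [-1, 1] to RGB channels in [0, 1].
function normalToRGB([nx, ny, nz]) {
  return { r: nx * 0.5 + 0.5, g: ny * 0.5 + 0.5, b: nz * 0.5 + 0.5 };
}

console.log(normalToRGB([1, 0, 0])); // { r: 1, g: 0.5, b: 0.5 } - reddish for +X
console.log(normalToRGB([0, 0, 1])); // { r: 0.5, g: 0.5, b: 1 } - bluish for +Z
```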

MeshMatcapMaterial achieves sophisticated lighting effects without actual lights by using a special "material capture" texture, a photograph of a sphere with desired lighting and material properties. The material looks up colors based on surface normals, creating an illusion of complex lighting and reflections. This is extremely performant since there's no real-time lighting calculation, making it ideal for mobile applications, large scenes, or when you want a specific pre-defined look that's difficult to achieve with real lights.

Example 5: Advanced Texturing - Multiple Maps and Environment

// MeshStandardMaterial with multiple texture maps
const material = new THREE.MeshStandardMaterial({
  map: baseColorTexture,          // Albedo/diffuse color
  normalMap: normalTexture,       // Surface detail without geometry
  roughnessMap: roughnessTexture, // Per-pixel roughness variation
  aoMap: ambientTexture,          // Ambient occlusion for depth
  aoMapIntensity: 4,              // AO strength multiplier
  roughness: 0.3,                 // Base roughness value
  metalness: 0.1                  // Base metalness value
});
// Multiple maps combine for photorealistic surfaces

// Environment mapping with cubemaps
const cubeTextureLoader = new THREE.CubeTextureLoader();
const envTexture = cubeTextureLoader.setPath("textures/cubemap/").load([
  "px.png", "nx.png", // Positive X, Negative X
  "py.png", "ny.png", // Positive Y, Negative Y
  "pz.png", "nz.png"  // Positive Z, Negative Z
]);

const reflectiveMaterial = new THREE.MeshStandardMaterial({
  envMap: envTexture,
  metalness: 1,  // Full metalness for strong reflections (valid range is 0-1)
  roughness: 0.1 // Low roughness for clear reflections
});

// Skybox background
scene.background = cubeTextureLoader.setPath("textures/cubemap/").load([
  "px.png", "nx.png", "py.png", "ny.png", "pz.png", "nz.png"
]);

// Combining skybox + environment map on material
const cubeTexture = cubeTextureLoader.setPath("textures/cubemap/").load([
  "px.png", "nx.png", "py.png", "ny.png", "pz.png", "nz.png"
]);
scene.background = cubeTexture; // Scene background
const skyboxMaterial = new THREE.MeshBasicMaterial({ // Renamed to avoid redeclaring `material`
  envMap: cubeTexture // Material reflects the skybox
});

Multiple texture maps are the foundation of photorealistic rendering. The base color map (map) provides fundamental appearance, the normal map adds fine surface detail like bumps, scratches, and grain without additional geometry (a performance optimization), the roughness map varies surface smoothness across the object (think polished metal with scratched areas), and the ambient occlusion (AO) map enhances depth perception by darkening crevices, corners, and contact points where ambient light would naturally be occluded. The aoMapIntensity multiplier controls how pronounced these shadows are. Base roughness and metalness values multiply with their respective maps if present.
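
The "base value multiplies the map" rule can be illustrated numerically. The sampled map values below are hypothetical stand-ins (the real sampling happens per-pixel on the GPU):

```javascript
// Effective per-pixel roughness when both a base value and a map are present:
// the base roughness scales the sampled map value (both in 0..1).
function effectiveRoughness(baseRoughness, mapSample) {
  return baseRoughness * mapSample;
}

console.log(effectiveRoughness(0.3, 1.0)); // 0.3 where the map is white
console.log(effectiveRoughness(0.3, 0.5)); // 0.15 where the map is mid-gray
```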

Environment mapping creates realistic reflections by using a cubemap - six square textures representing all directions from a point in space. The images must follow a specific order (positive/negative X, Y, Z faces). When applied as envMap, the material reflects the surrounding environment, with appearance controlled by metalness (how mirror-like) and roughness (how blurred the reflections). This is essential for metals, glass, water, and any reflective surface.

Skyboxes use cubemaps as scene backgrounds, creating immersive 360-degree environments that surround the entire scene. When combining skybox and environment mapping, objects naturally reflect the same environment visible in the background, creating visual coherence where metallic or glossy objects mirror their surroundings convincingly. This technique is fundamental to photorealistic rendering and is widely used in games, product visualization, and architectural visualization.

Example 6: Dynamic Canvas Textures

// CanvasTexture for procedural/dynamic content
const textureCanvas = document.createElement('canvas');
const textureContext = textureCanvas.getContext('2d');
textureCanvas.width = 500;
textureCanvas.height = 500;

const canvasTexture = new THREE.CanvasTexture(textureCanvas);

const material = new THREE.MeshBasicMaterial({
  map: canvasTexture
});

// Animation loop
function draw() {
  const time = clock.getElapsedTime();

  // Update canvas content
  material.map.needsUpdate = true; // Critical: tell Three.js to update texture

  textureContext.fillStyle = 'blue';
  textureContext.fillRect(0, 0, 500, 500);

  textureContext.fillStyle = 'white';
  textureContext.fillRect(time * 50, 100, 50, 50); // Animated square

  textureContext.font = 'bold 45px sans-serif';
  textureContext.fillText('I am a canvas texture', 10, 400);

  renderer.render(scene, camera);
  requestAnimationFrame(draw);
}

CanvasTexture bridges HTML5 Canvas API with Three.js textures, enabling dynamic, procedurally generated content on 3D surfaces. By creating a 2D canvas and wrapping it in a CanvasTexture, you can use all standard canvas drawing operations (shapes, gradients, text, images) and have them appear on 3D geometry. The key requirement is setting material.map.needsUpdate = true in the animation loop whenever canvas content changes, telling Three.js to upload the updated pixel data to the GPU.

This technique unlocks numerous applications: displaying live data visualizations on 3D dashboards, creating animated UI elements in 3D space, rendering real-time text or user-generated content, implementing procedural textures without image files, building interactive surfaces that respond to user input, generating noise or particle effects, and even displaying video content. It's particularly powerful for applications requiring dynamic content like games with HUD elements, educational visualizations with changing labels, or interactive art installations. The performance trade-off is the CPU cost of canvas operations and GPU texture uploads each frame, so keep canvas resolution reasonable for real-time applications.

6. [Three.js] Camera Control: Mastering Interactive Camera Systems in Three.js

· 6 min read
Sangmin SHIM
Fullstack Developer

1. Digest

Interactive camera control is essential for creating engaging 3D web experiences, and Three.js provides a comprehensive suite of control systems to handle various interaction patterns. This module explores seven different camera control schemes, each designed for specific use cases - from the versatile OrbitControls for examining objects from all angles, to PointerLockControls for creating first-person game experiences similar to Minecraft.

The examples progress from simple orbital camera movement to advanced implementations combining keyboard input with pointer locking. You'll learn how to implement TrackballControls for unrestricted rotation, FlyControls for spacecraft-like navigation, FirstPersonControls for ground-based exploration, and DragControls for direct object manipulation. Each control system has unique characteristics: some require delta time updates, others work with damping for smooth motion, and some lock the mouse pointer for immersive experiences.

By working through these implementations, you'll understand how to choose the right control scheme for your project, configure control parameters like zoom limits and rotation constraints, handle control events, and create custom keyboard controllers. The module demonstrates practical patterns like generating random colored boxes for testing camera movement, proper control initialization with renderer elements, and responsive camera updates on window resize.

2. What is the purpose

The purpose of this module is to equip developers with the knowledge to implement professional-grade camera interactions in Three.js applications. Understanding camera controls is crucial because they directly impact user experience - the difference between a frustrating interface and an intuitive one often comes down to choosing and configuring the right control scheme.

You'll learn to select appropriate control systems based on your application's needs: OrbitControls for product viewers and 3D model inspection, PointerLockControls for immersive games and virtual tours, DragControls for interactive object placement, and FlyControls for architectural walkthroughs. The module teaches you how to customize control behavior through parameters like movement speed, rotation constraints, and damping effects.

Beyond basic implementation, you'll gain practical skills in event handling (detecting when controls lock/unlock or drag starts/ends), creating custom keyboard input handlers, and integrating controls with animation loops. These skills are foundational for building any interactive 3D web application, from e-commerce product configurators to educational simulations and browser-based games.

3. Some code block and its explanation

Example 1: OrbitControls with Damping and Configuration

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';

// Controls setup
const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true; // Smooth, inertial movement
// controls.enableZoom = false; // Disable zoom
// controls.maxDistance = 20; // Maximum zoom out distance
// controls.minPolarAngle = Math.PI / 4; // Vertical angle limit
// controls.target.set(2, 0, 2); // Set camera focus point
// controls.autoRotate = true; // Auto-rotate around target
// controls.autoRotateSpeed = 1; // Rotation speed

// In animation loop
function draw() {
  renderer.render(scene, camera);
  controls.update(); // Required when damping is enabled
  window.requestAnimationFrame(draw);
}

OrbitControls is the most commonly used control system in Three.js, perfect for examining objects from all angles. The enableDamping property adds smooth, natural-feeling motion with momentum. The control requires calling update() in the animation loop when damping is enabled. You can constrain camera movement with properties like maxDistance, minPolarAngle, and target, making it ideal for product viewers where you want to limit how users can view an object. The auto-rotate feature is particularly useful for showcase applications.
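The damping behavior can be understood without Three.js at all: conceptually, each update() call moves the current value a fixed fraction of the way toward the target set by the last mouse drag. Below is a minimal sketch of that idea in plain JavaScript (dampStep and the frame count are illustrative, not the actual OrbitControls source, though 0.05 matches the default dampingFactor):

```javascript
// Each frame closes a fixed fraction of the remaining gap between the
// current camera angle and the target angle, producing the "ease out"
// feel of enableDamping.
function dampStep(current, target, dampingFactor) {
  return current + (target - current) * dampingFactor;
}

let angle = 0;
const target = 1; // radians, as if set by a mouse drag
for (let frame = 0; frame < 60; frame++) {
  angle = dampStep(angle, target, 0.05); // 0.05 is the OrbitControls default
}
// after ~60 frames the angle has closed most of the gap but never quite
// reaches the target, which is why update() must keep running every frame
```

This also explains why controls.update() belongs in the animation loop: the interpolation happens per frame, not per mouse event.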

Example 2: PointerLockControls with Keyboard Movement

import { PointerLockControls } from "three/examples/jsm/controls/PointerLockControls.js";

// Controls setup
const controls = new PointerLockControls(camera, renderer.domElement);

controls.domElement.addEventListener("click", function () {
  controls.lock(); // Lock pointer on click (ESC to exit)
});

controls.addEventListener("lock", function () {
  console.log("Pointer locked");
});

controls.addEventListener("unlock", function () {
  console.log("Pointer unlocked");
});

// Custom keyboard controller
const keyController = new KeyController();

function walk() {
  if (keyController.keys["KeyW"] || keyController.keys["ArrowUp"]) {
    controls.moveForward(0.02);
  }
  if (keyController.keys["KeyS"] || keyController.keys["ArrowDown"]) {
    controls.moveForward(-0.02);
  }
  if (keyController.keys["KeyA"] || keyController.keys["ArrowLeft"]) {
    controls.moveRight(-0.02);
  }
  if (keyController.keys["KeyD"] || keyController.keys["ArrowRight"]) {
    controls.moveRight(0.02);
  }
}

class KeyController {
  constructor() {
    this.keys = {}; // keyed by e.code, so a plain object rather than an array
    window.addEventListener('keydown', (e) => {
      this.keys[e.code] = true;
    });
    window.addEventListener('keyup', (e) => {
      delete this.keys[e.code];
    });
  }
}

PointerLockControls creates first-person camera experiences similar to popular games like Minecraft. When activated, it hides the cursor and captures all mouse movement for camera rotation. The control system provides moveForward() and moveRight() methods that move the camera relative to its current direction, perfect for WASD keyboard controls. The custom KeyController class tracks which keys are currently pressed, allowing smooth continuous movement rather than discrete steps. This pattern is essential for creating immersive 3D games and virtual tours in the browser.
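The key-state pattern can be exercised without a browser by driving it with synthetic events. A small sketch follows (MiniKeyController and its keydown/keyup methods are stand-ins for the real window listeners):

```javascript
// Same state model as KeyController above, but fed directly instead of
// through window events: an object keyed by e.code, where the presence
// of a key means "currently held down".
class MiniKeyController {
  constructor() {
    this.keys = {};
  }
  keydown(code) { this.keys[code] = true; }
  keyup(code) { delete this.keys[code]; }
}

const kc = new MiniKeyController();
kc.keydown('KeyW');
kc.keydown('KeyA');
// both keys register at once, which is what makes diagonal movement work
kc.keyup('KeyW');
// releasing W leaves A held, so the player keeps strafing left
```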

Example 3: DragControls for Direct Object Manipulation

import { DragControls } from "three/examples/jsm/controls/DragControls.js";

// Create array of meshes to be draggable
const geometry = new THREE.BoxGeometry(1, 1, 1);
const meshes = [];
for (let i = 0; i < 20; i++) {
  const material = new THREE.MeshStandardMaterial({
    color: `rgb(
      ${50 + Math.floor(Math.random() * 205)},
      ${50 + Math.floor(Math.random() * 205)},
      ${50 + Math.floor(Math.random() * 205)}
    )`
  });
  const mesh = new THREE.Mesh(geometry, material);
  mesh.position.x = (Math.random() - 0.5) * 5;
  mesh.position.y = (Math.random() - 0.5) * 5;
  mesh.position.z = (Math.random() - 0.5) * 5;
  scene.add(mesh);
  meshes.push(mesh);
}

// Controls setup with event listeners
const controls = new DragControls(meshes, camera, renderer.domElement);

controls.addEventListener("dragstart", function (event) {
  console.log("Drag Start", event);
});

controls.addEventListener("dragend", function (event) {
  console.log("Drag End", event);
});

DragControls enables direct manipulation of 3D objects through mouse interaction, providing an intuitive way for users to reposition objects in a scene. Unlike other control systems that move the camera, DragControls moves the objects themselves. You pass an array of meshes that should be draggable when initializing the controls. The control system automatically handles raycasting to detect which object is under the cursor and translates mouse movement into 3D position changes. Event listeners for dragstart and dragend allow you to respond to user interactions, making it possible to save positions, trigger animations, or validate placements. This is invaluable for applications like furniture arrangers, scene editors, or interactive educational tools.
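The raycasting DragControls performs internally starts from the same normalized device coordinate (NDC) conversion mentioned in the digest. The mapping is plain arithmetic, sketched here outside Three.js (toNDC is an illustrative helper, not a Three.js API):

```javascript
// Pixel coordinates -> NDC: both axes are rescaled to [-1, 1], and Y is
// negated because screen Y grows downward while NDC Y grows upward.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -((clientY / height) * 2 - 1),
  };
}

const center = toNDC(400, 300, 800, 600); // center of an 800x600 canvas
const corner = toNDC(0, 0, 800, 600);     // top-left corner
// the center maps to (0, 0); the top-left corner maps to (-1, 1)
```

These NDC values are what you would feed to Raycaster.setFromCamera() when implementing click detection yourself.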

Example 4: Control Systems Requiring Delta Time

import { FlyControls } from "three/examples/jsm/controls/FlyControls.js";

const controls = new FlyControls(camera, renderer.domElement);
// controls.rollSpeed = 0.05;
// controls.movementSpeed = 5;
// controls.dragToLook = true;

const clock = new THREE.Clock();

function draw() {
  const delta = clock.getDelta();
  renderer.render(scene, camera);
  controls.update(delta); // Must pass delta time
  window.requestAnimationFrame(draw);
}

FlyControls and FirstPersonControls differ from OrbitControls in a critical way: they require delta time to be passed to their update() method. Delta time represents the time elapsed since the last frame, ensuring that movement speed remains consistent regardless of frame rate. Without delta time, objects would move faster on high-refresh-rate monitors and slower on lower-end devices. FlyControls simulates spacecraft-like movement with six degrees of freedom, allowing rotation on all axes including roll. This control scheme is perfect for space simulators, drone cameras, or any scenario where the camera needs unrestricted movement through 3D space.
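The frame-rate argument can be verified with a few lines of plain JavaScript: covering one simulated second at different frame rates yields the same distance when movement is scaled by delta (simulate is an illustrative helper):

```javascript
// Moving at speed * delta per frame: the per-frame step shrinks as the
// frame rate rises, so total distance over a fixed duration stays constant.
function simulate(frames, durationSeconds, speed) {
  const delta = durationSeconds / frames; // constant delta for simplicity
  let position = 0;
  for (let i = 0; i < frames; i++) {
    position += speed * delta;
  }
  return position;
}

const at30fps = simulate(30, 1, 5);
const at144fps = simulate(144, 1, 5);
// both cover 5 units per second (up to float rounding); without the
// delta factor, the 144 Hz run would travel nearly five times as far
```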

5. [Three.js] Geometries and Vertex Manipulation

· 4 min read
Sangmin SHIM
Fullstack Developer

1. Digest

Geometry forms the foundation of all 3D shapes in Three.js, defining the structure and form of meshes through vertices, faces, and mathematical parameters. This module explores both built-in geometry types and advanced vertex-level manipulation techniques to create dynamic, animated 3D forms.

Starting with basic geometries like BoxGeometry and SphereGeometry, you'll learn how to leverage Three.js's extensive geometry library alongside OrbitControls for interactive 3D navigation. The module then advances into vertex manipulation, demonstrating how to access and modify the raw position data that defines geometry shapes. Through practical examples, you'll create organic, wave-like animations by manipulating individual vertex positions in real-time, using techniques like sine wave mathematics combined with randomization to produce natural-looking deformations. This approach opens up possibilities for creating procedural animations, terrain generation, and morphing effects that go beyond static 3D models.

2. What is the purpose

The purpose of this chapter is to build a solid understanding of Three.js geometry fundamentals and vertex manipulation techniques. You'll learn how to work with built-in geometry primitives, understand the structure of geometry data at the vertex level, and create dynamic animations through direct manipulation of vertex positions.

Key learning objectives include:

  • Understanding Three.js's various geometry types and their parameters
  • Implementing OrbitControls for interactive 3D scene navigation
  • Accessing and modifying geometry vertex position arrays
  • Creating organic, procedural animations through mathematical transformations
  • Understanding the relationship between geometry attributes and rendering updates

These skills are essential for creating interactive 3D visualizations, animated effects, procedural generation systems, and any application requiring dynamic mesh manipulation beyond static models.

3. Some code block and its explanation

Example 1: Basic Geometry Setup with OrbitControls

import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js'

// Mesh with basic geometry
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({
  color: 'hotpink',
  side: THREE.DoubleSide,
});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

// Controls for camera navigation
const controls = new OrbitControls(camera, renderer.domElement);

This example demonstrates the fundamental setup for working with geometries in Three.js. The BoxGeometry creates a cube with dimensions 1x1x1, paired with MeshStandardMaterial that responds to lighting. The DoubleSide parameter ensures both front and back faces are rendered. The OrbitControls provides intuitive mouse-based camera control, allowing users to rotate, zoom, and pan around the 3D scene - essential for inspecting geometry from all angles during development.

Example 2: Accessing Vertex Position Data

const geometry = new THREE.SphereGeometry(5, 64, 64);
const positionArray = geometry.attributes.position.array;
const randomArray = [];

for (let i = 0; i < positionArray.length; i += 3) {
  randomArray[i] = (Math.random() - 0.5) * 0.2;
  randomArray[i + 1] = (Math.random() - 0.5) * 0.2;
  randomArray[i + 2] = (Math.random() - 0.5) * 0.2;
}

This code reveals how to access the underlying vertex data of any Three.js geometry. The geometry.attributes.position.array contains all vertex positions as a flat array where every three consecutive values represent x, y, z coordinates of a single vertex. By iterating through this array in steps of 3 (i+=3), we process each vertex individually. The randomArray stores random offsets for each vertex, which will be used to create variation in animation - this technique is fundamental for procedural effects and ensures each vertex moves uniquely.
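The flat layout can be made concrete with a tiny hand-built array, independent of Three.js (vertexAt is an illustrative helper; BufferGeometry stores positions in exactly this kind of Float32Array):

```javascript
// Three floats per vertex: [x0, y0, z0, x1, y1, z1, ...]
const positions = new Float32Array([
  0, 0, 0, // vertex 0
  1, 0, 0, // vertex 1
  0, 1, 0, // vertex 2
]);

// Reading vertex `index` means reading three consecutive floats
// starting at offset index * 3.
function vertexAt(array, index) {
  return {
    x: array[index * 3],
    y: array[index * 3 + 1],
    z: array[index * 3 + 2],
  };
}

const v1 = vertexAt(positions, 1); // second vertex: { x: 1, y: 0, z: 0 }
```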

Example 3: Real-time Vertex Animation

function draw() {
  const time = clock.getElapsedTime();

  for (let i = 0; i < positionArray.length; i += 3) {
    positionArray[i] += Math.sin(time * 3 + randomArray[i] * 50) * 0.002;
    positionArray[i + 1] += Math.sin(time * 3 + randomArray[i + 1] * 50) * 0.002;
    positionArray[i + 2] += Math.sin(time * 3 + randomArray[i + 2] * 50) * 0.002;
  }

  geometry.attributes.position.needsUpdate = true;
  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

This animation loop creates organic, wave-like motion by modifying vertex positions every frame. The Math.sin() function produces smooth oscillating values, with time*3 controlling the wave speed and randomArray[i]*50 creating phase shifts so each vertex oscillates at slightly different times. The multiplication by 0.002 keeps the movement subtle and realistic. Critically, geometry.attributes.position.needsUpdate = true must be set after modifying vertex data to signal Three.js that the GPU buffer needs updating - without this, changes won't be visible. This technique demonstrates the power of vertex-level manipulation for creating effects like water surfaces, terrain deformation, or morphing animations.
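The phase-shift trick is worth isolating: two vertices with different random offsets sample the same sine wave at different points, so they never move in lockstep. A minimal sketch (offsetAt mirrors the expression used in the loop above):

```javascript
// Same instant in time, different random offsets -> different displacements.
function offsetAt(time, randomOffset) {
  return Math.sin(time * 3 + randomOffset * 50) * 0.002;
}

const a = offsetAt(1.0, 0.02);  // one vertex's per-frame offset
const b = offsetAt(1.0, -0.08); // another vertex's per-frame offset
// |a| and |b| never exceed 0.002, but a !== b: each vertex follows its
// own phase of the wave, which is what makes the motion look organic
```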

Example 4: Enhanced Material Properties for Geometry

const material = new THREE.MeshStandardMaterial({
  color: 'orangered',
  side: THREE.DoubleSide,
  flatShading: true
});

Material properties significantly affect how geometry is rendered. The flatShading: true option creates a faceted, low-poly aesthetic by calculating lighting per face rather than per vertex, making individual triangles clearly visible. This is particularly effective when combined with vertex animation, as it emphasizes the geometric structure and creates a stylized visual effect. The MeshStandardMaterial ensures the geometry responds realistically to the ambient and directional lights in the scene, creating depth and dimensionality that makes 3D forms more readable and visually appealing.

4. [Three.js] 3D Object Transformations

· 5 min read
Sangmin SHIM
Fullstack Developer

1. Digest

Transforming 3D objects in space is fundamental to creating any interactive 3D scene. This module covers the three essential transformation properties that every Three.js developer must master: position, scale, and rotation. Through four progressive examples, you'll learn how to manipulate objects in 3D space using Vector3 coordinates, apply non-uniform scaling to stretch and squash geometry, and rotate objects around multiple axes with proper gimbal lock prevention.

The examples use dat.GUI for real-time parameter adjustment and AxesHelper for visual reference, making it easy to understand how transformations affect objects in 3D space. The final example demonstrates hierarchical transformations using Groups, creating a solar system model where the sun, earth, and moon each have their own orbital rotations. This teaches a crucial concept: child objects inherit transformations from their parent groups, enabling complex animations through simple rotation updates.

These transformation techniques form the foundation for any 3D interaction, from simple object movement to complex hierarchical animations. By the end of this module, you'll be comfortable manipulating objects in 3D space and creating nested transformation hierarchies for realistic multi-object systems.

2. What is the purpose

The purpose of this module is to teach fundamental 3D transformation concepts that are essential for any Three.js project. You'll gain hands-on experience with:

  • Position control: Moving objects anywhere in 3D space using the x, y, z coordinate system and Three.js Vector3 methods
  • Scale manipulation: Resizing objects uniformly or non-uniformly along different axes to create stretched or squashed effects
  • Rotation techniques: Rotating objects around axes using radians, including proper rotation order management to avoid gimbal lock issues
  • Hierarchical transformations: Building complex object relationships using Groups where parent transformations cascade to children

These skills are directly applicable to creating animated scenes, building interactive 3D applications, and organizing complex 3D objects. Understanding transformations is critical for camera control, object animations, physics simulations, and any scenario where objects need to move, rotate, or change size in response to user input or automated behaviors.

3. Some code block and its explanation

Example 1: Position Control

function draw() {
  const delta = clock.getDelta();

  mesh.position.set(-1, 4, 0);

  // Useful Vector3 methods:
  // mesh.position.length() - distance from origin
  // mesh.position.distanceTo(new THREE.Vector3(1, 1, 1)) - distance to another point

  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

The position property uses Three.js's Vector3 class to control object location in 3D space. You can set position using set(x, y, z) for absolute positioning, or modify individual axes like mesh.position.x += 0.1. The Vector3 class provides useful utility methods like length() to get distance from the origin and distanceTo() to measure distance between points, which are invaluable for collision detection and proximity-based interactions.
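What those two methods compute is ordinary Euclidean distance; written out in plain JavaScript (a sketch of the underlying math, not the Three.js source):

```javascript
// length(): the magnitude of a vector, i.e. its distance from the origin.
function length(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// distanceTo(): the length of the difference between two points.
function distanceTo(a, b) {
  return length({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
}

const fromOrigin = length({ x: 3, y: 4, z: 0 });                        // 5
const between = distanceTo({ x: 1, y: 1, z: 1 }, { x: 1, y: 1, z: 6 }); // 5
```

A proximity check like "trigger when the player is within 5 units" is just a comparison against distanceTo().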

Example 2: Non-uniform Scaling

function draw() {
  const delta = clock.getDelta();

  mesh.scale.set(3, 1.3, 1);

  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

The scale property also uses Vector3, allowing independent scaling along each axis. Here, the box is stretched 3 times wider on the x-axis and 1.3 times taller on the y-axis while maintaining its original z-axis depth. This creates non-uniform scaling, useful for creating squash-and-stretch animation effects or adapting objects to fit specific spaces. Uniform scaling would use the same value for all three axes like scale.set(2, 2, 2).

Example 3: Rotation with Proper Axis Ordering

// Set rotation order before applying rotations
mesh.rotation.reorder('YXZ');
mesh.rotation.x = THREE.MathUtils.degToRad(50);
mesh.rotation.z = THREE.MathUtils.degToRad(20);

function draw() {
  const delta = clock.getDelta();

  // Continuous rotation example (commented out):
  // mesh.rotation.y += delta;

  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

Rotation in Three.js uses radians, not degrees, so THREE.MathUtils.degToRad() converts familiar degree values. The reorder() method is critical for controlling how rotations are applied - the default 'XYZ' order can cause gimbal lock in certain orientations. By setting the order to 'YXZ', you ensure rotations are applied in a sequence that prevents mathematical gimbal lock issues. For animations, adding delta (time elapsed since last frame) to rotation properties creates smooth, frame-rate-independent rotation.
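The degree-to-radian conversion itself is one line of arithmetic; spelling it out makes the radians requirement concrete (this mirrors what THREE.MathUtils.degToRad computes):

```javascript
// degrees * PI / 180: a half turn (180°) is PI radians.
function degToRad(degrees) {
  return (degrees * Math.PI) / 180;
}

const halfTurn = degToRad(180);   // Math.PI
const quarterTurn = degToRad(90); // Math.PI / 2
```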

Example 4: Hierarchical Solar System with Groups

const group1 = new THREE.Group();
const sun = new THREE.Mesh(geometry, material);

const group2 = new THREE.Group();
const earth = sun.clone();
earth.scale.set(0.3, 0.3, 0.3);
group2.position.x = 2;

const group3 = new THREE.Group();
const moon = earth.clone();
moon.scale.set(0.15, 0.15, 0.15);
moon.position.x = 0.7;

group3.add(moon);
group2.add(earth, group3);
group1.add(sun, group2);
scene.add(group1);

function draw() {
  const delta = clock.getDelta();
  group1.rotation.y += delta; // Sun rotates
  group2.rotation.y += delta; // Earth orbits sun and rotates
  group3.rotation.y += delta; // Moon orbits earth and rotates
  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

Groups are containers that allow hierarchical transformations - when a parent group transforms, all children transform with it. In this solar system example, rotating group1 makes the entire system spin. Rotating group2 makes the earth orbit around the sun while also rotating on its own axis. Rotating group3 makes the moon orbit the earth. The positioning is clever: earth is positioned 2 units from group2's origin, and moon is 0.7 units from group3's origin. When the parent groups rotate, these offset positions create orbital paths. This demonstrates the power of transformation hierarchy: complex multi-object animations can be created with simple rotation updates on parent groups.
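Why a rotating parent creates an orbit comes down to how world positions are computed: the child's local offset is rotated by the parent's angle. Here is a minimal 2D sketch of that rotation (worldPosition is a hypothetical helper; Three.js does the full 4x4 matrix version, and its Y-axis sign convention differs from this generic plane rotation):

```javascript
// Rotate the child's local (x, z) offset by the parent's angle to get its
// world position: a plain 2D rotation matrix applied by hand.
function worldPosition(parentAngle, localX, localZ) {
  return {
    x: Math.cos(parentAngle) * localX - Math.sin(parentAngle) * localZ,
    z: Math.sin(parentAngle) * localX + Math.cos(parentAngle) * localZ,
  };
}

// the earth sits 2 units out on the x axis; after the parent group turns
// a quarter turn, the same local offset has swung around onto the z axis
const earthWorld = worldPosition(Math.PI / 2, 2, 0);
```

The child never moves in its own coordinate system; the changing parent angle alone traces out the circular orbit.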

3. [Three.js] Essential Development Tools

· 6 min read
Sangmin SHIM
Fullstack Developer

1. Digest

Professional Three.js development requires more than just rendering 3D graphics—it demands the right tools for debugging, performance monitoring, and real-time experimentation. The utilities covered here transform the development workflow by providing immediate visual feedback and precise control over scene parameters. AxesHelper and GridHelper serve as spatial reference systems, making it dramatically easier to understand object positioning and orientation in 3D space. These visual aids eliminate the guesswork when setting up camera angles and placing objects, especially when working with complex scenes.

Performance optimization is crucial for delivering smooth 3D experiences, and Stats.js provides real-time monitoring of frame rates and rendering performance. By displaying FPS (frames per second), frame time, and memory usage directly in the viewport, developers can immediately identify performance bottlenecks and verify that optimizations are working as intended. This instant feedback loop is invaluable when testing animations, complex geometries, or shader effects across different devices.

Interactive parameter tweaking takes center stage with dat.GUI, a lightweight controller library that creates dynamic control panels for adjusting scene properties in real-time. Instead of modifying code and refreshing the browser repeatedly to find the perfect camera position or light intensity, dat.GUI enables live adjustments through intuitive sliders and inputs. This interactive approach dramatically speeds up the creative process, allowing developers and designers to experiment freely and find optimal values through direct manipulation rather than trial-and-error coding.

2. What is the purpose

The purpose of this module is to equip developers with essential debugging and experimentation tools that streamline Three.js development. You'll learn to set up visual reference systems that clarify spatial relationships in 3D scenes, implement performance monitoring to maintain smooth frame rates, and create interactive control panels for rapid prototyping and parameter tuning.

By the end of this module, you'll be able to integrate AxesHelper and GridHelper to provide visual spatial context, monitor real-time performance metrics with Stats.js to identify rendering bottlenecks, and leverage dat.GUI to create interactive controls for dynamic scene manipulation. These skills are fundamental to professional Three.js development, enabling faster iteration cycles, more effective debugging, and better performance optimization.

The practical applications include setting up development environments for 3D projects, creating interactive demos and prototypes for client presentations, conducting performance profiling for optimization decisions, and building configuration interfaces for real-time parameter adjustments in tools and applications.

3. Some code block and its explanation

Example 1: Spatial Reference with AxesHelper and GridHelper

import * as THREE from 'three';

// AxesHelper - displays X, Y, Z axes
const axesHelper = new THREE.AxesHelper(3);
scene.add(axesHelper);

// GridHelper - creates a ground plane grid
const gridHelper = new THREE.GridHelper(6, 13);
scene.add(gridHelper);

// Position mesh relative to visible references
const mesh = new THREE.Mesh(geometry, material);
mesh.position.x = 2;
mesh.position.z = 1;
scene.add(mesh);

camera.lookAt(mesh.position);

AxesHelper and GridHelper are indispensable debugging tools for understanding spatial relationships in 3D scenes. AxesHelper visualizes the coordinate system with colored lines representing the three axes: red for X, green for Y, and blue for Z. The parameter (3) defines the length of each axis line. GridHelper creates a reference grid on the XZ plane with a specified size (6 units) and number of divisions (13), providing a ground plane for visual reference. Together, these helpers make it immediately clear where objects are positioned and how they relate to the world coordinate system—essential when debugging camera positions, object placement, or transformations. Without these visual aids, working in 3D space often feels like navigating blindfolded.

Example 2: Performance Monitoring with Stats.js

import Stats from 'stats.js';

// Create and attach stats monitor
const stats = new Stats();
document.body.appendChild(stats.dom);

function draw() {
  const time = clock.getElapsedTime();

  // Update stats at the beginning of each frame
  stats.update();

  mesh.rotation.y = time;
  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

draw();

Stats.js provides real-time performance monitoring essential for optimizing Three.js applications. The stats.dom element is a small panel that displays FPS (frames per second), frame rendering time in milliseconds, and optionally memory usage. By calling stats.update() at the start of each animation frame, you get immediate feedback on rendering performance. This is crucial when adding complex geometries, implementing shader effects, or testing on different devices. If FPS drops below 60, you know immediately that optimization is needed. The stats panel also helps verify that frame-rate-independent animations (using delta time) are working correctly across different refresh rates. Professional Three.js developers keep Stats.js integrated during development to catch performance issues early.

Example 3: Interactive Parameter Control with dat.GUI

import dat from "dat.gui";

const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

// Create GUI controller panel
const gui = new dat.GUI();

// Add controls for mesh position
gui.add(mesh.position, 'y')
  .min(-5)
  .max(10)
  .step(0.02)
  .name('Mesh Y Position');

gui.add(mesh.position, 'z')
  .min(-10)
  .max(10)
  .step(0.02)
  .name('Mesh Z Position');

// Add camera control
gui.add(camera.position, 'x')
  .min(-10)
  .max(10)
  .step(0.01)
  .name('Camera X Position');

dat.GUI revolutionizes the development workflow by enabling real-time parameter adjustments through an interactive control panel. Instead of hardcoding values like mesh.position.y = 2, then reloading to see the result, dat.GUI creates sliders that modify properties live. The .add() method binds to object properties directly—here linking to mesh.position.y, mesh.position.z, and camera.position.x. The fluent API allows chaining constraints: .min() and .max() set bounds, .step() defines increment precision, and .name() provides readable labels. As you drag the sliders, Three.js objects update immediately in the rendering loop. This is invaluable for finding optimal camera angles, light positions, animation timing, or material properties. dat.GUI transforms what would be a tedious code-reload-test cycle into fluid, creative exploration.
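The binding idea behind gui.add(obj, 'prop') is simply a controller that holds a reference to the live object plus the property name, with each chained method returning this. A minimal sketch (MiniController is illustrative, not dat.GUI source):

```javascript
// Holds object + property name; writing through the controller mutates
// the live object, which is why the scene updates as you drag a slider.
class MiniController {
  constructor(object, property) {
    this.object = object;
    this.property = property;
  }
  min(v) { this._min = v; return this; } // returning `this` enables chaining
  max(v) { this._max = v; return this; }
  setValue(v) {
    if (this._min !== undefined) v = Math.max(v, this._min);
    if (this._max !== undefined) v = Math.min(v, this._max);
    this.object[this.property] = v; // live write-through to the bound object
    return this;
  }
}

const position = { y: 0 };
new MiniController(position, 'y').min(-5).max(10).setValue(42);
// 42 is clamped to the max bound, so position.y is now 10
```

Because the controller writes straight into the bound object, no extra wiring is needed between the panel and the render loop.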

Example 4: Combined Workflow - The Complete Development Setup

import * as THREE from 'three';
import Stats from 'stats.js';
import dat from "dat.gui";

// Standard Three.js setup
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });

// Development aids
const axesHelper = new THREE.AxesHelper(3);
scene.add(axesHelper);

const gridHelper = new THREE.GridHelper(6, 13);
scene.add(gridHelper);

const stats = new Stats();
document.body.appendChild(stats.dom);

const gui = new dat.GUI();
gui.add(mesh.position, 'y').min(-5).max(10).step(0.02);
gui.add(camera.position, 'x').min(-10).max(10).step(0.01);

function draw() {
  stats.update();
  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

This combined setup represents a professional Three.js development environment. The spatial helpers (AxesHelper and GridHelper) provide visual context, Stats.js monitors performance in real-time, and dat.GUI enables interactive experimentation—all working together seamlessly. This is the typical setup developers use during the prototyping and development phase. Once development is complete and parameters are finalized, these tools can be easily removed or disabled for production by commenting out the relevant sections. Many developers keep these utilities in conditional blocks (e.g., if (process.env.NODE_ENV === 'development')) so they're automatically excluded from production builds while remaining available during development. This workflow represents industry best practices for efficient Three.js development.

2. [Three.js] Fundamentals

· 5 min read
Sangmin SHIM
Fullstack Developer

1. Digest

Building a solid foundation in Three.js starts with understanding the core components that make 3D web graphics possible. The fundamentals covered here include setting up the essential rendering pipeline with WebGLRenderer, Scene, and Camera objects, along with creating and manipulating 3D meshes. You'll learn how to handle responsive design by adapting your 3D scenes to browser window resizing, optimizing rendering quality with pixel ratio settings, and managing background colors and transparency.

Animation brings static 3D scenes to life, and you'll explore multiple approaches to achieve smooth, frame-rate-independent animations. From basic rotation and movement using requestAnimationFrame to implementing time-based animations with THREE.Clock, you'll understand how to create consistent motion across different devices. The chapter also introduces advanced animation techniques using GSAP (GreenSock Animation Platform) for complex, eased animations with features like yoyo effects and infinite repeats.

Visual depth and realism are enhanced through lighting and atmospheric effects. You'll work with DirectionalLight to illuminate your 3D objects and implement fog effects to create depth perception. The chapter demonstrates practical patterns like creating multiple objects using functional array methods (Array.from with map), managing collections of meshes, and applying transformations to create dynamic, multi-object scenes.

2. What is the purpose

The purpose of this module is to equip developers with essential Three.js skills needed for building interactive 3D web applications. You'll gain hands-on experience with the fundamental rendering loop, understand camera types (PerspectiveCamera vs OrthographicCamera), and master the relationship between geometry, materials, and meshes.
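To make the PerspectiveCamera vs OrthographicCamera distinction concrete: a perspective camera takes a field of view and an aspect ratio, while an orthographic camera takes explicit frustum planes. The helper below, which derives symmetric orthographic bounds from an aspect ratio, is an illustrative sketch (`orthoBounds` is not a Three.js utility), showing the arguments you would pass to `new THREE.OrthographicCamera(left, right, top, bottom, near, far)`.

```javascript
// Derive symmetric left/right/top/bottom frustum planes from an aspect
// ratio, the common pattern when constructing an OrthographicCamera.
function orthoBounds(aspect, halfHeight = 1) {
  return {
    left: -aspect * halfHeight,
    right: aspect * halfHeight,
    top: halfHeight,
    bottom: -halfHeight,
  };
}

// A 16:9 viewport: the left/right span is wider than the top/bottom span.
const bounds = orthoBounds(16 / 9);
```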

By the end of this module, you'll be able to create responsive 3D scenes that adapt to different screen sizes and pixel densities, implement smooth animations using various timing techniques, and enhance visual quality with lighting and atmospheric effects. These skills form the foundation for more advanced Three.js development, including interactive controls, complex geometries, and realistic rendering.

The practical applications include creating animated product showcases, interactive 3D visualizations, educational demonstrations, and engaging web experiences that respond dynamically to user interactions and viewport changes.

3. Some code block and its explanation

Example 1: Responsive Rendering with Pixel Ratio Optimization

const canvas = document.querySelector("#three-canvas");
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(window.devicePixelRatio > 1 ? 2 : 1);

function setSize() {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.render(scene, camera);
}

window.addEventListener("resize", setSize);

This code establishes a responsive 3D rendering setup that adapts to browser window changes. The setPixelRatio method caps the pixel ratio at 2 for high-DPI displays, balancing visual quality with performance—preventing unnecessary rendering overhead on ultra-high-resolution screens. The resize event handler updates the camera's aspect ratio and projection matrix, ensuring the 3D scene maintains correct proportions when the window is resized. This pattern is essential for creating professional Three.js applications that work seamlessly across different devices and screen sizes.
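One optional refinement, not part of the original code, is debouncing the resize handler so the renderer and camera are updated once the user stops dragging the window edge rather than on every intermediate event. A minimal sketch:

```javascript
// Generic debounce: delays fn until `ms` milliseconds pass without another
// call. Each new call cancels the previously scheduled invocation.
function debounce(fn, ms) {
  let timerId;
  return (...args) => {
    clearTimeout(timerId);
    timerId = setTimeout(() => fn(...args), ms);
  };
}

// Usage sketch with the setSize handler from the example above:
// window.addEventListener("resize", debounce(setSize, 100));
```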

Example 2: Frame-Rate-Independent Animation with THREE.Clock

const clock = new THREE.Clock();

function draw() {
  const delta = clock.getDelta();

  mesh.rotation.y += delta;
  mesh.position.y += delta;

  if (mesh.position.y > 2) mesh.position.y = 0;

  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}

draw();

Using THREE.Clock.getDelta() ensures smooth, consistent animations regardless of frame rate variations. The delta value represents the time elapsed since the last frame in seconds, making animation speed independent of the device's refresh rate. This approach is superior to fixed increments because it maintains the same visual speed on both 60Hz and 120Hz displays. The pattern shown here—rotating and translating objects based on delta time—is a fundamental technique for creating professional-quality animations in Three.js.
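The frame-rate-independence claim is easy to verify with arithmetic. Over one second, a 60 Hz display produces 60 frames and a 120 Hz display produces 120; a fixed per-frame increment therefore doubles the motion, while a delta-based increment does not:

```javascript
// Total rotation after `frames` frames with a fixed per-frame step.
const fixedIncrement = (frames, step) => frames * step;

// Total rotation when each frame advances by its delta time (in seconds).
const deltaIncrement = (frames, delta) => frames * delta;

// One second of animation:
fixedIncrement(60, 0.01);     // ~0.6 rad at 60 Hz
fixedIncrement(120, 0.01);    // ~1.2 rad at 120 Hz, twice as fast
deltaIncrement(60, 1 / 60);   // ~1 rad at 60 Hz
deltaIncrement(120, 1 / 120); // ~1 rad at 120 Hz, identical speed
```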

Example 3: Creating Multiple Objects with Functional Patterns

const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: "#ff0000" });

const meshes = Array.from({ length: 10 }).map(() => {
  const mesh = new THREE.Mesh(geometry, material);
  mesh.position.x = Math.random() * 5 - 2.5;
  mesh.position.y = Math.random() * 5 - 2.5;
  scene.add(mesh);
  return mesh;
});

// Inside the animation loop; deltaTime is the time since the last frame
// in milliseconds (e.g. clock.getDelta() * 1000):
meshes.forEach((mesh) => {
  mesh.rotation.y += deltaTime * 0.001;
  mesh.position.y += deltaTime * 0.001;
  if (mesh.position.y > 5) mesh.position.y = -5;
});

This demonstrates a modern JavaScript approach to creating and managing multiple 3D objects. Using Array.from with map creates 10 box meshes with randomized positions in a single, readable expression. Storing references in the meshes array enables efficient batch operations during animation, where forEach applies transformations to all objects simultaneously. This pattern is particularly useful for particle systems, object pools, or any scenario requiring management of multiple similar objects. The random positioning creates visual variety while keeping the code concise and maintainable.
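The expression `Math.random() * 5 - 2.5` is a specific case of a general range helper, which is worth extracting once several ranges are in play. The name `randomInRange` is illustrative, not a Three.js utility:

```javascript
// Uniform random number in [min, max). Math.random() * 5 - 2.5 from the
// example above is equivalent to randomInRange(-2.5, 2.5).
function randomInRange(min, max) {
  return min + Math.random() * (max - min);
}

// Usage sketch for the mesh placement above:
// mesh.position.x = randomInRange(-2.5, 2.5);
// mesh.position.y = randomInRange(-2.5, 2.5);
```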

Example 4: Advanced Animation with GSAP Integration

import gsap from "gsap";

const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

gsap.to(mesh.position, {
  duration: 1,
  y: 2,
  z: 1,
  ease: "power3.inOut",
  repeat: -1,
  yoyo: true
});

function draw() {
  renderer.render(scene, camera);
  window.requestAnimationFrame(draw);
}
draw();

GSAP (GreenSock Animation Platform) provides powerful, professional-grade animation capabilities that complement Three.js. This code animates the mesh's position smoothly with easing functions (power3.inOut), creating natural-looking motion with acceleration and deceleration. The repeat: -1 creates an infinite loop, while yoyo: true makes the animation reverse direction, creating a ping-pong effect. GSAP handles all the interpolation and timing internally, so the render loop only needs to call renderer.render(). This separation of concerns—GSAP for animation logic, Three.js for rendering—results in cleaner code and more sophisticated animations than manual implementations.

1. [Three.js] Development Environment Setup

· 4 minute read
Sangmin SHIM
Fullstack Developer

1. Digest

Four distinct approaches to Three.js development environments showcase different workflows tailored for various development needs and preferences. CDN imports with modern import maps offer immediate Three.js access without build tools or complex setup, perfect for rapid prototyping and learning. Vite emerges as a modern build tool providing lightning-fast development with hot reload capabilities, while Webpack demonstrates traditional yet powerful bundling with comprehensive configuration for production-ready applications.

An advanced Webpack setup extends the configuration with modular JavaScript classes and enhanced project organization, showing enterprise-level development patterns. All approaches successfully create the same fundamental Three.js scene, a rotating cube, while demonstrating vastly different development workflows, from quick prototyping to sophisticated project structures. Essential patterns like proper canvas positioning, responsive design considerations, and the core Three.js rendering pipeline remain consistent across all setups, providing developers with a solid foundation regardless of their chosen development approach.

2. What is the purpose

Multiple pathways to Three.js development provide developers with comprehensive understanding of build tool trade-offs and development approach decisions. Learning encompasses setting up Three.js projects using modern import maps with CDN for rapid prototyping, configuring Vite for fast development with hot module replacement, implementing traditional Webpack bundling with Babel transpilation for broader browser support, and understanding advanced project structures with modular JavaScript classes.

This knowledge enables developers to make informed decisions about their development environment based on project requirements, team preferences, and deployment constraints. These foundational skills ensure students can create and run Three.js applications in their preferred development setup for all subsequent learning.

3. Some code block and its explanation

Example 1: CDN Setup with Import Maps

<script type="importmap">
{
  "imports": {
    "three": "https://cdn.jsdelivr.net/npm/three@<version>/build/three.module.js",
    "three/examples/jsm/": "https://cdn.jsdelivr.net/npm/three@<version>/examples/jsm/",
    "gsap": "https://cdn.skypack.dev/gsap",
    "cannon-es": "https://cdn.jsdelivr.net/npm/cannon-es@<version>/dist/cannon-es.js"
  }
}
</script>

This code demonstrates the modern way to use Three.js from a CDN without build tools. Import maps allow clean ES module imports while loading libraries directly from the CDN. This approach is perfect for learning, prototyping, and simple projects. It includes popular Three.js ecosystem libraries such as GSAP for animations and cannon-es for physics, making it immediately useful for advanced features.
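The import map only defines how bare specifiers resolve; a companion module script actually performs the imports. A minimal sketch (the OrbitControls import is one illustrative example of an `examples/jsm` module):

```html
<script type="module">
  // "three" and "three/examples/jsm/..." resolve through the import map above.
  import * as THREE from "three";
  import { OrbitControls } from "three/examples/jsm/controls/OrbitControls.js";

  const scene = new THREE.Scene();
  // ... camera, renderer, and meshes as in the other examples
</script>
```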

Example 2: Vite Configuration

export default {
  root: 'src/',
  publicDir: 'src/',
  base: './',
  build: {
    outDir: '../dist',
    emptyOutDir: true,
    sourcemap: true
  }
}

This Vite configuration shows how to set up a modern build tool for Three.js development. The custom root directory points to 'src/', the build output goes to '../dist', and source maps are enabled for debugging. This setup provides lightning-fast hot module replacement during development and optimized production builds. Vite's native ES module support makes it ideal for modern Three.js development.

Example 3: Webpack Production Optimization

optimization: {
  minimizer: webpackMode === 'production' ? [
    new TerserPlugin({
      terserOptions: {
        compress: {
          drop_console: true
        }
      }
    })
  ] : [],
  splitChunks: {
    chunks: 'all'
  }
}

This Webpack configuration demonstrates production-ready optimization for Three.js applications. The Terser plugin removes console.log statements in production builds, reducing file size and improving performance. Code splitting separates vendor libraries from application code, enabling better caching strategies. This setup is essential for deploying Three.js applications to production environments where performance and bundle size matter.

Example 4: Canvas Positioning CSS

body {
  margin: 0;
}
#three-canvas {
  position: absolute;
  left: 0;
  top: 0;
}

This CSS is crucial for Three.js applications as it ensures the canvas fills the entire viewport without scrollbars or margins. The body margin reset prevents default browser spacing, while absolute positioning places the canvas at the exact top-left corner. This pattern is consistent across all setup methods and represents a fundamental requirement for immersive 3D experiences that should fill the entire browser window.