STAGE 01
Scientific Source Layer — PSG
NASA / Radiative Transfer
ATMOS_FERA begins at NASA's Planetary Spectrum Generator (PSG) — the scientific engine that provides atmospheric composition, light scattering, and solar geometry for each planet. This data is not illustrative. It is physically computed. Every color in the installation is derived from real radiative transfer.
SCI · psg.engine · Planetary Spectrum Generator
NASA's online radiative-transfer engine. Takes a configuration file describing a planet's atmosphere, geometry, and observation conditions; returns spectral radiance tables as text output. Core of the entire pipeline.
Inputs
→ Planet config file (.txt)
→ Atmospheric composition
→ Geometry + solar position
→ Viewing conditions
Outputs
← Spectral radiance (.rad)
← Atmosphere radiance
← Transmittance tables
← Scattering components
PSG spectral outputs (rad file):
# wavelength(nm)  total_rad  atm_rad   surface   scatter   trans
380.0             2.341e-3   1.820e-3  5.21e-4   1.23e-3   0.734
450.0             4.102e-3   3.281e-3  8.21e-4   2.67e-3   0.812
550.0             5.741e-3   4.122e-3  1.62e-3   2.91e-3   0.851
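A minimal query sketch in Python, assuming PSG's public HTTP endpoint (https://psg.gsfc.nasa.gov/api.php) and a requests client; the form-field name follows PSG's published curl examples, and all filenames are illustrative:
# PSG query sketch (endpoint per PSG docs; filenames illustrative)
import requests

with open("planet_config.txt") as f:
    cfg = f.read()

# PSG's API accepts the config text as a form field named "file"
resp = requests.post("https://psg.gsfc.nasa.gov/api.php", data={"file": cfg})
resp.raise_for_status()

with open("planet_output.rad", "w") as out:
    out.write(resp.text)  # spectral radiance tables as plain text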
SCI · psg.geometry · Geometry Modes
PSG offers multiple viewing geometries that determine how the atmosphere is sampled. Each geometry produces a different perceptual regime — from surface-level atmospheric halos to far-field disc observations.
Available Modes
→ Looking up (surface observer)
→ Dome view (hemisphere)
→ Far-field disc (orbital)
→ Beam / column geometry
Per-planet selection
← Different geometry per planet
← Artistic + scientific rationale
← Determines visual regime
STAGE 02
Query / Config Layer — Templates
Forward Modeling
The workflow does not use live telescope data; it is forward modeling from PSG templates. Each planet begins from a PSG configuration file, then geometry and parameters are tuned per planet and per perceptual goal — while staying within the scientific logic of that atmosphere.
CFG · config.planet · Planet Template Selection
Each act begins with a planetary PSG configuration template. The template encodes the atmosphere's chemical composition, pressure profile, particle sizes, and solar distance.
Inputs
→ Planet selection
→ Reference PSG config
Outputs
← Base config .txt file
← Atmospheric constraints
CFG · config.params · Artistic Parameter Tuning
Within each planet's scientific constraints, geometry and parameters are modified toward the perceptual goal. This is the first artistic layer — the hinge point where science becomes a compositional instrument.
Parameters Modified
→ Geometry mode
→ Beam / viewing setup
→ Solar angle / season
→ Scattering depth
Constraint
← Stays near scientific logic
← Tuned for affect
← Planet-specific
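In practice this tuning amounts to rewriting fields in the planet's config text. A minimal sketch, assuming PSG's "<KEY>value" line convention; the specific keys and values below are illustrative placeholders, not verified PSG fields:
# Config tuning sketch ("<KEY>value" line convention; keys are placeholders)
import re
from pathlib import Path

def set_param(cfg: str, key: str, value: str) -> str:
    # replace an existing <KEY>value line in the config text
    return re.sub(rf"^<{key}>.*$", f"<{key}>{value}", cfg, flags=re.M)

cfg = Path("jupiter_base.txt").read_text()
cfg = set_param(cfg, "GEOMETRY", "LookingUp")  # placeholder key/value
cfg = set_param(cfg, "OBJECT-SEASON", "90")    # placeholder key/value
Path("jupiter_tuned.txt").write_text(cfg)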
STAGE 03
Spectral Extraction Layer — Parsing
Custom Code / Data Tables
PSG outputs are text tables of spectral and geometric values. A custom parsing layer extracts the specific channels needed for the render pipeline. This is where ATMOS_FERA stops being scientific visualization and starts being a compositional instrument.
DAT · extract.spectral · Spectral Value Extraction
Reads PSG .rad output files and extracts the channels relevant to visual rendering. Each channel corresponds to a physical phenomenon — total radiance, atmospheric scattering, surface contribution, and emissivity all carry different visual weight.
Values Extracted
→ Wavelength (nm)
→ Total radiance
→ Atmospheric spectral radiance
→ Single scattering contribution
→ Emissivity
Outputs
← Per-wavelength value arrays
← Scattering parameter tables
← Geometry lookup values
# PSG config → spectral extraction
PSG config → PSG API → .rad tables
↓
parse: wavelength, total_radiance, atm_radiance
single_scattering, emissivity, geometry_output
↓
structured arrays → color translation layer
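A minimal parsing sketch for these tables, assuming the whitespace-separated column order of the Stage 01 sample; real PSG outputs may carry additional header lines:
# .rad table parsing sketch (column order per the Stage 01 sample)
import numpy as np

rows = []
with open("planet_output.rad") as f:
    for line in f:
        if line.startswith("#") or not line.strip():
            continue  # skip comment/header lines
        rows.append([float(v) for v in line.split()])

data = np.asarray(rows)
wavelength = data[:, 0]  # nm
total_rad, atm_rad = data[:, 1], data[:, 2]
surface, scatter, trans = data[:, 3], data[:, 4], data[:, 5]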
DAT · extract.geometry · Geometry + Scattering Tables
Geometry-derived outputs encode the angular relationship between sun, atmosphere, and observer. Scattering tables encode how particles interact with each wavelength — the physics behind why Jupiter's limb glows or Venus hazes orange.
Geometry Data
→ Solar elevation angle
→ Path length through atmosphere
→ Viewing direction
Scattering Data
← Extinction per wavelength
← Single-scattering albedo ω
← Phase function asymmetry g
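The asymmetry parameter g drives the phase function referenced later in the shader layer. A sketch of the standard Henyey–Greenstein form:
# Henyey–Greenstein phase function — how asymmetry g shapes angular scattering
import numpy as np

def hg_phase(cos_theta, g):
    # g → 0: near-isotropic; g → 1: strongly forward-scattering
    g2 = g * g
    return (1.0 - g2) / (4.0 * np.pi * (1.0 + g2 - 2.0 * g * cos_theta) ** 1.5)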
STAGE 04
Color Translation Layer — CIE/XYZ → sRGB
Colorimetry / Render Prep
The most architecturally critical stage. PSG spectral radiance values are converted into displayable color using CIE color matching functions, then transformed into sRGB. The precise moment at which a number becomes light.
COL · color.cie · CIE Color Matching Functions
Spectral radiance values are weighted by the CIE 1931 standard observer color matching functions x̄(λ), ȳ(λ), z̄(λ). The result is a CIE XYZ tristimulus value — perceptually accurate color in a device-independent space.
Inputs
→ Spectral radiance I(λ)
→ CIE 1931 CMFs x̄ ȳ z̄
Outputs
← XYZ tristimulus values
← Perceptually accurate color
# Spectral → XYZ integration
X = ∫ I(λ) · x̄(λ) dλ # 380–780nm
Y = ∫ I(λ) · ȳ(λ) dλ # luminance
Z = ∫ I(λ) · z̄(λ) dλ
# XYZ → linear sRGB (D65 white point)
[ R ]   [  3.2406  -1.5372  -0.4986 ]   [ X ]
[ G ] = [ -0.9689   1.8758   0.0415 ] · [ Y ]
[ B ]   [  0.0557  -0.2040   1.0570 ]   [ Z ]
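A numeric sketch of the integration and matrix step above; the CMF table is assumed to come from any CIE 1931 2° observer dataset sampled at the same wavelengths:
# Spectral → XYZ → linear sRGB (CMF data source assumed)
import numpy as np

def spectral_to_linear_srgb(wl_nm, radiance, cmf):
    # cmf: shape (N, 3) columns xbar, ybar, zbar sampled at wl_nm
    radiance, cmf = np.asarray(radiance), np.asarray(cmf)
    XYZ = np.trapz(radiance[:, None] * cmf, wl_nm, axis=0)  # 380–780nm integral
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])  # sRGB / D65 matrix above
    return M @ XYZ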
COL · color.srgb · sRGB Image Values + Render Prep
After XYZ conversion, values are gamma-encoded to sRGB and prepared for the render pipeline. HDR radiance values are scaled per planet into color LUT textures that will drive the GLSL shader layer.
Inputs
→ XYZ tristimulus values
→ HDR radiance data
Outputs
← sRGB image values
← Color LUT textures
← Shader-ready buffers
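The gamma-encoding step is the standard sRGB transfer function (IEC 61966-2-1); the per-planet HDR scaling is assumed to happen before this point:
# sRGB gamma encoding (standard transfer function)
import numpy as np

def srgb_encode(c_linear):
    c = np.clip(c_linear, 0.0, 1.0)  # per-planet HDR scaling assumed upstream
    return np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * np.power(c, 1.0 / 2.4) - 0.055)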
STAGE 05
Temporalization Layer — Interpolation
Scientific Sampling + Cinematic Motion
The visual render is not a single static PSG frame. Atmospheric states are sampled at intervals (~45s) and frames between samples are interpolated. The motion is partly scientifically derived, partly cinematic — a durational structure built from sparse physical measurements.
TIME · time.sample · Atmospheric State Sampling
PSG is queried at intervals across a parameter arc. Each query is a physically computed anchor point. The ~45s interval balances computation time against temporal resolution needed for the installation duration.
Sampling Parameters
→ Sample interval: ~45s
→ Solar elevation arc
→ Atmospheric state series
→ Viewing angle sequence
Outputs
← Sparse key frames
← Scientifically anchored states
# Temporal sampling strategy (interpolate / query_psg are pipeline placeholders)
SAMPLE_INTERVAL = 45  # seconds, approximate
for planet_act in acts:
    for t in range(0, planet_act.duration, SAMPLE_INTERVAL):
        config.solar_elev = interpolate(arc, t)  # solar elevation arc
        config.atm_state = sequence[t]           # atmospheric state series
        key_frames[t] = query_psg(config)        # physically computed anchor
TIME · time.interp · Frame Interpolation + Camera Motion
Between PSG key frames, intermediate frames are computed by interpolation. Camera movement, drift, and compositional motion are added as a separate layer — the cinematic dimension.
Interpolation
→ Key frame array
→ Smooth interpolation curve
→ Frame rate target
Camera Layer
← Drift + orbital motion
← Compositional pacing
← Full rendered sequence
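A minimal interpolation sketch between PSG anchors, assuming smoothstep easing over per-state value arrays; the camera drift layer is separate and not modeled here:
# Key-frame interpolation sketch (smoothstep easing assumed)
import numpy as np

def interp_frames(key_frames, times, fps=30):
    # key_frames: (K, D) PSG-anchored states; times: (K,) timestamps in seconds
    kf, times = np.asarray(key_frames), np.asarray(times)
    t_out = np.arange(times[0], times[-1], 1.0 / fps)
    idx = np.clip(np.searchsorted(times, t_out, side="right") - 1, 0, len(times) - 2)
    u = (t_out - times[idx]) / (times[idx + 1] - times[idx])
    u = u * u * (3.0 - 2.0 * u)  # smoothstep between scientific anchors
    return (1.0 - u)[:, None] * kf[idx] + u[:, None] * kf[idx + 1]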
STAGE 06
Scene Rendering Layer — GLSL Shaders
KodeLife · Custom Code · GPU
PSG-derived numerical values feed a custom GLSL shader environment in KodeLife, where each planet's atmosphere is spatialized and stylized into luminous moving scenes. Each planet has different scene logic, different perceptual behavior, different chromatic and kinetic character.
GLSL · render.shader · KodeLife GLSL Environment
Custom GLSL shaders transform PSG-derived color and scattering values into atmosphere-like visual fields — density gradients, halo geometry, limb brightening, chromatic aberration at the horizon.
Inputs
→ sRGB color buffers from PSG
→ Scattering parameter tables
→ Geometry values
→ Interpolated frame data
Shader Outputs
← Atmospheric density fields
← Halo + aureole geometry
← Limb brightening / darkening
← Auroral motion
// Per-planet atmosphere shader (conceptual)
uniform sampler2D u_spectral_lut; // PSG-derived color LUT
uniform float u_scatter_g;        // Henyey–Greenstein asymmetry
uniform vec3  u_solar_dir;        // solar direction (geometry)
uniform float u_density;          // optical depth scale
in vec2 uv;                       // screen-space coordinate
in vec3 view;                     // view-ray direction
out vec4 fragColor;
// Henyey–Greenstein phase function
float HG(float c, float g) {
    float g2 = g * g;
    return (1.0 - g2) / (4.0 * 3.14159265 * pow(1.0 + g2 - 2.0 * g * c, 1.5));
}
void main() {
    vec3 atm_color = texture(u_spectral_lut, uv).rgb;
    float phase = HG(dot(normalize(view), u_solar_dir), u_scatter_g);
    fragColor = vec4(atm_color * phase * u_density, 1.0);
}
GLSL · render.planet · Planet-Specific Scene Logic
Each planet has its own shader behavior — its own perceptual regime. The collection of planets forms a dramaturgical arc through distinct atmospheric worlds.
| Planet / Act | Atmospheric Character | Primary Effect |
|---|---|---|
| Jupiter | Dense band structure, NH₃ clouds, extreme gas pressure | Banded density, limb glow |
| Venus | CO₂ opacity, sulphur haze, diffuse solar disk | Halation, deep amber saturation |
| Titan | N₂ / CH₄ haze, orange photochemical smog | Diffuse chromatic gradient |
| Neptune | Blue methane absorption, extreme distance | Deep cold blues, ice scintillation |
GLSL · render.artistic · Artistic Parameter Layer
Artistic parameters added above the scientific layer: waviness, haloing intensity, auroral motion speed, atmospheric density drift, camera parallax. Tuned compositionally — they do not violate the scientific data; they inhabit its range.
Parameters
→ Waviness / turbulence
→ Halo intensity
→ Auroral motion
→ Density gradient
→ Camera drift speed
Constraint
← Within scientific range
← Compositionally determined
← Per-planet tuning
STAGE 07
Playback Codec Layer — HAP Export
GPU Codec / Media Server
Rendered scenes are exported as HAP video files — the GPU-accelerated codec required by the visual-effects tools in the live chain. HAP decompresses on the GPU, enabling real-time compositing at installation resolution.
CODEC · codec.hap · HAP Video Export
HAP (and HAP Q, HAP Alpha) is a family of GPU-decodable video codecs designed for real-time performance contexts. The rendered sequences are output as HAP .mov files — one per planet act, plus atmospheric transition sequences.
Inputs
→ Rendered frame sequences
→ Per-planet acts
Outputs
← .mov (HAP / HAP Q)
← GPU-decodable stream
← Ready for media server
# HAP export per act
planet_jupiter_act.mov → HAP Q
planet_venus_act.mov → HAP Q
planet_titan_act.mov → HAP Q
planet_neptune_act.mov → HAP Q
transition_01→02.mov → HAP Alpha
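One common route to these files is ffmpeg's hap encoder; a sketch assuming rendered PNG frame sequences and illustrative filenames:
# HAP Q encode sketch via ffmpeg (frame paths and rate illustrative)
import subprocess

subprocess.run([
    "ffmpeg", "-framerate", "30", "-i", "jupiter_%05d.png",
    "-c:v", "hap", "-format", "hap_q",  # hap | hap_alpha | hap_q
    "planet_jupiter_act.mov",
], check=True)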
STAGE 08
Live Routing Stack — Syphon · MadMapper · Modul8
Media Routing / VJ / Performance
HAP files feed into a live performance routing stack: Syphon for GPU-side texture sharing, MadMapper for projection surface calibration to the HoloGauze, and Modul8 for live compositing, act cueing, and timeline control.
ROUTE · live.syphon · Syphon: GPU Stream Routing
Syphon routes rendered frames between applications in real time on the GPU — zero-copy texture sharing on macOS. Allows KodeLife GLSL output to be passed directly into MadMapper or Modul8.
From
→ KodeLife GLSL render
→ HAP player output
To
← MadMapper (mapping)
← Modul8 (compositing)
MAP · live.madmapper · MadMapper: Projection Surface Calibration
MadMapper handles projection mapping — calibrating the output to the exact geometry of the HoloGauze screen, accounting for curvature, keystone, and the specific distance geometry of the space.
Inputs
→ Syphon stream or HAP playback
→ Screen geometry data
→ Projector throw configuration
Outputs
← Calibrated projector signal
← Warped / fitted projection
VJ · live.modul8 · Modul8: Live Compositing + Cueing
Modul8 functions as the live performance layer — compositing multiple HAP streams, cueing planetary acts, managing transitions, and enabling real-time layering.
Functions
→ HAP stream playback
→ Act cueing + transitions
→ Layer compositing
→ Timeline control
Outputs
← Composited live signal
← Routed to MadMapper
STAGE 09
Projection Architecture — Membrane + Volume
HoloGauze · Fog · Volumetric Light
The screen is not the endpoint. It is a membrane in a volumetric light ecology. The projector is aimed toward the audience. The screen captures some light and transmits the rest. Fog catches beam paths. The audience receives both image and atmosphere — light in space, not image on surface.
PROJ · proj.screen · HoloGauze / Semi-transparent Screen
The silver-coated semi-transparent screen functions as a membrane — not a surface. Part of the light is captured as a floating image. Part passes through into the fog and toward the audience.
Physical Spec
→ Dimensions: ~3 × 2 m
→ Semi-transparent silver coat
→ HoloGauze-type holographic gauze
Geometry
← Projector–screen: 2 m
← Screen–audience: 3 m
← 1 m each side
PROJ · proj.fog · Fog Volume + Beam Geometry
Smoke machines fill the space between screen and audience. The projector beam becomes a visible light column in the fog. Planetary atmospheres are not just seen on the screen — they are encountered as volumetric halos.
Equipment
→ Smoke machines (multiple)
→ Projector (audience-facing)
Effect
← Beam paths made visible
← Volumetric light halos
← Room-filling atmospheric density
STAGE 10
Spatial Sound Layer — KIKUNO
Compositional Infrastructure
Sound is not background. It is compositional infrastructure — a parallel system running alongside the visual chain, sharing the same planetary act structure. KIKUNO is a custom spatial audio configuration developed for ISO's main hall.
AUDIO · sound.kikuno · KIKUNO Spatial Audio System
KIKUNO is a custom spatial audio configuration for the ISO main hall. It distributes sound spatially — as a designed acoustic field that maps to the visual atmospheric structure.
System
→ Custom hall configuration
→ ISO main hall geometry
→ Spatial distribution design
Outputs
← Atmospheric sound field
← Spatially distributed audio
← Per-planet acoustic regime
AUDIO · sound.dramaturgy · Shared Atmospheric Dramaturgy
Visual and audio systems share a dramaturgical structure — the planetary act sequence. Sound composition, spatial distribution, and visual pacing are interdependent. AI shapes both visual and spatial sound relationships in response to the same atmospheric state logic.
Shared Control
→ Planetary act structure
→ Atmospheric state
Interdependencies
← Visual density ↔ sound density
← Fog ↔ spatial diffusion
← Act transitions (shared)
OVERVIEW
Complete Signal Flow
PSG TEMPLATES
→
SELECT PLANET / GEOMETRY / BEAM
→
PSG SPECTRAL TABLES
→
CUSTOM PARSING CODE
→
CIE / XYZ COLOR MATCHING
→
sRGB IMAGE VALUES
→
INTERPOLATION + CAMERA MOTION
→
GLSL SHADER / KODELIFE
→
HAP EXPORT (.mov)
→
SYPHON STREAM
→
MADMAPPER · MODUL8
→
PROJECTOR OUTPUT
→
HOLOSCREEN (MEMBRANE)
+
FOG VOLUME
→
LIGHT IN SPACE
PARALLEL ↕ KIKUNO SPATIAL SOUND — planetary act structure / acoustic field