In this paper, a system for rendering voxel (3D pixel)-based worlds is presented. The system generates such worlds from LiDAR and orthophoto data by discretizing the LiDAR point cloud at a resolution of 1 m. During world generation, the basic semantic descriptors embedded in the LiDAR data are used and enhanced with additional classes derived from colour information in geospatially aligned orthophoto images. The system supports both local and web-based use, takes advantage of multi-core and multi-processor hardware, and allows multiple instances to be clustered for faster rendering. We also present the results (renderings of selected areas) and describe other possible uses of the presented system.
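The discretization step mentioned above can be sketched as follows. This is a minimal illustration only, not the paper's actual implementation: the function name, the `(x, y, z, class)` input layout, and the majority-vote assignment of a class per voxel are all assumptions.

```python
from collections import Counter, defaultdict

def voxelize(points, resolution=1.0):
    """Discretize classified LiDAR points into a voxel grid.

    points: iterable of (x, y, z, cls) tuples, where cls is a
    semantic class code (hypothetical input layout).
    Returns a dict mapping voxel indices (ix, iy, iz) to the
    most frequent class among the points falling in that voxel.
    """
    bins = defaultdict(Counter)
    for x, y, z, cls in points:
        # Floor each coordinate to the 1 m (by default) grid cell.
        key = (int(x // resolution),
               int(y // resolution),
               int(z // resolution))
        bins[key][cls] += 1
    # Majority vote: each voxel keeps its dominant semantic class.
    return {key: counts.most_common(1)[0][0]
            for key, counts in bins.items()}
```

For example, three points with classes `2, 2, 5` inside the same 1 m cell would produce a single voxel labelled `2`.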
Key words: rendering, voxels, LiDAR, point cloud, orthophoto, ray tracing