Photogrammetry is a technique for constructing three-dimensional models of objects or scenes from photographs. It works by identifying points that can be easily recognized in at least 3 photographs taken from different viewpoints. The model is constructed by computing 3D coordinates for these points (this is the core photogrammetric step) and then connecting them into surfaces, a process called meshing. The resulting mesh can be rendered to an image that can be shown on a screen.
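As a sketch of the coordinate-computation step, the snippet below triangulates a single 3D point from its pixel positions in two views using the classic linear (DLT) method. This is a minimal illustration, not the algorithm any particular photogrammetry package uses; the camera matrices and point here are made-up toy values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the same point in each view
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X, collected in a 4x4 matrix A with A @ X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: identical intrinsics, the second shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = (0.0, 0.0)   # projection of the point in view 1
x2 = (-0.2, 0.0)  # projection of the same point in view 2
print(triangulate(P1, P2, x1, x2))  # → approximately [0. 0. 5.]
```

Real pipelines solve this jointly for thousands of points and unknown camera poses (bundle adjustment), but the per-point geometry is the same.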
Meshing works well for solid objects, but not for organic structures or for transparent and shiny objects. A popular rendering technique that is now being developed for such cases is Gaussian splatting. In this technique the points obtained from photogrammetry are used as starting points for the creation of "splats": semi-transparent ellipsoidal objects whose parameters are fitted to obtain the best match between the photographs and the corresponding renders of the splats.
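To make the idea concrete, the sketch below evaluates one projected 2D Gaussian splat at a pixel and composites a depth-sorted list of splats front to back. This is a heavily simplified illustration of the rendering model only (real implementations also project 3D covariances, sort on the GPU, and optimize the parameters by gradient descent); all names and values are my own.

```python
import numpy as np

def splat_weight(pixel, center, cov, opacity):
    """Alpha contribution of one projected 2D Gaussian splat at a pixel.

    center : 2D mean of the projected splat
    cov    : 2x2 covariance describing the projected ellipse
    opacity: base opacity of the splat, in [0, 1]
    """
    d = np.asarray(pixel, float) - np.asarray(center, float)
    return opacity * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def composite(pixel, splats):
    """Front-to-back alpha compositing of depth-sorted splats."""
    color, transmittance = np.zeros(3), 1.0
    for center, cov, opacity, rgb in splats:  # assumed sorted near-to-far
        a = splat_weight(pixel, center, cov, opacity)
        color += transmittance * a * np.asarray(rgb, float)
        transmittance *= 1.0 - a
    return color

# One red splat at the origin with unit covariance and 50% opacity:
splats = [((0.0, 0.0), np.eye(2), 0.5, (1.0, 0.0, 0.0))]
print(composite((0.0, 0.0), splats))  # half-opaque red at the splat center
```

Fitting adjusts each splat's center, covariance, opacity, and color so that images composited this way match the input photographs.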
For photogrammetry, I have used two tools: RealityCapture 1.4.1 and the cloud service of Lumalabs.ai. Other software and services that I have tried are Meshroom, KiriEngine, Polycam and Jawset Postshot.
For rendering the mesh models on a webpage, I have used the A-Frame JavaScript framework. Another way to show mesh models is to use the WebGL export of Unity. The advantage of using Unity is that the user interface for navigating in and around the models can be easily customized.
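A minimal A-Frame page looks like the fragment below. The model filename is a placeholder; A-Frame loads glTF/GLB meshes via its gltf-model component and provides mouse/touch and WebXR navigation out of the box.

```html
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- "model.glb" is a placeholder for the exported photogrammetry mesh -->
      <a-entity gltf-model="url(model.glb)" position="0 1 -3"></a-entity>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```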
For creating Gaussian splat models, I have used Jawset Postshot and the web service of Lumalabs.ai.
For rendering the splat models on a webpage, I have used two methods: the cloud service of Lumalabs.ai and a JavaScript renderer developed by Kevin Kwok.