These are done using SketchUp's native "Photomatch" function. You import a 2D image, then use the tool to line up the model space's 3D axes with the image, based on the perspective lines and vanishing points visible in the photo. It works best on rectangular buildings, but if you've taken some drawing classes and have the spatial reasoning skills, you can make it work on just about anything. The big exception is lens distortion: if the perspective lines are visibly curved at all, like with a fisheye lens, it pretty much just won't work.
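For anyone curious what the tool is actually doing when you drag the perspective bars around: under a pinhole-camera assumption with the principal point at the image center, two vanishing points along two perpendicular axes of the subject are enough to pin down the focal length and the camera rotation. Here's a rough numpy sketch of that idea; the pixel coordinates are made up for illustration, and this isn't SketchUp's actual code, just the standard textbook math.

```python
import numpy as np

# Hypothetical pixel coordinates of two vanishing points, found by extending
# perspective lines along the subject's X and Y axes in the photo.
vx = np.array([1850.0, 540.0])   # vanishing point of the X axis
vy = np.array([-300.0, 610.0])   # vanishing point of the Y axis
c  = np.array([960.0, 540.0])    # principal point (center of a 1920x1080 photo)

# With square pixels and perpendicular axes, the constraint
# (vx - c).(vy - c) + f^2 = 0 fixes the focal length in pixels.
f = np.sqrt(-np.dot(vx - c, vy - c))

def back_project(v):
    """Direction in camera space that projects to pixel v."""
    d = np.array([v[0] - c[0], v[1] - c[1], f])
    return d / np.linalg.norm(d)

# Columns of the camera rotation: the directions of the subject's X, Y,
# and (by cross product) Z axes as seen by the camera, up to sign flips.
r1 = back_project(vx)
r2 = back_project(vy)
r2 -= np.dot(r1, r2) * r1          # re-orthogonalize against noisy picks
r2 /= np.linalg.norm(r2)
r3 = np.cross(r1, r2)
R = np.column_stack([r1, r2, r3])

print("focal length (px):", round(f, 1))
print("camera rotation:\n", np.round(R, 3))
```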
For these, I used a model of the Falcon's body I'd already built from completely orthographic drawings used for ESB's production. Having the basic shape established made it a lot easier to find the perspective lines. I took the model, more or less matched its angle to the photo reference I was using, applied the Photomatch tool, and then manually adjusted the perspective lines until the 3D model lined up reasonably well with what I could see in the photo. After that, I can build onto the model using Photomatch's overlay of the photo as a guide.
You can also use multiple photos of the same object; SketchUp saves each matched photo as its own view with its own perspective match. You know you're getting close to accurate when you can get the model you're working on to "agree" with both photos in terms of shape, dimensions, proportions, locations of details, etc.
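If you wanted to sanity-check that "agreement" numerically instead of by eye, the idea boils down to reprojection error: push a point on the 3D model through the camera recovered from each photo match and compare it to where that detail actually sits in each photo. A tiny numpy sketch below; every number (intrinsics, poses, clicked pixels) is a made-up placeholder, not something SketchUp exposes.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Standard pinhole projection P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def reproject(P, X):
    """Project a 3D model point into pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical cameras recovered from two photo matches of the same subject.
K = np.array([[1060.0, 0, 960.0],
              [0, 1060.0, 540.0],
              [0, 0, 1.0]])
theta = np.deg2rad(30)
R_b = np.array([[np.cos(theta), 0, np.sin(theta)],
                [0, 1, 0],
                [-np.sin(theta), 0, np.cos(theta)]])
cam_a = projection_matrix(K, np.eye(3), np.array([0.0, 0.0, 5.0]))
cam_b = projection_matrix(K, R_b, np.array([-2.0, 0.0, 5.5]))

# A detail placed on the 3D model, plus where it was clicked in each photo.
detail = np.array([0.4, -0.1, 0.8])
clicked = {"photo A": (cam_a, np.array([1046.0, 517.0])),
           "photo B": (cam_b, np.array([745.0, 525.0]))}

for name, (P, px) in clicked.items():
    err = np.linalg.norm(reproject(P, detail) - px)
    print(f"{name}: reprojection error {err:.1f} px")
```

If both errors stay small as you add details, the model is "agreeing" with both photos; if one photo keeps fighting you, either the match or the geometry is off.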
Andre's photogrammetry is a whole different animal. What he was doing was taking a series of photos of the actual model and using a program to automatically stitch them all together into a 3D model, using telluric currents, spells, hexes, and potions (as far as I can tell...).
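I don't know what Andre's actual pipeline is, but the non-magical version of what photogrammetry software does is roughly: detect matching feature points between overlapping photos, recover the relative camera poses from those matches, and triangulate the matched points into a 3D cloud. Here's a toy two-view sketch with OpenCV; the file names and camera intrinsics are placeholders, and real tools chain this across hundreds of photos, refine it all with bundle adjustment, and then build a dense mesh on top.

```python
import cv2
import numpy as np

# Two overlapping photos of the physical model (placeholder file names).
img1 = cv2.imread("falcon_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("falcon_02.jpg", cv2.IMREAD_GRAYSCALE)

# Rough pinhole intrinsics (placeholder focal length / image center).
K = np.array([[2400.0, 0, 1500.0],
              [0, 2400.0, 1000.0],
              [0, 0, 1.0]])

# 1. Find and match repeatable feature points between the two photos.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Recover the relative camera pose from the matched points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the surviving matches into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inliers = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
cloud = (pts4d[:3] / pts4d[3]).T

print(f"{len(cloud)} triangulated points from {inliers.sum()} inlier matches")
```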