yt1024 Posted July 30, 2018

Hi, does anyone know how to obtain the calibration parameters of a camera? Also, a demo of converting a depth map into a point cloud would be greatly appreciated.
K^2 Posted July 31, 2018

Are you trying to do 3D reconstruction? The non-cooperative method for getting camera parameters is usually feature matching across multiple camera angles: you pass feature-extraction convolutions over your source images and look for matches. If you have enough candidates, you can do a best fit for the camera parameters and camera origins. Understanding projection matrices and optimization methods is basically a prerequisite here.

If you are working with a game, however, you can usually get the projection matrix directly. If you are using something like RenderDoc to grab the depth buffer, you can usually grab the matrix parameters that are passed into the shader as well. Typically, you'll have either a transformation (model-view) matrix and a projection matrix, or a single matrix that is already the product of the two. Either way, if you know how these matrices are constructed, you should be able to extract the camera parameters, although you might have to do a bit of math to convert them into whatever format your reconstruction software takes.

The point-cloud construction is fairly straightforward, though. Again, the assumption here is that you have the projection matrix, either built from camera parameters or extracted directly from the engine.

Compute the gradient of your depth map. This gives you the X and Y components of the normal vectors in screen space. Set the Z component to 1 and normalize; this gives you a full normal map to go along with your depth map.

Now you need to back-project from screen space to world space. Each point on the depth map becomes a point in your cloud, along with its normal. Keep in mind that normals are projected slightly differently from point coordinates: you can look up how normals and points are transformed from world space to screen space in pretty much any shader, and do the inverse.

And that's it. You now have a point cloud with normals.
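The steps above (normals from the depth gradient, then back-projection through the inverse projection matrix) can be sketched roughly like this in NumPy. Everything here is an illustrative assumption, not tied to any particular engine: the function name, the convention that depth is stored in NDC as values in [0, 1], and the use of row vectors. The normals returned are the screen-space ones from the gradient step; transforming them properly into world space (inverse-transpose of the matrix) is left out of this sketch.

```python
import numpy as np

def depth_to_point_cloud(depth, inv_proj):
    """Back-project an (H, W) NDC depth map into world-space points.

    depth    -- (H, W) array of depths in NDC, assumed in [0, 1]
    inv_proj -- inverse of the combined (model-view * projection) 4x4 matrix
    Returns (points, normals), each of shape (H*W, 3).
    """
    h, w = depth.shape

    # Screen-space normals from the depth gradient: (dz/dx, dz/dy, 1), normalized.
    gy, gx = np.gradient(depth)
    normals = np.dstack([gx, gy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Pixel centers -> NDC x, y in [-1, 1] (y flipped so +y points up).
    xs = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    ys = 1.0 - (np.arange(h) + 0.5) / h * 2.0
    xv, yv = np.meshgrid(xs, ys)

    # Homogeneous NDC points, multiplied by the inverse matrix,
    # then divided by w to undo the perspective divide.
    ndc = np.dstack([xv, yv, depth, np.ones_like(depth)]).reshape(-1, 4)
    world = ndc @ inv_proj.T
    points = world[:, :3] / world[:, 3:4]

    return points, normals.reshape(-1, 3)
```

With a real capture you would build `inv_proj` by inverting the matrix pulled out of RenderDoc; with an identity matrix the function just returns the NDC coordinates themselves, which is a convenient sanity check.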
If you collect that data for multiple camera angles, you can run the result through Poisson reconstruction, or whatever else you plan to do with it.

Btw, I'd be surprised if somebody hasn't already done a good chunk of this work. Extracting color/depth buffers plus the projection matrix from a game via RenderDoc and generating world-space point clouds from them seems like something somebody would have done as a project. I'd check GitHub to see if there is either code you can use directly, or at least some bits and pieces you can use for inspiration.