{{Short description|Set of data points in three-dimensional space}} [[File:Point cloud torus.gif|thumb|A point cloud image of a [[torus]]]] [[File:Geo-Referenced Point Cloud.JPG|thumb|Geo-referenced point cloud of [[Red Rocks Park|Red Rocks, Colorado]] (by DroneMapper)]] A '''point cloud''' is a [[discrete set]] of data [[Point (geometry)|points]] in [[space]]. The points may represent a [[3D shape]] or object. Each point [[Position (geometry)|position]] has its set of [[Cartesian coordinates]] (X, Y, Z).<ref>{{cite web|title=What are Point Clouds |url=https://tech27.com/resources/point-clouds/|website=Tech27}}</ref><ref name=":0">{{Cite web |title=What is a Point Cloud? - GIGABYTE Global |url=https://www.gigabyte.com/Glossary/point-cloud |access-date=2024-06-26 |website=GIGABYTE}}</ref> Points may contain data other than position such as [[RGB color spaces|RGB colors]],<ref name=":0" /> [[Normal (geometry)|normals]],<ref>{{Cite web |last=Simsangcheol |date=2023-02-21 |title=Estimate normals in Point Cloud |url=https://medium.com/@sim30217/estimate-normals-in-point-cloud-eb3a4f9a2e85 |access-date=2024-06-26 |website=Medium |language=en}}</ref> [[Timestamp|timestamps]]<ref>{{Cite web |title=Defra Data Services Platform |url=https://environment.data.gov.uk/dataset/094d4ec8-4c21-4aa6-817f-b7e45843c5e0 |access-date=2024-06-26 |website=environment.data.gov.uk}}</ref> and others. Point clouds are generally produced by [[3D scanner]]s or by [[photogrammetry]] software, which measure many points on the external surfaces of objects around them. As the output of 3D scanning processes, point clouds are used for many purposes, including to create 3D [[computer-aided design]] (CAD) or [[geographic information systems]] (GIS) models for manufactured parts, for [[metrology]] and quality inspection, and for a multitude of visualizing, animating, rendering, and [[mass customization]] applications. 
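The representation described above can be sketched in code. The following is a minimal illustrative sketch only (the <code>CloudPoint</code> record and <code>centroid</code> helper are hypothetical names, not part of any standard or library): each point carries a Cartesian position (X, Y, Z) plus an optional per-point attribute such as an RGB color.

```python
# Illustrative sketch (not from a cited source): a point cloud as a
# collection of points, each with Cartesian coordinates (X, Y, Z) and an
# optional per-point RGB color attribute.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CloudPoint:  # hypothetical record type
    x: float
    y: float
    z: float
    rgb: Tuple[int, int, int] = (255, 255, 255)  # optional attribute

def centroid(cloud):
    """Mean position of the cloud -- a typical first step in processing."""
    n = len(cloud)
    return (sum(p.x for p in cloud) / n,
            sum(p.y for p in cloud) / n,
            sum(p.z for p in cloud) / n)

scan = [CloudPoint(0.0, 0.0, 0.0),
        CloudPoint(2.0, 0.0, 0.0),
        CloudPoint(1.0, 3.0, 0.0, rgb=(200, 10, 10))]
print(centroid(scan))  # (1.0, 1.0, 0.0)
```

Real scanning pipelines store millions of such points in packed arrays or dedicated formats (e.g. LAS or PLY) rather than one object per point, but the per-point structure is the same.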

== Alignment and registration ==
{{Main|Point set registration}}
When scanning a scene in the real world using [[Lidar]], the captured point clouds contain snippets of the scene, which require alignment to generate a full map of the scanned environment. Point clouds are often aligned with 3D models or with other point clouds, a process termed [[point set registration]]. The [[Iterative closest point|iterative closest point (ICP) algorithm]] can be used to align two point clouds that overlap and are separated by a [[Rigid transformation|rigid transform]].<ref>{{Cite web |title=Continuous ICP (CICP) |url=https://www.cs.cmu.edu/~halismai/cicp/ |access-date=2024-06-26 |website=www.cs.cmu.edu}}</ref> Point clouds related by elastic transforms can also be aligned using a non-rigid variant of ICP (NICP).<ref>{{Cite journal |last1=Li |first1=Hao |last2=Sumner |first2=Robert W. |last3=Pauly |first3=Mark |date=July 2008 |title=Global Correspondence Optimization for Non-Rigid Registration of Depth Scans |url=https://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2008.01282.x |journal=Computer Graphics Forum |language=en |volume=27 |issue=5 |pages=1421–1430 |doi=10.1111/j.1467-8659.2008.01282.x |issn=0167-7055}}</ref> With recent advancements in [[machine learning]], point cloud registration may also be done using [[End-to-end principle|end-to-end]] [[Neural network (machine learning)|neural networks]].<ref>{{Cite journal |last1=Lu |first1=Weixin |last2=Wan |first2=Guowei |last3=Zhou |first3=Yao |last4=Fu |first4=Xiangyu |last5=Yuan |first5=Pengfei |last6=Song |first6=Shiyu |date=2019 |title=DeepVCP: An End-to-End Deep Neural Network for Point Cloud Registration |url=https://openaccess.thecvf.com/content_ICCV_2019/html/Lu_DeepVCP_An_End-to-End_Deep_Neural_Network_for_Point_Cloud_Registration_ICCV_2019_paper.html |pages=12–21}}</ref> For industrial metrology or inspection using [[industrial computed tomography]], the point cloud of a manufactured
part can be aligned to an existing model and compared to check for differences. [[GD&T|Geometric dimensions and tolerances]] can also be extracted directly from the point cloud.

== Conversion to 3D surfaces ==
[[File:Extract Video Beit Ghazaleh Orthophoto Survey AG&P 2017.gif|thumb|An example of a 1.2 billion data point cloud render of [[Beit Ghazaleh]], a heritage site in danger in [[Aleppo]] (Syria)<ref>{{Citation|title=English: Image from a very high precision 3D laser scanner survey (1.2 billion data points) of Beit Ghazaleh -- a heritage site in danger in Aleppo Syria. This was a collaborative scientific work for the study, safeguarding and emergency consolidation of remains of the structure.|date=2017-11-02|url=https://commons.wikimedia.org/wiki/File:Extract_Video_Beit_Ghazaleh_Orthophoto_Survey_AG&P_2017.gif|access-date=2018-06-11}}</ref>]] [[File:Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks.png|thumb|Generating or reconstructing 3D shapes from single or multi-view [[depth map]]s or silhouettes and visualizing them in dense point clouds<ref name="3DVAE">{{Cite web|url=https://github.com/Amir-Arsalan/Synthesize3DviaDepthOrSil|title=Soltani, A. A., Huang, H., Wu, J., Kulkarni, T. D., & Tenenbaum, J. B. Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1511-1519).|website=[[GitHub]]|date=27 January 2022}}</ref>]] While point clouds can be directly rendered and inspected,<ref>Levoy, M. and Whitted, T., {{cite web|url=http://graphics.stanford.edu/papers/points|title=The use of points as a display primitive}}. Technical Report 85-022, Computer Science Department, University of North Carolina at Chapel Hill, January, 1985</ref><ref>Rusinkiewicz, S. and Levoy, M. 2000. QSplat: a multiresolution point rendering system for large meshes. In Siggraph 2000.
ACM, New York, NY, 343–352. DOI= http://doi.acm.org/10.1145/344779.344940</ref> point clouds are often converted to [[polygon mesh]] or [[triangle mesh]] models, [[non-uniform rational B-spline]] (NURBS) surface models, or CAD models through a process commonly referred to as surface reconstruction. There are many techniques for converting a point cloud to a 3D surface.<ref>[https://hal.inria.fr/hal-01348404v2/document Berger, M., Tagliasacchi, A., Seversky, L. M., Alliez, P., Guennebaud, G., Levine, J. A., Sharf, A. and Silva, C. T. (2016), A Survey of Surface Reconstruction from Point Clouds. Computer Graphics Forum.]</ref> Some approaches, like [[Delaunay triangulation]], [[alpha shape]]s, and ball pivoting, build a network of triangles over the existing vertices of the point cloud, while other approaches convert the point cloud into a [[voxel|volumetric]] [[distance field]] and reconstruct the [[implicit surface]] so defined through a [[marching cubes]] algorithm.<ref>[http://meshlabstuff.blogspot.com/2009/09/meshing-point-clouds.html Meshing Point Clouds] A short tutorial on how to build surfaces from point clouds</ref> In [[geographic information system]]s, point clouds are one of the sources used to make [[digital elevation model]]s of the terrain.<ref>[http://terrain.cs.duke.edu/pubs/lidar_interpolation.pdf From Point Cloud to Grid DEM: A Scalable Approach]</ref> They are also used to generate 3D models of urban environments.<ref>[http://www.isprs.org/proceedings/XXXVIII/part3/a/pdf/91_XXXVIII-part3A.pdf K. Hammoudi, F. Dornaika, B. Soheilian, N. Paparoditis. Extracting Wire-frame Models of Street Facades from 3D Point Clouds and the Corresponding Cadastral Map. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences (IAPRS), vol. 38, part 3A, pp.
91–96, Saint-Mandé, France, 1–3 September 2010.]</ref> Drones are often used to collect a series of [[RGB color model|RGB]] images which can later be processed on a computer vision platform such as AgiSoft Photoscan, Pix4D, DroneDeploy or Hammer Missions to create RGB point clouds, from which distances and volumetric estimates can be made.{{citation needed|date=March 2018}} Point clouds can also be used to represent volumetric data, as is sometimes done in [[medical imaging]]. Using point clouds, multi-sampling and [[data compression]] can be achieved.<ref>{{cite journal|last1=Sitek|display-authors=etal|year=2006|title=Tomographic Reconstruction Using an Adaptive Tetrahedral Mesh Defined by a Point Cloud|journal=IEEE Trans. Med. Imaging|volume=25|issue=9|pages=1172–9|doi=10.1109/TMI.2006.879319|pmid=16967802|s2cid=27545238|url=https://zenodo.org/record/1232253}}</ref>

== MPEG Point Cloud Compression ==
MPEG began standardizing point cloud compression (PCC) with a Call for Proposals (CfP) in 2017.<ref>{{cite web |url=http://www.mpeg-pcc.org/ |title=MPEG Point Cloud Compression |author=<!--Unstated--> |access-date=2020-10-22}}</ref><ref>{{cite journal |last1=Schwarz |first1=Sebastian |last2=Preda |first2=Marius |last3=Baroncini |first3=Vittorio |last4=Budagavi |first4=Madhukar |last5=Cesar |first5=Pablo |last6=Chou |first6=Philip A. |last7=Cohen |first7=Robert A. |last8=Krivokuća |first8=Maja |last9=Lasserre |first9=Sébastien |last10=Li |first10=Zhu |last11=Llach |first11=Joan |last12=Mammou |first12=Khaled |last13=Mekuria |first13=Rufael |last14=Krivokuća |first14=Maja |last15=Nakagami |first15=Ohji |last16=Siahaan |first16=Ernestasia |last17=Tabatabai |first17=Ali |last18=Tourapis |first18=Alexis M.
|last19=Zakharchenko |first19=Vladyslav |date=2018-12-10 |title=Emerging MPEG Standards for Point Cloud Compression |journal= IEEE Journal on Emerging and Selected Topics in Circuits and Systems |volume=9 |issue=1 |pages=133–148 |doi=10.1109/JETCAS.2018.2885981 |doi-access=free}}</ref><ref>{{cite journal |last1=Graziosi |first1=Danillo |last2=Nakagami |first2=Ohji |last3=Kuma |first3=Satoru |last4=Zaghetto |first4=Alexandre |last5=Suzuki |first5=Teruhiko |last6=Tabatabai |first6=Ali |date=2020-04-03 |title=An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC) |journal=APSIPA Transactions on Signal and Information Processing |volume=9 |pages=1–17 |doi=10.1017/ATSIP.2020.12 |doi-access=free}}</ref> Three categories of point clouds were identified: category 1 for static point clouds, category 2 for dynamic point clouds, and category 3 for Lidar sequences (dynamically acquired point clouds). Two technologies were finally defined: [[G-PCC]] (Geometry-based PCC, ISO/IEC 23090 part 9)<ref>{{Cite web|title=ISO/IEC DIS 23090-9|url=https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/07/89/78990.html|access-date=2020-06-07|website=ISO|language=en}}</ref> for category 1 and category 3; and [[V-PCC]] (Video-based PCC, ISO/IEC 23090 part 5)<ref>{{Cite web|title=ISO/IEC DIS 23090-5|url=https://www.iso.org/standard/73025.html|access-date=2020-10-21|website=ISO|language=en}}</ref> for category 2. The first test models were developed in October 2017, one for [[G-PCC]] (TMC13) and another one for [[V-PCC]] (TMC2). 
Since then, the two test models have evolved through technical contributions and collaboration, and the first version of the PCC standard specifications was expected to be finalized in 2020 as part of the ISO/IEC 23090 series on the coded representation of immersive media content.<ref>{{Cite web|title=Immersive Media Architectures {{!}} MPEG|url=https://mpeg.chiariglione.org/standards/mpeg-i/immersive-media-architectures|access-date=2020-06-07|website=mpeg.chiariglione.org}}</ref>

== See also ==
* [[Skand]] – Democratising spatial data
* [[Euclideon]] – 3D graphics engine which makes use of a point cloud search algorithm to render images
* [[MeshLab]] – open source tool to manage point clouds and convert them into 3D triangular meshes
* [[CloudCompare]] – open source tool to view, edit, and process high density 3D point clouds
* [[Point Cloud Library]] (PCL) – comprehensive BSD open source library for n-D point clouds and 3D geometry processing

== References ==
{{Reflist}}

{{Authority control}}

{{DEFAULTSORT:Point Cloud}}
[[Category:3D computer graphics]]
[[Category:Geometry processing]]