Publication: Estimation of atmospheric visibility by deep learning model using multimodal dataset
Article | open access | peer-reviewed | published
| dc.contributor.author | Kopecká, Jitka | |
| dc.contributor.author | Kopecký, Dušan | |
| dc.contributor.author | Štursa, Dominik | |
| dc.contributor.author | Rácová, Zuzana | |
| dc.contributor.author | Krejčí, Tomáš | |
| dc.contributor.author | Doležel, Petr | |
| dc.date.accessioned | 2025-11-11T14:46:55Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Accurate estimation of atmospheric visibility is essential for numerous safety-critical applications, particularly in the field of transportation. In this study, a deep learning-based approach is investigated using a multimodal input representation that combines RGB images from a fixed-position surveillance camera with tabular meteorological variables collected from a nearby meteorological station. The meteorological input includes temperature, absolute pressure, relative humidity, dew point, wet bulb temperature, average and maximum wind speed, amount of precipitation, solar radiation, and ultraviolet index. Six neural network models for visibility estimation were developed and compared: a multimodal model utilizing both image and tabular meteorological inputs; two ablation models that use only unimodal input (image or meteorological data); a regions-of-interest (ROIs) based model that extracts features from predefined image subregions; and two ablation models that use only a reduced set of meteorological variables. The multimodal model uses EfficientNetV2M for feature extraction and a set of fully connected neural networks to integrate the two modalities. The ROIs-based model also uses EfficientNetV2M, but only on manually selected reference regions of the scene. Evaluation was performed on a dataset of 1000 annotated images, with visibility manually determined based on reference points in the scene. The multimodal model achieved a mean squared error of 129,716 m², a mean absolute error of 165.4 m, and an R² score of 0.8861, with 84.46 % of predictions falling within a 10 % relative error margin. Although the ROIs-based model slightly outperformed the multimodal model in some regression metrics, its accuracy within tolerance thresholds was lower, and its reliance on manual scene annotation limits scalability. In contrast, the ablation models clearly demonstrated lower performance in almost all evaluated criteria.
The results show that the proposed multimodal input strategy provides a balanced and practical approach to automated visibility estimation. Compared to conventional unimodal input models, this architecture offers improved accuracy, stability, and generalisation ability, making it suitable for real-world applications where both visual and environmental data are available. | eng |
| dc.format | 114732 | |
| dc.identifier.doi | https://doi.org/10.1016/j.knosys.2025.114732 | |
| dc.identifier.issn | 0950-7051 | |
| dc.identifier.issn | 1872-7409 | |
| dc.identifier.orcid | Kopecký, Dušan: 0000-0003-2813-7343 | |
| dc.identifier.orcid | Štursa, Dominik: 0000-0002-2324-162X | |
| dc.identifier.orcid | Krejčí, Tomáš: 0000-0001-7328-4989 | |
| dc.identifier.orcid | Doležel, Petr: 0000-0002-7359-0764 | |
| dc.identifier.uri | https://hdl.handle.net/10195/86462 | |
| dc.language.iso | eng | |
| dc.peerreviewed | yes | eng |
| dc.project.ID | MŠMT/OP JAK/CZ.02.01.01/00/23_021/0008402/CZ/Mezisektorová a mezioborová spolupráce ve výzkumu a vývoji komunikačních, informačních a detekčních technologií pro řídicí a zabezpečovací systémy/CIDET | cze |
| dc.publicationstatus | published | eng |
| dc.publisher | Elsevier | |
| dc.relation | https://zenodo.org/records/15494899 | |
| dc.relation.ispartof | Knowledge-Based Systems. Volume 331, 3 December 2025, 114732 | eng |
| dc.relation.publisherversion | https://www.sciencedirect.com/science/article/pii/S095070512501771X | |
| dc.rights | open access | eng |
| dc.rights.licence | CC BY 4.0 | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | Atmospheric visibility | eng |
| dc.subject | Neural network | eng |
| dc.subject | Multimodal dataset | eng |
| dc.subject | Meteorological variables | eng |
| dc.subject | Deep learning | eng |
| dc.title | Estimation of atmospheric visibility by deep learning model using multimodal dataset | eng |
| dc.type | article | eng |
| dspace.entity.type | Publication |
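The abstract reports four evaluation criteria: mean squared error, mean absolute error, R² score, and the share of predictions within a 10 % relative error margin. As a minimal sketch of how such metrics are commonly computed for a visibility regressor, the helper below implements them in plain Python; this is an illustrative reconstruction, not the authors' published code, and the sample visibility values are invented.

```python
def regression_metrics(y_true, y_pred):
    """Compute MSE, MAE, R^2, and the fraction of predictions
    falling within a 10 % relative error margin."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - sum(e * e for e in errors) / ss_tot
    # Share of predictions whose absolute error is at most 10 % of the true value
    within = sum(abs(e) / t <= 0.10 for t, e in zip(y_true, errors)) / n
    return mse, mae, r2, within

# Invented example: visibility distances in metres
y_true = [1000, 2000, 3000, 4000]
y_pred = [1100, 1900, 3300, 4050]
mse, mae, r2, within = regression_metrics(y_true, y_pred)
```

The 10 % tolerance criterion reads differently from the squared-error metrics: a 300 m error at 3000 m true visibility passes, while the same error at 1000 m would not, which is why the paper reports both kinds of measures.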
Files
Original bundle
- Name: Article_Estimation of Atmospheric Visibility by Deep Learning Model Using Multimodal Dataset.pdf
- Size: 6.86 MB
- Format: Adobe Portable Document Format
Bundle license
- Name: license.txt
- Size: 1.71 KB
- Format: Item-specific license agreed upon to submission