
Publication:
Estimation of atmospheric visibility by deep learning model using multimodal dataset

Article · open access · peer-reviewed · published
dc.contributor.author: Kopecká, Jitka
dc.contributor.author: Kopecký, Dušan
dc.contributor.author: Štursa, Dominik
dc.contributor.author: Rácová, Zuzana
dc.contributor.author: Krejčí, Tomáš
dc.contributor.author: Doležel, Petr
dc.date.accessioned: 2025-11-11T14:46:55Z
dc.date.issued: 2025
dc.description.abstract: Accurate estimation of atmospheric visibility is essential for numerous safety-critical applications, particularly in the field of transportation. In this study, a deep learning-based approach is investigated using a multimodal input representation that combines RGB images from a fixed-position surveillance camera with tabular meteorological variables collected from a nearby meteorological station. The meteorological input includes temperature, absolute pressure, relative humidity, dew point, wet bulb temperature, average and maximum wind speed, amount of precipitation, solar radiation, and ultraviolet index. Six neural network models for visibility estimation were developed and compared: a multimodal model utilizing both image and tabular meteorological inputs; two ablation models that use only unimodal input (image or meteorological data); a regions-of-interest (ROIs) based model that extracts features from predefined image subregions; and two ablation models that use only a reduced subset of the meteorological variables. The multimodal model uses EfficientNetV2M for feature extraction and a set of fully connected neural networks to integrate the two modalities. The ROIs-based model also uses EfficientNetV2M, but only on manually selected reference regions of the scene. Evaluation was performed on a dataset of 1000 annotated images, with visibility manually determined based on reference points in the scene. The multimodal model achieved a mean squared error of 129,716 m², a mean absolute error of 165.4 m, and an R² score of 0.8861, with 84.46 % of predictions falling within a 10 % relative error margin. Although the ROIs-based model slightly outperformed the multimodal model in some regression metrics, its accuracy within tolerance thresholds was lower, and its reliance on manual scene annotation limits scalability. In contrast, the ablation models clearly demonstrated lower performance in almost all evaluated criteria.
The results show that the proposed multimodal input strategy provides a balanced and practical approach to automated visibility estimation. Compared to conventional unimodal input models, this architecture offers improved accuracy, stability, and generalisation ability, making it suitable for real-world applications where both visual and environmental data are available.
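The abstract reports four evaluation criteria: mean squared error, mean absolute error, the R² score, and the share of predictions within a 10 % relative error margin. A minimal sketch of how these criteria can be computed is shown below; the function name and the toy visibility values are illustrative assumptions, not the paper's data or code.

```python
def regression_metrics(y_true, y_pred, rel_tol=0.10):
    """Compute MSE, MAE, R², and the fraction of predictions
    within a relative error margin of rel_tol (default 10 %)."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mse = sum(e ** 2 for e in errors) / n                  # mean squared error
    mae = sum(abs(e) for e in errors) / n                  # mean absolute error
    mean_t = sum(y_true) / n
    ss_res = sum(e ** 2 for e in errors)                   # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)        # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                             # coefficient of determination
    # Share of predictions whose relative error |p - t| / t is within rel_tol.
    within = sum(1 for t, p in zip(y_true, y_pred)
                 if abs(p - t) / t <= rel_tol) / n
    return mse, mae, r2, within

# Toy visibility values in metres (illustrative only).
y_true = [1000.0, 2000.0, 1500.0, 3000.0]
y_pred = [1050.0, 1900.0, 1480.0, 3600.0]
mse, mae, r2, within = regression_metrics(y_true, y_pred)
```

Note that MSE carries units of m² while MAE is in metres, which matches the units reported in the abstract (129,716 m² and 165.4 m, respectively).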
dc.format: 114732
dc.identifier.doi: https://doi.org/10.1016/j.knosys.2025.114732
dc.identifier.issn: 0950-7051
dc.identifier.issn: 1872-7409
dc.identifier.orcid: Kopecký, Dušan: 0000-0003-2813-7343
dc.identifier.orcid: Štursa, Dominik: 0000-0002-2324-162X
dc.identifier.orcid: Krejčí, Tomáš: 0000-0001-7328-4989
dc.identifier.orcid: Doležel, Petr: 0000-0002-7359-0764
dc.identifier.uri: https://hdl.handle.net/10195/86462
dc.language.iso: eng
dc.peerreviewed: yes
dc.project.ID: MŠMT/OP JAK/CZ.02.01.01/00/23_021/0008402/CZ/Intersectoral and interdisciplinary cooperation in research and development of communication, information, and detection technologies for control and safety systems/CIDET
dc.publicationstatus: published
dc.publisher: Elsevier
dc.relation: https://zenodo.org/records/15494899
dc.relation.ispartof: Knowledge-Based Systems, Volume 331, 3 December 2025, 114732
dc.relation.publisherversion: https://www.sciencedirect.com/science/article/pii/S095070512501771X
dc.rights: open access
dc.rights.licence: CC BY 4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Atmospheric visibility
dc.subject: Neural network
dc.subject: Multimodal dataset
dc.subject: Meteorological variables
dc.subject: Deep learning
dc.title: Estimation of atmospheric visibility by deep learning model using multimodal dataset
dc.type: article
dspace.entity.type: Publication

Files

Original bundle

Name:
Article_Estimation of Atmospheric Visibility by Deep Learning Model Using Multimodal Dataset.pdf
Size:
6.86 MB
Format:
Adobe Portable Document Format

Bundle licence

Name:
license.txt
Size:
1.71 KB
Format:
Item-specific license agreed upon to submission