Image captioning with audio has emerged as a challenging yet promising task in the field of deep learning. This paper proposes a novel approach to address this task by integrating convolutional neural networks (CNNs) for image feature extraction and recurrent neural networks (RNNs) for sequential audio analysis. Specifically, we leverage pre-trained CNNs such as VGG to extract visual features from images and employ spectrogram representations coupled with RNNs such as LSTM or GRU to process audio inputs. Our proposed model generates captions based not only on the visual content of images but also on accompanying audio cues. We evaluate the performance of our model on benchmark datasets and demonstrate its effectiveness in generating coherent and contextually relevant captions for images with corresponding audio inputs. Additionally, we conduct ablation studies to analyze the contribution of each modality to the overall captioning performance. Our results show that the fusion of visual and auditory modalities significantly improves captioning quality compared to using either modality in isolation.
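The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of one plausible realisation: a frozen, pre-trained VGG16 encodes the image, an LSTM runs over mel-spectrogram frames, the two feature vectors are fused by concatenation, and a word-level LSTM decoder generates the caption. The layer sizes, the concatenation-based fusion, and the teacher-forced decoder are assumptions for illustration, not details taken from the book.

```python
import torch
import torch.nn as nn
from torchvision import models


class AudioVisualCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, audio_dim=128, hidden_dim=512):
        super().__init__()
        # Frozen, pre-trained VGG16 as the visual feature extractor.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.cnn = vgg.features
        for p in self.cnn.parameters():
            p.requires_grad = False
        # VGG16 features for a 224x224 image are (B, 512, 7, 7).
        self.visual_proj = nn.Linear(512 * 7 * 7, hidden_dim)

        # LSTM over spectrogram frames, shaped (B, time, mel bins).
        self.audio_rnn = nn.LSTM(audio_dim, hidden_dim, batch_first=True)

        # Fuse the two modalities by concatenation, then project the result
        # to the decoder's initial hidden state (an assumed fusion scheme).
        self.fusion = nn.Linear(2 * hidden_dim, hidden_dim)

        # Word-level LSTM decoder that generates the caption tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, spectrograms, captions):
        # Visual features: (B, 512, 7, 7) -> (B, hidden_dim).
        v = self.visual_proj(self.cnn(images).flatten(1))

        # Audio features: last hidden state of the LSTM over the spectrogram.
        _, (h_a, _) = self.audio_rnn(spectrograms)
        a = h_a[-1]

        # Fused context initialises the decoder state.
        ctx = torch.tanh(self.fusion(torch.cat([v, a], dim=1)))
        h0 = ctx.unsqueeze(0)
        c0 = torch.zeros_like(h0)

        # Teacher-forced decoding over the ground-truth caption tokens.
        dec_out, _ = self.decoder(self.embed(captions), (h0, c0))
        return self.out(dec_out)  # (B, caption_len, vocab_size)
```

In practice the spectrograms would come from an audio front end such as torchaudio's mel-spectrogram transform, and at inference time the decoder would be run autoregressively (e.g. greedy or beam search) rather than teacher-forced.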
- Format: Paperback
- ISBN: 9786207647606
- Language: English
- Number of pages: 64
- Publication date: 2024-05-16
- Publisher: LAP Lambert Academic Publishing