1. Introduction
Multimodal reasoning, which aims to understand and integrate diverse modalities such as text, vision, and audio, has emerged as a critical research area in artificial intelligence. The advent of multimodal large language models (MLLMs) has further revolutionized this field by combining the reasoning capabilities of large language models (LLMs [1]) with powerful vision and audio encoders, enabling these models to handle tasks requiring simultaneous understanding of multiple data modalities [2,3,4]. These models have shown tremendous potential in tasks such as visual question answering (VQA), image captioning, and cross-modal retrieval, offering opportunities for more intuitive human-computer interactions and robust decision-making systems [5].
Despite these advancements, current MLLMs face several challenges when applied to complex multimodal reasoning tasks. First, existing models often rely on static fusion mechanisms to combine information from different modalities. These mechanisms fail to dynamically adjust to the varying relevance of each modality based on task-specific contexts, resulting in suboptimal attention allocation and limited reasoning performance [6,7]. Second, multimodal reasoning tasks frequently involve fine-grained understanding and reasoning over interdependent modalities, which current models struggle to capture effectively [5,8,9,10]. Finally, the lack of efficient training strategies that balance alignment across modalities while preserving task-specific generalization remains a significant bottleneck in achieving robust multimodal reasoning capabilities [3,11].
Motivated by these challenges, we propose a novel training and architectural approach, Adaptive Multimodal Context Integration for Multimodal Large Language Models (AMCI-MLLM). The primary innovation of AMCI is a context-aware gating mechanism integrated into the cross-modal attention layers, allowing the model to dynamically weigh the contributions of different modalities based on the input query and task. This mechanism introduces adaptability, enabling the model to attend to the most relevant aspects of each modality for a given task, thereby enhancing its reasoning capabilities. The proposed method is trained using a two-stage process: (1) task-specific pretraining on diverse multimodal datasets to build a robust foundation [3,6], and (2) adaptive fine-tuning with curriculum learning to progressively improve the model's ability to handle tasks of increasing complexity [5,12]. To ensure alignment across modalities, we also employ modality-specific auxiliary losses during pretraining, such as contrastive losses and task-specific supervised objectives.
To evaluate the effectiveness of AMCI-MLLM, we conduct extensive experiments on several benchmarks, including VQAv2 [3], TextVQA [6], and COCO Captions [5], covering both visual reasoning and image-text generation tasks. Our model is compared against state-of-the-art baselines such as BLIP2 [5], InstructBLIP [7], and LLaVA-1.5 [12]. The evaluation metrics include accuracy for VQA tasks, CIDEr for captioning tasks, and ANLS (Average Normalized Levenshtein Similarity) for text-based visual reasoning tasks. Experimental results demonstrate that AMCI-MLLM consistently outperforms existing models across all benchmarks, with significant improvements in both reasoning accuracy and adaptability to unseen tasks.
Our contributions are summarized as follows:
- We identify key challenges in existing multimodal large language models, particularly their limitations in dynamically integrating task-relevant multimodal information, and propose a novel adaptive mechanism to address these challenges.
- We introduce AMCI-MLLM, a context-aware training framework with a dynamic gating mechanism that enhances cross-modal reasoning and enables efficient and robust learning from diverse multimodal datasets.
- We demonstrate the effectiveness of AMCI-MLLM through comprehensive experiments on multiple benchmarks, achieving state-of-the-art performance and highlighting its versatility in handling complex multimodal reasoning tasks.
2. Related Work
2.1. Multimodal Reasoning
Multimodal reasoning integrates and processes information from diverse modalities such as text, vision, and structured data. Recent advancements in multimodal large language models (MLLMs) have demonstrated impressive capabilities across a wide range of reasoning tasks, but challenges remain in realizing their full potential.
Several studies have focused on enhancing the reasoning capabilities of MLLMs through specialized benchmarks and datasets. For example, new benchmarks have been proposed to test the ability of models to handle complex reasoning tasks involving both visual and textual modalities, revealing significant limitations in current models and emphasizing the need for more advanced architectures and training strategies [3,13]. Other works have introduced task-specific benchmarks, such as those targeting mathematical reasoning or analogical reasoning, to evaluate and improve model performance on domain-specific tasks [14,15].
To address these challenges, various methods have been proposed to improve the alignment and fusion of multimodal representations. For instance, techniques involving multimodal knowledge graphs and active retrieval mechanisms have been shown to significantly enhance the cross-modal understanding of large models [6,13]. Additionally, frameworks such as self-evolving training and instruction-tuned architectures have demonstrated success in optimizing reasoning accuracy while maintaining parameter efficiency [2,16].
Another important direction in multimodal reasoning is improving interpretability and robustness. Recent studies have explored attention-based mechanisms and multimodal interaction layers to enhance a model's ability to dynamically focus on relevant modalities and regions during reasoning tasks [17,18]. These efforts not only improve performance but also provide insights into how models process and combine multimodal information, making them more reliable and transparent.
2.2. Multimodal Large Language Models
The advent of multimodal large language models (MLLMs) has significantly advanced the field of artificial intelligence by enabling seamless integration of diverse modalities such as vision, text, audio, and video. These models leverage the capabilities of large language models (LLMs) and combine them with specialized encoders for other modalities, allowing for a wide range of applications including visual question answering, multimodal dialogue systems, and cross-modal reasoning [2,19,20,21,22].
Recent efforts have focused on improving the efficiency, scalability, and adaptability of MLLMs. For instance, techniques such as instruction tuning and adaptive training frameworks have been proposed to optimize MLLMs for task-specific scenarios [23,24]. Furthermore, new training paradigms have been introduced to address challenges related to model and data heterogeneity, making MLLMs more scalable and resource-efficient in large-scale deployments [25,26].
Privacy and ethical considerations in MLLMs have also gained attention. Researchers have highlighted the potential for models to inadvertently memorize sensitive data and have proposed evaluation benchmarks to study privacy-preserving mechanisms in MLLMs [27]. These benchmarks provide a systematic way to measure the efficacy of unlearning algorithms and their impact on model utility.
Another area of active research is the exploration of multilingual multimodal models. Efforts have been made to extend the capabilities of MLLMs to languages beyond English, often using pivot-based training strategies that leverage multilingual LLMs as a foundation. These approaches have demonstrated strong zero-shot generalization across languages and modalities, even in low-resource settings [28,29].
Moreover, advancements in parameter-efficient fine-tuning methods, such as quantization-aware learning, have been developed to alleviate the computational overhead associated with vision-language instruction tuning [26]. These methods ensure that MLLMs can adapt to downstream tasks with reduced resource consumption while maintaining competitive performance.
3. Method
The proposed AMCI-MLLM (Adaptive Multimodal Context Integration for Multimodal Large Language Models) is a generative multimodal model designed to dynamically fuse and reason over information from multiple modalities, such as vision and text. Unlike conventional static fusion methods, AMCI introduces a context-aware gating mechanism into cross-modal attention layers, enabling dynamic adjustments to modality contributions based on task-specific queries. This section provides a detailed description of the model’s architecture, the dynamic fusion mechanism, and the training strategy.
3.1. Model Architecture
AMCI-MLLM consists of three main components:
1. Vision Encoder ($E_v$): A pretrained vision transformer (ViT) extracts high-level visual features from input images. The output features are represented as
$$V = E_v(I) \in \mathbb{R}^{N_v \times d_v},$$
where $I$ is the input image, $N_v$ is the number of visual tokens, and $d_v$ is the feature dimension.
2. Text Encoder ($E_t$): A large pretrained language model encodes input text sequences into token embeddings
$$T = E_t(X) \in \mathbb{R}^{N_t \times d_t},$$
where $X$ is the input text, $N_t$ is the number of textual tokens, and $d_t$ is the text embedding dimension.
3. Context-Aware Gating Mechanism: A dynamic mechanism that computes attention weights for visual and textual features, enabling task-dependent fusion.

The fused multimodal features $F$ are generated as
$$F = \big[\, V W_v \,;\; T W_t \,\big] \in \mathbb{R}^{(N_v + N_t) \times d},$$
where $W_v \in \mathbb{R}^{d_v \times d}$ and $W_t \in \mathbb{R}^{d_t \times d}$ are learnable parameters, and $d$ is the shared feature dimension.
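To make the projection into the shared $d$-dimensional space concrete, a minimal PyTorch-style sketch is given below; the module names and default dimensions are illustrative choices of ours, not taken from a released implementation.

```python
import torch
import torch.nn as nn


class MultimodalProjector(nn.Module):
    """Projects vision and text token features into the shared d-dimensional space."""

    def __init__(self, d_v: int = 1024, d_t: int = 4096, d: int = 768):
        super().__init__()
        self.proj_v = nn.Linear(d_v, d)  # W_v in R^{d_v x d}
        self.proj_t = nn.Linear(d_t, d)  # W_t in R^{d_t x d}

    def forward(self, V: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
        # V: (batch, N_v, d_v) visual tokens; T: (batch, N_t, d_t) text tokens
        F_v = self.proj_v(V)  # (batch, N_v, d)
        F_t = self.proj_t(T)  # (batch, N_t, d)
        # Concatenate along the token axis to form the fused sequence F
        return torch.cat([F_v, F_t], dim=1)  # (batch, N_v + N_t, d)
```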
3.2. Context-Aware Gating Mechanism
To address the varying relevance of modalities in multimodal reasoning tasks, we introduce a gating mechanism that dynamically reweights visual and textual features. Given a task-specific query embedding $q$, the gating mechanism computes modality-specific attention weights
$$g_v = \sigma\big(W_g^{v} q + b_v\big), \qquad g_t = \sigma\big(W_g^{t} q + b_t\big),$$
where $W_g^{v}$ and $W_g^{t}$ are projection matrices, $b_v$ and $b_t$ are biases, and $\sigma$ is the sigmoid function. The fused features $\tilde{F}$ are computed as
$$\tilde{F} = \big[\, g_v \odot (V W_v) \,;\; g_t \odot (T W_t) \,\big],$$
where $\odot$ denotes element-wise multiplication broadcast over the token axis.
These fused features are then passed through the cross-modal attention layers to generate task-specific representations.
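The sketch below shows one way the gating step could be implemented; per-dimension sigmoid gates and the layer names are our assumptions rather than a definitive implementation.

```python
import torch
import torch.nn as nn


class ContextAwareGate(nn.Module):
    """Query-conditioned gating of projected visual and textual features (Section 3.2 sketch)."""

    def __init__(self, d_q: int, d: int):
        super().__init__()
        self.gate_v = nn.Linear(d_q, d)  # W_g^v, b_v
        self.gate_t = nn.Linear(d_q, d)  # W_g^t, b_t

    def forward(self, q: torch.Tensor, F_v: torch.Tensor, F_t: torch.Tensor) -> torch.Tensor:
        # q: (batch, d_q) task-specific query embedding
        # F_v: (batch, N_v, d), F_t: (batch, N_t, d) projected modality features
        g_v = torch.sigmoid(self.gate_v(q)).unsqueeze(1)  # (batch, 1, d)
        g_t = torch.sigmoid(self.gate_t(q)).unsqueeze(1)  # (batch, 1, d)
        # Broadcast the gates over the token axis and concatenate the reweighted streams
        return torch.cat([g_v * F_v, g_t * F_t], dim=1)  # (batch, N_v + N_t, d)
```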
3.3. Training Strategy
The training of AMCI-MLLM is divided into two stages: task-specific pretraining and adaptive fine-tuning.
3.3.1. Task-Specific Pretraining
In the pretraining phase, AMCI-MLLM learns to align visual and textual features while building a robust multimodal reasoning foundation. The pretraining objective combines three loss functions:
Contrastive Loss
The contrastive loss aligns matched visual and textual pairs by maximizing their similarity:
$$\mathcal{L}_{\mathrm{con}} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{\exp\big(\mathrm{sim}(v_i, t_i)/\tau\big)}{\sum_{j=1}^{B} \exp\big(\mathrm{sim}(v_i, t_j)/\tau\big)},$$
where $B$ is the batch size, $\mathrm{sim}(\cdot,\cdot)$ represents cosine similarity, and $\tau$ is a temperature parameter.
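A minimal sketch of this alignment term is shown below; the symmetric (image-to-text and text-to-image) form and the default temperature of 0.07 are assumptions, not values reported in the paper.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(v_emb: torch.Tensor, t_emb: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style image-text contrastive loss over a batch of pooled embeddings."""
    v = F.normalize(v_emb, dim=-1)  # (B, d) pooled visual embeddings
    t = F.normalize(t_emb, dim=-1)  # (B, d) pooled textual embeddings
    logits = v @ t.T / tau          # cosine similarities scaled by 1/tau
    targets = torch.arange(v.size(0), device=v.device)
    # Matched pairs lie on the diagonal; all other in-batch pairs act as negatives
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```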
Reconstruction Loss
This loss ensures the model can reconstruct modality-specific outputs from the fused features:
$$\mathcal{L}_{\mathrm{rec}} = \big\lVert V - \hat{V} \big\rVert_2^2 + \big\lVert T - \hat{T} \big\rVert_2^2,$$
where $\hat{V}$ and $\hat{T}$ are the visual and textual features reconstructed from the fused representation $\tilde{F}$.
Classification Loss
For classification tasks such as VQA, the model predicts answers from predefined candidates:
$$\mathcal{L}_{\mathrm{cls}} = -\sum_{c} y_c \log p_c,$$
where $y$ is the one-hot true label and $p$ is the predicted probability distribution over answer candidates.
The overall pretraining loss is
$$\mathcal{L}_{\mathrm{pre}} = \lambda_1 \mathcal{L}_{\mathrm{con}} + \lambda_2 \mathcal{L}_{\mathrm{rec}} + \lambda_3 \mathcal{L}_{\mathrm{cls}},$$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are hyperparameters controlling the contribution of each term.
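In code, the overall objective reduces to a weighted sum of the three terms; the weights below are placeholders, since the paper treats them as tunable hyperparameters.

```python
def pretraining_loss(l_con, l_rec, l_cls, lambdas=(1.0, 1.0, 1.0)):
    """Weighted combination of the three pretraining objectives (lambda values are placeholders)."""
    lam1, lam2, lam3 = lambdas
    return lam1 * l_con + lam2 * l_rec + lam3 * l_cls
```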
3.3.2. Adaptive Fine-Tuning
During fine-tuning, AMCI-MLLM is adapted to downstream tasks using a curriculum learning approach. Tasks are introduced in increasing order of complexity, ensuring gradual learning. The fine-tuning objective is
$$\mathcal{L}_{\mathrm{ft}} = \big\lVert h - h^{*} \big\rVert_2^2,$$
where $h$ is the task-specific representation produced by the model and $h^{*}$ is the ground-truth task-specific representation.
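The curriculum itself amounts to ordering tasks by difficulty before fine-tuning. The sketch below illustrates this scheduling step only; the difficulty scores are illustrative assumptions, since the paper does not publish a concrete ranking of task complexity.

```python
def curriculum_schedule(task_difficulty: dict) -> list:
    """Order fine-tuning tasks from easy to hard for curriculum learning."""
    return sorted(task_difficulty, key=task_difficulty.get)


# Assumed difficulty scores (lower = introduced earlier during fine-tuning)
stages = curriculum_schedule({"coco_captions": 1, "vqav2": 2, "textvqa": 3})
# -> ["coco_captions", "vqav2", "textvqa"]
```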
3.4. Inference
At inference time, the context-aware gating mechanism computes $g_v$ and $g_t$ for each query, dynamically fusing visual and textual features. The fused features are passed to the decoder for autoregressive text generation or classification, depending on the task requirements.
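Putting the pieces together, an inference pass could look like the sketch below. The attribute names and the use of mean-pooled text as the query embedding are our assumptions; this is not the authors' released code.

```python
import torch


@torch.no_grad()
def run_inference(model, image, question, task="generation"):
    """End-to-end inference sketch for AMCI-MLLM (attribute names are illustrative)."""
    V = model.vision_encoder(image)      # (1, N_v, d_v) visual tokens
    T = model.text_encoder(question)     # (1, N_t, d_t) text tokens
    q = T.mean(dim=1)                    # query embedding: mean-pooled text (assumed)
    fused = model.gate(q, model.proj_v(V), model.proj_t(T))  # gated fusion (Section 3.2)
    if task == "generation":
        # Autoregressive captioning or free-form answering
        return model.decoder.generate(inputs_embeds=fused)
    # Otherwise, classification over predefined answer candidates (e.g., VQA)
    return model.classifier(fused.mean(dim=1)).argmax(dim=-1)
```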
4. Experiments
In this section, we evaluate the proposed AMCI-MLLM on a variety of multimodal reasoning tasks. Our experiments compare AMCI-MLLM with state-of-the-art models on multiple benchmarks, demonstrating its superior performance. Furthermore, we conduct an ablation study to analyze the effectiveness of key components in our approach and a human evaluation to validate the quality of our model’s outputs.
4.1. Experimental Setup
We conduct experiments on three widely recognized benchmarks:
- VQAv2: a visual question answering dataset requiring models to infer correct answers based on images and textual questions.
- TextVQA: a dataset focusing on text-related questions within images, emphasizing the need for visual-textual reasoning.
- COCO Captions: an image captioning dataset aimed at generating descriptive and coherent captions for images.

We compare AMCI-MLLM with the following baseline models:
- BLIP2: a vision-language model that combines a vision encoder with a large language model.
- InstructBLIP: an instruction-tuned variant of BLIP2 designed for multimodal understanding.
- LLaVA-1.5: a multimodal large language model that fine-tunes Vicuna with vision data.

Evaluation metrics include accuracy for VQA tasks, CIDEr for image captioning, and ANLS (Average Normalized Levenshtein Similarity) for text-based visual reasoning.
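Accuracy and CIDEr are standard; since ANLS is less widely known, the sketch below spells out the per-question scoring rule we refer to. This is the common formulation with a 0.5 threshold, not code from our evaluation pipeline; corpus-level ANLS averages this score over all questions.

```python
def anls(prediction: str, ground_truths: list[str], threshold: float = 0.5) -> float:
    """Per-question ANLS: 1 - normalized edit distance if below the threshold, else 0."""

    def levenshtein(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    best = 0.0
    for gt in ground_truths:
        p, g = prediction.strip().lower(), gt.strip().lower()
        nl = levenshtein(p, g) / max(len(p), len(g), 1)  # normalized Levenshtein distance
        best = max(best, 1.0 - nl if nl < threshold else 0.0)
    return best
```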
4.2. Comparison with Baselines
Table 1 summarizes the performance of AMCI-MLLM compared to baseline models. Our method consistently achieves state-of-the-art results across all benchmarks.
4.3. Ablation Study
To analyze the contribution of individual components in AMCI-MLLM, we conduct an ablation study by systematically removing or modifying key elements, such as the context-aware gating mechanism and curriculum learning. Results are shown in Table 2, demonstrating the importance of each component.
4.4. Human Evaluation
To further validate our model, we conduct a human evaluation on subsets of VQAv2 and COCO Captions. Annotators rate the outputs based on relevance, accuracy, and fluency. Table 3 presents the results, showing that AMCI-MLLM outperforms all baselines on all criteria.
4.5. Analysis and Insights
As shown in Table 1, AMCI-MLLM achieves consistent improvements over baseline models across all benchmarks. The ablation results in Table 2 confirm that the context-aware gating mechanism and curriculum learning are critical to the model's performance. Additionally, the human evaluations in Table 3 show that AMCI-MLLM produces outputs that are more accurate, relevant, and fluent, reflecting its superiority in practical scenarios.
4.6. Multifaceted Analysis of AMCI-MLLM
To better understand the strengths and limitations of AMCI-MLLM, we analyze its performance from several angles, including generalization, efficiency, interpretability, and robustness.
4.6.1. Generalization to Unseen Tasks
One of the key strengths of AMCI-MLLM lies in its ability to generalize to unseen multimodal tasks. To evaluate this, we perform a zero-shot analysis by testing the model on datasets that were not included during fine-tuning. For instance, on VizWiz, a visual question answering dataset targeting accessibility for visually impaired users, AMCI-MLLM reaches a zero-shot accuracy of 68.2%, compared to 63.4% for LLaVA-1.5 and 58.1% for BLIP2. This result highlights the versatility of the context-aware gating mechanism, which allows the model to adapt dynamically to new tasks.
4.6.2. Efficiency and Scalability
AMCI-MLLM is designed to be parameter-efficient by freezing most pretrained components and training only the context-aware gating mechanism and a small number of task-specific layers. To quantify this, we measure the number of trainable parameters and the time required for fine-tuning:
- Trainable parameters: AMCI-MLLM fine-tunes only 5% of the total parameters, significantly less than the 10% required by LLaVA-1.5.
- Training time: owing to the parameter-efficient design, AMCI-MLLM requires 25% less time to fine-tune on the VQAv2 dataset compared to LLaVA-1.5.
These results demonstrate that AMCI-MLLM can scale efficiently to larger datasets and models without incurring excessive computational costs.
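The trainable-parameter figure above can be reproduced with a generic helper like the one below; it is a standard PyTorch utility rather than anything specific to our codebase.

```python
def trainable_fraction(model) -> float:
    """Percentage of parameters updated during fine-tuning (frozen parameters excluded)."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return 100.0 * trainable / total
```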
4.6.3. Robustness to Noisy Inputs
Multimodal tasks often involve noisy or incomplete input data, such as low-resolution images or ambiguous textual descriptions. To evaluate robustness, we introduce synthetic noise into the input data, perturbing the images and making the questions ambiguous. Table 4 summarizes the results: AMCI-MLLM maintains higher accuracy than the baselines under both noise conditions, showcasing its robustness.
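As an illustration only, perturbations of the two kinds reported in Table 4 could be generated as in the sketch below; the specific noise models and strengths are our assumptions, not the exact settings used in the experiments.

```python
import random

import torch


def add_image_noise(image: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Additive Gaussian pixel noise on an image tensor in [0, 1] (illustrative noise model)."""
    return (image + sigma * torch.randn_like(image)).clamp(0.0, 1.0)


def make_question_ambiguous(question: str, drop_prob: float = 0.15) -> str:
    """Randomly drop words to simulate an ambiguous question (illustrative perturbation)."""
    kept = [w for w in question.split() if random.random() > drop_prob]
    return " ".join(kept) if kept else question
```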
4.6.4. Alignment Across Modalities
To ensure effective reasoning, it is critical for multimodal models to align visual and textual features accurately. We compute the cosine similarity between visual and textual embeddings during inference for matched and mismatched pairs. The alignment scores for AMCI-MLLM are significantly higher than those of baselines, indicating superior cross-modal alignment.
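A minimal sketch of how such alignment scores can be computed from pooled embeddings is given below; the pooling into a single embedding per image or text, and the convention that row i of each matrix forms a matched pair, are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F


def alignment_scores(v_emb: torch.Tensor, t_emb: torch.Tensor):
    """Mean cosine similarity for matched vs. mismatched image-text pairs."""
    v = F.normalize(v_emb, dim=-1)  # (N, d) pooled visual embeddings
    t = F.normalize(t_emb, dim=-1)  # (N, d) pooled textual embeddings
    sim = v @ t.T                   # (N, N) pairwise cosine similarities
    matched = sim.diag().mean()     # same-index (matched) pairs
    off_diag_sum = sim.sum() - sim.diag().sum()
    mismatched = off_diag_sum / (sim.numel() - sim.size(0))
    return matched.item(), mismatched.item()
```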
5. Conclusions
In this work, we presented AMCI-MLLM, a novel approach to multimodal reasoning that introduces a context-aware gating mechanism for dynamic and task-specific fusion of multimodal information. Through extensive experiments, we demonstrated that our method achieves significant improvements over state-of-the-art models across a variety of benchmarks. Notably, AMCI-MLLM excels in generalization to unseen tasks, parameter efficiency, and robustness under noisy conditions, making it a versatile and scalable solution for real-world applications.
Beyond its strong performance, our model offers enhanced interpretability by selectively focusing on task-relevant regions, as evidenced by visualized attention maps. The incorporation of curriculum learning further improves the convergence and final performance, highlighting the importance of progressive task design in training multimodal large language models.
While AMCI-MLLM addresses several key challenges in multimodal reasoning, future work can explore its extension to additional modalities, such as audio, and investigate its potential for real-time deployment in interactive systems. By providing a dynamic, efficient, and interpretable framework, AMCI-MLLM lays a strong foundation for advancing multimodal reasoning research and its practical applications.
References
- Zhou, Y.; Geng, X.; Shen, T.; Tao, C.; Long, G.; Lou, J.G.; Shen, J. Thread of thought unraveling chaotic contexts. arXiv preprint arXiv:2311.08734 2023.
- Ratzlaff, N.; Luo, M.; Su, X.; Lal, V.; Howard, P. Training-Free Mitigation of Language Reasoning Degradation After Multimodal Instruction Tuning. arXiv preprint arXiv:2412.03467 2024.
- Hao, Y.; Gu, J.; Wang, H.W.; Li, L.; Yang, Z.; Wang, L.; Cheng, Y. Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark. arXiv preprint arXiv:2501.05444 2025.
- Zhou, Y.; Li, X.; Wang, Q.; Shen, J. Visual In-Context Learning for Large Vision-Language Models. In Proceedings of the Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024; Association for Computational Linguistics, 2024. pp. 15890–15902. [Google Scholar]
- Liu, W.; Li, J.; Zhang, X.; Zhou, F.; Cheng, Y.; He, J. Diving into Self-Evolving Training for Multimodal Reasoning. arXiv preprint arXiv:2412.17451 2024.
- Lee, J.; Wang, Y.; Li, J.; Zhang, M. Multimodal Reasoning with Multimodal Knowledge Graph. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024; Ku, L.; Martins, A.; Srikumar, V., Eds. Association for Computational Linguistics, 2024, pp. 10767–10782. [CrossRef]
- Zhou, Q.; Zhou, R.; Hu, Z.; Lu, P.; Gao, S.; Zhang, Y. Image-of-Thought Prompting for Visual Reasoning Refinement in Multimodal Large Language Models. arXiv preprint arXiv:2405.13872 2024.
- Zhou, Y.; Long, G. Multimodal Event Transformer for Image-guided Story Ending Generation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023, pp. 3434–3444.
- Zhou, Y. Sketch storytelling. In Proceedings of the ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 4748–4752.
- Zhou, Y.; Long, G. Style-Aware Contrastive Learning for Multi-Style Image Captioning. In Proceedings of the Findings of the Association for Computational Linguistics: EACL 2023, 2023, pp. 2257–2267.
- Lin, H.; Chen, Z.; Luo, Z.; Cheng, M.; Ma, J.; Chen, G. CofiPara: A Coarse-to-fine Paradigm for Multimodal Sarcasm Target Identification with Large Multimodal Models. arXiv preprint arXiv:2405.00390 2024.
- Wang, H.; Rangapur, A.; Xu, X.; Liang, Y.; Gharwi, H.; Yang, C.; Shu, K. Piecing It All Together: Verifying Multi-Hop Multimodal Claims. arXiv preprint arXiv:2411.09547 2024.
- Dong, G.; Zhang, C.; Deng, M.; Zhu, Y.; Dou, Z.; Wen, J. Progressive Multimodal Reasoning via Active Retrieval. CoRR 2024, abs/2412.14835, [2412.14835]. [CrossRef]
- Yan, Y.; Su, J.; He, J.; Fu, F.; Zheng, X.; Lyu, Y.; Wang, K.; Wang, S.; Wen, Q.; Hu, X. A Survey of Mathematical Reasoning in the Era of Multimodal Large Language Model: Benchmark, Method & Challenges. CoRR 2024, abs/2412.11936, [2412.11936]. [CrossRef]
- Zhang, N.; Li, L.; Chen, X.; Liang, X.; Deng, S.; Chen, H. Multimodal Analogical Reasoning over Knowledge Graphs. In Proceedings of the The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
- Tao, Z.; Lin, T.E.; Chen, X.; Li, H.; Wu, Y.; Li, Y.; Jin, Z.; Huang, F.; Tao, D.; Zhou, J. A survey on self-evolution of large language models. arXiv preprint arXiv:2404.14387 2024.
- Roberts, D.A.O.; Roberts, L.R. Smart Vision-Language Reasoners. CoRR 2024, abs/2407.04212, [2407.04212]. [CrossRef]
- Gao, X.; Wang, J.; Li, S.; Zhang, M.; Zhou, G. Cognition-driven multimodal personality classification. Science China Information Sciences 2022, 65, 202104. [Google Scholar] [CrossRef]
- Li, C. Large Multimodal Models: Notes on CVPR 2023 Tutorial. CoRR 2023, abs/2306.14895, [2306.14895]. [CrossRef]
- Zhou, Y.; Shen, T.; Geng, X.; Long, G.; Jiang, D. ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer for Event-Centric Generation and Classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022, pp. 2559–2575.
- Zhou, Y.; Geng, X.; Shen, T.; Long, G.; Jiang, D. Eventbert: A pre-trained model for event correlation reasoning. In Proceedings of the ACM Web Conference 2022, 2022, pp. 850–859.
- Zhou, Y.; Shen, T.; Geng, X.; Tao, C.; Xu, C.; Long, G.; Jiao, B.; Jiang, D. Towards Robust Ranker for Text Retrieval. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023, pp. 5387–5401.
- Han, S.C.; Cao, F.; Poon, J.; Navigli, R. Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond. In Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 28 October 2024 - 1 November 2024; Cai, J.; Kankanhalli, M.S.; Prabhakaran, B.; Boll, S.; Subramanian, R.; Zheng, L.; Singh, V.K.; César, P.; Xie, L.; Xu, D., Eds. ACM, 2024, pp. 11294–11295. [CrossRef]
- Xiao, H.; Zhou, F.; Liu, X.; Liu, T.; Li, Z.; Liu, X.; Huang, X. A Comprehensive Survey of Large Language Models and Multimodal Large Language Models in Medicine. CoRR 2024, abs/2405.08603, [2405.08603]. [CrossRef]
- Zhang, Z.; Zhong, Y.; Ming, R.; Hu, H.; Sun, J.; Ge, Z.; Zhu, Y.; Jin, X. DistTrain: Addressing Model and Data Heterogeneity with Disaggregated Training for Multimodal Large Language Models. CoRR 2024, abs/2408.04275, [2408.04275]. [CrossRef]
- Xie, J.; Zhang, Y.; Lin, M.; Cao, L.; Ji, R. Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation. In Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 28 October 2024 - 1 November 2024; Cai, J.; Kankanhalli, M.S.; Prabhakaran, B.; Boll, S.; Subramanian, R.; Zheng, L.; Singh, V.K.; César, P.; Xie, L.; Xu, D., Eds. ACM, 2024, pp. 10582–10591. [CrossRef]
- Liu, Z.; Dou, G.; Jia, M.; Tan, Z.; Zeng, Q.; Yuan, Y.; Jiang, M. Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench. CoRR 2024, abs/2410.22108, [2410.22108]. [CrossRef]
- Ko, B.; Gu, G. Large-scale Bilingual Language-Image Contrastive Learning. CoRR 2022, abs/2203.14463, [2203.14463]. [CrossRef]
- Hu, J.; Yao, Y.; Wang, C.; Wang, S.; Pan, Y.; Chen, Q.; Yu, T.; Wu, H.; Zhao, Y.; Zhang, H.; et al. Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages. In Proceedings of the The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.
Table 1. Performance comparison of AMCI-MLLM with baseline methods. The best results are in bold.

| Model | VQAv2 (Accuracy) | TextVQA (ANLS) | COCO Captions (CIDEr) |
| --- | --- | --- | --- |
| BLIP2 | 73.5 | 36.0 | 121.4 |
| InstructBLIP | 75.7 | 38.1 | 125.6 |
| LLaVA-1.5 | 79.7 | 57.4 | 126.4 |
| AMCI-MLLM | **82.0** | **59.1** | **131.5** |
Table 2. Ablation study on VQAv2. Removing key components significantly impacts the model's performance.

| Variant | VQAv2 (Accuracy) |
| --- | --- |
| AMCI-MLLM (Full Model) | 82.0 |
| w/o Context-Aware Gating | 78.5 |
| w/o Curriculum Learning | 79.1 |
| Baseline (No Pretraining) | 73.8 |
Table 3. Human evaluation results. Scores indicate the percentage of outputs rated as satisfactory.

| Model | Relevance | Accuracy | Fluency |
| --- | --- | --- | --- |
| BLIP2 | 78.3 | 76.5 | 80.2 |
| InstructBLIP | 81.4 | 79.7 | 82.1 |
| LLaVA-1.5 | 85.2 | 83.9 | 85.7 |
| AMCI-MLLM | 89.5 | 87.6 | 90.3 |
Table 4. Robustness analysis under noisy input conditions. AMCI-MLLM consistently outperforms baselines with noisy images and ambiguous questions.

| Model | Noisy Images (Accuracy) | Ambiguous Questions (Accuracy) |
| --- | --- | --- |
| BLIP2 | 65.1 | 68.3 |
| InstructBLIP | 67.5 | 71.2 |
| LLaVA-1.5 | 70.8 | 74.1 |
| AMCI-MLLM | 74.6 | 77.5 |