Abstract: To address the limited generalization capability of deep learning models under distribution shift, this study proposes a domain generalization method based on the Mamba state-space model that integrates counterfactual semantic enhancement with a causal attention mechanism. A counterfactual semantic enhancement module decouples and recombines foreground and background features to generate counterfactual features, explicitly constructing a causal scenario of “foreground preservation and background intervention”. This mitigates spurious background-label correlations, strengthens the model’s ability to extract causal foreground semantic representations, and guides it toward stable, reliable semantic associations. In addition, a causal attention mechanism explicitly embeds the causal semantic information extracted by this module into the Mamba state update process, improving the causal consistency of the learned features; the overall architecture thus dynamically discriminates and integrates foreground and background information. Experiments on standard domain generalization benchmarks show that the proposed method achieves average accuracies of 91.9%, 77.0%, 81.1%, and 54.9% on the PACS, OfficeHome, VLCS, and TerraIncognita datasets, respectively, outperforming existing state-of-the-art methods. These results confirm that the proposed method significantly improves the consistency of the model’s focus on foreground semantic regions, yielding superior interpretability and generalization performance.
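To make the “foreground preservation and background intervention” idea concrete, the following is a minimal sketch, not the paper’s actual module: it assumes soft foreground masks are already available (the function name `counterfactual_recombine` and both arguments are hypothetical) and simply keeps each sample’s foreground features while swapping in the background of a randomly chosen other sample from the batch.

```python
import torch

def counterfactual_recombine(features: torch.Tensor, fg_masks: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of counterfactual feature generation.

    features: (B, C, H, W) feature maps
    fg_masks: (B, 1, H, W) soft foreground masks in [0, 1]
    """
    B = features.size(0)
    perm = torch.randperm(B)                  # random background donors
    foreground = features * fg_masks          # preserved causal content
    background = features * (1.0 - fg_masks)  # spurious context
    # Recombine: keep own foreground, fill the background region with
    # another sample's background content (background intervention).
    counterfactual = foreground + background[perm] * (1.0 - fg_masks)
    return counterfactual
```

Because the counterfactual features share the original labels, a consistency objective between predictions on the original and recombined features can penalize reliance on background cues; how the paper actually constructs masks and losses is specified in the method section, not in this sketch.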