Abstract: The deep integration of artificial intelligence into consumer credit pricing, while improving service efficiency, can also foster tacit collusion and price discrimination, eroding consumer welfare and market fairness. To systematically quantify the welfare losses caused by algorithmic discrimination, this study constructs a multi-agent dynamic interaction model that integrates three types of agents: pricing institutions driven by deep reinforcement learning (DRL), heterogeneous consumer groups, and the market environment. Using real credit data, the model simulates algorithmic decision-making under different market structures. Experimental results show that the exploitation of data sparsity and coordinated pricing strategies are the primary drivers of fairness loss: disadvantaged groups suffer systematic bias in credit profiling because of feature scarcity, while algorithmic collusion significantly widens interest rate dispersion across the market, subjecting specific groups to abnormal interest rate premiums and ultimately causing substantial welfare losses. The study establishes a reproducible experimental paradigm for measuring algorithmic discrimination, provides micro-level evidence for balancing financial technology innovation with consumer rights protection, and offers insights for improving digital financial regulatory frameworks.
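
The two mechanisms named in the abstract can be illustrated with a deliberately simplified sketch. This is not the study's actual model: plain statistical noise stands in for the DRL pricing agents, and all functions (`estimate_risk`, `price`, `simulate`) and parameter values are hypothetical. The sketch shows how fewer observed features yield noisier risk estimates (and hence more dispersed rates for the sparse-data group), and how a collusive markup shifts all rates upward.

```python
import statistics
import random

def estimate_risk(true_risk, n_features, rng):
    # Data sparsity: fewer observed features -> noisier risk estimate.
    noise_sd = 0.2 / max(n_features, 1) ** 0.5
    return true_risk + rng.gauss(0, noise_sd)

def price(risk_est, collude=False):
    base = 0.03 + 0.5 * max(risk_est, 0.0)  # funding cost + risk premium
    markup = 0.04 if collude else 0.01      # collusion sustains a wider markup
    return base + markup

def simulate(collude, n=2000, seed=42):
    """Price loans for two consumer groups with identical true risk
    but different feature richness; return the two lists of rates."""
    rng = random.Random(seed)
    rates_rich, rates_sparse = [], []
    for _ in range(n):
        true_risk = rng.uniform(0.0, 0.2)
        # "Rich" consumers expose 20 features, "sparse" consumers only 2.
        rates_rich.append(price(estimate_risk(true_risk, 20, rng), collude))
        rates_sparse.append(price(estimate_risk(true_risk, 2, rng), collude))
    return rates_rich, rates_sparse

if __name__ == "__main__":
    rich, sparse = simulate(collude=False)
    print(f"rate dispersion, rich features:   {statistics.pstdev(rich):.4f}")
    print(f"rate dispersion, sparse features: {statistics.pstdev(sparse):.4f}")
```

Under these toy assumptions the sparse-feature group faces visibly higher rate dispersion even though its true risk distribution is identical, and switching `collude=True` raises every rate by the extra markup, mirroring the abstract's two channels of welfare loss.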