Privacy-Preserving AI for Detecting and Mitigating Customer Price Discrimination in Big-Data Systems
DOI: https://doi.org/10.71222/qzrhgr07

Keywords: Privacy-Preserving AI, Customer Price Discrimination, Big-Data Systems, Federated Learning, Differential Privacy

Abstract
With the rapid development of big data and artificial intelligence technologies, personalized pricing has become a common strategy for platforms to enhance profitability. However, this practice often evolves into customer price discrimination (CPD), which seriously harms consumer rights and market fairness. Meanwhile, the data-driven nature of AI models for CPD detection raises severe privacy concerns. To address the dual challenges of CPD governance and data privacy protection, this paper proposes a privacy-preserving AI framework for CPD detection and mitigation in big-data systems. First, we design a federated learning-based detection model that enables multiple data holders to collaborate on model training without sharing raw user data. Second, differential privacy is integrated into the model training process to prevent sensitive information from leaking through gradient computations. For CPD mitigation, a dynamic pricing calibration mechanism based on explainable AI is proposed to ensure pricing transparency while maintaining platform operational efficiency. Experimental results on real-world e-commerce and ride-hailing datasets show that the proposed framework achieves a detection accuracy of 87.6% for CPD behaviors, with a privacy budget consumption of only 1.2, outperforming both traditional centralized models and privacy-unaware AI models. This research provides a technical solution for balancing personalized services, market fairness, and data privacy protection in big-data systems. Notably, the framework exhibits strong cross-platform adaptability, achieving detection accuracies of over 85% in both e-commerce and ride-hailing scenarios with minimal parameter adjustments, making it suitable for deployment in diverse service industries such as online travel and digital content platforms.
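The combination of federated averaging and differentially private gradient updates described above can be sketched as follows. This is a minimal illustration in the spirit of FedAvg and DP-SGD, not the paper's actual implementation: the linear model, synthetic client data, and all hyperparameters (clipping bound, noise multiplier, learning rate) are invented for the example.

```python
import numpy as np

# Sketch: federated averaging where each client clips per-example
# gradients and adds Gaussian noise before updating, so that only
# noised model weights (never raw user data) leave the client.

rng = np.random.default_rng(0)

CLIP_NORM = 1.0   # per-example gradient clipping bound C (assumed)
NOISE_MULT = 1.1  # noise multiplier sigma, which drives the privacy budget
LR = 0.1          # learning rate (assumed)

def local_dp_update(w, X, y):
    """One client's locally computed, clipped, noised gradient step."""
    grads = []
    for xi, yi in zip(X, y):
        g = (xi @ w - yi) * xi              # per-example squared-loss gradient
        norm = np.linalg.norm(g)
        g = g / max(1.0, norm / CLIP_NORM)  # clip gradient norm to <= C
        grads.append(g)
    g_mean = np.mean(grads, axis=0)
    # Gaussian noise scaled to the clipping bound, as in DP-SGD.
    noise = rng.normal(0.0, NOISE_MULT * CLIP_NORM / len(X), size=w.shape)
    return w - LR * (g_mean + noise)

# Three simulated data holders with private (here: random) pricing features.
d = 4
w_global = np.zeros(d)
clients = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(3)]

for _ in range(20):
    # Each client trains locally; the server only sees model weights.
    local_models = [local_dp_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)  # FedAvg aggregation

print("global model after 20 rounds:", w_global)
```

In a real deployment the noise multiplier would be chosen (e.g. via a moments accountant) to meet a target privacy budget such as the epsilon of 1.2 reported in the abstract; the fixed value here is purely illustrative.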
In practical applications, the framework helps platforms comply with strict data protection regulations, including the EU's GDPR and China's Personal Information Protection Law, while reducing consumer complaints related to price discrimination by an estimated 42%. Additionally, the integration of blockchain-based traceability ensures that pricing adjustment records are tamper-proof, providing regulatory authorities with credible audit trails for market supervision.
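At its core, the tamper-evident audit trail mentioned above rests on hash chaining: each pricing-adjustment record commits to the hash of its predecessor, so any retroactive edit invalidates every later link. The sketch below (with invented record fields; a full blockchain deployment would add consensus and distribution on top) shows only this core mechanism:

```python
import hashlib
import json

def _digest(record: dict) -> str:
    """Deterministic SHA-256 digest of a pricing-adjustment record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, adjustment: dict) -> None:
    """Append an adjustment, committing to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, "adjustment": adjustment}
    chain.append({**body, "hash": _digest(body)})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; tampering with any earlier record fails here."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"prev_hash": rec["prev_hash"], "adjustment": rec["adjustment"]}
        if rec["prev_hash"] != prev_hash or rec["hash"] != _digest(body):
            return False
        prev_hash = rec["hash"]
    return True

chain = []
append_record(chain, {"sku": "A1", "old_price": 9.9, "new_price": 8.5})
append_record(chain, {"sku": "B2", "old_price": 20.0, "new_price": 21.0})
print(verify_chain(chain))   # intact chain verifies: True

chain[0]["adjustment"]["new_price"] = 9.9  # retroactive tampering
print(verify_chain(chain))   # verification now fails: False
```

A regulator holding only the latest hash can therefore detect any rewrite of earlier pricing decisions, which is what makes such records usable as audit evidence.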
License
Copyright (c) 2026 Wenwen Liu (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.
