VisionGuard: Cost-Sensitive AI Attestation with Quorum-Verified Blockchain Enforcement

Authors

  • Sundara Srivathsan M School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
  • Lighittha P. R. School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
  • Prithivraj S. School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
  • Suganya Ramamoorthy School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India https://orcid.org/0000-0003-1874-6479
  • Vijayan Sugumaran Oakland University, Rochester, Michigan, USA

DOI:

https://doi.org/10.13052/jwe1540-9589.2516

Keywords:

AI safety, NSFW classification, cost-sensitive learning, abstention, perceptual hashing, EIP-712, quorum, blockchain

Abstract

Web3 platforms face a critical challenge: once unsafe content is minted on-chain, it becomes immutable and irrevocable. Traditional NSFW classifiers operate off-chain without cryptographic guarantees, leaving blockchain ecosystems vulnerable to harmful content. We present VisionGuard, a unified moderation framework that integrates cost-sensitive AI decision-making with blockchain-based enforcement. Our system combines calibrated NSFW classification, abstention-based triage for uncertain cases, perceptual hashing for near-duplicate detection, and on-chain k-of-n quorum attestation using EIP-712 signatures. We establish formal guarantees for: (i) Bayes-optimal cost-sensitive thresholds minimizing asymmetric error costs, (ii) optimal abstention intervals for human review, (iii) monotone false-negative reduction under classifier–pHash fusion, (iv) quorum compromise bounds, and (v) end-to-end unsafe-mint probability. Empirical validation on a zero-shot NSFW task demonstrates 82% accuracy (AUC = 0.88), with the Bayes-optimal threshold (τ = 0.1) reducing expected cost from 54,942 at the F1-optimal threshold to 27,520, a 50% improvement. Calibrated abstention further lowers harm (cost = 10,649.5), while a 3-of-5 quorum with oracle compromise probability p = 0.1 yields break probability P_break < 1%. Together, VisionGuard bridges decision theory, adversarial robustness, and cryptographic enforcement, providing the first provably safe AI moderation pathway for blockchain content.
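Two of the figures quoted in the abstract can be sanity-checked directly. The sketch below applies the standard cost-sensitive threshold formula τ* = C_FP / (C_FP + C_FN) (Elkan, 2001) and computes the k-of-n quorum break probability as the binomial tail P(X ≥ k) for X ~ Binomial(n, p). The 9:1 false-negative to false-positive cost ratio is an illustrative assumption chosen to reproduce the reported τ = 0.1; the paper's actual cost matrix is not given here.

```python
from math import comb

def bayes_threshold(c_fp: float, c_fn: float) -> float:
    """Bayes-optimal threshold for a calibrated classifier (Elkan, 2001):
    flag as unsafe when P(unsafe | x) >= tau*."""
    return c_fp / (c_fp + c_fn)

def quorum_break_prob(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent oracles are
    compromised, each with compromise probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed 9:1 FN:FP cost ratio reproduces the abstract's tau = 0.1.
print(bayes_threshold(c_fp=1.0, c_fn=9.0))   # 0.1

# 3-of-5 quorum with per-oracle compromise probability p = 0.1:
print(round(quorum_break_prob(n=5, k=3, p=0.1), 5))   # 0.00856, i.e. < 1%
```

The second result matches the abstract's claim that a 3-of-5 quorum with p = 0.1 keeps the break probability below 1%.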


Author Biographies

Sundara Srivathsan M, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India

Sundara Srivathsan M. is a final-year undergraduate student in Electronics and Computer Engineering at Vellore Institute of Technology, Chennai. His research focuses on computer vision and decision-theoretic machine learning. He has contributed to government-collaborated projects, is a national hackathon winner, and has participated in IEEE VIS and ACM CSCW 2025. He is a recipient of the Sir C. V. Raman Award for Research Excellence at VIT.

Lighittha P. R., School of Electronics Engineering, Vellore Institute of Technology, Chennai, India

Lighittha P. R. is a final-year undergraduate student in Electronics and Computer Engineering at Vellore Institute of Technology, Chennai. Her research focuses on secure and privacy-preserving AI systems, federated learning, and cost-sensitive attestation frameworks. She has contributed to government-collaborated projects and Indo–Sri Lankan collaborations, is a national hackathon winner and finalist, and has participated in IEEE VIS and ACM CSCW 2025.

Prithivraj S., School of Electronics Engineering, Vellore Institute of Technology, Chennai, India

Prithivraj S. is a final-year undergraduate student in Electronics and Computer Engineering at Vellore Institute of Technology, Chennai, specializing in deep learning, high-performance computing, and secure distributed architectures. His research spans blockchain technology, federated learning, and computer vision. He has contributed to government-collaborated projects and Indo–Sri Lankan collaborations, is a national hackathon winner, and has participated in IEEE VIS and ACM CSCW.

Suganya Ramamoorthy, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India

Suganya Ramamoorthy is a Professor in the School of Computer Science and Engineering at Vellore Institute of Technology, Chennai, with over 18 years of experience in computing. Her research interests include medical imaging, artificial intelligence, computer vision, big data, and engineering education. She is the author of the book Big Data in Medical Imaging and has published extensively in peer-reviewed international journals. Dr. Suganya is an active member of the Association for Computing Machinery (ACM), contributing through technical lectures and serving as a reviewer. Her recognitions include the 17th Young Women in Engineering Award (2017) and the Best Mentor Award in the Women in Big Data (WiBD) network (2023).

Vijayan Sugumaran, Oakland University, Rochester, Michigan, USA

Vijayan Sugumaran is a Distinguished University Professor and Janke Scholar of Management Information Systems at Oakland University, Rochester, Michigan, USA. He serves as Chair of the Department of Decision and Information Sciences, Co-Director of the Institute for Data Science, and Director of the M.S. in Business Analytics program. He received his Ph.D. in Information Technology from George Mason University, and his research interests include big data analytics, ontologies and the Semantic Web, and intelligent agent systems. He has authored over 350 peer-reviewed publications and 20 edited books, serves on the editorial boards of eight journals, and is Editor-in-Chief of the International Journal of Intelligent Information Technologies and the Journal of Web Engineering. He is Co-PI on a $2 million NSF grant and has published in leading journals such as Information Systems Research, ACM Transactions on Database Systems, and several IEEE Transactions venues.

References

K. Yousaf and T. Nawaz, “A Deep Learning-Based Approach for Inappropriate Content Detection and Classification of YouTube Videos,” IEEE Access, vol. 10, pp. 16283–16298, 2022.

C. Alico et al., “A Pornographic Images Recognition Model based on Deep One-Class Classification With Visual Attention Mechanism,” IEEE Access, vol. 8, pp. 137906–137919, 2020.

M. Perez et al., “An Evaluation of State-of-the-Art Object Detectors for Pornographic and Nudity Content Detection in Videos,” in 2021 International Conference on Electrical, Computer and Communication Engineering (ECCE), 2021.

M.L. Wong, K. Seng, and P.K. Wong, “Cost-sensitive ensemble of stacked denoising autoencoders for class imbalance problems,” Expert Systems with Applications, vol. 141, 2020.

N. Garcia, H. Mehrade, and M. Otani, “Are We Nude? Decoding the Sexist and Racist Bias of NSFW Classifiers,” in Proc. of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), pp. 778–789, 2024.

M. Horta Ribeiro, J. Cheng, and R. West, “Automated Content Moderation Increases Adherence to Community Guidelines,” in Proc. of the ACM Web Conference 2023, pp. 1265–1276, 2023.

Y. Wang et al., “Fairness in Misinformation Detection Algorithms,” in Proc. of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’22), 2022.

P. Jha et al., “MemeGuard: An LLM and VLM-based Framework for Advancing Content Moderation via Meme Intervention,” in Proc. of ACL 2024, 2024.

A. Radford et al., “Learning Transferable Visual Models From Natural Language Supervision,” in Proc. of ICML, pp. 8748–8763, 2021.

S. Poppi et al., “Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models,” in Proc. of ECCV, 2024.

T. Poppi et al., “Hyperbolic Safety-Aware Vision-Language Models,” arXiv preprint, 2024.

S. Xing, Z. Zhao, and N. Sebe, “CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP,” in Proc. of CVPR, 2024.

H. Farid, “An Overview of Perceptual Hashing,” Journal of Online Trust and Safety, vol. 1, no. 1, 2021.

J. Dalins, C. Wilson, and D. Boudry, “PDQ & TMK + PDQF – A Test Drive of Facebook’s Perceptual Hashing Algorithms,” arXiv preprint arXiv:1912.07745, 2019.

S. Jain et al., “Deep Perceptual Hashing Algorithms with Hidden Dual Purpose,” in 2023 IEEE Symposium on Security and Privacy (SP), pp. 234–252, 2023.

S. Klier, M. Steinebach, and H. Liu, “An Analysis of PhotoDNA,” in IS&T International Symposium on Electronic Imaging, vol. 36, 2024.

L. Lamport, R. Shostak, and M. Pease, “The Byzantine Generals Problem,” ACM Trans. Program. Lang. Syst., vol. 4, no. 3, pp. 382–401, 1982.

M. Castro and B. Liskov, “Practical Byzantine Fault Tolerance,” in Proc. of OSDI ’99, pp. 173–186, 1999.

J. Zhang et al., “Siguard: Detecting Signature-Related Vulnerabilities in Smart Contracts,” in 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 190–202, 2023.

C. Xu et al., “A Decentralized Quality Management Scheme for Content Moderation,” in 2023 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), 2023.

C. K. Chow, “On Optimum Recognition Error and Reject Tradeoff,” IEEE Transactions on Information Theory, vol. 16, no. 1, pp. 41–46, 1970.

C. Elkan, “The Foundations of Cost-Sensitive Learning,” in Proc. of IJCAI, vol. 17, pp. 973–978, 2001.

H. Rangwani et al., “Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics,” in Advances in Neural Information Processing Systems (NeurIPS), vol. 35, 2022.

J. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” in Advances in Large Margin Classifiers, 1999.

C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On calibration of modern neural networks,” in Proc. ICML, 2017.

D. Zhelonkin and A. Karpov, “Pornography detection using convolutional neural networks,” in Proc. Int. Conf. Computer Graphics and Vision, 2019.

A. Bicho, A. Ferreira, and D. Datia, “Deep learning framework for NSFW image classification,” Pattern Recognition Letters, vol. 138, pp. 40–47, 2020.

R. Zhang, K. Huang, and Y. Li, “A CLIP-based approach for multi-domain harmful content recognition,” IEEE Trans. Multimedia, vol. 25, pp. 2134–2148, 2023.

Ethereum Foundation, “EIP-712: Typed Structured Data Hashing and Signing,” https://eips.ethereum.org/EIPS/eip-712, accessed 2025.

W. Entriken, D. Shirley, J. Evans, and N. Sachs, “ERC-721 Non-Fungible Token Standard,” EIP-721, 2018.

J. Katz and Y. Lindell, Introduction to Modern Cryptography, 2nd ed., Chapman & Hall/CRC, 2014.

Nomic Foundation, “Hardhat: Ethereum development environment,” 2020–2025.

P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010.

A. Shrivastava, A. Gupta, and R. Girshick, “Training Region-based Object Detectors with Online Hard Example Mining,” in Proc. CVPR, 2016.

Published

2026-01-29

How to Cite

M, S. S., R., L. P., S., P., Ramamoorthy, S., & Sugumaran, V. (2026). VisionGuard: Cost-Sensitive AI Attestation with Quorum-Verified Blockchain Enforcement. Journal of Web Engineering, 25(01), 103–134. https://doi.org/10.13052/jwe1540-9589.2516

Section

ICOW3 2025