Museum Chatbot Using MCP Servers and LLMs
Museums are living archives of human civilisation that face a growing challenge: delivering personalised, contextually rich visitor guidance at scale without proportionally expanding human staffing. Existing chatbot solutions built on pattern-matching, intent classification, or knowledge graph paradigms fall short when visitors ask emotional, open-ended, or cross-exhibit questions, and they demand constant manual updates whenever collection content changes. This paper presents a museum chatbot architecture that pairs the generative strength of Large Language Models (LLMs) with the structured, permission-enforced data access of Model Context Protocol (MCP) servers. The LLM manages natural multi-turn conversation, contextual memory, dynamic tone adaptation, and multilingual generation, while the MCP layer acts as a secure gateway that retrieves verified, real-time exhibit information from the museum database without model retraining or script edits. A four-tier design (visitor interface, LLM processing, MCP communication, and museum database) supports instant content propagation. Evaluation through seven functional test cases confirms stable conversation continuity, reliable multilingual switching, effective restricted-data filtering, and graceful handling of ambiguous queries. The system improves visitor engagement, reduces staff maintenance overhead, and establishes a scalable, privacy-conscious template for AI-powered digital guides across art galleries, cultural centres, and heritage institutions.
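The MCP-as-gateway pattern described above can be illustrated with a minimal sketch. This is not the paper's implementation: the tool name, field names, and in-memory database below are hypothetical stand-ins showing how an exhibit-lookup tool could strip restricted fields before any data reaches the LLM, and return a safe fallback for unknown exhibits.

```python
# Hypothetical sketch of an MCP-style tool layer: the LLM calls get_exhibit()
# as a tool; the layer queries the museum's data store and filters out
# restricted fields, so the model only ever sees verified, permitted data.

# Assumed restricted columns (illustrative, not from the paper).
RESTRICTED_FIELDS = {"loan_value", "donor_contact"}

# Stand-in for the museum database tier.
EXHIBITS = {
    "starry-night": {
        "title": "The Starry Night",
        "artist": "Vincent van Gogh",
        "gallery": "Hall 3",
        "loan_value": "confidential",
        "donor_contact": "confidential",
    },
}

def get_exhibit(exhibit_id: str) -> dict:
    """Tool exposed to the LLM: return a verified exhibit record with
    restricted fields removed, or an error payload for unknown IDs
    (graceful handling of ambiguous queries)."""
    record = EXHIBITS.get(exhibit_id)
    if record is None:
        return {"error": f"No exhibit found for '{exhibit_id}'."}
    return {k: v for k, v in record.items() if k not in RESTRICTED_FIELDS}
```

Because content lives in the database tier rather than in prompts or scripts, updating a record propagates instantly to the chatbot without retraining the model, which is the maintenance benefit the abstract claims.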