Please use this permanent URL to cite or link to this item:
http://ir.lib.ncu.edu.tw/handle/987654321/95835
Title: | Campus Knowledge AI Assistant - Based on Large Language Models with Retrieval-Augmented Generation |
Author: | 簡成曄 |
Contributor: | Department of Computer Science and Information Engineering |
Keywords: | Large Language Models (LLM); Retrieval-Augmented Generation (RAG); Prompt Engineering; Chatbot |
Date: | 2024-08-20 |
Date Uploaded: | 2024-10-09 17:19:21 (UTC+8) |
Publisher: | National Central University |
Abstract: | We are witnessing an era of explosive growth in large language models (LLMs), in which everyone can have a versatile digital assistant. Compared with the task-oriented dialogue (TOD) chatbots of the past, LLMs offer far broader conversational ability together with accurate responses.

This thesis begins with the historical development of natural language processing (NLP) to trace its technological evolution, and then examines how Transformer models and the attention mechanism overcame the bottlenecks of earlier NLP techniques. Combined with advances in computing power and the easy availability of massive online data, these developments gave rise to most of today's mainstream large language models and lifted NLP to previously unimagined heights.

The thesis centers on the TAIDE large language model, derived from the open-source Meta Llama 3, and leverages its capabilities so that our Campus AI Assistant can write articles, summarize documents, and answer questions grounded in Taiwan's local cultural background. By incorporating the Retrieval-Augmented Generation (RAG) framework, the Campus AI Assistant quickly acquires campus-specific question-answering ability without the technically demanding fine-tuning of the LLM.

The system is built on-premises: LangChain serves as the development framework, Ollama as the LLM management platform, and the open-source ChromaDB as the vector store, with taide/Llama3-TAIDE-LX-8B-Chat selected as the LLM. Integrating the RAG framework gives the Campus AI Assistant a processing pipeline for domain-specific knowledge; we crawl the campus data we need (using National Central University as the example) and preprocess it to obtain the best RAG retrieval results. Finally, a Chainlit web UI presents the assistant interface (an illustrative sketch of such a pipeline is shown below the record metadata).

Our experiments on article writing, summarization, and RAG retrieval precision show that the proposed framework achieves reliable, domain-specific answer accuracy, demonstrating that the architecture is practical and usable. |
Appears in Collections: | [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations |
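The pipeline described in the abstract (LangChain for orchestration, Ollama serving the TAIDE model, ChromaDB as the vector store, RAG over crawled campus pages) can be pictured with a minimal sketch. The code below is an illustration only, not the thesis implementation: it assumes a local Ollama server with the TAIDE model already pulled under a hypothetical tag, the langchain-community, langchain-text-splitters, and chromadb packages installed, and placeholder campus text in CAMPUS_DOCS; the crawler, data preprocessing, and Chainlit UI from the thesis are omitted.

```python
# Minimal illustrative RAG sketch (assumptions: local Ollama server, TAIDE model
# pulled under the hypothetical tag "llama3-taide-lx-8b-chat", langchain-community /
# langchain-text-splitters / chromadb installed). Not the thesis's actual code.
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

# Placeholder campus text; the thesis crawls and preprocesses real NCU pages.
CAMPUS_DOCS = [
    "The NCU library is open 08:00-22:00 on weekdays during the semester.",
    "The Department of Computer Science and Information Engineering offers ...",
]

# 1) Split the crawled text into overlapping chunks for retrieval.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.create_documents(CAMPUS_DOCS)

# 2) Embed the chunks via Ollama and persist them in a local ChromaDB store.
embeddings = OllamaEmbeddings(model="llama3-taide-lx-8b-chat")
vectordb = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# 3) Point the TAIDE model (served by Ollama) at the top-k retrieved chunks.
llm = Ollama(model="llama3-taide-lx-8b-chat")
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectordb.as_retriever(search_kwargs={"k": 3}),
)

# 4) Campus-specific question: the retrieved chunks ground the LLM's answer.
print(qa.invoke({"query": "When is the NCU library open on weekdays?"})["result"])
```

In the actual system the same retriever would sit behind a Chainlit chat handler rather than a print statement, and using the chat model itself for embeddings (via Ollama's embedding endpoint) is a simplification; a dedicated embedding model is the more common choice.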
Files in This Item:
File | Description | Size | Format | Views
index.html | | 0Kb | HTML | 83
All items in NCUIR are protected by original copyright.