    Please use this permanent URL to cite or link to this document: http://ir.lib.ncu.edu.tw/handle/987654321/94402


    Title: Efficient Video Generation with Latent Consistency Models for a Text-Driven Music System
    Author: Chen, Pi-Jhong
    Contributors: International Master's Program in Artificial Intelligence
    Keywords: Generative AI; Large Language Model; LLM Agent; Video Latent Diffusion Model; Latent Consistency Model; Multimodal Generation
    Date: 2024-07-13
    Upload time: 2024-10-09 14:40:41 (UTC+8)
    Publisher: National Central University
    Abstract: Many music streaming platforms are actively experimenting with generating diverse works automatically from text, but existing techniques fall noticeably short in linking music with animation: they struggle to reflect the distinctive elements and emotions of specific cultures accurately, and they contribute little to conveying the musical context. To address this problem, we adopt a Large Generative Pre-trained Model (LGPM) and a Video Latent Diffusion Model (video LDM), two techniques that have already shown strong potential. The core of our system is a semantically driven music and animation generation module that, given a user's text prompt, generates culturally distinctive music and a corresponding animation.

    In this pipeline, the LLM analyzes and interprets the user's natural-language input and uses it to steer the theme and emotional tone of both the music and the animation, ensuring that the generated content accurately reflects the user's intent and stylistic requirements. After a reinforcement-learning-based music generation module produces music matching the user's requirements, the video LDM generates an animation in the corresponding musical style, turning the music's abstract emotion and tension into concrete imagery. We also focus on improving the visual quality of the animation, particularly its temporal coherence and the reduction of visual artifacts. To further optimize quality and efficiency, we integrate a Latent Consistency Model (LCM), which reduces the number of steps needed to generate animation keyframes from 20 to 4 while preserving high visual quality. A minimal sketch of this flow is given below.
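    The following sketch shows how such a pipeline could be orchestrated. The staged structure and every name in it (Plan, analyze_prompt, generate_music, generate_animation, run_pipeline) are illustrative assumptions for exposition, not the implementation described in the thesis.

        # Hypothetical orchestration of the text -> music -> animation flow described above.
        # These stubs only illustrate the data flow between the LLM, the music module,
        # and the video LDM; they are not the thesis code.
        from dataclasses import dataclass

        @dataclass
        class Plan:
            theme: str          # subject extracted from the user's prompt
            mood: str           # emotional tone that steers both music and visuals
            music_prompt: str   # text handed to the music generation module
            video_prompt: str   # text handed to the video LDM

        def analyze_prompt(llm, user_text: str) -> Plan:
            """LLM step (assumed interface): interpret the user's natural-language
            request and derive the theme, mood, and per-modality prompts."""
            ...

        def generate_music(music_model, plan: Plan):
            """Music step (assumed interface): a reinforcement-learning-based
            module produces music matching the plan."""
            ...

        def generate_animation(video_ldm, plan: Plan, keyframe_steps: int = 4):
            """Video step (assumed interface): the video LDM generates keyframes
            with an LCM sampler (4 steps instead of 20) and interpolates them."""
            ...

        def run_pipeline(llm, music_model, video_ldm, user_text: str):
            plan = analyze_prompt(llm, user_text)
            music = generate_music(music_model, plan)
            animation = generate_animation(video_ldm, plan, keyframe_steps=4)
            return music, animation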

    This work not only improves the practicality of AI music-video generation but also points to new directions for future research in the field. Our system markedly strengthens the connection between music and animation and more accurately reflects users' cultural and emotional needs, which is significant for promoting the expression and preservation of cultural diversity.
    Although existing music generation platforms are capable of autonomously creating diverse musical compositions, they frequently fail to integrate music with animation effectively, particularly in accurately reflecting specific cultural attributes and emotions. To address this issue, we have employed Large Generative Pre-trained Models (LGPM) and Video Latent Diffusion Models (video LDM), both of which have shown considerable potential in technological innovation. At the heart of our system is a semantically driven module for generating music and animations, which accurately produces culturally distinctive tracks and corresponding animations based on user text prompts.

    Our experiments demonstrate that the enhanced capability of Large Language Models (LLMs) to analyze and understand natural language significantly improves the thematic and emotional accuracy of the generated content. Additionally, we focused on enhancing the visual quality of animations, particularly in terms of dynamic coherence and reducing visual distortions. To further optimize the quality and efficiency of generated animations, we integrated Latent Consistency Models (LCMs), which significantly reduce the steps required for generating keyframes from 20 to 4 while maintaining high visual quality.
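    As a concrete illustration of the reduced-step sampling, the snippet below generates a single keyframe in 4 inference steps with a publicly available latent consistency model in Hugging Face diffusers. The checkpoint (SimianLuo/LCM_Dreamshaper_v7), the prompt, and the guidance value are assumptions for this sketch, not the models or settings used in the thesis.

        # Minimal sketch: 4-step keyframe generation with a latent consistency model
        # via Hugging Face diffusers. Checkpoint and prompt are illustrative only.
        import torch
        from diffusers import DiffusionPipeline

        pipe = DiffusionPipeline.from_pretrained(
            "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float32
        )
        pipe.to("cuda")

        prompt = "a lantern festival by a river at night, traditional ink-painting style"

        # LCMs distil the diffusion sampling trajectory, so 4 steps replace the usual ~20.
        image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=8.0).images[0]
        image.save("keyframe.png")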

    This research not only advances the practicality of AI-driven music video generation technologies but also opens new directions for future research in the field. Our system significantly improves the connectivity between music and animations, and more accurately reflects users' cultural and emotional needs, which is crucial for promoting the expression and preservation of cultural diversity.
    Appears in Collections: [International Master's Program in Artificial Intelligence] Theses and Dissertations
