In this era of big data, extracting useful information from massive amounts of data has become a challenge that enterprises must address. Many enterprises have therefore invested in artificial intelligence technologies (machine/deep learning), using AI computation to process large volumes of data and create new business value. However, the development of ML (Machine Learning) models is a complex process that involves professionals from many fields and numerous environment configurations, forcing the ML development team to spend heavily on communication and reducing the actual benefit the models bring to the enterprise. In recent years, the concept of MLOps, that is, DevOps applied to machine learning, has emerged, aiming to reduce labor costs and accelerate the development life cycle. Many MLOps platforms now exist; they use containerization technology to package the steps of an ML pipeline and rely on container orchestration tools such as Kubernetes to manage tasks. However, ML development sometimes requires resources outside the cluster, and existing platforms do not provide the ability to integrate such external resources. This study therefore designs an ML workflow system based on FaaS (Function as a Service) technology. Through a workflow platform, users can define custom ML workflows whose steps are encapsulated as FaaS functions; both internal and external resources are deployed to Kubernetes as event-triggered functions callable by the system, ultimately allowing users to create reusable ML workflows and ML models.