To reach competitive accuracy, modern deep learning models routinely require thousands or tens of thousands of training examples, and learning classes the model has never seen usually means retraining it from scratch. These practical demands have drawn growing attention to few-shot learning and to methods such as meta-learning and continual learning, whose shared goal is to train a model from only a few examples while maintaining high generalization ability. Meta-learning is known for its flexible adaptation, but the high instability of its training makes its performance unreliable: MAML, an elegant and effective meta-learning method, demonstrates strong results on Omniglot and Mini-Imagenet N-way K-shot classification, yet recent research points out its unstable performance and other architectural issues of related models. Continual learning, by contrast, is highly stable but limited in the number of tasks it can learn, and continual learning models typically face catastrophic forgetting when they must learn new tasks while retaining knowledge of previous ones.

We therefore propose En-MAML, a method built on the MAML framework that combines the flexible adaptation of meta-learning with the stability of continual learning: continual learning is used to stabilize meta-learning, while meta-learning improves the learning flexibility of continual learning. Prior research in deep learning has identified the stability-plasticity dilemma, a trade-off under which these two properties usually cannot be improved at the same time. We evaluate En-MAML on the Omniglot and Mini-Imagenet datasets under the standard N-way K-shot protocol; in our experiments, the model improves test and validation accuracy simultaneously and demonstrates higher accuracy and stability on both datasets.
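The abstract refers to MAML's bi-level (inner/outer-loop) update and the N-way K-shot protocol. The following is a minimal, illustrative sketch of first-order MAML (FOMAML) on toy sine-wave regression tasks; the random-feature model, task sampler, and hyperparameters are assumptions chosen for brevity, not the thesis's En-MAML implementation.

```python
# Minimal first-order MAML (FOMAML) sketch on toy sine-regression tasks.
# Illustrative only: the model, task distribution, and hyperparameters are
# assumptions, not the En-MAML method proposed in this thesis.
import numpy as np

rng = np.random.default_rng(0)
D = 40                                 # random Fourier feature dimension
W = rng.normal(scale=1.0, size=D)      # fixed random frequencies
B = rng.uniform(0, 2 * np.pi, size=D)  # fixed random phase offsets

def phi(x):
    """Random Fourier features so a linear model can fit sine tasks."""
    return np.cos(W * x + B)

def sample_task(K=5):
    """One few-shot regression task: K support and K query points."""
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0, np.pi)
    xs = rng.uniform(-5, 5, size=2 * K)
    ys = amp * np.sin(xs + phase)
    return (xs[:K], ys[:K]), (xs[K:], ys[K:])

def grad(theta, xs, ys):
    """Gradient of mean squared error for the linear model y = theta . phi(x)."""
    g = np.zeros_like(theta)
    for x, y in zip(xs, ys):
        f = phi(x)
        g += 2.0 * (theta @ f - y) * f
    return g / len(xs)

theta = np.zeros(D)
alpha, beta = 0.05, 0.01               # inner / outer learning rates
for step in range(2000):
    meta_grad = np.zeros(D)
    for _ in range(4):                 # meta-batch of tasks
        (sx, sy), (qx, qy) = sample_task()
        theta_i = theta - alpha * grad(theta, sx, sy)  # inner adaptation
        meta_grad += grad(theta_i, qx, qy)             # first-order meta-gradient
    theta -= beta * meta_grad / 4      # outer (meta) update
```

The first-order variant deliberately drops second derivatives when forming the meta-gradient; full MAML instead differentiates the query loss through the inner adaptation step, a choice that is more expensive and is often cited as a contributor to the training instability discussed above.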