lovellhe's personal blog  http://blog.sciencenet.cn/u/lovellhe


Multiscale Model Learning Algorithm (MMLA)

Viewed 771 times  2023-9-18 07:43 | Personal category: Paper publication | System category: Paper exchange



The content below is from the following paper: https://doi.org/10.1016/j.sca.2023.100039


Abstract: 

We propose a novel population-based heuristic algorithm called the Multiscale Model Learning Algorithm (MMLA). The MMLA is inspired by the learning behavior of individuals in a group and can converge to an optimal equilibrium state of a multi-stage supply chain network (MSCN). The MMLA has two key operations: zooming in on the search field and performing a learning search within each learning stage. The excellent performers, called medalists, are imitated by the other learners. As the learning stages progress, the learning efficiency improves and the search effort is concentrated in a more promising area. We employ sixteen benchmark optimization problems and two supply chain networks to demonstrate the effectiveness of the MMLA and the rationality of the equilibrium models.



The fundamental ideas of the multiscale model learning algorithm

   The fundamental ideas of the Multiscale Model Learning Algorithm (MMLA) come from observing the learning behavior of individuals in a group, with a school class as the example. A school class typically consists of dozens of students, and a student's academic performance is estimated by a series of tests held successively at nearly fixed time intervals. Students try to improve their test performance by learning from each other, particularly by imitating the excellent performers. The learning process of an individual follows a natural growth rule: learning efficiency is low at the beginning and increases as learning time accumulates. The interval between two successive tests can be viewed as a learning stage, and the learning of individuals happens within this stage. It can also be observed that when students reach a higher grade, their learning capacity has risen to a high level and the competition among them becomes more intense; the adjustment of the related learning factors becomes more meticulous for all students. Different learning periods can be used to reflect this change, where a learning period comprises a series of learning stages. When learning enters a new period, the range over which the learning factors are adjusted shrinks. On the one hand, this shrinking range reflects the accumulated experience that has made the performances of individuals similar; on the other hand, a reduced adjustment range lets individuals concentrate their effort on a relatively small area. These observations provide the fundamental ideas of the Multiscale Model Learning Algorithm.

An individual can be regarded as a solution to an optimization problem, and the performance of an individual corresponds to the objective function value of the associated solution. A test can be viewed as an evaluation of the objective function, so improving an individual's performance on a test means searching for a better solution to the optimization problem. The excellent performers are the individuals whose performance is better than the others'. Individuals learn from these excellent performers to improve their later performance. We assume that the excellent performers, called models or medalists in this paper, adjust their learning factors within a relatively small range during a learning period. Besides fine-tuning their own learning factors just as the models do, the other common learners imitate the models by adopting learning factors similar to the models'. This learning becomes more and more efficient over time, and a natural learning curve is adopted in the algorithm to reflect this change. Different learning periods correspond to different ranges over which the learning factors may vary, and we assume that the reduced ranges associated with successive learning periods correspond to shrinking scales. When learning takes place over multiple learning periods, the algorithm runs in a multiscale mode; a minimal sketch of this structure is given below.
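To make this mapping concrete, here is a minimal Python sketch of the multiscale structure described above: learning periods composed of learning stages, tests that evaluate an objective function, medalists selected as the best performers, and an adjustment range that shrinks when a new period begins. The population size, the shrink factor, the imitation step and the greedy acceptance rule are my own illustrative assumptions, not the exact procedure of the paper.

```python
import numpy as np

def mmla_sketch(objective, bounds, pop_size=30, n_periods=4, n_stages=10,
                n_medalists=3, shrink=0.5, rng=None):
    """Illustrative multiscale loop: periods -> stages -> learning search.

    `objective` maps a 1-D numpy array to a scalar (minimized here);
    `bounds` is an (n_dim, 2) array of lower/upper limits.
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    lower, upper = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lower, upper, size=(pop_size, len(lower)))
    scale = 1.0                                    # relative width of the adjustment range

    for period in range(n_periods):                # each new period uses a smaller scale
        for stage in range(n_stages):              # a stage = the interval between two "tests"
            scores = np.array([objective(x) for x in pop])   # the "test"
            order = np.argsort(scores)
            medalists = pop[order[:n_medalists]]             # excellent performers
            step = scale * (upper - lower)                   # current adjustment range
            for i in range(pop_size):
                # learning search: imitate a medalist and fine-tune within the range
                model = medalists[rng.integers(n_medalists)]
                trial = model + rng.uniform(-0.5, 0.5, size=len(lower)) * step
                trial = np.clip(trial, lower, upper)
                if objective(trial) < scores[i]:   # keep the trial only if it improves
                    pop[i] = trial
        scale *= shrink                            # entering a new period: zoom in

    scores = np.array([objective(x) for x in pop])
    best = int(np.argmin(scores))
    return pop[best], scores[best]

# Usage example: minimize the sphere function on [-5, 5]^5
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = mmla_sketch(sphere, [(-5, 5)] * 5, rng=0)
    print(x_best, f_best)
```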


Key operations of MMLA

   The Multiscale Model Learning Algorithm includes two key operations: the learning search and the shrinking (zooming-in) of the search field. The subsections below outline these operations; the detailed formulation is given in the paper and the attached PDF.

Learning search



Fig. 2. Learning from a medalist.
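To illustrate the idea sketched in Fig. 2, the fragment below shows one plausible form of a single learning move: for each learning factor, the learner either moves toward the medalist's value, weighted by a learning-efficiency coefficient, or fine-tunes the factor within the current adjustment range. The 50% imitation probability and the additive update are my own assumptions; the actual update rule of the MMLA is defined in the paper.

```python
import numpy as np

def learn_from_medalist(learner, medalist, efficiency, span, rng):
    """One illustrative learning move (cf. Fig. 2).

    learner, medalist : 1-D numpy arrays (the learning factors)
    efficiency        : scalar in (0, 1), grows within a learning stage
    span              : current adjustment range per factor (shrinks each period)
    """
    new = learner.copy()
    for j in range(len(learner)):
        if rng.random() < 0.5:
            # imitate: move toward the medalist, faster when efficiency is high
            new[j] += efficiency * (medalist[j] - learner[j])
        else:
            # fine-tune: small random adjustment inside the current range
            new[j] += rng.uniform(-0.5, 0.5) * span[j]
    return new

# tiny usage example with made-up numbers
rng = np.random.default_rng(1)
learner = np.array([2.0, -1.0, 0.5])
medalist = np.array([0.1, 0.2, -0.3])
print(learn_from_medalist(learner, medalist, efficiency=0.7,
                          span=np.array([1.0, 1.0, 1.0]), rng=rng))
```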

Zooming-in on the search space

Generating variable learning efficiency
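As noted above, learning efficiency follows a natural growth rule: low at the start of a stage and rising as learning time accumulates. A logistic curve is one simple way to generate such a variable efficiency; the particular functional form and the steepness value below are my own assumptions, not the curve used in the paper.

```python
import numpy as np

def learning_efficiency(t, t_max, steepness=8.0):
    """Illustrative natural-growth (logistic) curve on (0, 1):
    low efficiency early in a stage, rising as learning time accumulates."""
    x = t / t_max                       # normalized learning time within the stage
    return 1.0 / (1.0 + np.exp(-steepness * (x - 0.5)))

# efficiency sampled at a few points of one learning stage
print([round(float(learning_efficiency(t, 10)), 3) for t in range(0, 11, 2)])
```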

The implementation procedure

Mechanics of exploration, exploitation and convergence


For details, see the attachment below:

Multiscale model learning algorithm_20230918073831.pdf

or the related open-access paper:

https://doi.org/10.1016/j.sca.2023.100039




https://m.sciencenet.cn/blog-3367056-1402957.html

