Title: Algorithms for Large-Scale Optimization and Machine Learning
Venue: Tencent Meeting (online), May 22, 2021, 15:00–17:00; Meeting ID: 353 838 143; Link: https://meeting.tencent.com/s/TK4UDjCILCPj
Organizer: School of Mechano-Electronic Engineering
Speaker: Yankai Cao
Moderators: Xubin Ping, Zhiwu Li
Speaker Bio:
Yankai Cao received his B.S. in Bioengineering from Zhejiang University in 2010 and his Ph.D. in Chemical Engineering from Purdue University (USA) in 2015. From 2016 to 2018 he was a research associate at the University of Wisconsin–Madison, and since 2018 he has been an Assistant Professor in the Department of Chemical and Biological Engineering at the University of British Columbia, Canada. His research focuses on the design and implementation of large-scale local and global optimization algorithms to solve problems arising in a variety of decision-making settings, such as machine learning, stochastic optimization, model predictive control, and complex networks.

Talk 1: Large-Scale Local Optimization: Algorithms and Applications
Time: May 22, 15:00
Abstract:
This talk presents our recent work on algorithms and software implementations for solving large-scale optimization problems to local optimality. Our algorithms exploit both problem structure and emerging high-performance computing hardware (e.g., multi-core CPUs, GPUs, and computing clusters) to achieve computational scalability. We are currently using these capabilities to address engineering and scientific questions arising in diverse application domains, including the design of neural-network controllers, predictive control of wind turbines, power management in large networks, and multiscale model predictive control of battery systems. The problems we are addressing are of unprecedented complexity and defy state-of-the-art solvers. For example, designing a control system for wind turbines leads to a nonlinear program (NLP) with 7.5 million variables that takes days to solve with existing solvers. We have solved this problem in less than 1.3 hours using our parallel solvers.
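As a rough illustration of the structure-exploiting, parallel idea mentioned in the abstract (not the speaker's actual solvers, which use far more sophisticated structured interior-point methods), the sketch below solves a block-separable problem by dispatching each independent block subproblem to a worker thread. The block objectives here are stand-in linear least-squares problems; all names are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def solve_block(block):
    # Each block (A, y) is a small independent subproblem; linear
    # least-squares is used here as a toy stand-in for a local NLP solve.
    A, y = block
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x

def solve_separable(blocks, workers=4):
    # A block-separable problem decomposes into independent subproblems,
    # which can be solved concurrently on multi-core hardware.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve_block, blocks))
```

Because the blocks share no variables, the solution of the full problem is just the concatenation of the per-block solutions; coupled problems require the decomposition and linear-algebra machinery the talk describes.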
Talk 2: Large-Scale Global Optimization and Machine Learning
Time: May 22, 16:00
Abstract:
This talk presents a reduced-space spatial branch-and-bound (BB) strategy for solving two-stage stochastic nonlinear programs to global optimality. At each node, a lower bound is constructed by relaxing the non-anticipativity constraints, and an upper bound is constructed by fixing the first-stage variables. We also extend this algorithm to clustering, a prototypical unsupervised-learning task that is also a special class of stochastic program. One key advantage of the reduced-space algorithm is that it only needs to branch on the cluster centers to guarantee convergence, and the number of centers is independent of the number of data samples. Another critical property is that the bounds can be computed by solving individual small-scale subproblems. These two properties allow the algorithm to scale to large datasets while still finding a globally optimal solution. Our global optimization algorithm can now solve clustering problems on datasets of 200,000 samples, at least 100 times larger than what state-of-the-art methods in the literature can handle.
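To make the bounding step concrete, the following sketch shows (under my own simplifying assumptions, not the speaker's implementation) how the two bounds can be computed for k-means-style clustering at a BB node where each center is restricted to a box. Relaxing non-anticipativity lets every sample choose its own center location inside the boxes, so the per-sample subproblem reduces to projecting the sample onto its nearest box; fixing the centers (the first-stage variables) gives a feasible upper bound.

```python
import numpy as np

def lower_bound(X, boxes):
    # boxes: one (lo, hi) pair per cluster center, defining the BB node.
    # With non-anticipativity relaxed, each sample independently picks the
    # best center location inside any box: the squared distance from the
    # sample to its nearest box, found by clipping (projection onto a box).
    dists = []
    for lo, hi in boxes:
        proj = np.clip(X, lo, hi)                 # closest point of the box
        dists.append(((X - proj) ** 2).sum(axis=1))
    return np.minimum.reduce(dists).sum()         # sum of per-sample subproblems

def upper_bound(X, centers):
    # Fix the first-stage variables (the centers); each sample then simply
    # selects its nearest center, yielding a feasible objective value.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()
```

Both bounds decompose over samples, which is the scalability property highlighted in the abstract: the work per sample is constant, and branching happens only in the (low-dimensional) space of the centers.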