In recent years, ordinary differential equations (ODEs) have become a promising starting point for understanding the nature of acceleration methods. However, there still exists a gap between ODEs and discrete optimization methods. In this talk, we introduce the ideas behind learning to optimize and optimization-inspired ODEs, and aim to provide a framework that automatically searches for efficient, problem-oriented optimization methods with a worst-case convergence guarantee.
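
As one well-known illustration of this connection (given here only as background, not as part of the speaker's results), Su, Boyd, and Candès showed that Nesterov's accelerated gradient method can be viewed in the continuous-time limit as the second-order ODE
\[
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0,
\]
whose trajectories achieve the accelerated $O(1/t^{2})$ decay of $f(X(t)) - f^{\ast}$. Roughly speaking, the gap alluded to above concerns how to pass from such continuous-time models back to implementable discrete algorithms while preserving their convergence rates.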