## Description

Dynamical systems are widely employed mathematical models that describe many natural phenomena across disparate subjects. An ordinary differential equation (ODE) is one mathematical expression of a dynamical system and can precisely capture its dynamic behavior. Typically, one learns the model's functional form and parameters from observed dynamic behavior and then predicts future behavior by simulating the ODE. Estimating the parameters of an ODE from observations, however, is a challenge. Because ODEs generally admit no analytical solution, numerically based optimization methods, such as the Nelder-Mead and quasi-Newton methods, are commonly used to estimate the parameters. These methods share several well-known issues: dependence on initial values, sensitivity to noise and data sparseness, and long running times.

Here we propose an empirical strategy (SS-MLR) that estimates the parameters from a new angle. Instead of fitting the difference between simulations and observations, we first estimate the first derivatives of the trajectory from the observations via a penalized smoothing spline, and then minimize the difference between these estimated derivatives and the values fitted by multiple linear regression. The advantage of this strategy is that it yields an approximately analytical solution for the parameters directly.

We tested our strategy on a phage DNA recombination system, a well-investigated system in biology. To evaluate performance, we first simulated dynamic behavior from given parameters, and then learned the parameters from the simulated behavior alone.
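The two-step idea (smooth, then regress on the estimated derivatives) can be sketched on a toy first-order decay model, dx/dt = -k·x. This is an illustrative example, not the phage DNA recombination system or the actual SS-MLR implementation; the model, spline settings, and noise level are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy model: dx/dt = -k * x with true k = 0.5 (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
k_true = 0.5
sigma = 0.01
x_obs = np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)  # noisy observations

# Step 1: penalized smoothing spline on the observations.
# The smoothing target s ~ n * noise_variance is a common heuristic.
spline = UnivariateSpline(t, x_obs, s=t.size * sigma**2)
x_hat = spline(t)                    # smoothed trajectory
dx_hat = spline.derivative()(t)      # estimated first derivatives

# Step 2: regress the estimated derivatives on the smoothed state.
# For dx/dt = -k * x, least squares gives k in closed form,
# so no iterative optimizer is needed.
k_est = -np.dot(x_hat, dx_hat) / np.dot(x_hat, x_hat)
print(k_est)
```

For models that are linear in their parameters (as in the regression step above), the least-squares solution is available in closed form, which is what makes this route fast and free of initial-value dependence.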
We used our strategy (SS-MLR), the Nelder-Mead method, and the quasi-Newton method to learn the parameters, and evaluated their performance by the difference between learned and true parameters, the error between the original simulation curves and curves re-simulated with the learned parameters, and CPU running time. To mimic real experimental conditions, we designed a "noise scenario" and a "sparse scenario" to test the three strategies. The comparison showed that our strategy outperformed the two classical methods: it is tolerant to noise, insensitive to sparseness, and runs faster.
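For contrast, the classical trajectory-matching baseline repeatedly simulates the ODE inside an optimizer loop. A minimal sketch with Nelder-Mead on the same toy decay model (again an illustrative assumption, not the paper's actual benchmark code):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Same toy model: dx/dt = -k * x with true k = 0.5 (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
k_true = 0.5
x_obs = np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)

def sse(params):
    # Simulate the ODE with a candidate k and score against observations.
    # Every objective evaluation costs a full numerical integration.
    k = params[0]
    sol = solve_ivp(lambda _, x: -k * x, (t[0], t[-1]), [x_obs[0]],
                    t_eval=t, rtol=1e-8)
    return np.sum((sol.y[0] - x_obs) ** 2)

# Nelder-Mead needs an initial guess; a poor one can stall the search.
res = minimize(sse, x0=[1.0], method="Nelder-Mead")
print(res.x[0])
```

Each candidate parameter requires a full ODE integration here, which is the main source of the running-time and initial-value issues that the spline-plus-regression route avoids.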