We introduce a feedforward neural network into the path optimization method (POM) to evade the sign problem in field theories. POM is based on the complexification of the integration variables, as in the complex Langevin method and the Lefschetz thimble method, and the integration path is optimized in the complexified variable space by maximizing the average phase factor. At the last Lattice meeting and in Ref., we demonstrated that POM works very well in a one-dimensional model: parametrizing the integration path by a simple function and optimizing it with the standard gradient method yields an almost perfect integration path even for an integral with a severe sign problem. In field theories, however, it is not easy to prepare and optimize the integration path in the complexified space. We therefore introduce a neural network, a machine learning technique, into POM to investigate the sign problem in field theory. As demonstrated at the last Lattice meeting and subsequently by Alexandru et al., machine learning techniques appear to be powerful in applying POM to the sign problem. We demonstrate that POM with neural network optimization works well in $\lambda\phi^4$ theory at finite chemical potential: the average phase factor stays well above zero, and we can safely obtain the expectation values of observables. The optimized path shows that the imaginary part of the integration variable is strongly correlated with the real part at the nearest-neighbor site, as discussed in Ref. We also plan to discuss the results of applying POM with the neural network to a gauge theory.
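The procedure described above can be illustrated with a minimal NumPy sketch on a toy one-dimensional integral: the imaginary part of the path is parametrized by a small feedforward network, and the network parameters are tuned to maximize the average phase factor. The integrand, network size, and optimizer settings here are all illustrative choices for a severe-sign-problem toy model, not the setup of this work (which uses $\lambda\phi^4$ theory and the standard gradient method on the network weights); the gradient is taken by finite differences purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy integrand with a sign problem on the real axis (hypothetical example):
# W(x) = exp(-x^2/2 + i*a*x).  On the real axis the phase oscillates rapidly,
# so the average phase factor is tiny (~exp(-a^2/2)).
a = 3.0

def W(z):
    return np.exp(-z**2 / 2 + 1j * a * z)

# Small feedforward network y(t): 1 -> n_hid -> 1 with a tanh hidden layer.
# The complexified path is z(t) = t + i*y(t), with t running over the real line.
n_hid = 8
params = [rng.normal(scale=0.1, size=s)
          for s in [(n_hid,), (n_hid,), (n_hid,), (1,)]]

def y_net(t, p):
    w1, b1, w2, b2 = p
    h = np.tanh(np.outer(t, w1) + b1)   # hidden activations, shape (N, n_hid)
    return h @ w2 + b2[0]               # imaginary part y(t), shape (N,)

# Discretized integration grid on the real t-axis.
t = np.linspace(-6.0, 6.0, 401)
dt = t[1] - t[0]

def avg_phase_factor(p):
    """|Z| / \int |J W| dt on the path z(t) = t + i*y(t), J = 1 + i*y'(t)."""
    y = y_net(t, p)
    dy = np.gradient(y, dt)             # numerical y'(t) for the Jacobian
    integrand = (1 + 1j * dy) * W(t + 1j * y)
    Z = np.sum(integrand) * dt
    norm = np.sum(np.abs(integrand)) * dt
    return np.abs(Z) / norm

apf0 = avg_phase_factor(params)

# Maximize the average phase factor by finite-difference gradient ascent.
eps, lr = 1e-5, 0.5
for step in range(400):
    grads = []
    for p in params:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            p[idx] += eps
            f_plus = avg_phase_factor(params)
            p[idx] -= 2 * eps
            f_minus = avg_phase_factor(params)
            p[idx] += eps
            g[idx] = (f_plus - f_minus) / (2 * eps)
        grads.append(g)
    for p, g in zip(params, grads):
        p += lr * g

print(f"average phase factor: {apf0:.4f} -> {avg_phase_factor(params):.4f}")
```

For this particular integrand a constant shift $y(t) = a$ removes the oscillating phase entirely, so the network mainly has to learn its bias; in field theory the optimal imaginary part depends nontrivially on the neighboring field values, which is exactly what the network parametrization is meant to capture.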
 A. Ohnishi, Y. Mori, K. Kashiwa, Lattice 2017 proceedings, EPJ Web Conf. 175 (2018), 07043.
 Y. Mori, K. Kashiwa, A. Ohnishi, Phys. Rev. D 96 (2017), 111501(R).
 Y. Mori, K. Kashiwa, A. Ohnishi, Prog. Theor. Exp. Phys. 2018 (2018), 023B04.
 A. Alexandru et al., Phys. Rev. D 97 (2018), 094510.
 F. Bursa, M. Kroyter, arXiv:1805.04941.