Literature
The mathematical optimization methods and algorithms implemented in this package build on the work of many researchers. The papers most important to our work are listed here; they provide more in-depth information about the formulations and the various algorithms we use.
(Convolutional) neural network formulation:
- Fischetti, M., & Jo, J. (2018). Deep neural networks and mixed integer linear optimization. Constraints, 23(3), 296-309.
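At the core of this line of work is the mixed-integer encoding of a single ReLU unit. For a neuron with output $y = \max(0, w^\top x + b)$, the formulation of Fischetti & Jo introduces a binary activation indicator $z$ and a slack variable $s$; the constants $M^{+}$ and $M^{-}$ must be valid bounds for the given input domain (e.g. from the bound tightening techniques below):

$$
w^\top x + b = y - s, \qquad 0 \le y \le M^{+} z, \qquad 0 \le s \le M^{-} (1 - z), \qquad z \in \{0, 1\}.
$$

When $z = 1$ the slack is forced to zero and the unit is active ($y = w^\top x + b$); when $z = 0$ the output is forced to zero.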
Neural network compression:
- Serra, T., Kumar, A., & Ramalingam, S. (2020, September). Lossless compression of deep neural networks. In International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research (pp. 417-430). Cham: Springer International Publishing.
- Toivonen, V. (2024). Lossless Compression of Deep Neural Networks.
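One building block of such lossless compression is easy to state: a ReLU unit whose pre-activation is provably never positive always outputs zero, so it can be removed without changing the network's function. A minimal sketch, assuming the weights are plain NumPy arrays and valid upper bounds on the pre-activations are already available:

```python
import numpy as np

def prune_inactive_units(W1, b1, W2, ub):
    """Losslessly drop ReLU units that can never fire.

    W1, b1 produce the hidden layer's pre-activations, W2 consumes its
    outputs, and ub holds valid upper bounds on each pre-activation
    (e.g. from bound tightening). If ub[i] <= 0, unit i always outputs
    zero, so its row in (W1, b1) and column in W2 can be deleted.
    """
    keep = ub > 0                       # units that can ever activate
    return W1[keep], b1[keep], W2[:, keep]
```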
Bound tightening techniques:
- Grimstad, B., & Andersson, H. (2019). ReLU networks as surrogate models in mixed-integer linear programs. Computers & Chemical Engineering, 131, 106580.
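The cheapest variant of these techniques is interval arithmetic: propagate an input box through the network layer by layer, splitting each weight matrix into its positive and negative parts. A minimal sketch (Grimstad & Andersson also discuss tighter LP-based bounds, which share this layer-by-layer structure):

```python
import numpy as np

def interval_bounds(weights, biases, lo, hi):
    """Interval-arithmetic bounds on every pre-activation of a ReLU net.

    lo, hi: elementwise bounds on the input vector.
    Returns a list of (lower, upper) bound pairs, one per layer.
    """
    bounds = []
    for W, b in zip(weights, biases):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        pre_lo = Wp @ lo + Wn @ hi + b   # worst-case low pre-activation
        pre_hi = Wp @ hi + Wn @ lo + b   # worst-case high pre-activation
        bounds.append((pre_lo, pre_hi))
        lo, hi = np.maximum(pre_lo, 0), np.maximum(pre_hi, 0)  # ReLU
    return bounds
```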
Sampling-based optimization:
- Perakis, G., & Tsiourvas, A. (2022). Optimizing objective functions from trained ReLU neural networks via sampling. arXiv preprint arXiv:2205.14189.
- Tong, J., Cai, J., & Serra, T. (2024). Optimization over trained neural networks: Taking a relaxing walk. arXiv preprint arXiv:2401.03451.
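Both papers start from the observation that a trained ReLU network is piecewise linear, so cheap function and gradient evaluations can guide the search. A deliberately simplified sample-then-improve sketch (the cited methods are more refined; the relaxing walk, for instance, alternates LP relaxations with such local steps); `f` and `grad_f` are assumed to evaluate the network and its gradient:

```python
import numpy as np

def sample_and_climb(f, grad_f, lo, hi, n_samples=1000, n_steps=100, step=1e-2):
    """Heuristically maximize a trained ReLU network over a box.

    Draws random points in [lo, hi], keeps the best one, then performs
    projected gradient ascent; inside one activation region the network
    is linear, so the gradient there is exact.
    """
    rng = np.random.default_rng(0)
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    x = X[np.argmax([f(xi) for xi in X])]        # best sampled point
    for _ in range(n_steps):                     # local improvement
        x = np.clip(x + step * grad_f(x), lo, hi)
    return x
```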
Partition-based neural network formulation:
- Tsay, C., Kronqvist, J., Thebelt, A., & Misener, R. (2021). Partition-based formulations for mixed-integer optimization of trained ReLU neural networks. Advances in Neural Information Processing Systems, 34.
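The key idea is to disaggregate each neuron's pre-activation over a partition $\{S_1, \dots, S_P\}$ of its inputs and bound every partial sum separately, which tightens the big-M relaxation above; roughly, $P = 1$ recovers the standard big-M formulation, and larger $P$ approaches the convex hull at the cost of more auxiliary variables:

$$
w^\top x + b = \sum_{p=1}^{P} a_p + b, \qquad a_p = \sum_{i \in S_p} w_i x_i, \qquad L_p \le a_p \le U_p .
$$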
Tree ensembles:
- Mišić, V. V. (2020). Optimization of tree ensembles. Operations Research, 68(5), 1605-1624.
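In rough outline, Mišić's formulation selects exactly one leaf per tree with variables $y_{t,\ell}$ and uses ordered binaries $x_{j,k}$ ("feature $j$ is at most its $k$-th split point") to keep the chosen leaves consistent with a single input; here $p_{t,\ell}$ is the prediction of leaf $\ell$ in tree $t$, $\lambda_t$ the tree's weight, and $v(s)$, $c(s)$ the feature and split index queried at split $s$:

$$
\max_{x, y} \ \sum_{t} \lambda_t \sum_{\ell} p_{t,\ell}\, y_{t,\ell}
\quad \text{s.t.} \quad
\sum_{\ell} y_{t,\ell} = 1 \ \ \forall t, \qquad
\sum_{\ell \in \mathrm{left}(s)} y_{t,\ell} \le x_{v(s), c(s)}, \qquad
\sum_{\ell \in \mathrm{right}(s)} y_{t,\ell} \le 1 - x_{v(s), c(s)},
$$

$$
x_{j,k} \le x_{j,k+1}, \qquad x_{j,k} \in \{0,1\}, \qquad y_{t,\ell} \ge 0 .
$$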
Input convex neural networks:
- Amos, B., Xu, L., & Kolter, J. Z. (2017, July). Input convex neural networks. In International Conference on Machine Learning (pp. 146-155). PMLR.
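The architectural constraint is simple: hidden-to-hidden weights are kept non-negative and activations are convex and non-decreasing, so the output is convex in the input, which makes optimization over the trained network a convex problem. A minimal NumPy sketch of the forward pass, assuming `Wx` holds the unconstrained input skip connections and `Wz` the non-negative hidden-to-hidden weights:

```python
import numpy as np

def icnn_forward(x, Wx, Wz, b):
    """Forward pass of an input convex neural network (ICNN).

    Convexity in x holds because every matrix in Wz is elementwise
    non-negative and ReLU is convex and non-decreasing; the Wx skip
    connections from the input may be arbitrary.
    """
    z = np.maximum(Wx[0] @ x + b[0], 0)            # first layer: no z-path
    for Wxi, Wzi, bi in zip(Wx[1:], Wz, b[1:]):
        assert np.all(Wzi >= 0), "hidden weights must be non-negative"
        z = np.maximum(Wzi @ z + Wxi @ x + bi, 0)  # convex, non-decreasing
    return z
```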