[1] J. Li, I.A. Karimi, R. Srinivasan, Recipe determination and scheduling of gasoline blending operations, AIChE J. 56 (2) (2010) 441-465.
[2] J. Alvarez-Ramirez, A. Morales, R. Suarez, Robustness of a class of bias update controllers for blending systems, Ind. Eng. Chem. Res. 41 (19) (2002) 4786-4793.
[3] K. Magoulas, D. Marinos-Kouris, A. Lygeras, Instructions are given for building gasoline-blending LP, Oil Gas J. 86 (27) (1988) 32-37.
[4] W. Chen, J. Yang, A double loop optimization method for gasoline online blending, in: 2016 IEEE International Conference on Industrial Technology (ICIT), Taipei, Taiwan, China, 2016.
[5] A. Ahmad, W.H. Gao, S. Engell, A study of model adaptation in iterative real-time optimization of processes with uncertainties, Comput. Chem. Eng. 122 (2019) 218-227.
[6] J.A. Paulson, A. Mesbah, Nonlinear model predictive control with explicit backoffs for stochastic systems under arbitrary uncertainty, IFAC-PapersOnLine 51 (20) (2018) 523-534.
[7] A. Singh, J.F. Forbes, P.J. Vermeer, S.S. Woo, Model-based real-time optimization of automotive gasoline blending operations, J. Process Control 10 (1) (2000) 43-58.
[8] Y. Yang, L. dela Rosa, T.Y.M. Chow, Non-convex chance-constrained optimization for blending recipe design under uncertainties, Comput. Chem. Eng. 139 (2020) 106868.
[9] Y. Yang, Optimal blending under general uncertainties: a chance-constrained programming approach, Comput. Chem. Eng. 171 (2023) 108170.
[10] X. Zhao, Y. Wang, Gasoline blending scheduling based on uncertainty, in: 2009 International Conference on Computational Intelligence and Natural Computing, Wuhan, China, 2009.
[11] X. Dai, X.Q. Wang, R.C. He, W.L. Du, W.M. Zhong, L. Zhao, F. Qian, Data-driven robust optimization for crude oil blending under uncertainty, Comput. Chem. Eng. 136 (2020) 106595.
[12] N. Pasadakis, V. Gaganis, C. Foteinopoulos, Octane number prediction for gasoline blends, Fuel Process. Technol. 87 (6) (2006) 505-509.
[13] E. Paranghooshi, M. Sadeghi, S. Shafiei, Predicting octane numbers for gasoline blends using artificial neural networks, Hydrocarbon Process. 88 (10) (2009) 49.
[14] J. Zhang, Q. Wang, Y. Su, S.M. Jin, J.Z. Ren, M. Eden, W.F. Shen, An accurate and interpretable deep learning model for environmental properties prediction using hybrid molecular representations, AIChE J. 68 (6) (2022) e17634.
[15] Y. Su, S.M. Jin, X.P. Zhang, W.F. Shen, M.R. Eden, J.Z. Ren, Stakeholder-oriented multi-objective process optimization based on an improved genetic algorithm, Comput. Chem. Eng. 132 (2020) 106618.
[16] S.R. Sun, A. Yang, C.L. Chang, G.Q. Hua, J.Z. Ren, Z.G. Lei, W.F. Shen, Improved multiobjective particle swarm optimization integrating mutation and changing inertia weight strategy for optimal design of the extractive single and double dividing wall column, Ind. Eng. Chem. Res. 62 (43) (2023) 17923-17936.
[17] R.S. Sutton, A. Barto, Reinforcement Learning: An Introduction, 2nd ed., MIT Press, Cambridge, MA, 2018.
[18] J.D. Wu, Z.B. Wei, W.H. Li, Y. Wang, Y.W. Li, D.U. Sauer, Battery thermal- and health-constrained energy management for hybrid electric bus based on soft actor-critic DRL algorithm, IEEE Trans. Ind. Inform. 17 (6) (2021) 3751-3761.
[19] D.F. Zhu, B. Yang, Y.X. Liu, Z.J. Wang, K. Ma, X.P. Guan, Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park, Appl. Energy 311 (2022) 118636.
[20] T.S. Chu, J. Wang, L. Codeca, Z.J. Li, Multi-agent deep reinforcement learning for large-scale traffic signal control, IEEE Trans. Intell. Transp. Syst. 21 (3) (2020) 1086-1095.
[21] A. Sass, A. Kummer, J. Abonyi, Multi-agent reinforcement learning-based exploration of optimal operation strategies of semi-batch reactors, Comput. Chem. Eng. 162 (2022) 107819.
[22] K. Arulkumaran, M.P. Deisenroth, M. Brundage, A.A. Bharath, Deep reinforcement learning: a brief survey, IEEE Signal Process. Mag. 34 (6) (2017) 26-38.
[23] K.M. Powell, D. Machalek, T. Quah, Real-time optimization using reinforcement learning, Comput. Chem. Eng. 143 (2020) 107077.
[24] T. Quah, D. Machalek, K.M. Powell, Comparing reinforcement learning methods for real-time optimization of a chemical process, Processes 8 (11) (2020) 1497.
[25] A. Mahajan, D. Teneketzis, Multi-armed bandit problems, in: A.O. Hero, D. Castanon, D. Cochran, K. Kastella (Eds.), Foundations and Applications of Sensor Management, Springer, New York, 2008, pp. 121-151.
[26] J.M. Lee, J.H. Lee, Approximate dynamic programming strategies and their applicability for process control: a review and future directions, Int. J. Control Autom. Syst. 2 (3) (2004) 263-278.
[27] H. Cheng, W.M. Zhong, F. Qian, An application of the particle swarm optimization on the gasoline blending process, in: D. Zeng (Ed.), International Conference on Applied Informatics and Communication, Springer, Berlin, Heidelberg, 2011, pp. 352-360.
[28] T. Haarnoja, A. Zhou, P. Abbeel, S. Levine, Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor, in: International Conference on Machine Learning, Stockholm, Sweden, 2018.
[29] B. Belousov, H. Abdulsamad, P. Klink, S. Parisi, J. Peters (Eds.), Reinforcement Learning Algorithms: Analysis and Applications, Springer, Cham, Switzerland, 2022.