Indexed in SCI and EI ∣ Official journal of the Chemical Industry and Engineering Society of China
Table of Contents
28 August 2018, Volume 26 Issue 8
    Selected Papers from the Chinese Process Systems Engineering Annual Meeting 2017
    Feature selection for chemical process fault diagnosis by artificial immune systems
    Liang Ming, Jinsong Zhao
    2018, 26(8):  1599-1604.  doi:10.1016/j.cjche.2017.09.023
    With the coming of the Industry 4.0 era, modern chemical plants are gradually being transformed into smart factories, which sets higher requirements for fault detection and diagnosis (FDD) to enhance the intelligence of operation safety. A typical chemical process involves hundreds of process variables, so feature selection is key to the efficiency and effectiveness of FDD. Although artificial immune systems (AIS) have advantages in adaptability and independence from large numbers of fault samples, antibody library construction has usually been based on experience. This is not only time consuming but also lacks a scientific foundation for fault feature selection, which may deteriorate the FDD performance of the AIS. In this paper, a fault antibody feature selection optimization (FAFSO) algorithm based on a genetic algorithm is proposed to optimize the fault antibody features and the antibody libraries' thresholds simultaneously. The performance of the proposed FAFSO algorithm is illustrated on the Tennessee Eastman benchmark problem.
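The abstract above describes a genetic algorithm searching over binary feature masks. A generic GA feature-selection loop in that spirit might look as follows; this is a plain sketch, not the authors' FAFSO (which also co-optimizes library thresholds), and the population size, rates, and fitness function are placeholder assumptions:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, gens=30,
                      cx_rate=0.8, mut_rate=0.02, seed=0):
    """Generic genetic-algorithm feature selection with binary encoding.

    `fitness` maps a 0/1 feature mask to a score to maximize, e.g. a
    fault-diagnosis accuracy estimate on validation data.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                        # elitism: keep the two best masks
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(scored[:pop_size // 2], 2)  # pick from better half
            cut = rng.randrange(1, n_features)              # one-point crossover
            child = p1[:cut] + p2[cut:] if rng.random() < cx_rate else p1[:]
            child = [b ^ (rng.random() < mut_rate) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

For example, with a fitness that rewards two informative features and penalizes the rest, the returned mask tends to select the informative pair while dropping noise features.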
    A decision tree based decomposition method for oil refinery scheduling
    Xiaoyong Gao, Dexian Huang, Yongheng Jiang, Tao Chen
    2018, 26(8):  1605-1612.  doi:10.1016/j.cjche.2017.10.006
    Refinery scheduling has attracted increasing attention in both the academic and industrial communities in recent years. However, due to the complexity of refinery processes, little successful use in real-world refineries has been reported. In academic studies, refinery scheduling is usually treated as an integrated, large-scale optimization problem, although such complex optimization problems are extremely difficult to solve. In this paper, we propose a way to exploit the prior knowledge existing in refineries and develop a decision-making system to guide the scheduling process. For a real-world fuel-oil-oriented refinery, ten adjusting process scales are predetermined. A C4.5 decision tree classifies the finished-oil demand plan into the corresponding category (i.e., adjusting scale); then a specific sub-scheduling problem with respect to the determined adjusting scale is solved. The proposed strategy is demonstrated with a scheduling case originating from a real-world refinery.
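C4.5, mentioned above, chooses decision-tree splits by gain ratio: information gain normalized by the split information of the attribute. A minimal, stdlib-only sketch of that split criterion for categorical attributes (the attribute names in the test are illustrative, not the refinery's actual demand-plan features):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """C4.5 split criterion: information gain divided by split information."""
    n = len(rows)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())  # H(Y | attr)
    split_info = entropy([row[attr] for row in rows])             # intrinsic value
    gain = entropy(labels) - cond
    return gain / split_info if split_info > 0 else 0.0

def best_attribute(rows, labels):
    """Attribute with the highest gain ratio (one C4.5 node decision)."""
    return max(rows[0].keys(), key=lambda a: gain_ratio(rows, labels, a))
```

A full C4.5 tree applies `best_attribute` recursively and adds pruning and continuous-attribute handling, which are omitted here.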
    Synthesis of refrigeration system based on generalized disjunctive programming model
    Danlei Chen, Xue Ma, Yiqing Luo, Yingjie Ma, Xigang Yuan
    2018, 26(8):  1613-1620.  doi:10.1016/j.cjche.2017.10.017
    Refrigeration systems play an important role in the process industries, and their optimal synthesis can not only reduce energy consumption but also save production costs. In this study, a general methodology is developed for the simultaneous optimal design of the refrigeration cycle and the heat exchanger network (HEN). Taking into consideration the heat integration between the external heat sources/sinks and the refrigeration cycle, a superstructure with sub-coolers is developed. By defining logical variables that indicate the relative temperature positions of refrigerant streams after sub-coolers, the synthesis is formulated as a Generalized Disjunctive Programming (GDP) problem based on an LP transshipment model, with the objective of minimizing the total compressor shaft work in the refrigeration system. The GDP model is then reformulated as a Mixed Integer Nonlinear Programming (MINLP) problem with the aid of binary variables and the Big-M constraint method. The efficacy of the process synthesis model is demonstrated by a case study of an ethylene refrigeration system. The results show that the optimization can significantly reduce the exergy loss as well as the total compression shaft work.
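The Big-M reformulation mentioned in the abstract replaces a disjunct's constraint x ≤ b, which should bind only when its binary y equals 1, with the always-present inequality x ≤ b + M(1 − y). A tiny checker illustrating that logic (the numeric bounds in the usage are illustrative):

```python
def big_m_holds(x, b, y, M):
    """True iff  x <= b + M * (1 - y)  holds.

    With y = 1 this enforces x <= b; with y = 0 the constraint is
    relaxed by M, so M must upper-bound how far x can ever exceed b.
    Too large an M weakens the MILP relaxation, which is why Big-M
    values are chosen as tight as the physics allows.
    """
    return x <= b + M * (1 - y)
```

For instance, `big_m_holds(100, 10, 0, 1000)` is satisfied because the disjunct is switched off, while `big_m_holds(100, 10, 1, 1000)` is violated.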
    Control structure comparison for three-product Petlyuk column
    Shengkun Jia, Xing Qian, Xigang Yuan, Sigurd Skogestad
    2018, 26(8):  1621-1630.  doi:10.1016/j.cjche.2017.10.018
    The focus of this paper is to investigate different control structures (single-loop PI control) for a dividing wall (Petlyuk) column separating ethanol, n-propanol and n-butanol. Four control structures are studied; all results are simulations based on Aspen Plus. Control structure 1 (CS1) is a stabilizing control structure with only temperature controllers. CS2, CS3 and CS4, which also contain composition controllers, are introduced to reduce the steady-state composition deviations. CS2 adds a distillate composition controller (CCDB) on top of CS1. CS3 is considerably more complicated, with three temperature-composition cascade controllers plus a selector on the reboiler duty that takes the maximum controller output between the light-impurity composition control in the side stream and the bottom-impurity control in the prefractionator. CS4 adds another high selector to control the light impurity in the side stream. Surprisingly, when considering both the dynamic and the steady-state performance of the proposed control structures, CS1 proves to be the best control structure for handling feed disturbances to the three-product Petlyuk column.
    Process optimization of an industrial acetic acid dehydration process via heterogeneous azeotropic distillation
    Xiuhui Huang, Zeqiu Li, Ying Tian
    2018, 26(8):  1631-1643.  doi:10.1016/j.cjche.2017.10.030
    The simulated process model of the HAc dehydration process under actual overloaded conditions was developed by amending the standard-condition model from our previous work using process data collected from actual production. Based on this model, an operation optimization analysis of each unit (HAc dehydration column, decanter and NPA recycle column) was conducted using residue curve maps (RCMs), sensitivity analysis and a software optimization module. Based on the optimized parameters, the influence of the feed impurity MA and the decanter temperature on the separation performance and energy consumption of the whole process was analyzed. A whole-process operation optimization strategy was then proposed with the objective of minimizing the total reboiler duty QTotal of C-1 and C-3: keeping C-1 and C-3 at the optimized separation parameters obtained above, connecting all the broken recycle and connection streams, and using the temperature of D-1 as the operating variable. The optimization results show that the total reboiler duty QTotal of the whole process reaches its minimum of 128.32×10⁶ kJ·h⁻¹ when the decanter temperature is 352.35 K, saving 5.94×10⁶ kJ·h⁻¹, or about 2.56 t·h⁻¹ of low-pressure saturated steam.
    Synthesis of indirect work exchange networks considering both isothermal and adiabatic processes together with exergy analysis
    Yu Zhuang, Linlin Liu, Lei Zhang, Jian Du
    2018, 26(8):  1644-1652.  doi:10.1016/j.cjche.2017.09.026
    In this paper, an efficient methodology for synthesizing indirect work exchange networks (WEN), treating isothermal and adiabatic processes separately and based on a transshipment model, is proposed for the first time. In contrast with the superstructure method, the transshipment model makes it easier to obtain the minimum utility consumption, taken as the objective function, and more convenient to attain the optimal network configuration that further minimizes the number of units. Unlike the division of temperature intervals in heat exchanger networks, pressure intervals are derived from the maximum compression/expansion ratio, in consideration of the operating principles of indirect work exchangers and the absence of pressure constraints on stream matches. For the isothermal process, the presented WEN synthesis approach is a linear programming model; for indirect work exchange networks with adiabatic processes, a nonlinear programming model must be established, and temperatures are treated as decision variables bounded between the inlet and outlet temperatures of each sub-network. The constructed transshipment model is solved first to obtain the minimum utility consumption and then to determine the minimum number of units by merging adjacent pressure intervals according to the proposed merging methods, whose effectiveness is shown through exergy analysis at the level of the unit structure. Finally, two cases confirm the feasibility and effectiveness of the proposed method for obtaining the optimal WEN configuration.
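The pressure-interval construction described above can be sketched numerically: geometric boundaries guarantee that no interval spans more than the maximum compression/expansion ratio. This is only an illustration in the spirit of the abstract; the actual model may also place boundaries at stream supply and target pressures:

```python
import math

def pressure_intervals(p_min, p_max, max_ratio):
    """Geometric pressure-interval boundaries on [p_min, p_max] such that
    the ratio across any single interval never exceeds max_ratio.
    """
    # Smallest integer number of intervals that respects the ratio limit.
    n = math.ceil(math.log(p_max / p_min) / math.log(max_ratio))
    r = (p_max / p_min) ** (1.0 / n)   # common ratio of the boundaries
    return [p_min * r ** i for i in range(n + 1)]
```

For example, partitioning 1 bar to 10 bar with a maximum ratio of 2 yields four intervals whose boundary ratio is 10^(1/4) ≈ 1.78.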
    PCA weight and Johnson transformation based alarm threshold optimization in chemical processes
    Wende Tian, Guixin Zhang, Xiang Zhang, Yuxi Dong
    2018, 26(8):  1653-1661.  doi:10.1016/j.cjche.2017.10.027
    To alleviate the heavy load that massive alarms place on operators, alarm thresholds in chemical processes are optimized with principal component analysis (PCA) weights and the Johnson transformation in this paper. First, the few variables with high PCA weight factors are chosen as key variables. Given an initial total alarm frequency for these variables, the allowed alarm number for each variable is determined according to its sampling time and weight factor; its alarm threshold and then its control limit percentage are determined in turn. The control limit percentage of non-key variables is instead determined with the 3σ method. Second, raw data for all variables are transformed into normally distributed data with the Johnson function before their alarm thresholds are updated via the inverse transformation of the obtained control limit percentage. Alarm thresholds are optimized by iterating this process until the calculated alarm frequency reaches the standard level (normally one alarm per minute). Finally, the variables and their alarm thresholds are visualized in parallel coordinates to depict their variation trends concisely and clearly. Case studies on a simulated industrial atmospheric-vacuum crude distillation unit demonstrate that the proposed alarm threshold optimization strategy can effectively reduce the false alarm rate in chemical processes.
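The core threshold-setting step above, computing a percentile limit in a normalized space and mapping it back, can be sketched with the standard library. Here an arbitrary transform/inverse pair stands in for a properly fitted Johnson transformation, which the paper uses; everything else is a generic percentile calculation:

```python
from statistics import NormalDist, mean, stdev

def alarm_threshold(samples, control_limit_pct, transform=None, inverse=None):
    """High-alarm threshold at a given control-limit percentile.

    For non-Gaussian data, pass a normalizing `transform` (e.g. a fitted
    Johnson transformation) together with its `inverse`; the percentile is
    taken in the transformed, approximately normal space and mapped back.
    With no transform, this reduces to a plain Gaussian percentile limit.
    """
    if transform is None:
        transform = inverse = lambda x: x
    z = [transform(x) for x in samples]
    limit = NormalDist(mean(z), stdev(z)).inv_cdf(control_limit_pct)
    return inverse(limit)
```

For instance, a 99.9% control limit on roughly uniform data lands well above the sample maximum, reflecting the Gaussian assumption in the transformed space.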
    Optimal synthesis of compression refrigeration system using a novel MINLP approach
    Tao Yang, Yiqing Luo, Yingjie Ma, Xigang Yuan
    2018, 26(8):  1662-1669.  doi:10.1016/j.cjche.2017.10.028
    The optimal design of a compression refrigeration system (CRS) with multiple temperature levels is very important to chemical process industries and also represents considerable challenges in process systems engineering. In this paper, a general methodology for the optimal synthesis of the CRS, which simultaneously integrates CRS and Heat Exchanger Networks (HEN) to minimize the total compressor shaft work consumption based on an MINLP model, has been proposed. The major contribution of this method is in addressing the optimal design of refrigeration cycle with variable refrigeration temperature levels. The method can be used to make major decisions in the CRS design, such as the number of levels, temperature levels, and heat transfer duties. The performance of the developed methodology has been illustrated with a case study of an ethylene CRS in an industrial ethylene plant, and the optimal solution has been examined by rigorous simulations in Aspen Plus to verify its feasibility and consistency.
    Investigation of the operability for four-product dividing wall column with two partition walls
    Xiaolong Ge, Botong Liu, Botan Liu, Hongxing Wang, Xigang Yuan
    2018, 26(8):  1670-1676.  doi:10.1016/j.cjche.2017.10.029
    For separating specific four-component mixtures into four products, the four-product dividing wall column (FPDWC) with two partition walls can achieve the same utility consumption as the extended Petlyuk configuration while being structurally simpler. However, the reluctance to implement this kind of four-product dividing wall column industrially also stems from its two uncontrollable vapor splits. The vapor split ratios are set at the design stage and might not be optimal for a changed feed composition, so minimum energy consumption cannot be ensured. In the present work, a sequential iterative optimization approach was first employed to determine the parameters of a cost-effective FPDWC. Then, the energy penalty of maintaining the vapor split ratios at their nominal values under feed composition disturbances was investigated for the FPDWC with two partition walls. The results show that the energy requirement stays within 2% of the optimum for 20% feed composition disturbances, which is encouraging for industrial implementation.
    Molecular reconstruction model based on structure oriented lumping and group contribution methods
    Jincai Chen, Zhou Fang, Tong Qiu
    2018, 26(8):  1677-1683.  doi:10.1016/j.cjche.2017.09.013
    Molecular management is a promising technology for facing challenges in the refining industry, such as more stringent product oil requirements and heavier crude oil, and for maximizing the value of every molecule in petroleum fractions. To achieve molecular management in refining processes, a novel model based on structure oriented lumping (SOL) and group contribution (GC) methods was proposed in this study. The SOL method was applied to describe a petroleum fraction with structural increments, and the GC method was used to estimate molecular properties; the latter was achieved through association rules between SOL structural increments and GC structures. A three-step reconstruction algorithm was developed to build a representative set of molecules from partial analytical data. First, structural distribution parameters were optimized against several properties. Then, a molecular library was created using the optimized parameters. In the final step, the maximum information entropy (MIE) method was applied to obtain the molecular fractions. Two industrial samples were used to validate the method, and the simulated feedstock properties agreed well with the experimental data.
    Logarithm-transform piecewise linearization method for the optimization of gasoline blending processes
    Yu Li, Tong Qiu
    2018, 26(8):  1684-1691.  doi:10.1016/j.cjche.2017.12.017
    Gasoline blending is a key process in a petroleum refinery, as it can yield 60%-70% of a typical refinery's total revenue. The process not only exhibits non-convex nonlinear blending behavior, due to the complicated blending mechanisms of component feedstocks with different quality properties, but also involves a global search over numerous blending recipes. Since blend products must meet a series of quality requirements and are highly sensitive to changes in the proportions of blending feedstocks, global optimization methods for such NLP problems are often hard to apply because of their heavy computational burden. Piecewise linearization methods are therefore a natural choice: they provide an approximate global optimum by adding binary variables to the models and converting the original NLP problems into MILP ones. In this paper, the logarithm-transform piecewise linearization (LTPL) method, an improved piecewise linearization, is proposed. In this method a logarithm transform is applied to convert multi-variable, multi-degree constraints into a series of single-variable constraints, greatly reducing the number of 0-1 variables. In the final part of this paper, an industrial case study demonstrates the effectiveness of the LTPL method. In principle, the method is useful for blending problems with complicated empirical or theoretical models.
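The core LTPL move can be shown on a single monomial term: taking logs turns prod(x_i^e_i) into a sum of single-variable ln terms, each of which is then piecewise-linearized over breakpoints. In a MILP these segments would be encoded with SOS2 or binary variables; the sketch below just evaluates the interpolation directly, and the breakpoints are illustrative:

```python
import math

def piecewise_ln(x, breakpoints):
    """Piecewise-linear (chord) approximation of ln(x) over breakpoints."""
    for lo, hi in zip(breakpoints, breakpoints[1:]):
        if lo <= x <= hi:
            t = (x - lo) / (hi - lo)
            return (1 - t) * math.log(lo) + t * math.log(hi)
    raise ValueError("x outside breakpoint range")

def monomial_lhs(xs, exps, breakpoints):
    """ln-space value of prod(x_i ** e_i): the multi-variable, multi-degree
    term becomes a weighted sum of single-variable piecewise-linear terms,
    which is the reduction the LTPL abstract describes.
    """
    return sum(e * piecewise_ln(x, breakpoints) for x, e in zip(xs, exps))
```

At breakpoints the approximation is exact: evaluating x1·x2² at x1 = 2, x2 = 3 reproduces ln(18) to machine precision.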
    Design of heat exchanger network based on entransy theory
    Li Xia, Yuanli Feng, Xiaoyan Sun, Shuguang Xiang
    2018, 26(8):  1692-1699.  doi:10.1016/j.cjche.2017.10.007
    The heat exchanger network (HEN) synthesis problem is analyzed on the basis of entransy theory. Because entransy represents thermal potential energy, entransy dissipation characterizes the irreversibility of the heat transfer process, and the temperature difference determines the entransy dissipation; four HEN design steps based on entransy theory are accordingly put forward. Using a four-stream heat exchanger network design example, the present study shows how energy targets can be set based on entransy and achieved with a network of heat exchangers. To verify the correctness of the entransy-based HEN design method, the synthesis of the HEN for a diesel hydrogenation unit is studied. The HEN obtained with the entransy-based design method is consistent with the energy targets; its entransy transfer efficiency is 92.29%, higher than that of the maximum heat recovery network based on pinch technology.
    Process optimization with consideration of uncertainties: An overview
    Ying Chen, Zhihong Yuan, Bingzhen Chen
    2018, 26(8):  1700-1706.  doi:10.1016/j.cjche.2017.09.010
    Optimization under uncertainty is a challenging topic of practical importance in process systems engineering. Since the solution of an optimization problem generally exhibits high sensitivity to parameter variations, deterministic models that neglect parametric uncertainties are not suitable for practical applications. This paper provides an overview of the key contributions and recent advances in process optimization under uncertainty over the past ten years and discusses their advantages and limitations thoroughly. The discussion focuses on three specific research areas, namely robust optimization, stochastic programming and chance constrained programming, based on which a systematic analysis of their applications, developments and future directions is presented. It shows that the more recent trend has been to integrate different optimization methods to leverage their respective strengths and compensate for their drawbacks. Moreover, data-driven optimization, which combines mathematical programming methods and machine learning algorithms, has become an emerging and competitive tool for handling optimization problems under uncertainty based on massive historical data.
    Equipment selection knowledge base system for industrial styrene process
    Weimin Zhong, Shuming Liu, Feng Wan, Zhi Li
    2018, 26(8):  1707-1712.  doi:10.1016/j.cjche.2017.10.009
    Equipment selection for industrial processes usually requires the extensive participation of industrial experts and technologists, which wastes considerable resources. This work presents an equipment selection knowledge base system for the industrial styrene process (S-ESKBS) based on ontology technology. The structure comprises a low-level knowledge base and a top-level interactive application. As the core of the S-ESKBS, the low-level knowledge base consists of the equipment selection ontology library, the equipment selection rule set and the Pellet inference engine. The top-level interactive application comprises the parsing storage layer, the inference query layer and the client application layer. Case studies on equipment selection for an analytical column and an alkylation reactor in the industrial styrene process demonstrate the characteristics and implementability of the S-ESKBS.
    Selected Papers from the 28th Chinese Process Control Conference
    Just-in-time learning based integrated MPC-ILC control for batch processes
    Li Jia, Wendan Tan
    2018, 26(8):  1713-1720.  doi:10.1016/j.cjche.2018.06.006
    Considering the two-dimensional (2D) characteristics and the unknown optimal trajectory problem of batch processes, an integrated model predictive control-iterative learning control (MPC-ILC) scheme for batch processes is proposed in this paper. First, the batch-axis and time-axis information are combined into one quadratic performance index, which embodies the integration of the ILC and MPC algorithm ideas and leads to superior tracking performance and better robustness against disturbance and uncertainty. To address the problem of the unknown optimal trajectory, both a time-varying prediction horizon and end-product quality control are employed. Moreover, an integrated 2D just-in-time learning (JITL) model is used to improve the prediction accuracy. Furthermore, a rigorous description and proof are presented for the convergence and tracking performance of the proposed MPC-ILC strategy. Simulation results show the effectiveness of the proposed method.
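The just-in-time learning idea above, building a local model from the most similar historical samples at each query, can be sketched in a few lines. This minimal flavor uses a distance-weighted average of neighbor outputs; the paper's integrated 2D JITL model is more elaborate, and `k` and the weighting are assumptions:

```python
import math

def jitl_predict(query, database, k=5):
    """Just-in-time learning prediction: select the k historical samples
    nearest to `query` and return a distance-weighted average of their
    outputs. `database` is a list of (x_vector, y) pairs.
    """
    def dist(x):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, query)))
    neighbors = sorted(database, key=lambda p: dist(p[0]))[:k]
    weights = [1.0 / (dist(x) + 1e-9) for x, _ in neighbors]  # closer = heavier
    total = sum(weights)
    return sum(w * y for w, (x, y) in zip(weights, neighbors)) / total
```

Because the local model is rebuilt per query, JITL adapts naturally to slowly drifting batch behavior without retraining a global model.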
    DTCWT-based zinc fast roughing working condition identification
    Zhuo He, Zhaohui Tang, Zhihao Yan, Jinping Liu
    2018, 26(8):  1721-1726.  doi:10.1016/j.cjche.2018.06.028
    The surface texture of mineral flotation froth is well acknowledged as an important index of the flotation process. The surface texture feature is closely related to the flotation working conditions and hence can be used as a visual indicator of the zinc fast-roughing working condition. A novel working condition identification method based on the dual-tree complex wavelet transform (DTCWT) is proposed for process monitoring of zinc fast roughing. A three-level DTCWT is first applied to decompose the froth image into different directions and resolutions, and the energy of each sub-image is extracted as the froth texture feature. Then, an improved random forest integrated classification (iRFIC) model with 10-fold cross-validation is introduced as the classifier to identify the roughing working condition; it mitigates the shortcomings of a single model and the redundancy among features while achieving higher generalization performance. Extensive experiments have verified the effectiveness of the proposed method.
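The subband-energy feature extraction above can be illustrated with an ordinary one-level 2D Haar transform as a stand-in for the three-level DTCWT (which adds shift invariance and six directional subbands): decompose the image, then use the energy of each detail subband as a texture feature.

```python
def haar2d_energies(img):
    """Subband energies (LL, LH, HL, HH) of a one-level 2D Haar transform.

    `img` is a 2-D list of floats with even dimensions. The detail-band
    energies respond to horizontal, vertical, and diagonal texture,
    which is the kind of feature vector fed to the classifier.
    """
    rows, cols = len(img), len(img[0])
    e = {"LL": 0.0, "LH": 0.0, "HL": 0.0, "HH": 0.0}
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll = (a + b + c + d) / 2.0   # orthonormal Haar scaling coefficient
            lh = (a + b - c - d) / 2.0   # horizontal detail
            hl = (a - b + c - d) / 2.0   # vertical detail
            hh = (a - b - c + d) / 2.0   # diagonal detail
            e["LL"] += ll * ll; e["LH"] += lh * lh
            e["HL"] += hl * hl; e["HH"] += hh * hh
    return e
```

A constant image yields zero detail energy, while vertical stripes concentrate energy in the HL band, exactly the directional sensitivity the froth-texture features rely on.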
    An improved artificial bee colony algorithm for steelmaking-refining-continuous casting scheduling problem
    Kunkun Peng, Quanke Pan, Biao Zhang
    2018, 26(8):  1727-1735.  doi:10.1016/j.cjche.2018.06.008
    Steelmaking-refining-continuous casting (SCC) scheduling is a worldwide problem, which is NP-hard. Effective SCC scheduling algorithms can help to enhance productivity and thus yield significant monetary savings. This paper develops an improved artificial bee colony (IABC) algorithm for SCC scheduling. In the proposed IABC, a charge permutation is employed to represent solutions. In the population initialization, several solutions of a certain quality are produced by a heuristic while the others are generated randomly. Two variable neighborhood search operators are devised to generate new high-quality solutions for the employed bee and onlooker bee phases, respectively. Meanwhile, to enhance the exploitation ability, a control parameter is introduced to guide the search in the onlooker bee phase. Moreover, to enhance the exploration ability, newly generated solutions are accepted under a controlled acceptance criterion. In the scout bee phase, the solution corresponding to a scout bee is updated by performing three swap operators and three insert operators with equal probability. Computational comparisons against several recent algorithms and a state-of-the-art SCC scheduling algorithm demonstrate the strength and superiority of the IABC.
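The scout-bee update described above mixes swap and insert moves on a charge permutation. A sketch of those two permutation operators and one reading of the "three swaps and three inserts with equal probability" rule (the exact published ordering may differ):

```python
import random

def swap_op(perm, rng):
    """Exchange two randomly chosen positions (returns a new list)."""
    p = perm[:]
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def insert_op(perm, rng):
    """Remove one element and reinsert it at another random position."""
    p = perm[:]
    i, j = rng.sample(range(len(p)), 2)
    p.insert(j, p.pop(i))
    return p

def scout_phase(perm, rng):
    """Scout-bee update: apply three swaps and three inserts in a random
    order, so either operator is equally likely at each of the six steps.
    This is an interpretation of the abstract, not the exact published rule.
    """
    ops = [swap_op] * 3 + [insert_op] * 3
    rng.shuffle(ops)
    for op in ops:
        perm = op(perm, rng)
    return perm
```

Both operators preserve the permutation property, so a scheduling decoder can always evaluate the result.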
    Total plant performance evaluation based on big data: Visualization analysis of TE process
    Mengyao Li, Wenli Du, Feng Qian, Weiming Zhong
    2018, 26(8):  1736-1749.  doi:10.1016/j.cjche.2018.06.009
    Performance evaluation of the process industry, a popular topic nowadays, can not only find weaknesses and verify the resilience and reliability of a process, but also suggest ways to improve process benefits and efficiency. Nevertheless, performance assessment at present principally concentrates on parts of the entire system, for example controller assessment. Although some studies address the whole process, they aim at discovering the relationships between profit, society, policies and so forth, rather than the relation between overall performance and the manipulated variables, that is, the total plant performance. Based on big data from different performance statuses, this paper proposes a hierarchical framework to select structured logic rules over monitored variables to estimate the current state of the process. The variables related to safety and profit are regarded as the key factors in performance evaluation. To better monitor the process state and observe the performance variation trend, a classification-visualization method based on kernel principal component analysis (KPCA) and the self-organizing map (SOM) is established. The dimensions of the big data produced by the process are first reduced by KPCA, and the processed data are then mapped onto a two-dimensional grid chart by the SOM to evaluate the performance status. The monitoring method is applied to the Tennessee Eastman process. Monitoring results indicate that off-line and on-line performance statuses can be well detected in a two-dimensional diagram.
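The SOM half of the pipeline above, mapping samples onto a 2D grid of prototype vectors, can be sketched compactly. This is a bare-bones SOM with linearly decaying learning rate and neighborhood radius; the KPCA front end, grid size, and schedules are omitted or assumed:

```python
import math
import random

def train_som(data, grid=(5, 5), iters=500, lr0=0.5, seed=0):
    """Minimal self-organizing map: each grid cell holds a weight vector
    pulled toward samples near its best-matching unit (BMU).
    """
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(i, j): [rng.random() for _ in range(dim)]
         for i in range(grid[0]) for j in range(grid[1])}
    sigma0 = max(grid) / 2.0
    for t in range(iters):
        x = rng.choice(data)
        bmu = min(w, key=lambda n: sum((a - b) ** 2 for a, b in zip(w[n], x)))
        frac = 1 - t / iters                 # linear decay of both schedules
        lr, sigma = lr0 * frac, sigma0 * frac + 1e-9
        for n, wn in w.items():
            d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * sigma * sigma))   # neighborhood kernel
            for k in range(dim):
                wn[k] += lr * h * (x[k] - wn[k])
    return w

def project(w, x):
    """Map a sample to its best-matching grid cell (its 2D coordinate)."""
    return min(w, key=lambda n: sum((a - b) ** 2 for a, b in zip(w[n], x)))
```

After training, `project` gives each process sample a cell on the grid, and trajectories of cells over time visualize performance drift, the role the SOM plays in the paper.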
    Enhanced exergy cost optimization of operating conditions in FCCU main fractionator
    Chonglin Zhong, Yi Zheng, Shenghu Xu, Shaoyuan Li
    2018, 26(8):  1750-1757.  doi:10.1016/j.cjche.2018.06.013
    Exergy indicates the maximum energy that can effectively do work. Unlike the optimization of product quality or the calculation of generic energy conservation in most previous studies, the application of exergy analysis and exergy cost optimization in the petrochemical industry is of great economic and environmental significance. Based on the main fractionator of Jiujiang Petrochemical Complex No. 2 FCCU, this paper studies an enhanced exergy cost optimization under different operating conditions, adjusting temperature set points and valve opening degrees for flow control in order to reduce the exergy cost and improve the quality of energy. A steady-state optimization algorithm to enhance exergy availability and an objective function that comprehensively considers exergy loss are proposed. While ensuring the quality of the petroleum products, the economic benefits can be improved by optimizing the controllable variables, since the exergy cost is decreased.
    Optimization for ASP flooding based on adaptive rationalized Haar function approximation
    Yulei Ge, Shurong Li, Xiaodong Zhang
    2018, 26(8):  1758-1765.  doi:10.1016/j.cjche.2018.06.015
    This paper presents an adaptive rationalized Haar function approximation method for obtaining the optimal injection strategy for alkali-surfactant-polymer (ASP) flooding. First, non-uniform control vector parameterization is introduced to convert the original problem into a multistage optimization problem, in which a new normalized time variable is adopted based on the subinterval lengths. Then the rationalized Haar function approximation method, in which an auxiliary function is introduced to handle path constraints, is used to transform the multistage problem into a nonlinear programming problem. Furthermore, an adaptive strategy based on approximation errors is adopted to regulate the order of the Haar function vectors. Finally, the nonlinear programming problem for ASP flooding is solved by sequential quadratic programming. To illustrate the performance of the proposed method, an experimental comparison method and the control vector parameterization (CVP) method are used to optimize the original problem directly. Comparative analysis of the results confirms the accuracy and efficiency of the proposed method.
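Rationalized Haar functions, used above to parameterize the control, take only the values +1, −1, and 0 on dyadic subintervals of [0, 1). A stdlib sketch of the basis and a quadrature-based series approximation (the grid size and projection scheme are illustrative, not the paper's adaptive procedure):

```python
def haar(k, t):
    """Rationalized Haar function h_k on [0, 1); h_0 is the constant 1.
    For k >= 1, write k = 2**j + m: h_k is +1 on the left half and -1 on
    the right half of the m-th dyadic subinterval at scale j.
    """
    if k == 0:
        return 1.0
    j = k.bit_length() - 1
    m = k - (1 << j)
    lo, mid, hi = m / 2**j, (m + 0.5) / 2**j, (m + 1) / 2**j
    if lo <= t < mid:
        return 1.0
    if mid <= t < hi:
        return -1.0
    return 0.0

def haar_approx(f, n_terms, n_grid=1024):
    """Project f onto the first n_terms rationalized Haar functions by
    midpoint quadrature; returns the piecewise-constant approximation.
    Rationalized Haar functions are not unit-norm, hence the norm divide.
    """
    ts = [(i + 0.5) / n_grid for i in range(n_grid)]
    coeffs = []
    for k in range(n_terms):
        norm = sum(haar(k, t) ** 2 for t in ts) / n_grid
        c = sum(f(t) * haar(k, t) for t in ts) / n_grid / norm
        coeffs.append(c)
    return lambda t: sum(c * haar(k, t) for k, c in enumerate(coeffs))
```

A control profile that is itself piecewise constant on dyadic intervals is reproduced exactly once enough terms are kept, which is why increasing the Haar order adaptively refines the injection strategy.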
    Prediction model of slurry pH based on mechanism and error compensation for mineral flotation process
    Xiaoli Wang, Lei Huang, Chunhua Yang
    2018, 26(8):  1766-1772.  doi:10.1016/j.cjche.2018.06.012
    A suitable slurry pH value is key to efficient mineral flotation. Considering the control delay in pH caused by offline pH measurement, an integrated prediction model for the pH value in bauxite froth flotation is proposed that accounts for the effect of ore composition on pH. First, a regression model is obtained for the alkali (Na2CO3) consumed by the reaction between ore and alkali. Based on the first-order hydrolysis of the remaining alkali, a mechanism-based prediction model for the pH value is presented. Then, considering the complexity of the flotation mechanism, an error prediction model that takes the time series of the mechanism model's errors as inputs is built using the autoregressive moving average (ARMA) method to compensate the mechanism model. Finally, expert rules are established to correct the direction of the error compensation, reflecting the dynamic changes of the process accurately and effectively. Simulation results using industrial data show that the presented model meets the needs of the industrial process, laying the foundation for predictive control of the pH regulator.
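The error-compensation idea above, forecasting the mechanism model's residual from its own history and adding the forecast back, can be shown with the simplest such model. An AR(1) fit stands in here for the paper's full ARMA model and expert-rule correction:

```python
def fit_ar1(errors):
    """Least-squares fit of  e_t = phi * e_{t-1} + noise  (AR(1)),
    a minimal stand-in for the ARMA error model in the abstract.
    """
    num = sum(a * b for a, b in zip(errors[1:], errors[:-1]))
    den = sum(a * a for a in errors[:-1])
    return num / den if den else 0.0

def compensated_prediction(mech_pred, recent_errors):
    """Mechanism-model prediction plus a one-step-ahead error forecast:
    the compensation adds phi * (last observed error) to the prediction.
    """
    phi = fit_ar1(recent_errors)
    return mech_pred + phi * recent_errors[-1]
```

On a geometrically decaying error sequence the fit recovers the decay factor exactly, and the compensated prediction shifts the mechanism output by the forecast residual.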