1 Introduction

Process parameter settings for plastic injection molding critically influence the quality of the molded products. An unsuitable process parameter setting inevitably causes a multitude of production problems: long lead times, many rejects, and substandard moldings. The negative impact on efficiency raises costs and reduces competitiveness. This research develops a process parameter optimization system to help manufacturers make rapid, efficient, preproduction setups for multiple-input single-output (MISO) plastic injection molding. The focus of this study was molded housing components, with attention to a particularly telling quality characteristic: weight. The optimization system proposed herein includes two stages. In the first stage, mold flow analysis was used to obtain preliminary process parameter settings. In the second stage, the Taguchi method with analysis of variance (ANOVA) was applied to determine optimal initial process parameter settings, and a back-propagation neural network (BPNN) was applied to build the prediction model. Then, the BPNN was individually combined with the Davidon–Fletcher–Powell (DFP) method and with a genetic algorithm (GA) to search for the final optimal process parameter settings. Three confirmation experiments were performed to verify the effectiveness of the final optimal process parameter settings. The final optimal process parameter settings are not limited to discrete values as in the Taguchi method, and can determine settings for production that not only approach the target value of the selected quality characteristic more closely but also do so with less variation.

2 Optimization methodologies

The optimization methodologies, including BPNNs, GAs, and the DFP method, are briefly introduced as follows.

2.1 Back-propagation neural networks

Many researchers have mentioned that BPNNs have the advantage of fast response and high learning accuracy [19-23]. A BPNN consists of an input layer, one or more hidden layers, and an output layer. The parameters for a BPNN include the number of hidden layers, the number of hidden neurons, the learning rate, the momentum, etc. All of these parameters have significant impacts on the performance of the neural network. In this research, the steepest descent method was used to find the weight and bias changes that minimize the cost function. The activation function is a hyperbolic tangent function. In network learning, input data and output results are used to adjust the weight and bias values of the network. The more detailed the input training classification is, and the greater the amount of learning information provided, the better the output will conform to the expected result. Since the learning and verification data for the BPNN are limited by the function values, the data must be normalized by the following equation:

PN = Dmin + (Dmax − Dmin)(P − Pmin)/(Pmax − Pmin)

where PN is the normalized data; P is the original data; Pmax is the maximum value of the original data; Pmin is the minimum value of the original data; Dmax is the expected maximum value of the normalized data; and Dmin is the expected minimum value of the normalized data. When applying the neural network to the system, the input and output values of the neural network fall in the range [Dmin, Dmax].

According to previous studies [24, 25], there are a few conditions for network learning termination: (1) when the root mean square error (RMSE) between the expected value and the network output value is reduced to a preset value; (2) when the preset number of learning cycles has been reached; and (3) when cross-validation takes place between the training samples and test data. In this research, the first approach was adopted by gradually increasing the network training time to slowly decrease the RMSE until it was stable and acceptable. The RMSE is defined as follows:

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (di − d̂i)² )

where N, di, and d̂i are the number of training samples, the actual value for training sample i, and the predicted value of the neural network for training sample i, respectively.
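As a concrete illustration, the min-max normalization and RMSE formulas described above can be sketched in a few lines of Python. The function names and the default target range [-1, 1] are our own choices; the paper leaves Dmin and Dmax to the user, and [-1, 1] is assumed here only because it matches the output range of the hyperbolic tangent activation.

```python
import math

def normalize(p, p_min, p_max, d_min=-1.0, d_max=1.0):
    """Linearly map an original value p from [p_min, p_max] into [d_min, d_max].

    Implements PN = Dmin + (Dmax - Dmin)(P - Pmin)/(Pmax - Pmin).
    The default target range [-1, 1] is an assumption chosen to match
    the hyperbolic tangent activation's output range.
    """
    return d_min + (d_max - d_min) * (p - p_min) / (p_max - p_min)

def rmse(actual, predicted):
    """Root mean square error between actual values di and predictions d^i."""
    n = len(actual)
    return math.sqrt(sum((d - y) ** 2 for d, y in zip(actual, predicted)) / n)
```

For example, a raw reading of 150 in an original range of [100, 200] maps to the midpoint 0.0 of the target range [-1, 1].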
    2.2 Genetic algorithms
GAs are a method of searching for optimized factors analogous to Darwin's survival of the fittest and are based on a biological evolution process. The evolution process is random yet guided by a selection mechanism based on the fitness of individual structures. There is a population of a given number of individuals, each of which represents a particular set of defined variables. Fitness is determined by the measurable degree of approach to the ideal. The "fittest" individuals are permitted to "reproduce" through a recombination of their variables, in the hope that their "offspring" will prove to be even better adapted. In addition to the strict probabilities dictated by recombination, a small mutation rate is also factored in. Less-fit individuals are discarded with the subsequent iteration, and each generation progresses toward an optimal solution.
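The selection, recombination, and mutation loop described above can be sketched as a minimal real-coded GA. This is an illustrative sketch only, with operator choices of our own (tournament selection, blend crossover, Gaussian mutation), not the paper's implementation; all names and parameter values here are assumptions.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60,
                   crossover_rate=0.9, mutation_rate=0.05, seed=0):
    """Minimal real-coded GA that minimizes `fitness`.

    `bounds` is a list of (low, high) pairs, one per decision variable.
    Operators: tournament selection, blend crossover, Gaussian mutation.
    """
    rng = random.Random(seed)

    def clip(x, lo, hi):
        return max(lo, min(hi, x))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection: the fitter of two random individuals survives.
            p1 = min(rng.sample(pop, 2), key=fitness)
            p2 = min(rng.sample(pop, 2), key=fitness)
            child = list(p1)
            if rng.random() < crossover_rate:
                # Blend crossover: child is a random convex mix of the parents.
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            # Small mutation rate: occasional Gaussian perturbation, clipped
            # back into the allowed range.
            child = [clip(x + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
                     if rng.random() < mutation_rate else x
                     for x, (lo, hi) in zip(child, bounds)]
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=fitness)  # track the best-so-far individual
    return best
```

Minimizing the toy objective (x − 3)² + (y + 1)² over [-5, 5]² with the defaults drives the population toward the optimum near (3, −1); in the paper's setting, the fitness would instead be the BPNN's prediction of the deviation of the molded part's weight from its target.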