Title: Sensitivity-based SCG-training of BP-networks
Authors: Mrázová, Iveta; Reitermanová, Zuzana
Keywords (en): generalization; pruning; feature selection; sensitivity; internal representation; back-propagation; neural networks
DOI: 10.1016/j.procs.2011.08.034
RIV identifier: RIV/00216208:11320/11:10103625
Result code: RIV/00216208:11320/11:10103625!RIV12-GA0-11320___
Record codes: [2A0361A107A3]; 228849; 11320
Projects: P(GAP103/10/0783), P(GAP202/10/1333), P(GD201/09/H057), Z(MSM0021620838)

Abstract (en): Reliable neural networks applicable in practice require adequate generalization capabilities, low sensitivity to noise in the processed data, and a transparent network structure. In this paper, we introduce a general framework for sensitivity control in neural networks of the back-propagation type (BP-networks) with an arbitrary number of hidden layers. Experiments performed so far confirm that sensitivity inhibition with an enforced internal representation significantly improves generalization. A transparent network structure formed during training also supports easy architecture optimization.