Data Normalization
2. Look at MATLAB's premnmx and postmnmx functions: one normalizes the data, and the other reverses the normalization.
3. I did not normalize the data, and the training results were still good.
4. I have run into similar problems. One paper used the postmnmx function and the results were poor; perhaps the sample data were not accurate enough.
5. You can use prestd for standardization; it works very well.
6. Should the sample data and the test data be normalized together?
7. The sample data and the test data should be normalized together. Otherwise, if some value in the test data is larger than the maximum of the sample data, wouldn't its normalized value exceed 1?
When training a neural network you should account for the extreme cases: when normalizing, use the extreme values of the parameters you need to identify as the denominator, which may give better results.
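The warning in reply 7 is easy to demonstrate. A minimal Python/NumPy sketch (function names are my own, not from MATLAB): when the test data are scaled with the training-set extremes, a test value above the training maximum maps above 1.

```python
import numpy as np

def minmax_fit(train):
    """Record per-feature extremes from the training set only."""
    return train.min(axis=0), train.max(axis=0)

def minmax_apply(x, lo, hi):
    """Map values to [-1, 1] using the training extremes as the denominator."""
    return 2 * (x - lo) / (hi - lo) - 1

train = np.array([[1.0, 10.0], [3.0, 30.0]])
test  = np.array([[4.0, 20.0]])   # 4.0 exceeds the training maximum of 3.0

lo, hi = minmax_fit(train)
print(minmax_apply(test, lo, hi))  # first feature maps to 2.0, outside [-1, 1]
```

Normalizing the sample and test data together (or using extremes known to bound both) avoids this overflow.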
8. If the activation function is an inverse S-type (sigmoid) function, there should be no normalization problem.
9. A question: in the neural network toolbox, purelin is the only transfer function whose training output data need not be normalized, while functions such as logsig and tansig do require it (when the data are not already within [-1, 1] or [0, 1]). Since purelin needs no normalization, why is normalization still used? What is the difference between using functions such as prestd, prepca, postmnmx and tramnmx, and simply using purelin? In my load forecasting I use no normalization at all, and the results are good!
10. purelin does not require normalization. If you use logsig or tansig as the neuron activation function, the output range is naturally limited to [0, 1] or [-1, 1].
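The range claim in reply 10 is easy to check numerically; a small Python sketch, with tanh standing in for tansig and the logistic function for logsig:

```python
import numpy as np

def logsig(x):
    """Logistic sigmoid: output strictly inside (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-10, 10, 401)
print(np.tanh(x).min(), np.tanh(x).max())  # stays strictly inside (-1, 1)
print(logsig(x).min(), logsig(x).max())    # stays strictly inside (0, 1)
```

Because the activation itself bounds the output, it is the *targets* that must be scaled into that range, not the other way around.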
11. What I know about normalization:
Definition: as I understand it, normalization means limiting the data you need to process (via some algorithm) to the range you require. Normalization is done first for convenience in later data processing, and second so that the program converges faster when it runs.
In MATLAB there are three ways to normalize: (1) premnmx, postmnmx, tramnmx; (2) prestd, poststd, trastd; (3) code you write yourself in MATLAB. premnmx normalizes to [-1, 1]; prestd standardizes to zero mean and unit variance; self-written code usually normalizes to [0.1, 0.9]. Specific usage is shown below.
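The three approaches can be sketched in Python/NumPy (the MATLAB functions work variable-by-variable; a single vector is enough to show the formulas, and the values below are the singular-data example discussed next):

```python
import numpy as np

x = np.array([0.11, 0.15, 0.32, 0.45, 30.0])
mn, mx = x.min(), x.max()

# (1) premnmx-style: linear map to [-1, 1]
x_mm = 2 * (x - mn) / (mx - mn) - 1

# (2) prestd-style: zero mean, unit variance (z-score)
x_std = (x - x.mean()) / x.std(ddof=1)  # MATLAB's std divides by N-1

# (3) hand-rolled: linear map to [0.1, 0.9]
x_09 = 0.1 + 0.8 * (x - mn) / (mx - mn)

print(x_mm.min(), x_mm.max())  # -1 and 1
print(x_09.min(), x_09.max())  # 0.1 and 0.9
```

The [0.1, 0.9] variant is popular with logsig outputs because it keeps targets away from the flat saturated tails of the sigmoid.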
Why normalize? First, a concept: singular sample data. Singular sample data are sample vectors that are very large or very small relative to the other input samples.
Here is an example:
m = [0.11 0.15 0.32 0.45 30;
     0.13 0.24 0.27 0.25 45];
The data in the fifth column are singular relative to the other four columns (the network below refers to a BP network). The presence of singular sample data increases network training time and may prevent the network from converging, so when the training samples contain singular data it is best to normalize them before training; if there are no singular sample data, prior normalization is unnecessary.
Specific examples:
close all
clear
echo on
clc
% BP network modeling
% normalize the raw data
m_data = [1047.92 1047.83 0.39 0.39 1.035005075;
          1047.83 1047.68 0.39 0.40 1.034524912;
          1047.68 1047.52 0.40 0.41 1.034044749;
          1047.52 1047.27 0.41 0.42 1.033564586;
          1047.27 1047.41 0.42 0.43 1.033084423;
          1046.73 1046.74 1.70 1.80 0.7527332465;
          1046.74 1046.82 1.80 1.78 0.7524192185;
          1046.82 1046.73 1.78 1.75 0.7521051905;
          1046.73 1046.48 1.75 1.85 0.7017911625;
          1046.48 1046.03 1.85 1.82 0.7014771345;
          1046.03 1045.33 1.82 1.68 0.7011631065;
          1045.33 1044.95 1.68 1.71 0.70849785;
          1044.95 1045.21 1.71 1.72 0.70533508;
          1045.21 1045.64 1.72 1.70 0.70567526;
          1045.64 1045.44 1.70 1.69 0.70601544;
          1045.44 1045.78 1.69 1.69 0.70635562;
          1045.78 1046.20 1.69 1.52 0.75667580];
% define the network input p and the expected output t
pause
clc
p1 = m_data(:, 1:4);   % assuming the first four columns are the inputs
t1 = m_data(:, 5);     % and the fifth column is the target
p = p1'; t = t1';
[pn, minp, maxp, tn, mint, maxt] = premnmx(p, t);
% set the number of hidden-layer neurons (after testing 5 to 30, 5 worked best)
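For readers without the legacy premnmx family (newer MATLAB versions replaced it with mapminmax), the normalize/denormalize round trip in the script above can be sketched in Python/NumPy; the function names here are my own:

```python
import numpy as np

def premnmx_like(x):
    """Scale each row (one variable per row, samples in columns) to [-1, 1]."""
    mn = x.min(axis=1, keepdims=True)
    mx = x.max(axis=1, keepdims=True)
    return 2 * (x - mn) / (mx - mn) - 1, mn, mx

def postmnmx_like(xn, mn, mx):
    """Invert the scaling, as postmnmx does for premnmx."""
    return (xn + 1) * (mx - mn) / 2 + mn

# two variables (rows), three samples (columns), like p = p1' above
p = np.array([[1047.92, 1047.83, 1047.68],
              [0.39, 0.39, 0.40]])
pn, mn, mx = premnmx_like(p)
print(np.allclose(postmnmx_like(pn, mn, mx), p))  # round trip recovers p
```

Note that mn and mx must be kept: tramnmx-style scaling of new (test) data reuses exactly these stored extremes, which is the point made in replies 6 and 7.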