I have created a model with a neural network (backpropagation), and now I want to classify a new instance.
What I've done so far:
- normalized each feature with standard min-max normalization
- so the values of each feature range from 0 to 1
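To make the setup concrete, here is a minimal sketch of what I mean (min-max scaling is my assumption based on the 0-to-1 range; the sample values are made up), showing where a new out-of-range value breaks the [0, 1] guarantee:

```python
def fit_minmax(column):
    """Store the per-feature min and max from the training data."""
    return min(column), max(column)

def normalize(value, lo, hi):
    """Min-max normalization: maps training-range values into [0, 1]."""
    return (value - lo) / (hi - lo)

train_feature = [2.0, 4.0, 6.0, 10.0]
lo, hi = fit_minmax(train_feature)

print(normalize(6.0, lo, hi))   # 0.5, inside the training range
print(normalize(14.0, lo, hi))  # 1.5, falls outside [0, 1] -- the problem
```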
The problem is how to classify a new instance that has a value (or several values) outside the range seen during training, so that it normalizes outside [0, 1], using the existing model I built before.
Does anyone have a solution for this situation, or references I can use to resolve the issue?
I actually discussed this with my stochastics lecturer at my campus. His idea was to fit a distribution (e.g. Gaussian, Gaussian mixture, or an empirical distribution) to the errors obtained while building the model; a new instance could then be matched against that distribution by looking at its likelihood. The problem with this idea is that we still need the error for the new instance before we can evaluate its likelihood, which means we still have to classify the instance with the same existing model/function that produced the error distribution in the first place.
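Here is a minimal sketch of my lecturer's idea as I understand it (the error values are made up, and the choice of a plain Gaussian fit is just one of the distributions he mentioned):

```python
import math
import statistics

# Hypothetical per-instance prediction errors collected while building the model
# (assumption: "error" here means the residual on each training instance).
training_errors = [0.05, -0.02, 0.11, -0.07, 0.03, -0.04, 0.08, -0.01]

mu = statistics.mean(training_errors)
sigma = statistics.stdev(training_errors)

def gaussian_likelihood(error, mu, sigma):
    """Likelihood of an error under a Gaussian fitted to the training errors."""
    return math.exp(-((error - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The catch I describe above: to evaluate this, we first need the new
# instance's error, which already requires running it through the model.
new_error = 0.30  # hypothetical error of the new instance
print(gaussian_likelihood(new_error, mu, sigma))
```

An error close to the mean of the training errors gets a high likelihood; an error far in the tail gets a low one, which is how the distribution would flag an unusual instance.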
I also discussed it with a friend, and his idea was to use an FFT in place of the usual normalization function, so that the result is not confined to a fixed range. The side effect, though, is that the error may increase because of the error introduced by the FFT itself.
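To illustrate what I think my friend means (this is only my interpretation; a naive DFT stands in for the FFT, and the feature values are made up): the transform's output magnitudes are not bounded to any fixed range the way min-max normalization is.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real-valued feature vector."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

features = [3.0, 7.5, 1.2, 9.9]
spectrum = [abs(c) for c in dft(features)]
print(spectrum)  # magnitudes are not confined to [0, 1]
```

The zero-frequency component, for instance, is simply the sum of the inputs, so its magnitude grows with the data instead of staying inside a fixed interval.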