Historical Consistent Neural Network (HCNN)
An RNN variant that treats the entire observed history as a single training example: the model architecture is unfolded along the full history rather than over a fixed past horizon. It does not need external factors as inputs; all observables are modelled jointly as components of the state.
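The unfolding can be sketched as follows. This is a minimal pure-Python illustration, not the original formulation; the names `A` (state-transition matrix), `s0` (initial state), and `mat_vec` are my own, and the readout convention (observables are the first `n_obs` state components) is one common choice for HCNNs.

```python
import math

def mat_vec(A, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def hcnn_forward(A, s0, T, n_obs):
    """Unfold the state equation s_{t+1} = tanh(A s_t) along the whole
    history of length T; the expectation y_t for the observables is read
    off as the first n_obs components of the state."""
    s, outputs = list(s0), []
    for _ in range(T):
        outputs.append(s[:n_obs])
        s = [math.tanh(x) for x in mat_vec(A, s)]
    return outputs
```

For example, `hcnn_forward([[0.5, 0.1], [0.0, 0.5]], [0.2, -0.1], 4, 1)` produces one single-observable expectation per time step, with no external inputs anywhere in the loop.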
Universal Approximation
Like the basic RNN, the HCNN state equation is a universal approximator: with a sufficiently large state dimension it can model the underlying dynamical system to arbitrary accuracy.
Basic Learning of HCNNs
For HCNNs there is no hyperparameter for the past horizon, since the network is unfolded along the entire history. The starting state and the transition matrix are learned jointly.
An important part of learning HCNNs is architectural teacher forcing: during training, the observable components of the state are replaced by the observed data, and this correction is built into the architecture itself. It plays a role comparable to the error correction in ECNNs, but the HCNN still works with only a single learned matrix.
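One way to realize this substitution can be sketched as below. This is an assumption-laden illustration: the variant shown replaces the observable state components with the observations directly before each transition, and the helper names (`mat_vec`, `hcnn_teacher_forced`) are mine, not from the source.

```python
import math

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def hcnn_teacher_forced(A, s0, observations):
    """One unfolding with architectural teacher forcing: before each
    transition the observable part of the state is overwritten with the
    observed data, so the only learned objects are A and s0 -- there is
    still just a single matrix in the whole unfolded network."""
    s, outs = list(s0), []
    n_obs = len(observations[0])
    for y_obs in observations:
        outs.append(s[:n_obs])             # network expectation for time t
        s = list(y_obs) + s[n_obs:]        # teacher forcing: substitute observation
        s = [math.tanh(x) for x in mat_vec(A, s)]
    return outs
```

Because the correction is part of the architecture rather than a second trainable module, the forced and the free-running network share the one matrix `A`; at prediction time the substitution step is simply dropped.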