In this paper, a new deep learning approach based on native parsimony, using the topological gradient method, is introduced. This approach reduces the number of links to be identified by the learning process by several orders of magnitude; consequently, the required amount of training data is reduced proportionally. This is a significant advance over methods that simplify already redundant networks. Redundant networks evolved within the bubble of big data, where data scarcity is not an issue, but big data is rarely a reality in industrial and healthcare applications. Native parsimonious approaches outperform state-of-the-art methods when the response of the model is continuous and a high level of accuracy is required, as in long-term prediction and data compression. In redundant networks, the learning process is terminated before convergence to avoid overfitting, which explains why state-of-the-art methods focus on discrete problems such as classification, pattern recognition, and natural language processing. While these discrete problems are oriented toward replacing human decision-making, the ability to accurately predict continuous phenomena works hand in hand with human intelligence, augmenting our decision-making capabilities.
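
The scale of the reduction can be illustrated with a minimal sketch. This is not the paper's topological gradient algorithm: here, weight magnitude is used as a hypothetical stand-in for a per-link sensitivity score, and only the most sensitive links are retained, showing how the number of parameters to identify can drop by orders of magnitude.

```python
import numpy as np

# A minimal sketch, assuming magnitude as a proxy for link sensitivity
# (the actual topological gradient criterion is defined in the paper).
rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 1000))      # dense layer: 1,000,000 links

sensitivity = np.abs(W)                # hypothetical sensitivity of each link
keep = 5_000                           # retain 0.5% of the links

# Indices of the `keep` most sensitive links, then a boolean mask over W.
flat_idx = np.argpartition(sensitivity.ravel(), -keep)[-keep:]
mask = np.zeros(W.size, dtype=bool)
mask[flat_idx] = True
mask = mask.reshape(W.shape)

W_sparse = W * mask                    # pruned layer: only 5,000 links remain
print(W.size, int(mask.sum()))         # dense vs. retained link count (200x fewer)
```

With 200 times fewer links to identify, a proportionally smaller training set suffices to determine the surviving parameters, which is the practical point of the parsimonious approach in data-scarce settings.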