Machine Learning Project 3 - Densely Connected Neural Network
Description: For this project, I worked in a team with two other students at Montana State University to implement a densely connected neural network. We evaluated it on the Breast Cancer Wisconsin, Glass Identification, Soybean (Small), Abalone, Forest Fires, and Computer Hardware datasets. The network was densely connected with no bias nodes, and standard backpropagation was used to train it against the loss function we designated for each dataset. We then tuned the learning rate and the number of nodes in each hidden layer for networks with zero, one, and two hidden layers on each dataset. The tuning was done with a hold-out fold, in the same manner as in the KNN methods project. Finally, we conducted an experiment to analyze the performance of our neural network with varying network sizes and varying numbers of hidden layers on each dataset. Each dataset was tested with 10-fold cross validation on the data points outside the hold-out fold, and the results were then analyzed for each dataset.
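The core pieces described above (a densely connected network with no bias nodes, trained by standard backpropagation) can be sketched in NumPy as follows. This is a minimal illustration, not the project's actual code: the layer sizes, sigmoid activation, MSE loss, and learning rate here are all assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenseNet:
    """Densely connected feedforward network with NO bias nodes (sketch)."""

    def __init__(self, layer_sizes, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per layer; no bias vectors anywhere.
        self.weights = [rng.normal(0.0, 0.5, (m, n))
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.lr = lr

    def forward(self, x):
        # Return the activations of every layer (needed for backprop).
        activations = [x]
        for W in self.weights:
            x = sigmoid(x @ W)
            activations.append(x)
        return activations

    def backprop(self, x, y):
        # Standard backpropagation with MSE loss and sigmoid units:
        # the sigmoid derivative is a * (1 - a).
        acts = self.forward(x)
        delta = (acts[-1] - y) * acts[-1] * (1.0 - acts[-1])
        for i in reversed(range(len(self.weights))):
            grad = acts[i].T @ delta
            if i > 0:
                delta = (delta @ self.weights[i].T) * acts[i] * (1.0 - acts[i])
            self.weights[i] -= self.lr * grad

# Toy usage on made-up data (illustrative only): one hidden layer of 4 nodes.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
net = DenseNet([2, 4, 1], lr=0.05)
for _ in range(2000):
    net.backprop(X, y)
```

The "zero hidden layers" case from the experiment corresponds to `DenseNet([n_inputs, n_outputs])`, i.e. a single weight matrix from inputs directly to outputs.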
Results: The neural network tended to perform better with zero or one hidden layer than with two. On the more complex datasets, one hidden layer sometimes improved performance, but two hidden layers rarely performed best.
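The evaluation scheme described above (a hold-out fold reserved for tuning, then 10-fold cross validation over the remaining data points) can be sketched as follows. The function name and the 1/11 hold-out proportion are assumptions for illustration, not the project's exact choices.

```python
import numpy as np

def holdout_and_folds(n_samples, n_folds=10, seed=0):
    """Split sample indices into one tuning hold-out fold plus n_folds CV folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    holdout_size = n_samples // (n_folds + 1)
    holdout = idx[:holdout_size]      # used only for tuning hyperparameters
    remaining = idx[holdout_size:]    # used for 10-fold cross validation
    folds = np.array_split(remaining, n_folds)
    return holdout, folds

holdout, folds = holdout_and_folds(110)
# Each CV round trains on 9 folds and tests on the remaining one.
for i, test_fold in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
```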
Technologies: Python, NumPy, Matplotlib, UML, LaTeX
Note: If you would like to see the full design document, code base, and research paper for this project, please feel free to reach out to me by email.